AI Matters: our blog
HCAI for Policymakers
“Human-Centered AI” by Ben Shneiderman was recently published in Issues in Science and Technology 37, no. 2 (Winter 2021): 56–61. A timely observation is that Artificial Intelligence is clearly expanding to include human-centered issues, from ethics, explainability, and trust to applications such as user interfaces for self-driving cars. The article’s appearance in the NAS journal Issues in Science and Technology acknowledges the importance of this fresh HCAI approach, which can enable more widespread use of AI in safe ways that promote human control. An implication of the article is that computer scientists should build devices to enhance and empower, not replace, humans.
HCAI as described by Prof. Shneiderman represents a radically different approach to systems design, one that imagines a different role for machines. Envisioning AI systems as machines and people working together is a very different starting point from the assumption, and goal, of autonomous AI. In fact, a design process with this kind of forethought might even lead to a product not being developed at all, thus preventing future harm. One of the many interesting points in the NAS Issues article is the observation about the philosophical clash between two approaches to gaining knowledge about the world, Aristotle’s rationalism and Leonardo da Vinci’s empiricism, and its connection with the current perspective of AI developers: “The rationalist viewpoint, however, is dominant in the AI community. It leads researchers and developers to emphasize data-driven solutions based on algorithms.” Data science, unfortunately, often takes the rationalist approach without including the contributions from, and protection of, human experience.
From the NAS article, HCAI is aligned with “the rise of the concept of design thinking, an approach to innovation that begins with empathy for users and pushes forward with humility about the limits of machines and people. Empathy enables designers to be sensitive to the confusion and frustration that users might have and the dangers to people when AI systems fail. Humility leads designers to recognize the inevitability of failure and inspires them to be always on the lookout for what wrongs are preventable.”
Policymakers need to “understand HCAI’s promise not only for our machines but for our lives. A good starting place is an appreciation of the two competing philosophies that have shaped the development of AI, and what those imply for the design of new technologies … comprehending these competing imperatives can provide a foundation for navigating the vast thicket of ethical dilemmas now arising in the machine-learning space.” An HCAI approach can bring creativity and innovation to AI systems by building human insights about complexity into their design, and by using machines to prepare data so that human insight and experience can be brought to bear. For many more details and enjoyable reading, go to https://issues.org/human-centered-ai/.
NSCAI Final Report
The National Security Commission on Artificial Intelligence (NSCAI) has issued its final report. This bipartisan commission of 15 technologists, national security professionals, business executives, and academic leaders delivered an “uncomfortable message: America is not prepared to defend or compete in the AI era.” They describe a “reality that demands comprehensive, whole-of-nation action.” The final report presents a strategy to “defend against AI threats, responsibly employ AI for national security, and win the broader technology competition for the sake of our prosperity, security, and welfare.”
The commission’s mandate is to make recommendations to the President and Congress to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The 16 chapters of the Main Report contain many conclusions and recommendations, including “Blueprints for Action” with detailed steps for implementing them.
I recently spoke with Dr. Eric Daimler about how we can build on the framework he and his colleagues established when he worked on AI policy in the White House during the Obama administration. Eric is the CEO of the MIT spinout Conexus.com and holds a PhD in Computer Science from Carnegie Mellon University. The interesting results of our interview follow; his ideas are an important input to ACM SIGAI Public Policy recommendations.
LRM: What are the main ways we should be addressing this issue of data for AI?
EAD: To me there is one big reframing from which we can approach this collection of issues: prioritizing data interoperability within a larger frame of AI as a total system. Strictly defined, AI is a learning algorithm. Most people know of subsets such as Machine Learning, and subsets of that called Deep Learning. That doesn’t help the 99% who are not AI researchers. When I speak to non-researchers, or even to researchers who want to better appreciate the sensibilities of those who need to adopt their technology, I think of AI in terms of the interactions it has: the collection of the data, the transportation of the data, the analysis or planning (the traditional domain that the strict definition fits), and the acting on the conclusions. That sense, plan, act framework works pretty well for most people.
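As a rough illustration of that sense, plan, act framing, here is a minimal sketch in Python; the function names and the toy observation are hypothetical, chosen only to make the three interactions explicit, not a description of any real system.

```python
# A minimal sketch of the sense-plan-act framing. All names here are
# hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Observation:
    """Raw data collected from the environment (the 'sense' step)."""
    payload: dict

def sense() -> Observation:
    # Collecting and transporting data; in practice this spans sensors,
    # pipelines, and the data infrastructure discussed below.
    return Observation(payload={"pedestrian_detected": True})

def plan(obs: Observation) -> str:
    # Analysis/planning: the part that most strictly fits the textbook
    # definition of AI (e.g., a learned model or planner).
    return "slow_down" if obs.payload.get("pedestrian_detected") else "proceed"

def act(decision: str) -> None:
    # Acting on the conclusions: actuate, notify, or record.
    print(f"executing action: {decision}")

if __name__ == "__main__":
    act(plan(sense()))
```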
LRM: Before you explain just how we can do that, can you go ahead and define some of your important terms for our readers?
EAD: AI is often described as the economic engine of the future. But to realize that growth, we must think beyond AI to the whole system of data, and the rules and context that surround it: our data infrastructure (DI). Our DI supports not only our AI technology, but also our technical leadership more generally; it underpins COVID reporting, airline ticket bookings, social networking, and most if not all activity on the internet. From the unsuccessful launch of healthcare.gov, to the recent failure of Haven, to the months-long hack into hundreds of government databases, we have seen the consequences faulty DI can have. More data does not lead to better outcomes; improved DI does.
Fortunately, we have the technology and foresight to prevent future disasters, if we act now. Because AI is fundamentally limited by the data that feeds it, winning the AI race requires building the best DI. The new presidential administration can play a helpful role here by defining standards and funding research into data technologies. Attention to the need for better DI will speed responsiveness to future crises (consider the COVID data delays) and establish global technology leadership via standards and commerce. Investing in more robust DI will make anomalies evident, like the ones that would have helped us identify the Russia hack much sooner, so that we can prevent future malfeasance by foreign actors. The US needs to build better data infrastructure to remain competitive in AI.
LRM: So how might we go about prioritizing data interoperability?
EAD: In 2016, the Department of Commerce (DOC) discovered that, because of issues with data interoperability, it took six months on average to onboard new suppliers to a midsize trucking company. The entire American economy would benefit from encouraging more companies to establish semantic standards, internally and between companies, so that data can speak to other data. According to a DOC report in early 2020, the technology now exists for mismatched data to communicate more easily and for data integrity to be guaranteed, thanks to a new area of math called Applied Category Theory (ACT). This should be made widely available.
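To make the interoperability idea concrete, here is a toy sketch in plain Python rather than an ACT-based tool such as CQL; all schema and field names are hypothetical, and real category-theoretic tools do this with formal, verifiable schema mappings rather than dictionary renames.

```python
# Toy illustration of semantic interoperability: two companies store the
# same supplier facts under mismatched schemas. A shared mapping to an
# agreed vocabulary lets the records "speak" to each other.

TRUCKER_TO_STANDARD = {"vendor_nm": "supplier_name", "tax_no": "tax_id"}
SHIPPER_TO_STANDARD = {"supplier": "supplier_name", "ein": "tax_id"}

def to_standard(record: dict, mapping: dict) -> dict:
    """Rename fields into the shared semantic standard."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

trucker_record = {"vendor_nm": "Acme Freight", "tax_no": "12-3456789"}
shipper_record = {"supplier": "Acme Freight", "ein": "12-3456789"}

# Once both sides map to the standard, the records compare directly.
assert to_standard(trucker_record, TRUCKER_TO_STANDARD) == \
       to_standard(shipper_record, SHIPPER_TO_STANDARD)
```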
LRM: And what about enforcing data provenance?
EAD: As data is transformed across platforms, including trendy cloud migrations, its lineage often gets lost. A decision denying your small business loan can and should be traceable back to the precise data the loan officer had at that time. There are traceability laws on the books, but they have rarely been enforced because, until now, the technology to comply hasn’t been available. That’s no longer an excuse. The fidelity of data, and of the models on top of it, should be proven, down to the level of math, to have maintained integrity.
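As an illustration of the lineage idea, here is a minimal sketch of provenance tracking; the hashing scheme and the loan-scoring step are hypothetical, not a description of any deployed system or legal standard.

```python
# A minimal sketch of data provenance: each transformation appends a
# record of what ran, on which input, producing which output, so a
# downstream decision can be traced back to its exact inputs.

import hashlib
import json

def fingerprint(data) -> str:
    """Stable content hash of a JSON-serializable value."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

lineage = []  # append-only provenance log

def transform(data, fn, step_name: str):
    result = fn(data)
    lineage.append({
        "step": step_name,
        "input_hash": fingerprint(data),
        "output_hash": fingerprint(result),
    })
    return result

application = {"income": 48000, "years_in_business": 1}
scored = transform(application,
                   lambda d: {**d, "approved": d["income"] > 50000},
                   "loan_scoring_v1")

# The log shows precisely which data the decision was made on.
print(json.dumps(lineage, indent=2))
```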
LRM: Speaking more generally, how can we start to lay the groundwork to reap the benefits of these advancements in data infrastructure?
EAD: We need to formalize. When we built 20th-century assembly lines, we established in advance where and how screws would be made; we did not ask the village blacksmith to fashion custom screws for every home repair. With AI, once we know what we want automated (and there are good reasons not to automate everything!), we should then define in advance how we want it to behave. As you read this, 18 million programmers are already formalizing rules across every aspect of technology. As an automated car approaches a crosswalk, should it slow down every time, or only if it senses a pedestrian? Questions like this one, across the whole economy, are best answered in a uniform way across manufacturers, based on standardized, formal, and socially accepted definitions of risk.
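To show what defining behavior in advance could look like, here is a small sketch that encodes the crosswalk question as an explicit, reviewable rule; the policy constant and speed threshold are hypothetical placeholders for the standardized, socially accepted definitions of risk the interview describes.

```python
# The crosswalk question encoded as an explicit rule rather than an
# implicit property of a learned model. The constants are illustrative
# stand-ins for values a standards body could fix.

ALWAYS_SLOW_AT_CROSSWALKS = True  # a choice regulators could standardize

def crosswalk_speed_policy(approaching_crosswalk: bool,
                           pedestrian_detected: bool,
                           current_speed_kmh: float) -> float:
    """Return the target speed under the standardized rule."""
    if approaching_crosswalk and (ALWAYS_SLOW_AT_CROSSWALKS or pedestrian_detected):
        return min(current_speed_kmh, 20.0)  # illustrative speed cap
    return current_speed_kmh

# Every manufacturer applying the same formal rule yields uniform behavior.
print(crosswalk_speed_policy(True, False, 50.0))  # -> 20.0
```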
LRM: In previous posts, I have discussed roles and responsibilities for change in the use of AI. Government regulation is of course important, but what roles do you see for AI tech companies, professional societies, and other entities in making the changes you recommend for DI and other aspects of data for AI?
EAD: What is different this time is the abruptness of change. When automation technologies work, they can be wildly disruptive. Sometimes this is very healthy (see: Schumpeter). I find that the “go fast and…” framework has its place, but in AI it can be destructive and invite resistance. That is what we have to watch out for. Only with responsible, coordinated action will we encourage adoption of these fantastic and magical technologies. Automation in software can be powerful, but processes need not be linked into sequences just because they can be. That is, just because a system can be automated does not mean that it should be. Too often there is absolutism in AI deployments when what these discussions call for is nuance and context. For example, in digital advertising my concerns are around privacy, not physical safety. When I am subject to a plane’s autopilot, my priorities are reversed.
From my work in the US Federal Government, my bias remains against regulation as a first step. Shortly after my time with the Obama White House, I was grateful to participate with a diverse group for a couple of days at the Halcyon House in Washington, D.C. We created a set of principles for deploying AI so as to maximize adoption. We can build on these and rally around a LEED-like standard for AI deployment.
Dr. Eric Daimler is CEO and Founder of Conexus and a board member of Petuum and WelWaze. A leading authority in robotics and artificial intelligence, he has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. Eric served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics. He has incubated, built, and led several technology companies recognized as pioneers in fields ranging from software systems to statistical arbitrage; Petuum, on whose board he serves, is the largest AI investment by Softbank’s Vision Fund. His newest venture, Conexus, is a groundbreaking solution for what is perhaps today’s biggest information technology problem: data deluge. As founder and CEO of Conexus, Eric is leading CQL, a patent-pending platform founded upon category theory, a revolution in mathematics, to help companies manage the overwhelming challenge of data integration and migration.
Eric’s extensive career across business, academics, and policy gives him a rare perspective on the next generation of AI. He believes information technology can dramatically improve our world, but it demands our engagement: neither a utopia nor a dystopia is inevitable; what matters is how we shape, and react to, its development. As a successful entrepreneur, he looks toward the next generation of AI as a multi-tiered platform for fueling the development and adoption of emerging technology in industries that have traditionally been slow to adapt. A frequent speaker, lecturer, and commentator, Eric works to empower communities and citizens to leverage robotics and AI to build a more sustainable, secure, and prosperous future. His academic research has been at the intersection of AI, computational linguistics, and network science (graph theory), and has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon’s School of Computer Science, where he founded the university’s Entrepreneurial Management program and helped launch Carnegie Mellon’s Silicon Valley campus. He has studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science.
FR and Bad Science: Should some research not be done?
Facial recognition (FR) issues continue to appear in the news, as well as in scholarly journal articles, while FR systems are being banned and some research is shown to be bad science. AI researchers who try to associate facial-technology output with human characteristics are sometimes referred to as machine-assisted phrenologists. Problems with FR research have been demonstrated in machine learning work such as Steed and Caliskan’s “A set of distinct facial traits learned by machines is not predictive of appearance bias in the wild.” Meanwhile, many harmful products and misuses have been identified in areas such as criminality prediction, video interviewing, and many others, and some communities have considered bans on FR products.
Yet, journals and conferences continue to publish bad science in facial recognition.
Some people say the choice of research topics is up to the researchers; the public can choose not to use the products of their research. However, areas such as genetic, biomedical, and cybersecurity R&D do have limits, and our professional computing societies can choose to disapprove of research areas that cause harm. Ways to mitigate and prevent irresponsible research from entering the public space include:
– Peer pressure on academic and corporate research and development
– Public policy through laws and regulations
– Corporate and academic self-interest: organizations’ bottom lines can suffer from bad publicity
– Vigilance by journals about publishing papers that promulgate misuse
A recent article by Matthew Hutson in The New Yorker asks who should stop unethical AI. He remarks that “Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science. Funding agencies might inquire about a project’s potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn’t rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the ‘possible long-range effects of applying knowledge gained in the research,’ lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven’t traditionally been called upon to consider how a new invention might rend the social fabric.”
OSTP Launches National AI Initiative Office
The White House Office of Science and Technology Policy announced the establishment of the National Artificial Intelligence Initiative Office. As outlined in legislation, this Office will serve as the point of contact on Federal AI activities across the interagency, as well as with the private sector, academia, and other stakeholders. The Select Committee on Artificial Intelligence will oversee the National AI Initiative Office, and Dr. Lynne E. Parker, Deputy United States Chief Technology Officer, will serve as the Founding Director. As explained in Inside Tech Media, the newly enacted National Defense Authorization Act contains important provisions regarding the development and deployment of AI technologies, many of which build upon previous legislation introduced in the 116th Congress, including the establishment of the National AI Initiative Office.
White House Science Team
On January 15, key members of President-elect Biden’s science team were announced. The press release says “These diverse, deeply experienced scientists and experts will play a key role in shaping America’s future — and will prepare us to lead the world in the 21st century and beyond.” President-elect Joe Biden said, “Science will always be at the forefront of my administration — and these world-renowned scientists will ensure everything we do is grounded in science, facts, and the truth. Their trusted guidance will be essential as we come together to end this pandemic, bring our economy back, and pursue new breakthroughs to improve the quality of life of all Americans.”
He will nominate Dr. Eric Lander as Director of the OSTP and to serve as the Presidential Science Advisor. “The president-elect is elevating the role of science within the White House, including by designating the Presidential Science Advisor as a member of the Cabinet for the first time in history.”
Other key members are:
Alondra Nelson, Ph.D., OSTP Deputy Director for Science and Society
Frances H. Arnold, Ph.D., Co-Chair of the President’s Council of Advisors on Science and Technology
Maria Zuber, Ph.D., Co-Chair of the President’s Council of Advisors on Science and Technology
Francis S. Collins, M.D., Ph.D., Director of the National Institutes of Health
Kei Koizumi, OSTP Chief of Staff
Narda Jones, OSTP Legislative Affairs Director
Policy-Related Article from AI and Ethics
Stix, C., Maas, M.M. Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy. AI Ethics (2021). https://doi.org/10.1007/s43681-020-00037-w
Big Tobacco, Big Oil, Big Banks … and Big Tech
A larger discussion is growing out of the recent news about Timnit Gebru and Google. Big Tech has a huge impact on individuals and society, both through the many products and services we enjoy and through current and potential detrimental effects of unethical behavior or naiveté regarding AI ethics issues. How do we achieve responsibility for AI ethics in all organizations, big and small, and not just in corporations, but in governmental and academic research organizations as well?
Some concerned people focus on regulation, but for a variety of reasons public and community pressure may be quicker and more acceptable; this includes corporations earning reputations for ethical action in the design and development of AI products and systems. An article in MIT Technology Review by Karen Hao discusses a letter signed by nine members of Congress that “sends an important signal about how regulators will scrutinize tech giants.” Ideally, our public policy goal is strong national and global AI ethics communities that self-regulate on AI ethical issues, comparable to other professional disciplines such as medical science and cybersecurity. As guidelines evolve, our AI ethics community could provide a supportive and guiding presence in the implementation of ethical norms in AI research and development. The idea of a global community is also reflected in a recent speech by European Commission President Ursula von der Leyen at the World Leader for Peace and Security Award ceremony, in which she advocates for transatlantic agreements on AI.
AI Centre of Excellence (AICE)
AICE held its inaugural celebration in December 2020. Director John Kamara founded the AI Centre of Excellence in Kenya and is passionate about creating value and long-term impact with AI and ML in Africa. The Centre aims to accomplish this by providing expert training to create skilled, employable AI and ML engineers, and to create sustainable impact through research and development. AI research and products are estimated to contribute over $13 trillion to the global economy by 2030, which offers the Centre an opportunity to carry out research in selected sectors and build products based on that research. The world has around 40,000 AI experts, with nearly half in the US and less than 5% in Africa. Oxford Insights estimates that Kenya ranks first in Africa, and AICE aims to leverage this potential and grow into a full-blown Artificial Intelligence Centre of Excellence. Please keep your eyes on Africa and on ways our public policy can assist efforts there to grow AI in emerging education and research.