AI Future

HCAI for Policymakers

“Human-Centered AI” by Ben Shneiderman was recently published in Issues in Science and Technology 37, no. 2 (Winter 2021): 56–61. A timely observation is that Artificial Intelligence is clearly expanding to include human-centered issues, ranging from ethics, explainability, and trust to applications such as user interfaces for self-driving cars. The article’s appearance in NAS Issues in Science and Technology acknowledges the importance of this fresh HCAI approach, which can enable more widespread use of AI in ways that are safe and promote human control. An implication of the article is that computer scientists should build devices to enhance and empower—not replace—humans.

HCAI as described by Prof. Shneiderman represents a radically different approach to systems design because it imagines a different role for machines. Envisioning AI systems as machines and people working together is a very different starting point from the assumption and goal of autonomous AI. In fact, a design process with this kind of forethought might even lead to a product not being developed at all, thus preventing future harm. One of the many interesting points in the NAS Issues article is the observation about the philosophical clash between two approaches to gaining knowledge about the world—Aristotle’s rationalism and Leonardo da Vinci’s empiricism—and its connection to the current perspective of AI developers: “The rationalist viewpoint, however, is dominant in the AI community. It leads researchers and developers to emphasize data-driven solutions based on algorithms.” Data science, unfortunately, often follows the rationalist approach without including contributions from, and protection of, human experience.

From the NAS article, HCAI is aligned with “the rise of the concept of design thinking, an approach to innovation that begins with empathy for users and pushes forward with humility about the limits of machines and people. Empathy enables designers to be sensitive to the confusion and frustration that users might have and the dangers to people when AI systems fail. Humility leads designers to recognize the inevitability of failure and inspires them to be always on the lookout for what wrongs are preventable.”

Policymakers need to “understand HCAI’s promise not only for our machines but for our lives. A good starting place is an appreciation of the two competing philosophies that have shaped the development of AI, and what those imply for the design of new technologies … comprehending these competing imperatives can provide a foundation for navigating the vast thicket of ethical dilemmas now arising in the machine-learning space.” An HCAI approach can bring creativity and innovation to AI systems by incorporating human insights about complexity into their design and by using machines to prepare data so that human insight and experience can be applied effectively. For many more details and enjoyable reading, go to https://issues.org/human-centered-ai/.

NSCAI Final Report

The National Security Commission on Artificial Intelligence (NSCAI) has issued its final report. This bipartisan commission of 15 technologists, national security professionals, business executives, and academic leaders delivered an “uncomfortable message: America is not prepared to defend or compete in the AI era.” They describe a “reality that demands comprehensive, whole-of-nation action.” The final report presents a strategy to “defend against AI threats, responsibly employ AI for national security, and win the broader technology competition for the sake of our prosperity, security, and welfare.”

The NSCAI’s mandate is to make recommendations to the President and Congress to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The 16 chapters of the Main Report contain many conclusions and recommendations, accompanied by “Blueprints for Action” with detailed steps for implementing them.
