AI Matters: our blog
Maria Gini is the Winner of 2022 ACM/SIGAI Autonomous Agents Research Award
The selection committee for the ACM/SIGAI Autonomous Agents Research Award is pleased to announce that Professor Maria Gini is the recipient of the 2022 award. She is a Professor of Computer Science and Engineering at the University of Minnesota.
ACM SIGAI Webinar: Enlichenment: Insights Towards AI Impact in Education through a Mycelial Partnership between Research, Policy, and Practice
For Event Registration Please see the ACM Webinar Site: https://webinars.on24.com/acm/rose
Date: Thursday, June 24, 2021
Time: 12:00 PM Eastern Daylight Time
Duration: 1 hour
Summary: As we begin to emerge from COVID-19, in the face of tremendous learning loss and widening achievement gaps, we, as a society, are grappling with envisioning the future of education. In the field of Artificial Intelligence, we ask what our role might be in this emerging reality. This ACM SIGAI Learning Webinar will engage the audience in consideration of these issues in light of insights gained from recent research. Since the early 70s, the field of Artificial Intelligence and the fields of Human Learning and Teaching have partnered to study how to use technology to understand and support human learning. Nevertheless, despite tremendous growth in these fields over the decades, and notable large-scale successes, the emergency move to universal online learning at all levels over the past year has exposed gaps and breakdowns in the path from basic research into practice.
As the new administration reacts by committing to invest substantial research dollars into addressing the “COVID Melt,” or learning loss, we must ask ourselves how to prepare for potential future emergencies so that such tremendous and inequitable learning loss can be prevented from happening again. The International Alliance to Advance Learning in a Digital Era (IAALDE) is partnering with the American Academy for the Advancement of Science (AAAS) to foster productive synergy between the worlds of research, policy, and practice, beginning with a recent kickoff event. Administrators, policymakers, and implementers of policy were invited to engage with world-class researchers across a broad spectrum of research in technology-enhanced learning to accelerate the path from research into real educational impact through practice. The goal is for work going forward to benefit from increased grounding in the lived experiences of administrators and implementers of policy in schools. At the same time, greater awareness of research findings may offer opportunities to reflect on and reconsider practices on the ground in schools. This discussion, involving over 100 delegates, was meant to lay the foundation for documents, resources, and activities to move the conversation forward. Find out more about insights learned, next steps, and how you can get involved on June 24!
Speaker: Carolyn P. Rose, Professor, Language Technologies and Human-Computer Interaction, Carnegie Mellon University
Carolyn Rose is a Professor of Language Technologies and Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University. Her research program focuses on computational modeling of discourse to enable scientific understanding of the social and pragmatic nature of conversational interaction of all forms, and using this understanding to build intelligent computational systems for improving collaborative interactions. Her research group’s highly interdisciplinary work, published in over 270 peer reviewed publications, is represented in the top venues of 5 fields: namely, Language Technologies, Learning Sciences, Cognitive Science, Educational Technology, and Human-Computer Interaction, with awards in 3 of these fields. She is a Past President and Inaugural Fellow of the International Society of the Learning Sciences, Senior Member of IEEE, Founding Chair of the International Alliance to Advance Learning in the Digital Era, and Co-Editor-in-Chief of the International Journal of Computer-Supported Collaborative Learning. She also serves as a 2020-2021 AAAS Fellow under the Leshner Institute for Public Engagement with Science, with a focus on public engagement with Artificial Intelligence.
Moderator: Todd W. Neller, Professor, Computer Science, Gettysburg College
Todd W. Neller is a Professor of Computer Science at Gettysburg College, and was the recipient of the 2018 AAAI/EAAI Outstanding Educator Award. A Cornell University Merrill Presidential Scholar, he received a B.S. in Computer Science with distinction in 1993. In 2000, he received his Ph.D. with distinction in teaching at Stanford University, where he was awarded a Stanford University Lieberman Fellowship, and the George E. Forsythe Memorial Award for excellence in teaching. His dissertation concerned extensions of artificial intelligence (AI) search algorithms to hybrid dynamical systems, and the refutation of hybrid system properties through simulation and information-based optimization. A game enthusiast, Neller has enjoyed pursuing game AI challenges, computing optimal play for jeopardy dice games such as Pass the Pigs and bluffing dice games such as Dudo, creating new reasoning algorithms for Clue/Cluedo, analyzing optimal Risk attack and defense policies, and designing games and puzzles.
Recent and Upcoming Events
Brookings Webinar: Should the Government Play a Role in Reducing Algorithmic Bias?
On March 12, the Center for Technology Innovation at Brookings hosted a webinar on the role of government in identifying and reducing algorithmic biases (see video). Speakers discussed what is needed to prioritize fairness in machine-learning models and how to weed out artificial intelligence models that perpetuate discrimination. Questions included the following:
How do the European Union, U.K., and U.S. differ in their approaches to bias and discrimination?
What lessons can they learn from each other?
Should approaches to AI bias be universally applied to ensure civil and human rights for protected groups?
The organizers observe that “policymakers and researchers throughout the world are considering strategies for reducing biased decisions made by machine-learning algorithms. To date, the U.K. has been the most forward in outlining a role for government in identifying and mitigating biases and their unintended consequences, especially decisions that impact marginalized populations. In the U.S., legislators and policymakers have focused on algorithmic accountability and the explanation of models to ensure fairness in predictive decision making.”
The moderator was Alex Engler, Rubenstein Fellow – Governance Studies.
Speakers and discussants were
Lara Macdonald and Ghazi Ahamat, Senior Policy Advisors – UK Centre for Data Ethics and Innovation;
Nicol Turner Lee, Brookings Senior Fellow – Governance Studies and Director, Center for Technology Innovation; and
Adrian Weller, Programme Director for AI at the Alan Turing Institute.
Algo2021 Conference to Be Held on April 29, 2021
University College London will present (online) The Algo2021 Conference: Ecosystems of Excellence & Trust, building upon the success of the 2020 inaugural conference. The conference will bring together all major stakeholders – academia, civil service, and industry – showcasing cutting-edge developments, contemporary debates, and the perspectives of major players. The 2021 conference theme reflects the desire to promote public good innovation. Sessions and topics include the following:
Machine Learning in Healthcare,
Trust and the Human-on-the-Loop,
Artificial Intelligence and Predictive Policing,
AI and Innovation in Healthcare Technologies,
AI in Learning and Education Technologies,
Building Communities of Excellence in AI, and
Human-AI and Ethics Issues.
Politico’s AI Online Summit on May 31, 2021
The 2021 Summit plans to dissect Europe’s AI legislative package, along with the impact of geopolitical tensions and tech regulation on topics such as data and privacy. The summit will convene top EU and national decision makers, opinion formers, and tech industry leaders.
“The European Commission will soon introduce legislation to govern the use of AI, acting on its aim to draw up rules for the technology sector over the next five years and on its legacy as the world’s leading regulator of digital privacy. At the heart of the issue is the will to balance the need for rules with the desire to boost innovation, allowing the old continent to assert its digital sovereignty. On where the needle should be, opinions are divided – and the publication of the Commission’s draft proposal will not be the end of the discussion.”
Issues to be addressed are the following:
How rules may fit broader plans to build European tech platforms that compete globally with other regions;
How new requirements on algorithmic transparency might be viewed by regular people; and
What kind of implementation efforts will be required for startups, mid-size companies and big tech.
The fourth edition of Politico’s AI Summit will address important questions in panel discussions, exclusive interviews, and interactive roundtable discussions. Top regulators, tech leaders, startups, and civil society stakeholders will examine the EU’s legislative framework on AI and data flow while tackling uncomfortable questions about people’s fundamental rights, misinformation, and international cooperation that will determine the future of AI in Europe and worldwide.
HCAI for Policymakers
“Human-Centered AI” by Ben Shneiderman was recently published in Issues in Science and Technology 37, no. 2 (Winter 2021): 56–61. A timely observation is that Artificial Intelligence is clearly expanding to include human-centered issues from ethics, explainability, and trust to applications such as user interfaces for self-driving cars. The importance of HCAI’s fresh approach, which can enable more widespread use of AI in safe ways that promote human control, is acknowledged by the article’s appearance in NAS Issues in Science and Technology. An implication of the article is that computer scientists should build devices to enhance and empower—not replace—humans.
HCAI as described by Prof. Shneiderman represents a radically different approach to systems design by imagining a different role for machines. Envisioning AI systems as comprising machines and people working together is a much different starting point than the assumption and goal of autonomous AI. In fact, a design process with this kind of forethought might even lead to a product not being developed, thus preventing future harm. One of the many interesting points in the NAS Issues article is the observation about the philosophical clash between two approaches to gaining knowledge about the world—Aristotle’s rationalism and Leonardo da Vinci’s empiricism—and the connection with the current perspective of AI developers: “The rationalist viewpoint, however, is dominant in the AI community. It leads researchers and developers to emphasize data-driven solutions based on algorithms.” Data science unfortunately often focuses on the rationalist approach without including the contributions from, and protection of, the human experience.
From the NAS article, HCAI is aligned with “the rise of the concept of design thinking, an approach to innovation that begins with empathy for users and pushes forward with humility about the limits of machines and people. Empathy enables designers to be sensitive to the confusion and frustration that users might have and the dangers to people when AI systems fail. Humility leads designers to recognize the inevitability of failure and inspires them to be always on the lookout for what wrongs are preventable.”
Policymakers need to “understand HCAI’s promise not only for our machines but for our lives. A good starting place is an appreciation of the two competing philosophies that have shaped the development of AI, and what those imply for the design of new technologies … comprehending these competing imperatives can provide a foundation for navigating the vast thicket of ethical dilemmas now arising in the machine-learning space.” An HCAI approach can incorporate creativity and innovation into AI systems by understanding and incorporating human insights about complexity into the design of AI systems and using machines to prepare data for taking advantage of human insight and experience. For many more details and enjoyable reading, go to https://issues.org/human-centered-ai/.
NSCAI Final Report
The National Security Commission on Artificial Intelligence (NSCAI) issued a final report. This bipartisan commission of 15 technologists, national security professionals, business executives, and academic leaders delivered an “uncomfortable message: America is not prepared to defend or compete in the AI era.” They discuss a “reality that demands comprehensive, whole-of-nation action.” The final report presents a strategy to “defend against AI threats, responsibly employ AI for national security, and win the broader technology competition for the sake of our prosperity, security, and welfare.”
The mandate of the National Security Commission on Artificial Intelligence (NSCAI) is to make recommendations to the President and Congress to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The 16 chapters in the Main Report contain many conclusions and recommendations, including a “Blueprints for Action” section with detailed steps for implementing the recommendations.
Data for AI: Interview with Eric Daimler
I recently spoke with Dr. Eric Daimler about how we can build on the framework he and his colleagues established during his tenure as a contributor to issues of AI policy in the White House during the Obama administration. Eric is the CEO of the MIT-spinout Conexus.com and holds a PhD in Computer Science from Carnegie Mellon University. Here are the interesting results of my interview with him. His ideas are important as part of the basis for ACM SIGAI Public Policy recommendations.
LRM: What are the main ways we should be addressing this issue of data for AI?
EAD: To me there is one big re-framing from which we can approach this collection of issues: prioritizing data interoperability within a larger frame of AI as a total system. In the strict definition, AI is a learning algorithm. Most people know of subsets such as Machine Learning and subsets of that called Deep Learning. That doesn’t help the 99% who are not AI researchers. When I have spoken to non-researchers, or even researchers who want to better appreciate the sensibilities of those needing to adopt their technology, I think of AI in terms of the interactions that it has. There is the collection of the data, the transportation of the data, the analysis or planning (the traditional domain in which the definition most strictly fits), and the acting on the conclusions. That sense-plan-act framework works pretty well for most people.
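The sense-plan-act framing described above can be made concrete with a minimal sketch. All names and values here are hypothetical, chosen only to show how the learning algorithm (“plan”) is just one stage in a larger loop of data collection and action:

```python
# Illustrative sketch of a sense-plan-act loop. The "plan" step is the part
# most strictly called AI; "sense" and "act" are the surrounding system.
from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float

def sense() -> Reading:
    # Collection and transport of data (a fixed stand-in value here).
    return Reading(temperature_c=31.0)

def plan(reading: Reading) -> str:
    # Analysis/planning: decide what to do from the data.
    return "cool" if reading.temperature_c > 25.0 else "idle"

def act(decision: str) -> str:
    # Acting on the conclusion.
    return f"actuator: {decision}"

print(act(plan(sense())))  # actuator: cool
```

The point of the sketch is that failures can occur at any stage of the loop, not only in the learning algorithm itself.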
LRM: Before you explain just how we can do that, can you go ahead and define some of your important terms for our readers?
EAD: AI is often described as the economic engine of the future. But to realize that growth, we must think beyond AI to the whole system of data, and the rules and context that surround it: our data infrastructure (DI). Our DI supports not only our AI technology, but also our technical leadership more generally; it underpins COVID reporting, airline ticket bookings, social networking, and most if not all activity on the internet. From the unsuccessful launch of healthcare.gov, to the recent failure of Haven, to the months-long hack into hundreds of government databases, we have seen the consequences faulty DI can have. More data does not lead to better outcomes; improved DI does.
Fortunately, we have the technology and foresight to prevent future disasters, if we act now. Because AI is fundamentally limited by the data that feeds it, to win the AI race, we must build the best DI. The new presidential administration can play a helpful role here, by defining standards and funding research into data technologies. Attention to the need for better DI will speed responsiveness to future crises (consider COVID data delays) and establish global technology leadership via standards and commerce. Investing in more robust DI will ensure that anomalies, like ones that would have helped us identify the Russia hack much sooner, will be evident, so we can prevent future malfeasance by foreign actors. The US needs to build better data infrastructure to remain competitive in AI.
LRM: So how might we go about prioritizing data interoperability?
EAD: In 2016, the Department of Commerce (DOC) discovered that on average, it took six months to onboard new suppliers to a midsize trucking company—because of issues with data interoperability. The entire American economy would benefit from encouraging more companies to establish semantic standards, internally and between companies, so that data can speak to other data. According to a DOC report in early 2020, the technology now exists for mismatched data to communicate more easily and data integrity to be guaranteed, thanks to a new area of math called Applied Category Theory (ACT). This should be made widely available.
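The idea of semantic standards letting “data speak to other data” can be sketched in a few lines. The field names and suppliers below are hypothetical; the point is that an agreed-upon canonical schema lets mismatched records interoperate:

```python
# Two suppliers describe the same shipment under different field names.
# A shared semantic mapping translates both into one canonical schema.
CANONICAL = {
    "supplier_a": {"wt_kg": "weight_kg", "dest": "destination"},
    "supplier_b": {"weightKilos": "weight_kg", "deliverTo": "destination"},
}

def to_canonical(source: str, record: dict) -> dict:
    # Rename each known field into the canonical vocabulary.
    mapping = CANONICAL[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = to_canonical("supplier_a", {"wt_kg": 120, "dest": "MSP"})
b = to_canonical("supplier_b", {"weightKilos": 120, "deliverTo": "MSP"})
assert a == b  # mismatched schemas now agree
```

Applied Category Theory, as Daimler describes it, aims at doing this kind of mapping rigorously and at scale, with mathematical guarantees of data integrity rather than ad hoc renaming.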
LRM: And what about enforcing data provenance?
EAD: As data is transformed across platforms—including trendy cloud migrations—its lineage often gets lost. A decision denying your small business loan can and should be traceable back to the precise data the loan officer had at that time. There are traceability laws on the books, but they have rarely been enforced because, up until now, the technology hasn’t been available to comply. That’s no longer an excuse. The fidelity of data, and of the models on top of it, should be proven—down to the level of math—to have maintained integrity.
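A toy sketch can illustrate what tracking lineage might look like in practice. The record fields and step names below are invented for illustration; the idea is that every transformation appends an auditable entry, so a downstream decision can be traced back to its exact inputs:

```python
# Toy data-provenance sketch: each transformation appends a lineage record
# (step name plus a content fingerprint), so any result can be traced back
# to the precise data and steps that produced it.
import hashlib
import json

def fingerprint(data) -> str:
    # Short content hash of the data at this point in the pipeline.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()[:12]

class Traced:
    def __init__(self, data, source: str):
        self.data = data
        self.lineage = [(source, fingerprint(data))]

    def transform(self, step_name: str, fn):
        self.data = fn(self.data)
        self.lineage.append((step_name, fingerprint(self.data)))
        return self

record = Traced({"income": 48000, "debt": 9000}, source="bank_ingest")
record.transform("compute_ratio", lambda d: {**d, "dti": d["debt"] / d["income"]})
print(record.lineage)  # one (step, fingerprint) entry per stage
```

Real lineage systems do far more (storage, signatures, cross-platform tracking), but the principle is the same: provenance travels with the data rather than being reconstructed after the fact.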
LRM: Speaking more generally, how can we start to lay the groundwork to reap the benefits of these advancements in data infrastructure?
EAD: We need to formalize. When we built 20th century assembly lines, we established in advance where and how screws would be made; we did not ask the village blacksmith to fashion custom screws for every home repair. With AI, once we know what we want to have automated (and there are good reasons not to automate everything!), we should then define in advance how we want it to behave. As you read this, 18 million programmers are already formalizing rules across every aspect of technology. As an automated car approaches a crosswalk, should it slow down every time, or only if it senses a pedestrian? Questions like this one—across the whole economy—are best answered in a uniform way across manufacturers, based on standardized, formal, and socially accepted definitions of risk.
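The crosswalk question gives a feel for what “defining behavior in advance” could mean. The threshold and function below are entirely hypothetical, not any actual standard; the point is that a formalized rule is explicit and auditable rather than buried inside a learned model:

```python
# Hypothetical formalized rule for automated-vehicle crosswalk behavior.
# The threshold is an invented stand-in for a standardized definition of risk.
CROSSWALK_SLOW_SPEED_KPH = 15.0

def crosswalk_speed(current_kph: float, pedestrian_detected: bool) -> float:
    # Conservative formalization: always slow at a crosswalk; stop for people.
    if pedestrian_detected:
        return 0.0
    return min(current_kph, CROSSWALK_SLOW_SPEED_KPH)

assert crosswalk_speed(40.0, pedestrian_detected=True) == 0.0
assert crosswalk_speed(40.0, pedestrian_detected=False) == 15.0
```

Whether every manufacturer should use this rule, a different one, or a purely learned policy is exactly the kind of question Daimler argues should be settled uniformly and in advance.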
LRM: In previous posts, I have discussed roles and responsibilities for change in the use of AI. Government regulation is of course important, but what roles do you see for AI tech companies, professional societies, and other entities in making the changes you recommend for DI and other aspects of data for AI?
EAD: What is different this time is the abruptness of change. When automation technologies work, they can be wildly disruptive. Sometimes this is very healthy (see: Schumpeter). I find that the “go fast and…” framework has its place, but in AI it can be destructive and invite resistance. That is what we have to watch out for. Only with responsible coordinated action do we encourage adoption of these fantastic and magical technologies. Automation in software can be powerful. These processes need not be linked into sequences just because they can be. That is, just because some system can be automated does not mean that it should. Too often there is absolutism in AI deployments when what is called for in these discussions is nuance and context. For example, in digital advertising my concerns are around privacy, not physical safety. When I am subject to a plane’s autopilot, my priorities are reversed.
From my work in the US Federal Government, my bias remains against regulation as a first step. Shortly after my time with the Obama White House, I was grateful to participate with a diverse group for a couple of days at the Halcyon House in Washington, D.C. We created some principles for deploying AI to maximize adoption. We can build on these and rally around a sort of LEED-like standard for AI deployment.
Dr. Eric Daimler is CEO & Founder of Conexus and a Board Member of Petuum and WelWaze. He was a Presidential Innovation Fellow for Artificial Intelligence and Robotics. Eric is a leading authority in robotics and artificial intelligence with over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. Eric served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, as the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI & Robotics. Eric has incubated, built, and led several technology companies recognized as pioneers in their fields, ranging from software systems to statistical arbitrage. Currently, he serves on the boards of WelWaze and Petuum, the largest AI investment by Softbank’s Vision Fund. His newest venture, Conexus, is a groundbreaking solution for what is perhaps today’s biggest information technology problem — data deluge. Eric’s extensive career across business, academics, and policy gives him a rare perspective on the next generation of AI. Eric believes information technology can dramatically improve our world. However, it demands our engagement. Neither a utopia nor dystopia is inevitable. What matters is how we shape and react to its development. As a successful entrepreneur, Eric is looking towards the next generation of AI as a system that creates a multi-tiered platform for fueling the development and adoption of emerging technology for industries that have traditionally been slow to adapt. As founder and CEO of Conexus, Eric is leading CQL, a patent-pending platform founded upon category theory — a revolution in mathematics — to help companies manage the overwhelming challenge of data integration and migration. A frequent speaker, lecturer, and commentator, Eric works to empower communities and citizens to leverage robotics and AI to build a more sustainable, secure, and prosperous future.
His academic research has been at the intersection of AI, Computational Linguistics, and Network Science (Graph Theory). His work has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon’s School of Computer Science where he founded the university’s Entrepreneurial Management program and helped to launch Carnegie Mellon’s Silicon Valley Campus. He has studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science.