ACM SIGAI Webinar: Enlichenment: Insights Towards AI Impact in Education through a Mycelial Partnership between Research, Policy, and Practice

For event registration, please see the ACM webinar site: https://webinars.on24.com/acm/rose

Date: Thursday, June 24, 2021

Time: 12:00 PM Eastern Daylight Time

Duration: 1 hour

Summary: As we begin to emerge from COVID-19, in the face of tremendous learning loss and widening achievement gaps, we, as a society, are grappling with envisioning the future of education. In the field of Artificial Intelligence, we ask what our role might be in this emerging reality. This ACM SIGAI Learning Webinar will engage the audience in consideration of these issues in light of insights gained from recent research. Since the early 1970s, the field of Artificial Intelligence and the fields of Human Learning and Teaching have partnered to study how to use technology to understand and support human learning. Nevertheless, despite tremendous growth in these fields over the decades, and notable large-scale successes, the emergency move to universal online learning at all levels over the past year has exposed gaps and breakdowns in the path from basic research into practice.

As the new administration responds by committing substantial research dollars to addressing the “COVID Melt,” or learning loss, we must ask ourselves how to prepare for potential future emergencies so that such tremendous and inequitable learning loss can be prevented from happening again. The International Alliance to Advance Learning in the Digital Era (IAALDE) is partnering with the American Association for the Advancement of Science (AAAS) to foster productive synergy between the worlds of research, policy, and practice, beginning with a recent kickoff event. Administrators and policymakers were invited to engage with world-class researchers across a broad spectrum of research in technology-enhanced learning to accelerate the path from research into real educational impact through practice. The goal is that the work going forward will benefit tremendously from increased grounding in the lived experiences of administrators and implementors of policy in schools. At the same time, greater awareness of research findings might offer opportunities to reflect on and reconsider practices on the ground in schools. This discussion, involving over 100 delegates, was meant to lay the foundation for documents, resources, and activities to move the conversation forward. Find out more about insights learned, next steps, and how you can get involved on June 24!

Speaker: Carolyn P. Rose, Professor, Language Technologies and Human-Computer Interaction, Carnegie Mellon University

Carolyn Rose is a Professor of Language Technologies and Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University. Her research program focuses on computational modeling of discourse to enable scientific understanding of the social and pragmatic nature of conversational interaction of all forms, and on using this understanding to build intelligent computational systems for improving collaborative interactions. Her research group’s highly interdisciplinary work, published in over 270 peer-reviewed publications, is represented in the top venues of 5 fields, namely Language Technologies, Learning Sciences, Cognitive Science, Educational Technology, and Human-Computer Interaction, with awards in 3 of these fields. She is a Past President and Inaugural Fellow of the International Society of the Learning Sciences, a Senior Member of IEEE, Founding Chair of the International Alliance to Advance Learning in the Digital Era, and Co-Editor-in-Chief of the International Journal of Computer-Supported Collaborative Learning. She also serves as a 2020-2021 AAAS Fellow under the Leshner Institute for Public Engagement with Science, with a focus on public engagement with Artificial Intelligence.

Moderator: Todd W. Neller, Professor, Computer Science, Gettysburg College

Todd W. Neller is a Professor of Computer Science at Gettysburg College, and was the recipient of the 2018 AAAI/EAAI Outstanding Educator Award. A Cornell University Merrill Presidential Scholar, he received a B.S. in Computer Science with distinction in 1993. In 2000, he received his Ph.D. with distinction in teaching at Stanford University, where he was awarded a Stanford University Lieberman Fellowship, and the George E. Forsythe Memorial Award for excellence in teaching. His dissertation concerned extensions of artificial intelligence (AI) search algorithms to hybrid dynamical systems, and the refutation of hybrid system properties through simulation and information-based optimization. A game enthusiast, Neller has enjoyed pursuing game AI challenges, computing optimal play for jeopardy dice games such as Pass the Pigs and bluffing dice games such as Dudo, creating new reasoning algorithms for Clue/Cluedo, analyzing optimal Risk attack and defense policies, and designing games and puzzles.

Data for AI: Interview with Eric Daimler

I recently spoke with Dr. Eric Daimler about how we can build on the framework he and his colleagues established during his tenure as a contributor to AI policy in the White House during the Obama administration. Eric is the CEO of the MIT spinout Conexus.com and holds a PhD in Computer Science from Carnegie Mellon University. Here are the results of my interview with him. His ideas are important as part of the basis for ACM SIGAI public policy recommendations.

LRM: What are the main ways we should be addressing this issue of data for AI? 

EAD: To me there is one big re-framing from which we can approach this collection of issues: prioritizing data interoperability within a larger frame of AI as a total system. In the strictest definition, AI is a learning algorithm. Most people know of subsets such as Machine Learning and subsets of that called Deep Learning. That doesn’t help the 99% who are not AI researchers. When I speak to non-researchers, or even to researchers who want to better appreciate the sensibilities of those needing to adopt their technology, I think of AI in terms of the interactions it has. There is the collection of the data, the transportation of the data, the analysis or planning (the traditional domain in which the definition most strictly fits), and the acting on the conclusions. That sense-plan-act framework works pretty well for most people.
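
To make the sense-plan-act framing concrete, here is a toy loop in Python; the vacuum-robot domain and every function name below are invented purely for illustration and are not from the interview.

```python
# A toy sketch of the sense-plan-act framing: data is collected,
# analyzed into a plan, and the plan's conclusions are acted on.
# The vacuum-robot domain is invented purely for illustration.
def sense(world):
    # Collection of the data: observe which cells are dirty.
    return list(world["dirty_cells"])

def plan(observations):
    # Analysis/planning: the stage that fits the strict definition of AI.
    return sorted(observations)

def act(world, steps):
    # Acting on the conclusions: clean cells in the planned order.
    for cell in steps:
        world["dirty_cells"].remove(cell)

world = {"dirty_cells": [3, 1, 2]}
act(world, plan(sense(world)))
print(world["dirty_cells"])  # expected: []
```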

LRM: Before you explain just how we can do that, can you go ahead and define some of your important terms for our readers?

EAD: AI is often described as the economic engine of the future. But to realize that growth, we must think beyond AI to the whole system of data, and the rules and context that surround it: our data infrastructure (DI). Our DI supports not only our AI technology, but also our technical leadership more generally; it underpins COVID reporting, airline ticket bookings, social networking, and most if not all activity on the internet. From the unsuccessful launch of healthcare.gov, to the recent failure of Haven, to the months-long hack into hundreds of government databases, we have seen the consequences faulty DI can have. More data does not lead to better outcomes; improved DI does. 

Fortunately, we have the technology and foresight to prevent future disasters, if we act now. Because AI is fundamentally limited by the data that feeds it, to win the AI race, we must build the best DI. The new presidential administration can play a helpful role here, by defining standards and funding research into data technologies. Attention to the need for better DI will speed responsiveness to future crises (consider COVID data delays) and establish global technology leadership via standards and commerce. Investing in more robust DI will ensure that anomalies, like ones that would have helped us identify the Russia hack much sooner, will be evident, so we can prevent future malfeasance by foreign actors. The US needs to build better data infrastructure to remain competitive in AI.

LRM: So how might we go about prioritizing data interoperability?

EAD: In 2016, the Department of Commerce (DOC) discovered that on average, it took six months to onboard new suppliers to a midsize trucking company—because of issues with data interoperability. The entire American economy would benefit from encouraging more companies to establish semantic standards, internally and between companies, so that data can speak to other data. According to a DOC report in early 2020, the technology now exists for mismatched data to communicate more easily and data integrity to be guaranteed, thanks to a new area of math called Applied Category Theory (ACT). This should be made widely available.

LRM: And what about enforcing data provenance? 

EAD: As data is transformed across platforms—including trendy cloud migrations—its lineage often gets lost. A decision denying your small business loan can and should be traceable back to the precise data the loan officer had at that time. There are traceability laws on the books, but they have been rarely enforced because up until now, the technology hasn’t been available to comply. That’s no longer an excuse. The fidelity of data and the models on top of them should be proven—down to the level of math—to have maintained integrity.

LRM: Speaking more generally, how can we start to lay the groundwork to reap the benefits of these advancements in data infrastructure? 

EAD: We need to formalize. When we built 20th century assembly lines, we established in advance where and how screws would be made; we did not ask the village blacksmith to fashion custom screws for every home repair. With AI, once we know what we want to have automated (and there are good reasons not to automate everything!), we should then define in advance how we want it to behave. As you read this, 18 million programmers are already formalizing rules across every aspect of technology. As an automated car approaches a crosswalk, should it slow down every time, or only if it senses a pedestrian? Questions like this one—across the whole economy—are best answered in a uniform way across manufacturers, based on standardized, formal, and socially accepted definitions of risk.

LRM: In previous posts, I have discussed roles and responsibilities for change in the use of AI. Government regulation is of course important, but what roles do you see for AI tech companies, professional societies, and other entities in making the changes you recommend for DI and other aspects of data for AI?

EAD: What is different this time is the abruptness of change. When automation technologies work, they can be wildly disruptive. Sometimes this is very healthy (see: Schumpeter). I find that the “go fast and…” framework has its place, but in AI it can be destructive and invite resistance. That is what we have to watch out for. Only with responsible, coordinated action do we encourage adoption of these fantastic and magical technologies. Automation in software can be powerful, but processes need not be linked into sequences just because they can be. That is, just because some system can be automated does not mean that it should be. Too often there is absolutism in AI deployments when what is called for in these discussions is nuance and context. For example, in digital advertising my concerns are around privacy, not physical safety. When I am subject to a plane’s autopilot, my priorities are reversed.

From my work in the US Federal Government, my bias remains against regulation as a first step. Shortly after my time with the Obama White House, I am grateful to have participated with a diverse group for a couple of days at the Halcyon House in Washington, D.C. We created some principles for deploying AI to maximize adoption. We can build on these and rally around a sort of LEED-like standard for AI deployment.

Dr. Eric Daimler is CEO & Founder of Conexus and a board member of Petuum and WelWaze. He was a Presidential Innovation Fellow for Artificial Intelligence and Robotics. Eric is a leading authority in robotics and artificial intelligence with over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. Eric served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, as the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI & Robotics. Eric has incubated, built, and led several technology companies recognized as pioneers in their fields, ranging from software systems to statistical arbitrage. Currently, he serves on the boards of WelWaze and Petuum, the largest AI investment by Softbank’s Vision Fund. His newest venture, Conexus, is a groundbreaking solution for what is perhaps today’s biggest information technology problem — data deluge. Eric’s extensive career across business, academics, and policy gives him a rare perspective on the next generation of AI. Eric believes information technology can dramatically improve our world. However, it demands our engagement. Neither a utopia nor a dystopia is inevitable. What matters is how we shape, and react to, its development. As a successful entrepreneur, Eric is looking towards the next generation of AI as a system that creates a multi-tiered platform for fueling the development and adoption of emerging technology for industries that have traditionally been slow to adapt. As founder and CEO of Conexus, Eric is leading CQL, a patent-pending platform founded upon category theory — a revolution in mathematics — to help companies manage the overwhelming challenge of data integration and migration. A frequent speaker, lecturer, and commentator, Eric works to empower communities and citizens to leverage robotics and AI to build a more sustainable, secure, and prosperous future. His academic research has been at the intersection of AI, Computational Linguistics, and Network Science (Graph Theory). His work has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon’s School of Computer Science, where he founded the university’s Entrepreneurial Management program and helped to launch Carnegie Mellon’s Silicon Valley Campus. He has studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science.

Interview with Thomas Dietterich

Introduction

Welcome to the eighth interview in our series profiling senior AI researchers. This month we are especially happy to interview our SIGAI advisory board member, Thomas Dietterich, Director of Intelligent Systems at the Institute for Collaborative Robotics and Intelligent Systems (CoRIS) at Oregon State University.

Tom Dietterich

Biography

Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University, where he joined the faculty in 1985. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His research is motivated by challenging real world problems with a special focus on ecological science, ecosystem management, and sustainable development. He is best known for his work on ensemble methods in machine learning including the development of error-correcting output coding. Dietterich has also invented important reinforcement learning algorithms including the MAXQ method for hierarchical reinforcement learning. Dietterich has devoted many years of service to the research community. He served as President of the Association for the Advancement of Artificial Intelligence (2014-2016) and as the founding president of the International Machine Learning Society (2001-2008). Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal for Machine Learning Research, and Program Chair of AAAI 1990 and NIPS 2000. Dietterich is a Fellow of the ACM, AAAI, and AAAS.

Getting to Know Tom Dietterich

When and how did you become interested in CS and AI?

I learned to program in Basic in my early teens; I had an uncle who worked for GE on their time-sharing system. I learned Fortran in high school. I tried to build my own adding machine out of TTL chips around that time too. However, despite this interest, I didn’t really know what CS was until I reached graduate school at the University of Illinois. I first engaged with AI when I took a graduate assistant position with Ryszard Michalski on what became machine learning, and I took an AI class from Dave Waltz. I had also studied philosophy of science in college, so I had already thought a bit about how we acquire knowledge from data and experiment.

What would you have chosen as your career if you hadn’t gone into CS?

I had considered going into foreign service, and I have always been interested in policy issues. I might also have gone into technical management. Both of my brothers have been successful technical managers.

What do you wish you had known as a Ph.D. student or early researcher?

I wish I had understood the importance of strong math skills for CS research. I was a software engineer before I was a computer science researcher, and it took me a while to understand the difference. I still struggle with the difference between making an incremental advance within an existing paradigm versus asking fundamental questions that lead to new research paradigms.

What professional achievement are you most proud of?

Developing the MAXQ formalism for hierarchical reinforcement learning.

What is the most interesting project you are currently involved with?

I’m fascinated by the question of how machine learning predictors can have models of their own competence. This is important for making safe and robust AI systems. Today, we have ML methods that give accurate predictions in aggregate, but we struggle to provide point-wise quantification of uncertainty. Related to these questions are algorithms for anomaly detection and open category detection. In general, we need AI systems that can work well even in the presence of “unknown unknowns”.
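
As a toy illustration of the point-wise competence question raised above, the sketch below treats agreement within an ensemble as a per-input confidence score and abstains where the ensemble is near a coin flip. It is a minimal sketch assuming scikit-learn; the synthetic dataset and the 0.6 threshold are invented, and this is a standard heuristic rather than Dietterich's own method.

```python
# A minimal sketch (assuming scikit-learn) of ensemble agreement as a
# point-wise confidence signal; data and threshold are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Averaged tree votes: the max class probability reflects how strongly
# the ensemble agrees on each individual input.
confidence = model.predict_proba(X_te).max(axis=1)

# Abstain (defer to a human) where the ensemble is close to a coin flip.
abstain = confidence < 0.6
print(f"Abstaining on {abstain.mean():.1%} of test inputs")
```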

Recent advances in AI have led to many success stories of AI technology tackling real-world problems. What are the challenges of deploying AI systems?

AI systems are software systems, so the main challenges are the same as with any software system. First, are we building the right system? Do we correctly understand the users’ needs? Have we correctly expressed user preferences in our reward functions, constraints, and loss functions? Have we done so in a way that respects ethical standards? Second, have we built the system we intended to build? How can we test software components created using machine learning? If the system is adapting online, how can we achieve continuous testing and quality assurance? Third, when ML is employed, the resulting software components (classifiers and similar predictive models) will fail if the input data distribution changes. So we must monitor the data distribution and model the process by which the data are being generated. This is sometimes known as the problem of “model management”. Fourth, how is the deployed system affecting the surrounding social and technical system? Are there unintended side-effects? Is user or institutional behavior changing as a result of the deployment?
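
To make the “model management” point concrete, here is a minimal sketch of distribution monitoring: compare each input feature's live distribution against a training-time reference and flag drift. The two-sample Kolmogorov-Smirnov test, the alpha threshold, and the simulated data are illustrative assumptions, not prescriptions from the interview.

```python
# A minimal sketch of data-distribution monitoring for deployed ML:
# flag features whose live distribution has drifted from the
# training-time reference. The KS test and alpha are one common
# choice among many.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference, live, alpha=0.01):
    """Indices of features whose live distribution differs
    significantly from the training-time reference."""
    flagged = []
    for j in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, j], live[:, j])
        if p_value < alpha:
            flagged.append(j)
    return flagged

rng = np.random.default_rng(0)
ref = rng.normal(size=(5000, 3))  # training-time snapshot
new = rng.normal(size=(500, 3))   # live batch
new[:, 1] += 0.8                  # simulated covariate shift in feature 1
print(drifted_features(ref, new))  # expected: [1] (the shifted feature)
```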

One promising approach is combining humans and AI into a collaborative team. How can we design such a system to successfully tackle challenging high-risk applications? Who should be in charge, the human or the AI?

I have addressed this in a recent short paper (Robust Artificial Intelligence and Robust Human Organizations. Frontiers of Computer Science, 13(1): 1-3). To work well in high-risk applications, human teams must function as so-called “high-reliability organizations” or HROs. When we add AI technology to such teams, we must ensure that it contributes to their high reliability rather than disrupting and degrading it. According to organizational researchers, HROs share five main practices: (a) continuous attention to anomalous and near-miss events, (b) seeking diverse explanations for such events, (c) maintaining continuous situational awareness, (d) practicing improvisational problem solving, and (e) delegating decision-making authority to the team member who has the most expertise about the specific decision, regardless of rank. AI systems in HROs must implement these five practices as well. They must be constantly watching for anomalies and near misses. They must seek multiple explanations for such events (e.g., via ensemble methods). They must maintain situational awareness. They must support joint human-machine improvisational problem solving, such as mixed-initiative planning. And they must build models of the expertise of each team member (including themselves) to know which team member should make the final decision in any situation.

You ask “Who is in charge?” I’m not sure that is the right question. Our goal is to create human-machine teams that are highly reliable as a team. In an important sense, this means every member of the team has responsibility for robust team performance. However, from an ethical standpoint, I think the human team leader should have ultimate responsibility. The task of taking action in a specific situation could be delegated to the AI system, but the team leader has the moral responsibility for that action.

Moving towards integrating AI systems into high-reliability organizations, how can diversity help to achieve this goal?

Diversity is important for generating multiple hypotheses to explain anomalies and near misses. Experience in hospital operating rooms shows that it is often the nurses who first detect a problem or have the right solution. The same has been noted in nuclear power plant operations. Conversely, teams often fail when they engage in “groupthink” and fixate on an incorrect explanation for a problem.

How do you balance being involved in so many different aspects of the AI community?

I try to stay very organized and manage my time carefully. I use a machine learning system called TAPE (Tagging Assistant for Productive Email) developed by my collaborator and student Michael Slater to automatically tag and organize my email. I also take copious notes in OneNote. Oh, and I work long hours…

What was your most difficult professional decision and why?

The most difficult decision is to tell a PhD student that they are not going to succeed in completing their degree. All teachers and mentors are optimistic people. When we meet a new student, we hope they will be very successful. But when it is clear that a student isn’t going to succeed, that is a deep disappointment for the student (of course) but also for the professor.

What is your favorite AI-related movie or book and why?

I really don’t know much of the science fiction literature (in books or films). My favorite is 2001: A Space Odyssey because I think it depicts most accurately how AI could lead to bad outcomes. Unlike in many other stories, HAL doesn’t “go rogue”. Instead, HAL creatively achieves the objective programmed by its creators; unfortunately, as a side effect, it kills the crew.

Interview with Iolanda Leite

Introduction

This column is the seventh in our series profiling senior AI researchers. This month we are happy to interview Iolanda Leite, Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. This is a great opportunity to get to know Iolanda, the new AI Matters co-editor in chief. Welcome on board!

Biography

Iolanda Leite is an Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. She holds a PhD in Information Systems and Computer Engineering from IST, University of Lisbon. Prior to joining KTH, she was a Research Assistant at the Intelligent Agents and Synthetic Characters Group at INESC-ID Lisbon, a Postdoctoral Associate at the Yale Social Robotics Lab, and an Associate Research Scientist at Disney Research Pittsburgh. Iolanda’s research interests are in the areas of Human-Robot Interaction and Artificial Intelligence. She aims to develop autonomous socially intelligent robots that can assist people over long periods of time.

Getting to Know Iolanda Leite

When and how did you become interested in CS and AI?

I became interested in CS at the age of 4 when the first computer arrived at our home. It is more difficult to establish a time to define my interest in AI. I was born in the 80s and have always been fascinated by toys that had some level of “intelligence” or “life-likeness”, like the Tamagotchi or the Furby robots. During my Master’s degree, I chose the Intelligent Systems specialization. That time was probably when I seriously considered a research career in this area.

What professional achievement are you most proud of?

Seeing my students accomplish great things on their own.

What would you have chosen as your career if you hadn’t gone into CS?

I always loved to work with children so maybe something related to child education.

What do you wish you had known as a Ph.D. student or early researcher?

As an early researcher I often had a hard time dealing with the rejection of papers, applications, etc. What I wish the “past me” could know is that if one keeps working hard, things will eventually work out well in the end. In other words, keep faith in the system.

What is the most interesting project you are currently involved with?

All of them! If I have to highlight one, we are working with elementary schools that have classes of newly arrived children in a project where we are using social robots to promote inclusion between newly arrived and local children. This is part of an early career fellowship awarded by the Jacobs Foundation.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

I believe that AI can be used to complement teachers and provide personalized instruction to students of all ages and in a variety of topics. Robotic tutors can play an important role in education because the mere physical presence of a robot has been shown to have a positive impact on how much information students can recall, for example when compared to a virtual agent displayed on a computer screen delivering the exact same content.

How can we make AI more diverse? Do you have a concrete idea on what we as (PhD) students, researchers, and educators in AI can do to increase diversity in our field?

Something we can all do is to participate in outreach initiatives targeting groups underrepresented in AI to show them that there is space for them in the community. If we start bottom-up, in the long term I am positive that our community will be more diverse at all levels and the bias in opportunities, recruiting, etc. will go away.

What was your most difficult professional decision and why?

Leaving my home country (Portugal) after finishing my PhD to continue my research career, because I miss my family and friends, and also the good weather!

How do you balance being involved in so many different aspects of the AI community?

I love what I do and I currently don’t have any hobbies 🙂

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

If AI could fully address any of the Sustainable Development Goals established by the United Nations, it would be (more than) great. Although there are excellent research and funding initiatives in that direction, we are still not there yet.

What is your favorite AI-related movie or book and why?

One of my favorite ones recently was the Westworld TV series, because of the power relationships between the human and the robotic characters. I find it hard to believe that humans will treat robots the way they are treated in the series, but it makes me reflect on what our future interactions with technology that is becoming more personalized and “human-like” might look like.

Autonomous Vehicles: Policy and Technology

In 2018, we discussed language that aims at safety and degrees of autonomy rather than possibly unattainable goals of completely autonomous things. A better approach, at least for the next 5-10 years, is to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Systems Integrated Roadmap, 2017-2042 and Ethically Aligned Design. We also need to consider the limits and possibilities of research on the technologies and their impacts on time frames and the proper focus of policymaking.

In a recent interview, Dr. Harold Szu, a co-founder and former governor of the International Neural Network Society, discusses research ideas that better mimic human thinking and that could dramatically reduce the time to develop autonomous technology. He discusses a possible new level of brain-style computing that incorporates fuzzy membership functions into autonomous control systems. Autonomous technology incorporating human characteristics, along with safe policies and earlier arrival of brain-style technologies, could usher in the next big economic boom. For more details, view the Harold Szu interview.

Interview with Kristian Kersting

This column is the sixth in our series profiling senior AI researchers. This month we interview Kristian Kersting, Professor in Computer Science and Deputy Director of the Centre for Cognitive Science at the Technical University of Darmstadt, Germany.

Kristian Kersting’s Bio

After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI), probabilistic deep programming, and machine learning. Kristian has published over 170 peer-reviewed technical papers and co-authored a book on statistical relational AI. He received the European Association for Artificial Intelligence (EurAI, formerly ECCAI) Dissertation Award 2006 for the best AI dissertation in Europe and two best-paper awards (ECML 2006, AIIDE 2015). He has given several tutorials at top AI conferences, co-chaired several international workshops, and co-founded the international workshop series on Statistical Relational AI (StarAI). He regularly serves on the PC (often at senior level) for several top conferences and co-chaired the PC of ECML PKDD 2013 and UAI 2017. He is the Specialty Editor in Chief for Machine Learning and AI of Frontiers in Big Data, and is/was an action editor of TPAMI, JAIR, AIJ, DAMI, and MLJ.

When and how did you become interested in AI?

As a student, I was attending an AI course of Bernhard Nebel at the University of Freiburg. This was the first time I dived deep into AI. However, my interest in AI was probably triggered earlier. Around the age of 16, I think, I was reading about AI in some popular science magazines. I did not get all the details, but I was fascinated.

What professional achievement are you most proud of?

We were collaborating with biologists on better understanding how plants react to (a)biotic stress, using machine learning to analyze hyperspectral images. We got quite encouraging results. The first submission to a journal, however, got rejected. As you can imagine, I was disappointed. One of the biologists from our team looked at me and said ”Kristian, do not worry, your research helped us a lot.” This made me proud. But I am also proud of the joint work with Martin Mladenov on compressing linear and quadratic programs using fractional automorphisms. This provides optimization flags for ML and AI compilers. Turning them on makes the compilers attempt to reduce the solver costs, making ML and AI automatically faster.

What would you have chosen as your career if you hadn’t gone into CS?

Physics, I guess, but back then I did not see any other option than Computer Science.

What do you wish you had known as a Ph.D. student or early researcher?

That “sleep is for post-docs,” as Michael Littman once said.

Artificial Intelligence = Machine Learning. What’s wrong with this equation?

Machine Learning (ML) and Artificial Intelligence (AI) are indeed similar, but not quite the same. AI is about problem solving, reasoning, and learning in general. To keep it simple, if you can write a very clever program that shows intelligent behavior, it can be AI. But unless the program is automatically learned from data, it is not ML. The easiest way to think of their relationship is to visualize them as concentric circles, with AI as the larger circle and ML sitting inside (and deep learning fitting inside both), since ML also requires writing programs, namely, of the learning process. The crucial point is that they share the idea of using computation as the language for intelligent behavior.

As you experienced AI research and education in the US and in Europe, what are the biggest differences between the two systems and what can we learn from each other?

If you present a new idea, US people will usually respond with “Sounds great, let’s do it!”, while the typical German reply is “This won’t work because …”. Here, AI is no exception. It is much more critically received in Germany than in the US. However, this also provides research opportunities, such as transparent, fair, and explainable AI. Generally, over the past three decades, academia and industry have been converging philosophically and physically much more in the US than in Germany. This facilitates the transfer of AI knowledge via well-trained, constantly learning AI experts, who can then continuously create new ideas within the company/university and keep pace with AI development. To foster AI research and education, the department structure and tenure-track system common in the US is beneficial. On the other hand, Germany offers access to free higher education to all students, regardless of their origin. AI has no borders. We have to take it from the ivory towers and make it accessible for all.

What is the most interesting project you are currently involved with?

Deep learning has made striking advances in enabling computers to perform tasks like recognizing faces or objects, but it does not show the general, flexible intelligence that lets people solve problems without being specially trained to do so. Thus, it is time to boost its IQ. Currently, we are working on deep learning approaches based on sum-product networks and other arithmetic circuits that explicitly quantify uncertainty. Together with colleagues—also from the Centre of Cognitive Science—we are combining the resulting probabilistic deep learning with probabilistic (logical) programming languages. If successful, this would be a big step forward in programming languages, machine learning, and AI.

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

Due to climate change, population growth, and food security concerns, the world has to seek more innovative approaches to protecting and improving crop yield. AI should play a major role here. Next to feeding a hungry world, AI should aim to help eradicate disease and poverty.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

AI can be seen as an expanding and evolving network of ideas, scholars, papers, code, and showcases. Can machines read this data? We should establish the “AI Genome”, a dataset, a knowledge base, an ongoing effort to learn and reason about AI problems, concepts, algorithms, and experiments. This would not only help to curate and personalize the learning experience but also help meet the challenges of reproducible AI research. It would make AI truly accessible for all.

What is your favorite AI-related movie or book and why?

“Ex Machina”, because the Turing test is shaping its plot. It makes me think about current real-life systems that give the impression that they pass the test. However, I think AI is harder than many people think.

Interview with Ayanna Howard

Welcome!  This column is the fifth in our series profiling senior AI researchers. This month focuses on Dr. Ayanna Howard.  In addition to our interview, Dr. Howard was recently interviewed by NPR and they created an animated video about how “Being Different Helped A NASA Roboticist Achieve Her Dream.”

Ayanna Howard’s Bio

Ayanna Howard

Ayanna Howard, Ph.D., is Professor and Linda J. and Mark C. Smith Endowed Chair in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. As an educator, researcher, and innovator, Dr. Howard’s career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 200 peer-reviewed publications in a number of projects – from assistive robots in the home to AI-powered STEM apps for children with diverse learning needs. She has over 20 years of R&D experience covering a number of projects that have been supported by various agencies including: National Science Foundation, NewSchools Venture Fund, Procter and Gamble, NASA, and the Grammy Foundation. Dr. Howard received her B.S. in Engineering from Brown University, her M.S.E.E. from the University of Southern California, her M.B.A. from the Drucker Graduate School of Management, and her Ph.D. in Electrical Engineering from the University of Southern California. To date, her unique accomplishments have been highlighted through a number of awards and articles, including highlights in USA Today, Upscale, and TIME Magazine, as well as being named an MIT Technology Review top young innovator and recognized as one of the 23 most powerful women engineers in the world by Business Insider. In 2013, she also founded Zyrobotics, which is currently licensing technology derived from her research and has released its first suite of STEM educational products to engage children of all abilities. From 1993-2005, Dr. Howard was at NASA’s Jet Propulsion Laboratory. She has also served as the Associate Director of Research for the Georgia Tech Institute for Robotics and Intelligent Machines and as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech.

How did you become interested in Computer Science and AI?

I first became interested in robotics as a young, impressionable, middle school girl. My motivation was the television series called The Bionic Woman – my goal in life, at that time, was to gain the skills necessary to build the bionic woman. I figured that I had to acquire combined skill sets in engineering and computer science in order to accomplish that goal. With respect to AI, I became interested in AI after my junior year in college, when I was required to design my first neural network during my third NASA summer internship in 1992. I quickly saw that, if I could combine the power of AI with Robotics – I could enable the ambitious dreams of my youth.

What was your most difficult professional decision and why?

The most difficult professional decision I had to make, in the past, was to leave NASA and pursue robotics research as an academic. The primary place I’d worked from 1990 until 2005 was NASA. I’d grown over those 15 years in my technical job positions from summer intern to computer scientist (after college graduation) to information systems engineer, robotics researcher, and then senior robotics researcher. And then I was faced with the realization that, in order to push my ambitious goals in robotics, I needed more freedom to pursue robotics applications outside of space exploration. The difficulty was that I still enjoyed the space robotics research efforts I was leading at NASA, but I also felt a need to expand beyond my intellectual comfort zone.

What professional achievement are you most proud of?

The professional achievement I am proudest of is the founding of a startup company, Zyrobotics, which has commercialized educational products based on technology licensed from my lab at Georgia Tech. I’m most proud of this achievement because it allowed me to combine all of the hard-knock lessons I’ve learned in designing artificial intelligence algorithms, adaptive user interfaces, and human-robot interaction schemes with a real-world application that has large societal impact – that of engaging children of diverse abilities in STEM education, including coding.

What do you wish you had known as a Ph.D. student or early researcher?

As a Ph.D. student, I wish I had known that finding a social support group is just as important to your academic growth as finding an academic/research home. I consider myself a fairly stubborn person – I consider words of discouragement a challenge to prove others wrong. But psychological death by a thousand cuts (i.e., words of negativism) is a reality for many early researchers. A social support group helps to balance the negativism that others, sometimes unconsciously, subject others to.

What would you have chosen as your career if you hadn’t gone into CS?

If I hadn’t gone into the field of Robotics/AI, I would have chosen a career as a forensic scientist. I’ve always loved puzzles and in forensic science, as a career, I would have focused on solving life puzzles based on the physical evidence. The data doesn’t lie (although, as we know, you can bias the data so it seems to).

What is a “typical” day like for you?

Although I have no “typical” day, I can categorize my activities into five main buckets, in no priority order: 1) human-human interactions, 2) experiments and deployments, 3) writing (including emails), 4) life-balance activities, and 5) thinking/research activities. Human-human interactions involve everything from meeting with my students to talking with special education teachers to one-on-one observations in the pediatric clinic. Experiments and deployments involve everything from running a participant study to evaluating the statistics associated with a study hypothesis. Writing involves reviewing my students’ publication drafts, writing proposals, and, of course, addressing email action items. Life-balance activities include achieving my daily exercise goals as well as ensuring I don’t miss any important family events. Finally, thinking/research activities cover anything related to coding up a new algorithm, consulting with my company, or jotting down a new research concept on a scrap of paper.

What is the most interesting project you are currently involved with?

The most interesting project that I currently lead involves an investigation in developing robot therapy interventions for young children with motor disabilities. For this project, we have developed an interactive therapy game called SuperPop VR that requires children to play within a virtual environment based on a therapist-designed protocol. A robot playmate interacts with each child during game play and provides both corrective and motivational feedback. An example of corrective feedback is when the robot physically shows the child how to interact with the game at the correct movement speed (as compared to a normative data profile). An example of motivational feedback is when the robot, through social interaction, encourages the child when they have accomplished their therapy exercise goal. We’ve currently deployed the system in pilot studies with children with Cerebral Palsy and have shown positive changes with respect to their kinematic outcome metrics. We’re pushing the state-of-the-art in this space by incorporating additional factors for enhancing the long-term engagement through adaptation of both the therapy protocol as well as the robot behaviors.

How do you balance being involved in so many different aspects of the AI community?

In order for me to become involved in any new AI initiative and still maintain a healthy work-life balance, I ask myself: Is this initiative something that’s important to me and aligned with my value system? Can I provide a unique perspective to this initiative that would help to make a difference? Is it as important or more important than other initiatives I’m involved in? And is there a current activity that I can replace so I have time to commit to the initiative now or in the near future? If the answer is yes to all those questions, then I’m usually able to find an optimal balance of involvement in the different AI initiatives of interest.

What is your favorite CS or AI-related movie or book and why?

My favorite AI-related movie is The Matrix. What fascinates me about The Matrix is the symbiotic relationship that exists between humans and intelligent agents (both virtual and physical). One entity cannot seem to exist without the other. And operating in the physical world is much more difficult than operating in the virtual one, although most agents don’t realize that difference until they accept the decision to navigate in both types of worlds.

Is it too late to address the moral, ethical, and economic issues introduced by the commercialization of AI?

What do recent deployments of AI mean to the public or the average citizen? Will AI be a transparent technology, invisible at the public policy level? Is it too late to address the moral, ethical, and economic issues introduced by the commercialization of AI?

On September 14, 2017, the NEOACM (Northeast Ohio ACM) professional chapter held the “We come in peace 2” AI panel, hosted by the McDonough Museum of Fine Art in Youngstown, Ohio. The members of the panel were: Doug McCollough, CIO of Dublin, Ohio; Dr. Shiqi Zhang, AI and robotics researcher at Cleveland State University; Andrew Konya, co-founder & CEO of Remesh, a Cleveland-based AI company; Dr. Jay Ramanathan, Executive Director of Arthapedia.zone; Paul Carlson, intelligent community strategist for Columbus, Ohio; and Dr. Mark Vopat, Professor of Political Philosophy and Applied Ethics at Youngstown State University. Our moderator was Nikola Danaylov, author of the best-selling book “Conversations with the Future: 21 Visions for the 21st Century”.

The goal of the panel was to discuss the latent consequences, both positive and negative, of recent AI-based technologies that are being deployed and reaching the general public. The scope ranged from the ethics and policy that must be considered as smart cities are brought online to the impact of robotics and decision-making technologies in law enforcement. The panel visited subject matter as diverse as cognitive computing and agent belief. While the focus originally started out on AI deployments in cities in the state of Ohio, it became clear that most of the issues were universal in nature. The panel started at 6:00 p.m. EDT and was just getting warmed up when we had to bring it to a close at 8:00 p.m. EDT. There just wasn’t time to get to all of the questions, or to do justice to all of the issues and topics that were introduced during the panel. There was a burning desire to continue the conversation and debate. So after a discussion with some of our fellow ACM members at SIGAI and the AI panelists, we’ve decided to carry over some of that discussion to an AI Matters blog in hopes that we could engage the broader AI community, as well as have a more flexible format that would give us ample time and space. Some of the highlights of the AI panel can be found at:

2017 AI Panel “We come in peace”

The plan is to tackle some of the subject matter in this blog and to handle other aspects in webinar form. We hope that our fellow SIGAI members will feel free to contribute to this conversation as it develops, providing questions, insights, suggestions, and direction. The moderator Nikola Danaylov and the panelists have all agreed to participate in this blog, so if this blog goes anything like the panel discussion, “hold on to your seats”! We want to dive into questions such as: What does this recent incarnation of “Artificial Intelligence” mean to the public or for the average citizen? What impact will it have on infrastructure and the economy? From a commercialization perspective, has “AI” been displaced by machine learning and data science? If AI and machine learning become transparent technologies, will it be possible to regulate their impact on society? Is it already too late to stop any potential negative impact of AI-based technologies? And I, for one, am looking forward to a continuation of the discussion of just what constitutes agent beliefs, where they come from, and how agent belief systems will be dealt with at the public policy or commercialization level. And then again, perhaps even these are the wrong questions to be asking if our concern is the public good. We hope you join us as we attempt to deal with these questions and more.

Cheers

Cameron Hughes
Current Chair NEOACM Professional Chapter
SIGAI Member

AI Matters Interview: Getting to Know Maja Mataric

Welcome!  This month we interview Maja Mataric, Vice Dean for Research and the Director of the Robotics and Autonomous Systems Center at the University of Southern California.

Maja Mataric’s Bio

Maja Mataric named as one of 10 up-and-coming LA innovators to watch

Maja Matarić is Professor and Chan Soon-Shiong Chair in the Computer Science Department, Neuroscience Program, and the Department of Pediatrics at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center (RASC), co-director of the USC Robotics Research Lab, and Vice Dean for Research in the USC Viterbi School of Engineering. She received her PhD in Computer Science and Artificial Intelligence from MIT in 1994, her MS in Computer Science from MIT in 1990, and her BS in Computer Science from the University of Kansas in 1987.

How did you become interested in robotics and AI?

When I moved to the US in my teens, my uncle wisely advised me that “computers are the future” and that I should study computer science. But I was always interested in human behavior. So AI was the natural combination of the two, but I really wanted to see behavior in the real world, and that is what robotics is about. Now that is especially interesting as we can study the interaction between people and robots, my area of research focus.

Do you have any suggestions for people interested in doing outreach to K-12 students or the general public?

Getting involved with K-12 students is incredibly rewarding! I do a huge amount of K-12 outreach, including students, teachers, and families. I find the best way to do so is by including my PhD students and undergraduates, who are naturally more relatable to the K-12 students: I always have them say what “grade” they are in and how much more fun “school” is once they get to do research. The other key parts of outreach include letting the audience do more than observe: the audience should get involved, touch, and ask questions. And finally, the audience should get to take something home, such as concrete links to more information and accessible and affordable activities, so the outreach experience is not just a one-off. Above all, I think it’s critical to convey that STEM is changing on almost a daily basis, that everyone can do it, and that whoever gets into it can shape its future and, with it, the future of society.

How do you think robotics or AI researchers in academia should best connect to industry?

Recently, connections to industry have become especially pressing in robotics, which has gone, during my career so far, from being a small area of specialization to a massive and booming area of employment opportunity and huge technology leaps. This means undergraduate and graduate students need to be trained in the latest and most relevant skills and methods, and all students need to be inspired and empowered to pursue skills and careers in these areas, not just those who self-select into them as their most obvious path; we have to proactively work on diversity and inclusion, as these are needs clearly articulated by industry. There are great models of companies that have strong outreach to researchers, such as Microsoft and Google to name two, both holding annual faculty research summits and offering grant opportunities for faculty to connect with their research and business units. As in all contexts, it is best to develop personal relationships with contacts at relevant companies, as they tend to lead to the most meaningful collaborations.

What was your most difficult professional decision and why?

It’s hard to pick one, but here are, briefly, three that are interesting: 1) I had to actively choose whether to speak up against unfair treatment when I was still pre-tenure and in a very under-represented group, or to stay silent and not make waves. I spoke up and never regretted being true to myself. 2) I had to choose whether to take part of my time away from research to get involved and stay involved in academic administration. I chose to do so, but also chose to never let it take more than the official half time, and never stomp on my research. 3) I had to choose whether to leave academia for a startup or industry. These days, that is an increasingly complex choice, but as long as academia allows us to explore and experiment, it will remain the best choice.

What professional achievement are you most proud of?

The successes of my students and of my research field. Seeing my PhD students receive presidential awards while having balanced lives with families and still responding to my emails just makes me beam with pride. Pioneering a field, socially assistive robotics, that focuses on helping users with special needs, from those with autism to those with Alzheimer’s, to reach their potential. Seeing that field become established and grow from the enthusiasm of wonderful students and young researchers is an unparalleled source of professional satisfaction.

What do you wish you had known as a Ph.D. student or early researcher?

Nobody, no matter how senior or famous, knows how things are going to work out and how much another person can achieve. So when receiving advice, believe encouragement and utterly ignore discouragement. I am fortunate to be very stubborn by nature, but it was still a hard lesson and I see too many young people taking advice too seriously; it’s good to get advice but take it with a grain of salt: keep pushing for what you enjoy and believe in, even if it makes some waves and raises some eyebrows.

What would you have chosen as your career if you hadn’t gone into robotics?

I think about that when I talk to K-12 students; I try to tell them that it is fine to have a meandering path. I finally understand that what really fascinates me is people and what makes us tick. I could have studied that from various perspectives, including medicine, psychology, neuroscience, anthropology, economics, history… but since I was advised (by my uncle, see above) to go into computer science, I found a way to connect those paths. It’s almost arbitrary but it turned out to be lucky, as I love what I do.

What is a “typical” day like for you?

I have no typical day, they are all crazy in enjoyable ways. I prefer to spend my time in face-to-face interactions with people, and there are so many to collaborate with, from PhD students and undergraduate students, to research colleagues, to dean’s office colleagues, to neighbors on my floor and around my lab, to K-12 students we host. It’s all about people. And sure, there is a lot of on-line work, too, too much of it given how much less satisfying it is compared to human-human interactions, but we have to read, review, evaluate, recommend, rank, approve, certify, link, purchase, pay, etc.

What is the most interesting project you are currently involved with?

Since I got involved with socially assistive robotics, I truly love all my research projects: we are working with children with autism, on reducing pain in hospital patients, and on addressing anxiety, loneliness, and isolation in the elderly. I share with my students the curiosity to try new things and enjoy the opportunity to do so collaboratively and often in a very interdisciplinary way, so there is never a shortage of new things to discover, learn, and overcome, and, hopefully, to do some good.

How do you balance being involved in so many different aspects of the robotics and AI communities?

With daily difficult choices: it’s an hourly struggle to focus on what is most important, set the rest aside, and then get back to enough of it but not all of it, and, above all, to know what belongs in which category. I find that my family provides an anchoring balance that helps greatly with prioritizing.

What is your favorite CS or AI-related movie or book and why?

“WALL-E”: it’s a wonderfully human (vulnerable, caring, empathetic, idealistic) portrayal of a robot, one that has all the best of our qualities and none of the worst. After that, “Robot & Frank” and “Big Hero 6”.

AI Matters Interview with Peter Stone

Welcome!  This column is the third in our series profiling senior AI researchers. This month focuses on Peter Stone, a Professor at the University of Texas at Austin and the COO and co-founder of Cogitai, Inc.

Peter Stone’s Bio


Dr. Peter Stone is the David Bruton, Jr. Centennial Professor and Associate Chair of Computer Science, as well as Chair of the Robotics Portfolio Program, at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents’ Outstanding Teaching Award and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone’s research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs – Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003, he won an NSF CAREER award for his proposed long-term research on learning agents in dynamic, collaborative, and adversarial multiagent environments. In 2007 he received the prestigious IJCAI Computers and Thought Award, given biennially to the top AI researcher under the age of 35, and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award.

How did you become interested in AI?

The first time I remember becoming interested in AI was on a field trip to the University of Buffalo when I was in middle school or early high school (I don’t remember which).  The students rotated through a number of science labs, and one of the ones I ended up in was a computer science “lab.”  The thing that stands out in my mind is the professor showing us pictures of various shapes, such as triangles and squares, pointing out how easy it was for us to distinguish them, but then asserting that nobody knew how to write a computer program to do so (to date myself, this must have been the mid ’80s).  I had already started programming computers, but this got me interested in the concept of modeling intelligence with computers.

What made you decide the time was right for an AI startup?

Reinforcement learning has been a relatively “niche” area of AI since I became interested in it in my first year of graduate school.  But with recent advances, I became convinced that now was the time to move to the next level and work on problems that are only possible to attack in a commercial setting.

How did I become convinced?  For that, I owe the credit to Mark Ring, one of my co-founders at Cogitai.  He and I met at the first NIPS conference I attended, back in the mid ’90s, and we stayed in touch intermittently.  Then, in the fall of 2014, he visited Austin and got in touch.  He pitched me the idea of starting a company based on continual learning, and it just made sense.

What professional achievement are you most proud of?

I’m made proud over and over again by the achievements of my students and postdocs.  I’ve been very fortunate to work with a phenomenal group of individuals, both technically and personally.  Nothing makes me happier than seeing each succeed in his or her own way, and to think that I played some small role in it.

What do you wish you had known as a Ph.D. student or early researcher?

It’s a cliché, but it’s true.  There’s no better time of life than when you’re a Ph.D. student.  You have the freedom to pursue one idea that you’re passionate about to the greatest extent possible, with very few other responsibilities.  You don’t have the status, appreciation, or salary that you deserve and that you’ll eventually, inevitably, get.  And yes, there are pressures.  But your job is to learn and to change the world in some small way.  I didn’t appreciate it when I was a student, even though my advisor (Manuela Veloso) told me.  And I don’t expect my students to believe me when I tell them now.  But over time I hope they come to appreciate it as I have.  I loved my time as a Ph.D. student.  But if I had known how many aspects of that time of life would be fleeting, I might have appreciated it even more.

What would you have chosen as your career if you hadn’t gone into AI?

I have no idea.  When I graduated from the University of Chicago as an undergrad, I applied to 4 CS Ph.D. programs, the Peace Corps, and Teach for America.  CMU was the only Ph.D. program that admitted me.  So I probably would have done the Peace Corps or Teach for America.  Who knows where that would have led me?

What is a “typical” day like for you?

I live a very full life.  Every day I spend as much time with my family as they’ll let me (teenagers…) and get some sort of exercise (usually soccer, swimming, running, or biking).  I also play my violin about 3-4 times per week.  I schedule those things, and other aspects of my social life, and then work in all my “free” time.  That usually means catching up on email in the morning, attending meetings with students and colleagues either in person or by Skype, reading articles, and editing students’ papers.  And I work late at night and on weekends when there’s no “fun” scheduled.  But really, there’s no “typical” day.  Some days I’m consumed with reading; others with proposal writing; others with negotiations with prospective employees; others with university politics; others with event organization; others with coming up with new ideas for burning problems.

I do a lot of multitasking, and I’m no better at it than anyone else. But I’m never bored.

How do you balance being involved in so many different aspects of the AI community?

I don’t know.  I have many interests and I can’t help but pursue them all.  And I multitask.

What is your favorite CS or AI-related movie or book and why?

Rather than a book, I’ll choose an author.  As a teenager, I read Isaac Asimov’s books voraciously – both his fiction (of course “I, Robot” made an impression, but the Foundation series was always my favorite) and his non-fiction.  He influenced my thoughts and imagination greatly.