Interview with Thomas Dietterich

Introduction

Welcome to the eighth interview in our series profiling senior AI researchers. This month we are especially happy to interview our SIGAI advisory board member, Thomas Dietterich, Director of Intelligent Systems at the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University.

Tom Dietterich

Biography

Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University, where he joined the faculty in 1985. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His research is motivated by challenging real-world problems, with a special focus on ecological science, ecosystem management, and sustainable development. He is best known for his work on ensemble methods in machine learning, including the development of error-correcting output coding. Dietterich has also invented important reinforcement learning algorithms, including the MAXQ method for hierarchical reinforcement learning. Dietterich has devoted many years of service to the research community. He served as President of the Association for the Advancement of Artificial Intelligence (2014-2016) and as the founding president of the International Machine Learning Society (2001-2008). Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal of Machine Learning Research, and Program Chair of AAAI 1990 and NIPS 2000. Dietterich is a Fellow of the ACM, AAAI, and AAAS.

Getting to Know Tom Dietterich

When and how did you become interested in CS and AI?

I learned to program in Basic in my early teens; I had an uncle who worked for GE on their time-sharing system. I learned Fortran in high school. I tried to build my own adding machine out of TTL chips around that time too. However, despite this interest, I didn’t really know what CS was until I reached graduate school at the University of Illinois. I first engaged with AI when I took a graduate assistant position with Ryszard Michalski on what became machine learning, and I took an AI class from Dave Waltz. I had also studied philosophy of science in college, so I had already thought a bit about how we acquire knowledge from data and experiment.

What would you have chosen as your career if you hadn’t gone into CS?

I had considered going into foreign service, and I have always been interested in policy issues. I might also have gone into technical management. Both of my brothers have been successful technical managers.

What do you wish you had known as a Ph.D. student or early researcher?

I wish I had understood the importance of strong math skills for CS research. I was a software engineer before I was a computer science researcher, and it took me a while to understand the difference. I still struggle with the difference between making an incremental advance within an existing paradigm versus asking fundamental questions that lead to new research paradigms.

What professional achievement are you most proud of?

Developing the MAXQ formalism for hierarchical reinforcement learning.

What is the most interesting project you are currently involved with?

I’m fascinated by the question of how machine learning predictors can have models of their own competence. This is important for making safe and robust AI systems. Today, we have ML methods that give accurate predictions in aggregate, but we struggle to provide point-wise quantification of uncertainty. Related to these questions are algorithms for anomaly detection and open category detection. In general, we need AI systems that can work well even in the presence of “unknown unknowns”.
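To make the “unknown unknowns” concern concrete, here is a minimal sketch (an editorial illustration, not code from Dietterich’s projects) of flagging test-time inputs that fall outside the training distribution, using scikit-learn’s IsolationForest; the data, shift, and contamination rate are all hypothetical:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical training inputs: the distribution the predictor has seen.
    rng = np.random.default_rng(0)
    X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))

    # Fit an anomaly detector on the inputs alone (no labels needed).
    detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

    # Score incoming examples; low scores suggest the input is unlike
    # anything seen in training, so the predictor's answer is suspect.
    X_new = rng.normal(loc=4.0, scale=1.0, size=(5, 4))  # deliberately shifted
    scores = detector.decision_function(X_new)           # lower = more anomalous
    flagged = detector.predict(X_new) == -1              # -1 marks outliers
    print(scores, flagged)

A prediction on a flagged input could then be withheld or routed to a human, which is one simple way a predictor can act on a model of its own competence.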

Recent advances in AI have led to many success stories of AI technology tackling real-world problems. What are the challenges of deploying AI systems?

AI systems are software systems, so the main challenges are the same as with any software system. First, are we building the right system? Do we correctly understand the users’ needs? Have we correctly expressed user preferences in our reward functions, constraints, and loss functions? Have we done so in a way that respects ethical standards? Second, have we built the system we intended to build? How can we test software components created using machine learning? If the system is adapting online, how can we achieve continuous testing and quality assurance? Third, when ML is employed, the resulting software components (classifiers and similar predictive models) will fail if the input data distribution changes. So we must monitor the data distribution and model the process by which the data are being generated. This is sometimes known as the problem of “model management”. Fourth, how is the deployed system affecting the surrounding social and technical system? Are there unintended side-effects? Is user or institutional behavior changing as a result of the deployment?
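The “model management” point lends itself to a small example. The sketch below is an editorial illustration (window sizes and the alpha threshold are assumptions): it monitors each input feature with a two-sample Kolmogorov-Smirnov test from scipy and raises an alert when live data no longer match the training-time reference:

    import numpy as np
    from scipy.stats import ks_2samp

    def drift_alerts(reference, live, alpha=0.01):
        """Return indices of features whose live distribution differs
        from the reference window (two-sample KS test, p < alpha)."""
        alerts = []
        for j in range(reference.shape[1]):
            _, p_value = ks_2samp(reference[:, j], live[:, j])
            if p_value < alpha:
                alerts.append(j)
        return alerts

    rng = np.random.default_rng(1)
    reference = rng.normal(0.0, 1.0, size=(5000, 3))  # training-time data
    live = rng.normal(0.0, 1.0, size=(500, 3))        # recent production data
    live[:, 2] += 2.0                                 # simulate a shift
    print(drift_alerts(reference, live))              # expected: [2]

In production such a check would run on a schedule, triggering retraining or escalation when alerts persist; the KS test here is just a stand-in for whatever monitoring fits the data.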

One promising approach is combining humans and AI into a collaborative team. How can we design such a system to successfully tackle challenging high-risk applications? Who should be in charge, the human or the AI?

I have addressed this in a recent short paper (Robust Artificial Intelligence and Robust Human Organizations. Frontiers of Computer Science, 13(1): 1-3). To work well in high-risk applications, human teams must function as so-called “high-reliability organizations” or HROs. When we add AI technology to such teams, we must ensure that it contributes to their high reliability rather than disrupting and degrading it. According to organizational researchers, HROs share five main practices: (a) continuous attention to anomalous and near-miss events, (b) seeking diverse explanations for such events, (c) maintaining continuous situational awareness, (d) practicing improvisational problem solving, and (e) delegating decision-making authority to the team member who has the most expertise about the specific decision, regardless of rank. AI systems in HROs must implement these five practices as well. They must be constantly watching for anomalies and near misses. They must seek multiple explanations for such events (e.g., via ensemble methods). They must maintain situational awareness. They must support joint human-machine improvisational problem solving, such as mixed-initiative planning. And they must build models of the expertise of each team member (including themselves) to know which team member should make the final decision in any situation.
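As a small illustration of practice (b), here is a sketch (editorial, with invented data and a simple vote-based measure) of using an ensemble’s disagreement as a signal that an event has several competing explanations and deserves human review:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    # Train a small ensemble on bootstrap resamples of the data.
    rng = np.random.default_rng(0)
    ensemble = []
    for _ in range(10):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
        ensemble.append(
            DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

    # Collect each member's vote on new cases; a split vote means the
    # ensemble has competing explanations, so flag the case for review.
    votes = np.array([m.predict(X[:5]) for m in ensemble])  # members x cases
    share = votes.mean(axis=0)                    # fraction voting class 1
    needs_review = np.minimum(share, 1 - share) > 0.3
    print(share, needs_review)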

You ask “Who is in charge?” I’m not sure that is the right question. Our goal is to create human-machine teams that are highly reliable as a team. In an important sense, this means every member of the team has responsibility for robust team performance. However, from an ethical standpoint, I think the human team leader should have ultimate responsibility. The task of taking action in a specific situation could be delegated to the AI system, but the team leader has the moral responsibility for that action.

Moving towards transforming AI systems into high-reliability organizations, how can diversity help to achieve this goal?

Diversity is important for generating multiple hypotheses to explain anomalies and near misses. Experience in hospital operating rooms shows that it is often the nurses who first detect a problem or have the right solution. The same has been noted in nuclear power plant operations. Conversely, teams often fail when they engage in “group think” and fixate on an incorrect explanation for a problem.

How do you balance being involved in so many different aspects of the AI community?

I try to stay very organized and manage my time carefully. I use a machine learning system called TAPE (Tagging Assistant for Productive Email) developed by my collaborator and student Michael Slater to automatically tag and organize my email. I also take copious notes in OneNote. Oh, and I work long hours…

What was your most difficult professional decision and why?

The most difficult decision is to tell a PhD student that they are not going to succeed in completing their degree. All teachers and mentors are optimistic people. When we meet a new student, we hope they will be very successful. But when it is clear that a student isn’t going to succeed, that is a deep disappointment for the student (of course) but also for the professor.

What is your favorite AI-related movie or book and why?

I really don’t know much of the science fiction literature (in books or films). My favorite is 2001: A Space Odyssey because I think it depicts most accurately how AI could lead to bad outcomes. Unlike in many other stories, HAL doesn’t “go rogue”. Instead, HAL creatively achieves the objective programmed by its creators; unfortunately, as a side effect, it kills the crew.

Interview with Iolanda Leite

Introduction

This column is the seventh in our series profiling senior AI researchers. This month we are happy to interview Iolanda Leite, Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. This is a great opportunity to get to know Iolanda, the new AI Matters co-editor in chief. Welcome on board!

Biography

Iolanda Leite is an Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. She holds a PhD in Information Systems and Computer Engineering from IST, University of Lisbon. Prior to joining KTH, she was a Research Assistant at the Intelligent Agents and Synthetic Characters Group at INESC-ID Lisbon, a Postdoctoral Associate at the Yale Social Robotics Lab, and an Associate Research Scientist at Disney Research Pittsburgh. Iolanda’s research interests are in the areas of Human-Robot Interaction and Artificial Intelligence. She aims to develop autonomous socially intelligent robots that can assist people over long periods of time.

Getting to Know Iolanda Leite

When and how did you become interested in CS and AI?

I became interested in CS at the age of 4, when the first computer arrived at our home. It is more difficult to pinpoint when my interest in AI began. I was born in the 80s and have always been fascinated by toys that had some level of “intelligence” or “life-likeness”, like the Tamagotchi or the Furby robots. During my Master’s degree, I chose the Intelligent Systems specialization. That was probably when I seriously considered a research career in this area.

What professional achievement are you most proud of?

Seeing my students accomplish great things on their own.

What would you have chosen as your career if you hadn’t gone into CS?

I always loved to work with children so maybe something related to child education.

What do you wish you had known as a Ph.D. student or early researcher?

As an early researcher I often had a hard time dealing with the rejection of papers, applications, etc. What I wish the “past me” could know is that if one keeps working hard, things will eventually work out. In other words, keep faith in the system.

What is the most interesting project you are currently involved with?

All of them! If I have to highlight one, we are working with elementary schools that have classes of newly arrived children, in a project where we are using social robots to promote inclusion between newly arrived and local children. This is part of an early career fellowship awarded by the Jacobs Foundation.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

I believe that AI can be used to complement teachers and provide personalized instruction to students of all ages and in a variety of topics. Robotic tutors can play an important role in education because the mere physical presence of a robot has been shown to have a positive impact on how much information students can recall, for example when compared to a virtual agent displayed on a computer screen delivering the exact same content.

How can we make AI more diverse? Do you have a concrete idea on what we as (PhD) students, researchers, and educators in AI can do to increase diversity in our field?

Something we can all do is to participate in outreach initiatives targeting groups underrepresented in AI to show them that there is space for them in the community. If we start bottom-up, in the long term I am positive that our community will be more diverse at all levels and the bias in opportunities, recruiting, etc. will go away.

What was your most difficult professional decision and why?

Leaving my home country (Portugal) after finishing my PhD to continue my research career, because I miss my family and friends, and also the good weather!

How do you balance being involved in so many different aspects of the AI community?

I love what I do and I currently don’t have any hobbies 🙂

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

If AI could fully address any of the Sustainable Development Goals established by the United Nations, it would be (more than) great. Although there are excellent research and funding initiatives in that direction, we are still not there yet.

What is your favorite AI-related movie or book and why?

One of my favorite ones recently was the Westworld TV series, because of the power relationships between the human and the robotic characters. I find it hard to believe that humans will treat robots the way they are treated in the series, but it makes me reflect on what our future interactions with technology that is becoming more personalized and “human-like” might look like.

Interview with Kristian Kersting

This column is the sixth in our series profiling senior AI researchers. This month we interview Kristian Kersting, Professor in Computer Science and Deputy Director of the Centre for Cognitive Science at the Technical University of Darmstadt, Germany.

Kristian Kersting’s Bio

After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI), probabilistic deep programming, and machine learning. Kristian has published over 170 peer-reviewed technical papers and co-authored a book on statistical relational AI. He received the European Association for Artificial Intelligence (EurAI, formerly ECCAI) Dissertation Award 2006 for the best AI dissertation in Europe and two best-paper awards (ECML 2006, AIIDE 2015). He gave several tutorials at top AI conferences, co-chaired several international workshops, and co-founded the international workshop series on Statistical Relational AI (StarAI). He regularly serves on the PC (often at senior level) for several top conferences and co-chaired the PC of ECML PKDD 2013 and UAI 2017. He is the Specialty Editor in Chief for Machine Learning and AI of Frontiers in Big Data, and is/was an action editor of TPAMI, JAIR, AIJ, DAMI, and MLJ.

When and how did you become interested in AI?

As a student, I attended an AI course taught by Bernhard Nebel at the University of Freiburg. This was the first time I dived deep into AI. However, my interest in AI was probably triggered earlier. Around the age of 16, I think, I was reading about AI in some popular science magazines. I did not get all the details, but I was fascinated.

What professional achievement are you most proud of?

We were collaborating with biologists on better understanding how plants react to (a)biotic stress, using machine learning to analyze hyperspectral images. We got quite encouraging results. The first submission to a journal, however, got rejected. As you can imagine, I was disappointed. One of the biologists from our team looked at me and said, ”Kristian, do not worry, your research helped us a lot.” This made me proud. But I am also proud of the joint work with Martin Mladenov on compressing linear and quadratic programs using fractional automorphisms. This provides optimization flags for ML and AI compilers. Turning them on makes the compilers attempt to reduce the solver costs, making ML and AI automatically faster.

What would you have chosen as your career if you hadn’t gone into CS?

Physics, I guess, but back then I did not see any other option than Computer Science.

What do you wish you had known as a Ph.D. student or early researcher?

That “sleep is for post-docs,” as Michael Littman once said.

Artificial Intelligence = Machine Learning. What’s wrong with this equation?

Machine Learning (ML) and Artificial Intelligence (AI) are indeed similar, but not quite the same. AI is about problem solving, reasoning, and learning in general. To keep it simple, if you can write a very clever program that shows intelligent behavior, it can be AI. But unless the program is automatically learned from data, it is not ML. The easiest way to think of their relationship is to visualize them as concentric circles, with AI first and ML sitting inside (and deep learning fitting inside both), since ML also requires writing programs, namely, of the learning process. The crucial point is that they share the idea of using computation as the language for intelligent behavior.

As you experienced AI research and education in the US and in Europe, what are the biggest differences between the two systems and what can we learn from each other?

If you present a new idea, US people will usually respond with “Sounds great, let’s do it!”, while the typical German reply is “This won’t work because …”. Here, AI is no exception. It is received much more critically in Germany than in the US. However, this also provides research opportunities, such as transparent, fair, and explainable AI. Generally, over the past three decades, academia and industry have been converging philosophically and physically much more in the US than in Germany. This facilitates the transfer of AI knowledge via well-trained, constantly learning AI experts, who can then continuously create new ideas within the company/university and keep pace with the AI development. To foster AI research and education, the department structure and tenure-track system common in the US are beneficial. On the other hand, Germany offers access to free higher education to all students, regardless of their origin. AI has no borders. We have to take it from the ivory towers and make it accessible for all.

What is the most interesting project you are currently involved with?

Deep learning has made striking advances in enabling computers to perform tasks like recognizing faces or objects, but it does not show the general, flexible intelligence that lets people solve problems without being specially trained to do so. Thus, it is time to boost its IQ. Currently, we are working on deep learning approaches based on sum-product networks and other arithmetic circuits that explicitly quantify uncertainty. Together with colleagues (also from the Centre for Cognitive Science), we are combining the resulting probabilistic deep learning with probabilistic (logical) programming languages. If successful, this would be a big step forward in programming languages, machine learning, and AI.
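For readers unfamiliar with sum-product networks, a toy example (editorial; the structure and weights are invented) shows the core property Kersting alludes to: with a valid sum-product structure, one bottom-up pass yields exact, normalized probabilities, so uncertainty is quantified by construction:

    def bernoulli(p, x):
        """Leaf node: probability of binary value x under parameter p."""
        return p if x == 1 else 1.0 - p

    def spn_joint(x1, x2):
        """A tiny SPN over two binary variables: a sum node (mixture)
        over two product nodes, each a product of independent leaves."""
        c1 = bernoulli(0.9, x1) * bernoulli(0.2, x2)  # product node 1
        c2 = bernoulli(0.3, x1) * bernoulli(0.8, x2)  # product node 2
        return 0.6 * c1 + 0.4 * c2                    # sum node; weights sum to 1

    # The joint is exactly normalized, and marginals come from the same pass:
    total = sum(spn_joint(a, b) for a in (0, 1) for b in (0, 1))  # 1.0
    p_x1 = sum(spn_joint(1, b) for b in (0, 1))                   # P(X1=1) = 0.66
    print(total, p_x1)

In a real arithmetic circuit the same evaluation is a single pass over a DAG, which is what makes exact marginal queries tractable.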

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

Due to climate change, population growth, and food security concerns, the world has to seek more innovative approaches to protecting and improving crop yield. AI should play a major role here. Next to feeding a hungry world, AI should aim to help eradicate disease and poverty.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

AI can be seen as an expanding and evolving network of ideas, scholars, papers, code, and showcases. Can machines read this data? We should establish the “AI Genome”: a dataset, a knowledge base, an ongoing effort to learn and reason about AI problems, concepts, algorithms, and experiments. This would not only help to curate and personalize the learning experience but also to meet the challenges of reproducible AI research. It would make AI truly accessible for all.

What is your favorite AI-related movie or book and why?

“Ex Machina”, because the Turing test shapes its plot. It makes me think about current real-life systems that give the impression that they pass the test. However, I think AI is harder than many people think.