Interview with Iolanda Leite

Introduction

This column is the seventh in our series profiling senior AI researchers. This month we are happy to interview Iolanda Leite, Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. This is a great opportunity to get to know Iolanda, the new AI Matters co-editor-in-chief. Welcome on board!

Biography

Iolanda Leite is an Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. She holds a PhD in Information Systems and Computer Engineering from IST, University of Lisbon. Prior to joining KTH, she was a Research Assistant at the Intelligent Agents and Synthetic Characters Group at INESC-ID Lisbon, a Postdoctoral Associate at the Yale Social Robotics Lab, and an Associate Research Scientist at Disney Research Pittsburgh. Iolanda’s research interests are in the areas of Human-Robot Interaction and Artificial Intelligence. She aims to develop autonomous socially intelligent robots that can assist people over long periods of time.

Getting to Know Iolanda Leite

When and how did you become interested in CS and AI?

I became interested in CS at the age of 4, when the first computer arrived at our home. It is more difficult to pinpoint when my interest in AI began. I was born in the 80s and have always been fascinated by toys that had some level of “intelligence” or “life-likeness,” like the Tamagotchi or the Furby robots. During my Master’s degree, I chose the Intelligent Systems specialization. That was probably when I first seriously considered a research career in this area.

What professional achievement are you most proud of?

Seeing my students accomplish great things on their own.

What would you have chosen as your career if you hadn’t gone into CS?

I always loved to work with children so maybe something related to child education.

What do you wish you had known as a Ph.D. student or early researcher?

As an early researcher I often had a hard time dealing with the rejection of papers, applications, etc. What I wish the “past me” could know is that if one keeps working hard, things will eventually work out in the end. In other words, keep faith in the system.

What is the most interesting project you are currently involved with?

All of them! If I have to highlight one, we are working with elementary schools that have classes of newly arrived children, in a project where we are using social robots to promote inclusion between newly arrived and local children. This is part of an early career fellowship awarded by the Jacobs Foundation.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

I believe that AI can be used to complement teachers and provide personalized instruction to students of all ages and in a variety of topics. Robotic tutors can play an important role in education because the mere physical presence of a robot has been shown to have a positive impact on how much information students can recall, for example when compared to a virtual agent displayed on a computer screen delivering the exact same content.

How can we make AI more diverse? Do you have a concrete idea on what we as (PhD) students, researchers, and educators in AI can do to increase diversity in our field?

Something we can all do is participate in outreach initiatives targeting groups underrepresented in AI, to show them that there is space for them in the community. If we start bottom-up, in the long term I am positive that our community will become more diverse at all levels, and the bias in opportunities, recruiting, etc. will go away.

What was your most difficult professional decision and why?

Leaving my home country (Portugal) after finishing my PhD to continue my research career, because I miss my family and friends, and also the good weather!

How do you balance being involved in so many different aspects of the AI community?

I love what I do and I currently don’t have any hobbies 🙂

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

If AI could fully address any of the Sustainable Development Goals established by the United Nations, it would be (more than) great. Although there are excellent research and funding initiatives in that direction, we are not there yet.

What is your favorite AI-related movie or book and why?

One of my recent favorites was the Westworld TV series, because of the power relationships between the human and the robotic characters. I find it hard to believe that humans will treat robots the way they are treated in the series, but it makes me reflect on what our future interactions with technology that is becoming more personalized and “human-like” might look like.

Autonomous Vehicles: Policy and Technology

In 2018, we discussed language that aims at safety and degrees of autonomy rather than having possibly unattainable goals of completely autonomous things. A better approach, at least for the next 5-10 years, is to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Systems Integrated Roadmap, 2017-2042 and Ethically Aligned Design. We also need to consider the limits and possibilities of research on these technologies, as well as their implications for the time frames and proper focus of policymaking.

In a recent interview, Dr. Harold Szu, a co-founder and former governor of the International Neural Network Society, discusses research ideas that better mimic human thinking and that could dramatically reduce the time to develop autonomous technology. He discusses a possible new level of brain-style computing that incorporates fuzzy membership functions into autonomous control systems. Autonomous technology incorporating human characteristics, along with safe policies and earlier arrival of brain-style technologies, could usher in the next big economic boom. For more details, view the Harold Szu interview.

Discussion Issues for 2019

Facebook, Face Recognition, Autonomous Things, and the Future of Work

Four focus areas of discussion at the end of 2018 are the initial topics for the SIGAI Policy Blog as we start 2019. The following, with links to resources, are important ongoing subjects for our Policy blogsite in the new year:

Facebook continues to draw attention to the general issue of data privacy and the role of personal data in business models. Here are some good resources to check:
NY Times on Facebook Privacy
Facebook Partners
Spotify
Netflix

Facial recognition software is known to be flawed, having side effects of bias, unwanted surveillance, and other problems. The Safe Face Pledge, developed by the Algorithmic Justice League and Georgetown Law’s Center on Privacy & Technology, is an example of emerging efforts to make organizations aware of problems with facial recognition products, for example in autonomous weapons systems and law enforcement. The Safe Face Pledge asks that companies commit to safety in business practices and promote public policy for broad regulation and government oversight of facial recognition applications.

“Autonomous” Things: Degrees of Separation: The R&D for “autonomous” vehicles and other devices that dominate our daily lives poses challenges for technologies as well as for ethics and policy considerations. In 2018, we discussed language that aims at safety and degrees of autonomy rather than having possibly unattainable goals of completely autonomous things. A better approach may be to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Systems Integrated Roadmap, 2017-2042 and Ethically Aligned Design.

The Future of Work and Education is a topic that not only tries to predict the workforce of the future, but also how society needs to prepare for it. Many experts believe that our current school systems are not up to the challenge and that industry and government programs are needed for the challenges emerging in just a few years. See, for example, writing by the Ford Foundation and the World Economic Forum.

We welcome your feedback and discussions as we enter the 2019 world of AI and policy!

ACM SIGAI Industry Award for Excellence in Artificial Intelligence

The ACM SIGAI Industry Award for Excellence in Artificial Intelligence (AI) will be given annually to individuals or teams who created AI applications in recent years in ways that demonstrate the power of AI techniques via a combination of the following features: novelty of application area, novelty and technical excellence of the approach, importance of AI techniques for the approach, and actual and predicted societal impact of the application. The award plaque is accompanied by a prize of $5,000 and will be awarded at the International Joint Conference on Artificial Intelligence through an agreement with the IJCAI Board of Trustees.

After decades of progress in AI theory, research, and development, AI applications are now increasingly moving into the commercial sector. A great deal of pioneering application-level work is being done—from startups to large corporations—and this is influencing commerce and the broad public in a wide variety of ways. This award complements the numerous academic, best paper, and related awards in that it focuses on innovators of fielded AI applications, honoring those who are playing key roles in AI commercialization. The award honors these innovators and highlights their achievements (and thus also the benefit of AI techniques) to computing professionals and the public at large. The award committee will consider applications that are open source or proprietary and that may or may not involve hardware.

Evaluation criteria: The criteria include the following, but there is no fixed weighting of them:

  • Novelty of application area
  • Novelty and technical excellence of the approach
  • Importance of AI techniques for the approach
  • Actual and predicted societal benefits of the fielded application

Eligibility criteria:  Any individual or team, worldwide, is eligible for the award.

Nomination procedure: One nomination and three endorsements must be submitted. The nomination must identify the individual or team members, describe their fielded AI system, and explain how it addresses the award criteria. The nomination must be written by a member of ACM SIGAI. Two of the endorsements must be from members of ACM or ACM SIGAI. Anyone can join ACM SIGAI at any time for an annual membership fee of just US$11 (students) or US$25 (others), even if they are not an ACM member.

Please submit the nomination and endorsements as a single PDF file in an email to SIGAIIndustryAward@acm.org. We will acknowledge receipt of the nomination.

Timeline:

  • Nominations due: March 1, 2019
  • Award announcement: April 25, 2019
  • Award presentation: August 10-16, 2019 at IJCAI in Macao, China

Call for Proposals: Artificial Intelligence Activities Fund

ACM SIGAI invites funding proposals for artificial intelligence (AI) activities with a strong outreach component to either students, researchers, or practitioners not working on AI technologies or to the public in general.

The purpose of this call is to promote a better understanding of current AI technologies, including their strengths and limitations, as well as their promise for the future. Examples of fundable activities include (but are not limited to) AI technology exhibits or exhibitions, holding meetings with panels on AI technology (including on AI ethics) with expert speakers, creating podcasts or short films on AI technologies that are accessible to the public, and holding AI programming competitions. ACM SIGAI will look for evidence that the information presented by the activity will be of high quality, accurate, unbiased (for example, not influenced by company interests), and at the right level for the intended audience.

ACM SIGAI has set aside $10,000 to provide grants of up to $2,000 each, with priority given to a) proposals from ACM affiliated organizations other than conferences (such as ACM SIGAI chapter or ACM chapters), b) out-of-the-box ideas, c) new activities (rather than existing and recurring activities), d) activities with long-term impact, e) activities that reach many people, and f) activities co-funded by others. We prefer not to fund activities for which sufficient funding is already available from elsewhere or that result in profit for the organizers. Note that expert talks on AI technology can typically be arranged with financial support of the ACM Distinguished Speaker program (https://speakers.acm.org/) and then are not appropriate for funding via this call.

A proposal should contain the following information on at most 3 pages:

  • a description of the activity (including when and where it will be held);
  • a budget for the activity and the amount of funding requested, and whether other organizations have been or will be approached for funding (and, if so, for how much);
  • an explanation of how the activity fits this call (including whether it is new or recurring, which audience it will benefit, and how large the audience is);
  • a description of the organizers and other participants (such as speakers) involved in the activity (including their expertise and their affiliation with ACM SIGAI or ACM);
  • a description of what will happen to any unexpected surplus; and
  • the name, affiliation, and contact details (including postal and email address, phone number, and URL) of the corresponding organizer.

Grantees are required to submit reports to ACM SIGAI following completion of their activities with details on how they utilized the funds and other information which might also be published in the ACM SIGAI newsletter “AI Matters.”

The deadline for submissions is 11:59pm on March 15, 2019 (UTC-12). Proposals should be submitted as pdf documents in any style at

https://easychair.org/conferences/?conf=sigaiaaf2019.

The funding decisions of ACM SIGAI are final and cannot be appealed. Some funding earmarked for this call might not be awarded at the discretion of ACM SIGAI, for example, in case the number of high-quality proposals is not sufficiently large. In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. Questions should be directed to Sven Koenig (skoenig@usc.edu).

ACM and ACM SIGAI

ACM brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. As the world’s largest computing society, ACM strengthens the profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM’s reach extends to every part of the globe, with more than half of its 100,000 members residing outside the U.S.  Its growing membership has led to Councils in Europe, India, and China, fostering networking opportunities that strengthen ties within and across countries and technical communities. Their actions enhance ACM’s ability to raise awareness of computing’s important technical, educational, and social issues around the world. See https://www.acm.org/ for more information.

ACM SIGAI brings together academic and industrial researchers, practitioners, software developers, end users, and students who are interested in AI. It promotes and supports the growth and application of AI principles and techniques throughout computing, sponsors or co-sponsors AI-related conferences, organizes the Career Network and Conference for early-stage AI researchers, sponsors recognized AI awards, supports AI journals, provides scholarships to its student members to attend conferences, and promotes AI education and publications through various forums and the ACM digital library. See https://sigai.acm.org/ for more information.

Sven Koenig, ACM SIGAI chair
Sanmay Das, ACM SIGAI vice-chair
Rosemary Paradis, ACM SIGAI secretary/treasurer
Michael Rovatsos, ACM SIGAI conference coordination officer
Nicholas Mattei, ACM SIGAI AI and society officer

Follow the Data

The Ethical Machine: Big Ideas for Designing Fairer AI and Algorithms is a “project that presents ideas to encourage a discussion about designing fairer algorithms” of the Shorenstein Center on Media, Politics, and Public Policy at the Harvard Kennedy School. The November 27, 2018, publication is “Follow the Data! Algorithmic Transparency Starts with Data Transparency” by Julia Stoyanovich and Bill Howe. Their focus is local and municipal governments and NGOs that deliver vital human services in health, housing, and mobility. In the article, they place a welcome emphasis on the role of data, rather than the common focus these days on algorithms alone. They write, “data is used to customize generic algorithms for specific situations—that is to say that algorithms are trained using data. The same algorithm may exhibit radically different behavior—make different predictions; make a different number of mistakes and even different kinds of mistakes—when trained on two different data sets. In other words, without access to the training data, it is impossible to know how an algorithm would actually behave.” See their article for more discussion on designing systems for data transparency.
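Stoyanovich and Howe’s point that the same algorithm behaves differently depending on its training data is easy to demonstrate. The following toy sketch (our own illustration with invented numbers, not from the article) trains the same trivial 1-nearest-neighbor rule on two different data sets and gets opposite decisions for the exact same input:

```python
# Toy illustration: the same learning algorithm, trained on two different
# data sets, makes different predictions for an identical input.

def nearest_neighbor(train, query):
    """1-nearest-neighbor classifier: return the label of the closest
    training point (each training point is a (value, label) pair)."""
    return min(train, key=lambda point: abs(point[0] - query))[1]

# Two training sets for the "same" task, sampled differently.
data_a = [(1.0, "approve"), (4.0, "deny")]
data_b = [(2.8, "approve"), (4.0, "deny")]

applicant = 3.0  # the identical input in both cases
print(nearest_neighbor(data_a, applicant))  # -> deny
print(nearest_neighbor(data_b, applicant))  # -> approve
```

Without seeing `data_a` or `data_b`, an auditor inspecting only the algorithm could not predict which decision the system would make, which is exactly the argument for data transparency.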

US and European Policy

Adam Eisgrau, ACM Director of Global Policy and Public Affairs, published an update on the ACM US and Europe Policy Committees in the November 29 ACM MemberNet.

Interview with Kristian Kersting

This column is the sixth in our series profiling senior AI researchers. This month we interview Kristian Kersting, Professor in Computer Science and Deputy Director of the Centre for Cognitive Science at the Technical University of Darmstadt, Germany.

Kristian Kersting’s Bio

After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI), probabilistic deep programming, and machine learning. Kristian has published over 170 peer-reviewed technical papers and co-authored a book on statistical relational AI. He received the European Association for Artificial Intelligence (EurAI, formerly ECCAI) Dissertation Award 2006 for the best AI dissertation in Europe and two best-paper awards (ECML 2006, AIIDE 2015). He has given several tutorials at top AI conferences, co-chaired several international workshops, and cofounded the international workshop series on Statistical Relational AI (StarAI). He regularly serves on the PC (often at senior level) of several top conferences and co-chaired the PC of ECML PKDD 2013 and UAI 2017. He is the Specialty Editor in Chief for Machine Learning and AI of Frontiers in Big Data, and is/was an action editor of TPAMI, JAIR, AIJ, DAMI, and MLJ.

When and how did you become interested in AI?

As a student, I was attending an AI course of Bernhard Nebel at the University of Freiburg. This was the first time I dived deep into AI. However, my interest in AI was probably triggered earlier. Around the age of 16, I think, I was reading about AI in some popular science magazines. I did not get all the details, but I was fascinated.

What professional achievement are you most proud of?

We were collaborating with biologists to better understand how plants react to (a)biotic stress, using machine learning to analyze hyperspectral images. We got quite encouraging results. The first submission to a journal, however, got rejected. As you can imagine, I was disappointed. One of the biologists from our team looked at me and said “Kristian, do not worry, your research helped us a lot.” This made me proud. I am also proud of the joint work with Martin Mladenov on compressing linear and quadratic programs using fractional automorphisms. This provides optimization flags for ML and AI compilers. Turning them on makes the compilers attempt to reduce the solver costs, making ML and AI automatically faster.

What would you have chosen as your career if you hadn’t gone into CS?

Physics, I guess, but back then I did not see any other option than Computer Science.

What do you wish you had known as a Ph.D. student or early researcher?

That “sleep is for post-docs,” as Michael Littman once said.

Artificial Intelligence = Machine Learning. What’s wrong with this equation?

Machine Learning (ML) and Artificial Intelligence (AI) are indeed similar, but not quite the same. AI is about problem solving, reasoning, and learning in general. To keep it simple, if you can write a very clever program that shows intelligent behavior, it can be AI. But unless the program is automatically learned from data, it is not ML. The easiest way to think of their relationship is to visualize them as concentric circles with AI first and ML sitting inside (with deep learning fitting inside both), since ML also requires writing programs, namely those implementing the learning process. The crucial point is that they share the idea of using computation as the language for intelligent behavior.
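As a small illustration of this distinction (a hypothetical sketch of ours, not from the interview): both programs below show the same kind of "intelligent" behavior, labeling temperatures, but only the second learns its behavior from data, so only the second is ML.

```python
# AI without ML: the "intelligence" is a rule written by the programmer.
def hand_coded(temp):
    return "hot" if temp > 25.0 else "cold"

# ML: the decision threshold is estimated from labeled examples.
def learn_threshold(examples):
    hots = [t for t, label in examples if label == "hot"]
    colds = [t for t, label in examples if label == "cold"]
    # Midpoint between the coldest "hot" and the hottest "cold" example.
    return (min(hots) + max(colds)) / 2

data = [(10.0, "cold"), (18.0, "cold"), (28.0, "hot"), (35.0, "hot")]
threshold = learn_threshold(data)  # 23.0 for this data

def learned(temp):
    return "hot" if temp > threshold else "cold"

# The two programs can disagree: the learned rule depends on the data.
print(hand_coded(24.0), learned(24.0))
```

Note that the learning step is itself an ordinary program, which is the point about ML sitting inside the AI circle: computation remains the common language.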

As you experienced AI research and education in the US and in Europe, what are the biggest differences between the two systems and what can we learn from each other?

If you present a new idea, US people will usually respond with “Sounds great, let’s do it!”, while the typical German reply is “This won’t work because …”. Here, AI is no exception. It is much more critically received in Germany than in the US. However, this also provides research opportunities such as transparent, fair, and explainable AI. Generally, over the past three decades, academia and industry have been converging philosophically and physically much more in the US than in Germany. This facilitates the transfer of AI knowledge via well-trained, constantly learning AI experts, who can then continuously create new ideas within the company/university and keep pace with AI development. To foster AI research and education, the department structure and tenure-track system common in the US is beneficial. On the other hand, Germany offers free higher education to all students, regardless of their origin. AI has no borders. We have to take it out of the ivory towers and make it accessible to all.

What is the most interesting project you are currently involved with?

Deep learning has made striking advances in enabling computers to perform tasks like recognizing faces or objects, but it does not show the general, flexible intelligence that lets people solve problems without being specially trained to do so. Thus, it is time to boost its IQ. Currently, we are working on deep learning approaches based on sum-product networks and other arithmetic circuits that explicitly quantify uncertainty. Together with colleagues—also from the Centre of Cognitive Science—we are combining the resulting probabilistic deep learning with probabilistic (logical) programming languages. If successful, this would be a big step forward in programming languages, machine learning, and AI.

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

Due to climate change, population growth, and food security concerns, the world has to seek more innovative approaches to protecting and improving crop yield. AI should play a major role here. Next to feeding a hungry world, AI should aim to help eradicate disease and poverty.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

AI can be seen as an expanding and evolving network of ideas, scholars, papers, codes and showcases. Can machines read this data? We should establish the “AI Genome”, a dataset, a knowledge base, an ongoing effort to learn and reason about AI problems, concepts, algorithms, and experiments. This would not only help to curate and personalize the learning experience but also to meet the challenges of reproducible AI research. It would make AI truly accessible for all.

What is your favorite AI-related movie or book and why?

“Ex Machina,” because the Turing test shapes its plot. It makes me think about current real-life systems that give the impression that they pass the test. However, I think AI is harder than many people think.

Pew Report on Attitudes Toward Algorithms

Pew Research Center just released a report, Public Attitudes Toward Computer Algorithms, by Aaron Smith, on Americans’ concerns about fairness and effectiveness in making important decisions. The report says “This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual … the survey presented respondents with four different scenarios in which computers make decisions by collecting and analyzing large quantities of public and private data. Each of these scenarios were based on real-world examples of algorithmic decision-making … and included: a personal finance score used to offer consumers deals or discounts; a criminal risk assessment of people up for parole; an automated resume screening program for job applicants; and a computer-based analysis of job interviews. The survey also included questions about the content that users are exposed to on social media platforms as a way to gauge opinions of more consumer-facing algorithms.”
The report is available at http://www.pewinternet.org/2018/11/16/public-attitudes-toward-computer-algorithms/

Joint AAAI/ACM SIGAI Doctoral Dissertation Award

The Special Interest Group on Artificial Intelligence of the Association for Computing Machinery (ACM SIGAI) and the Association for the Advancement of Artificial Intelligence (AAAI) are happy to announce that they have established the Joint AAAI/ACM SIGAI Doctoral Dissertation Award to recognize and encourage superior research and writing by doctoral candidates in artificial intelligence. This annual award is presented at the AAAI Conference on Artificial Intelligence in the form of a certificate and is accompanied by the option to present the dissertation at the AAAI conference as well as to submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI. Up to two Honorable Mentions may also be awarded, also with the option to present their dissertations at the AAAI conference as well as submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI. The award will be presented for the first time at the AAAI conference in 2020 for dissertations that have been successfully defended (but not necessarily finalized) between October 1, 2018 and September 30, 2019. Nominations are welcome from any country, but only English language versions will be accepted. Only one nomination may be submitted per Ph.D. granting institution, including large universities. Dissertations will be reviewed for relevance to artificial intelligence, technical depth and significance of the research contribution, potential impact on theory and practice, and quality of presentation. The details of the nomination process will be announced in early 2019.

Legal AI

AI is impacting law and policy issues as both a tool and a subject area. Advances in AI provide tools for carrying out legal work in business and government, and the use of AI in all parts of society is creating new demands and challenges for the legal profession.

Lawyers and AI Tools

In a recent study, “20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.” The LawGeex AI system attempted to correctly identify basic legal principles in contracts. The results suggest that AI systems can produce higher accuracy in shorter times compared to lawyers. As with other areas of AI applications, issues include trust in automation to make skilled legal decisions, safety in using AI systems, and impacts on the workforce of the future. For legal work, AI systems can potentially reduce the time needed for high-volume and low-risk contracts, freeing lawyers to focus on less mundane work. Policies should focus on automation where possible and safe, and AI for legal work is another example of the need for collaborative roles for humans and AI systems.

AI Impact on Litigation

The other side of tools and content is the emerging litigation, in all parts of society, arising from the use of AI. Understanding the nature of adaptive AI systems can be crucial for fact-finders and difficult to explain to non-experts. Smart policymaking needs to make clear the liability issues and ethics in cases involving the use of AI technology. “Artificial Intelligence and the Role of Expert Witnesses in AI Litigation” by Dani Alexis Ryskamp, writing for The Expert Institute, discusses artificial intelligence in civil claims and the role of expert witnesses in elucidating the complexities of the technology in the context of litigation. “Over the past few decades, everything from motor vehicles to household appliances has become more complex and, in many cases, artificial intelligence only adds to that complexity. For end-users of AI products, determining what went wrong and whose negligence was responsible can be bafflingly complex. Experts retained in AI cases typically come from fields like computer or mechanical engineering, information systems, data analysis, robotics, and programming. They may specialize in questions surrounding hardware, software, 3D-printing, biomechanics, Bayesian logic, e-commerce, or other disciplines. The European Commission recently considered the question of whether to give legal status to certain robots. One of the issues weighed in the decision involved legal liability: if an AI-based robot or system, acting autonomously, injures a person, who is liable?”