Interview with Kristian Kersting

This column is the sixth in our series profiling senior AI researchers. This month we interview Kristian Kersting, Professor in Computer Science and Deputy Director of the Centre for Cognitive Science at the Technical University of Darmstadt, Germany.

Kristian Kersting’s Bio

After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI), probabilistic deep programming, and machine learning. Kristian has published over 170 peer-reviewed technical papers and co-authored a book on statistical relational AI. He received the European Association for Artificial Intelligence (EurAI, formerly ECCAI) Dissertation Award 2006 for the best AI dissertation in Europe and two best-paper awards (ECML 2006, AIIDE 2015). He has given several tutorials at top AI conferences, co-chaired several international workshops, and cofounded the international workshop series on Statistical Relational AI (StarAI). He regularly serves on the PC (often at senior level) for several top conferences and co-chaired the PC of ECML PKDD 2013 and UAI 2017. He is the Specialty Editor in Chief for Machine Learning and AI of Frontiers in Big Data, and is/was an action editor of TPAMI, JAIR, AIJ, DAMI, and MLJ.

When and how did you become interested in AI?

As a student, I attended an AI course taught by Bernhard Nebel at the University of Freiburg. This was the first time I dived deep into AI. However, my interest in AI was probably triggered earlier. Around the age of 16, I think, I was reading about AI in some popular science magazines. I did not get all the details, but I was fascinated.

What professional achievement are you most proud of?

We were collaborating with biologists on better understanding how plants react to (a)biotic stress, using machine learning to analyze hyperspectral images. We got quite encouraging results. The first submission to a journal, however, got rejected. As you can imagine, I was disappointed. One of the biologists from our team looked at me and said, “Kristian, do not worry, your research helped us a lot.” This made me proud. I am also proud of the joint work with Martin Mladenov on compressing linear and quadratic programs using fractional automorphisms. This provides optimization flags for ML and AI compilers: turning them on makes the compilers attempt to reduce the solver costs, making ML and AI automatically faster.
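The compression idea can be illustrated with a toy example. The sketch below is my own illustration, not the authors' implementation: it collapses a small linear program whose variables and constraints are interchangeable under a cyclic symmetry into a one-variable lifted LP, then lifts the solution back. The general method based on fractional automorphisms handles far richer (fractional) symmetries than this single-orbit special case.

```python
# Toy illustration of lifted linear programming: when an LP is mapped onto
# itself by a symmetry, variables in one orbit can share a value at an
# optimum, so the LP collapses. This handles only the single-orbit case.

# Ground LP: minimize c.x  subject to  A x >= b, x >= 0.
c = [1.0, 1.0, 1.0]
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]
b = [1.0, 1.0, 1.0]

# The cyclic shift (0 -> 1 -> 2 -> 0) maps the LP onto itself, so all
# three variables form one orbit and all three constraints form one orbit.
var_orbit = [0, 1, 2]
con_orbit = [0, 1, 2]

# Compress: one lifted variable y; sum a representative row over the orbit.
a_comp = sum(A[con_orbit[0]][j] for j in var_orbit)   # = 2
c_comp = sum(c[j] for j in var_orbit)                 # = 3
b_comp = b[con_orbit[0]]                              # = 1

# The 1-variable compressed LP  min c_comp*y  s.t.  a_comp*y >= b_comp,
# y >= 0, is solved in closed form.
y = b_comp / a_comp          # = 0.5
lifted_obj = c_comp * y      # = 1.5

# Lift back to a ground solution; it is feasible with the same objective.
x = [y] * len(c)
assert all(sum(A[i][j] * x[j] for j in range(3)) >= b[i] - 1e-9
           for i in range(3))
ground_obj = sum(c[j] * x[j] for j in range(3))
```

The compressed LP has one variable and one constraint instead of three of each, yet yields the same optimal value; that size reduction is what makes the solver faster.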

What would you have chosen as your career if you hadn’t gone into CS?

Physics, I guess, but back then I did not see any other option than Computer Science.

What do you wish you had known as a Ph.D. student or early researcher?

That “sleep is for post-docs,” as Michael Littman once said.

Artificial Intelligence = Machine Learning. What’s wrong with this equation?

Machine Learning (ML) and Artificial Intelligence (AI) are indeed similar, but not quite the same. AI is about problem solving, reasoning, and learning in general. To keep it simple, if you can write a very clever program that shows intelligent behavior, it can be AI. But unless the program is automatically learned from data, it is not ML. The easiest way to think of their relationship is to visualize them as concentric circles with AI first and ML sitting inside (with deep learning fitting inside both), since ML also requires writing programs, namely, of the learning process. The crucial point is that they share the idea of using computation as the language for intelligent behavior.

As you experienced AI research and education in the US and in Europe, what are the biggest differences between the two systems and what can we learn from each other?

If you present a new idea, US people will usually respond with “Sounds great, let’s do it!”, while the typical German reply is “This won’t work because …”. Here, AI is no exception. It is much more critically received in Germany than in the US. However, this also provides research opportunities such as transparent, fair and explainable AI. Generally, over the past three decades, academia and industry have been converging philosophically and physically much more in the US than in Germany. This facilitates the transfer of AI knowledge via well-trained, constantly learning AI experts, who can then continuously create new ideas within the company/university and keep pace with the AI development. To foster AI research and education, the department structure and tenure-track system common in the US are beneficial. On the other hand, Germany offers access to free higher education to all students, regardless of their origin. AI has no borders. We have to take it from the ivory towers and make it accessible for all.

What is the most interesting project you are currently involved with?

Deep learning has made striking advances in enabling computers to perform tasks like recognizing faces or objects, but it does not show the general, flexible intelligence that lets people solve problems without being specially trained to do so. Thus, it is time to boost its IQ. Currently, we are working on deep learning approaches based on sum-product networks and other arithmetic circuits that explicitly quantify uncertainty. Together with colleagues—also from the Centre for Cognitive Science—we are combining the resulting probabilistic deep learning with probabilistic (logical) programming languages. If successful, this would be a big step forward in programming languages, machine learning and AI.
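To make the sum-product network idea concrete, here is a minimal, hand-built example over two binary variables (a toy illustration with made-up parameters, not the group's actual models): product nodes factorize independent parts, sum nodes are weighted mixtures, evidence is set through leaf indicators, and marginalizing a variable amounts to setting both of its indicators to 1.

```python
# A minimal sum-product network (SPN) over binary variables X1 and X2.
# Structure: a root sum node mixing two components, each a product of
# independent Bernoulli sum nodes. All weights are illustrative only.

def spn(indicators):
    # Component A: P(X1=1)=0.8, P(X2=1)=0.3; component B: 0.2 and 0.9.
    comp_a = (0.8 * indicators[("X1", 1)] + 0.2 * indicators[("X1", 0)]) * \
             (0.3 * indicators[("X2", 1)] + 0.7 * indicators[("X2", 0)])
    comp_b = (0.2 * indicators[("X1", 1)] + 0.8 * indicators[("X1", 0)]) * \
             (0.9 * indicators[("X2", 1)] + 0.1 * indicators[("X2", 0)])
    # Root sum node mixes the two components with weights 0.6 and 0.4.
    return 0.6 * comp_a + 0.4 * comp_b

def joint(x1, x2):
    # Full evidence: exactly one indicator per variable is 1.
    ind = {("X1", 1): float(x1 == 1), ("X1", 0): float(x1 == 0),
           ("X2", 1): float(x2 == 1), ("X2", 0): float(x2 == 0)}
    return spn(ind)

def marginal_x1(x1):
    # Marginalize X2 by setting both of its indicators to 1.
    ind = {("X1", 1): float(x1 == 1), ("X1", 0): float(x1 == 0),
           ("X2", 1): 1.0, ("X2", 0): 1.0}
    return spn(ind)

total = sum(joint(a, b) for a in (0, 1) for b in (0, 1))
```

Because every node is a sum or a product, the joint probabilities sum to 1 and any marginal or conditional query costs a single bottom-up pass through the network; this tractable, exact uncertainty quantification is what distinguishes such circuits from ordinary deep networks.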

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

Due to climate change, population growth, and food security concerns, the world has to seek more innovative approaches to protecting and improving crop yield. AI should play a major role here. Next to feeding a hungry world, AI should aim to help eradicate disease and poverty.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

AI can be seen as an expanding and evolving network of ideas, scholars, papers, code, and showcases. Can machines read this data? We should establish the “AI Genome”: a dataset, a knowledge base, an ongoing effort to learn and reason about AI problems, concepts, algorithms, and experiments. This would not only help to curate and personalize the learning experience but also to meet the challenges of reproducible AI research. It would make AI truly accessible for all.

What is your favorite AI-related movie or book and why?

“Ex Machina,” because the Turing test shapes its plot. It makes me think about current real-life systems that give the impression that they pass the test. However, I think AI is harder than many people think.

Pew Report on Attitudes Toward Algorithms

Pew Research Center just released a report, Public Attitudes Toward Computer Algorithms, by Aaron Smith, on Americans’ concerns about fairness and effectiveness in making important decisions. The report says “This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual … the survey presented respondents with four different scenarios in which computers make decisions by collecting and analyzing large quantities of public and private data. Each of these scenarios were based on real-world examples of algorithmic decision-making … and included: a personal finance score used to offer consumers deals or discounts; a criminal risk assessment of people up for parole; an automated resume screening program for job applicants; and a computer-based analysis of job interviews. The survey also included questions about the content that users are exposed to on social media platforms as a way to gauge opinions of more consumer-facing algorithms.”
The report is available at http://www.pewinternet.org/2018/11/16/public-attitudes-toward-computer-algorithms/

Joint AAAI/ACM SIGAI Doctoral Dissertation Award

The Special Interest Group on Artificial Intelligence of the Association for Computing Machinery (ACM SIGAI) and the Association for the Advancement of Artificial Intelligence (AAAI) are happy to announce that they have established the Joint AAAI/ACM SIGAI Doctoral Dissertation Award to recognize and encourage superior research and writing by doctoral candidates in artificial intelligence. This annual award is presented at the AAAI Conference on Artificial Intelligence in the form of a certificate and is accompanied by the option to present the dissertation at the AAAI conference as well as to submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI. Up to two Honorable Mentions may also be awarded, also with the option to present their dissertations at the AAAI conference as well as submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI.

The award will be presented for the first time at the AAAI conference in 2020 for dissertations that have been successfully defended (but not necessarily finalized) between October 1, 2018 and September 30, 2019. Nominations are welcome from any country, but only English language versions will be accepted. Only one nomination may be submitted per Ph.D. granting institution, including large universities.

Dissertations will be reviewed for relevance to artificial intelligence, technical depth and significance of the research contribution, potential impact on theory and practice, and quality of presentation. The details of the nomination process will be announced in early 2019.

Legal AI

AI is impacting law and policy issues as both a tool and a subject area. Advances in AI provide tools for carrying out legal work in business and government, and the use of AI in all parts of society is creating new demands and challenges for the legal profession.

Lawyers and AI Tools

In a recent study, “20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.” The LawGeex AI system attempted correct identification of basic legal principles in contracts. The results suggest that AI systems can produce higher accuracy in shorter times compared to lawyers. As with other areas of AI applications, issues include trust in automation to make skilled legal decisions, safety in using AI systems, and impacts on the workforce of the future. For legal work, AI systems potentially reduce the time needed for high-volume and low-risk contracts and give lawyers more time for less mundane work. Policies should focus on automation where possible and safe; AI for legal work is another example of the need for collaborative roles for humans and AI systems.

AI Impact on Litigation

The other side of tools and content is the emerging litigation in all parts of society from the use of AI. Understanding the nature of adaptive AI systems can be crucial for fact-finders and difficult to explain to non-experts. Smart policymaking needs to make clear the liability issues and ethics in cases involving the use of AI technology. In “Artificial Intelligence and the Role of Expert Witnesses in AI Litigation,” written for The Expert Institute, Dani Alexis Ryskamp discusses artificial intelligence in civil claims and the role of expert witnesses in elucidating the complexities of the technology in the context of litigation. “Over the past few decades, everything from motor vehicles to household appliances has become more complex and, in many cases, artificial intelligence only adds to that complexity. For end-users of AI products, determining what went wrong and whose negligence was responsible can be bafflingly complex. Experts retained in AI cases typically come from fields like computer or mechanical engineering, information systems, data analysis, robotics, and programming. They may specialize in questions surrounding hardware, software, 3D-printing, biomechanics, Bayesian logic, e-commerce, or other disciplines. The European Commission recently considered the question of whether to give legal status to certain robots. One of the issues weighed in the decision involved legal liability: if an AI-based robot or system, acting autonomously, injures a person, who is liable?”

FTC Hearing on AI and Algorithms

FTC Hearing on AI and Algorithms: November 13 and 14 in Washington, DC

From the FTC: The hearing will examine competition and consumer protection issues associated with the use of algorithms, artificial intelligence, and predictive analytics in business decisions and conduct. See the detailed agenda. The record of that proceeding will be open until mid-February. To further its consideration of these issues, the agency seeks public comment on the questions, and it welcomes input on other related topics not specifically listed in the 25 questions.

Please send your thoughts to lrm@gwu.edu on what SIGAI might submit in response to the 25 specific questions posed by the Commission. See below. The hearing will inform the FTC, other policymakers, and the public of
* the current and potential uses of these technologies;
* the ethical and consumer protection issues that are associated with the use of these technologies;
* how the competitive dynamics of firm and industry conduct are affected by the use of these technologies; and
* policy, innovation, and market considerations associated with the use of these technologies.

25 specific questions posed by the FTC

Background on Algorithms, Artificial Intelligence, and Predictive Analytics, and Applications of the Technologies

  1. What features distinguish products or services that use algorithms, artificial intelligence, or predictive analytics? In which industries or business sectors are they most prevalent?
  2. What factors have facilitated the development or advancement of these technologies? What types of resources were involved (e.g., human capital, financial, other)?
  3. Are there factors that have impeded the development of these technologies? Are there factors that could impede further development of these technologies?
  4. What are the advantages and disadvantages for consumers and for businesses of utilizing products or services facilitated by algorithms, artificial intelligence, or predictive analytics?
  5. From a technical perspective, is it sometimes impossible to ascertain the basis for a result produced by these technologies? If so, what concerns does this raise?
  6. What are the advantages and disadvantages of developing technologies for which the basis for the results can or cannot be determined? What criteria should determine when a “black box” system is acceptable, or when a result should be explainable?

Common Principles and Ethics in the Development and Use of Algorithms, Artificial Intelligence, and Predictive Analytics

  1. What are the main ethical issues (e.g., susceptibility to bias) associated with these technologies? How are the relevant affected parties (e.g., technologists, the business community, government, consumer groups, etc.) proposing to address these ethical issues? What challenges might arise in addressing them?
  2. Are there ethical concerns raised by these technologies that are not also raised by traditional computer programming techniques or by human decision-making? Are the concerns raised by these technologies greater or less than those of traditional computer programming or human decision-making? Why or why not?
  3. Is industry self-regulation and government enforcement of existing laws sufficient to address concerns, or are new laws or regulations necessary?
  4. Should ethical guidelines and common principles be tailored to the type of technology involved, or should the goal be to develop one overarching set of best practices?

Consumer Protection Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  1. What are the main consumer protection issues raised by algorithms, artificial intelligence, and predictive analytics?
  2. How well do the FTC’s current enforcement tools, including the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, address issues raised by these technologies?
  3. In recent years, the FTC has held public forums to examine the consumer protection questions raised by artificial intelligence as used in certain contexts (e.g., the 2017 FinTech Forum on artificial intelligence and blockchain and the 2011 Face Facts Forum on facial recognition technology). Since those events, have technological advancements, or the increased prevalence of certain technologies, raised new or increased consumer protection concerns?
  4. What roles should explainability, risk management, and human control play in the implementation of these technologies?
  5. What choices and notice should consumers have regarding the use of these technologies?
  6. What educational role should the FTC play with respect to these technologies? What would be most useful to consumers?

Competition Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  1. Does the use of algorithms, artificial intelligence, and predictive analytics currently raise particular antitrust concerns (including, but not limited to, concerns about algorithmic collusion)?
  2. What antitrust concerns could arise in the future with respect to these technologies?
  3. Is the current antitrust framework for analyzing mergers and conduct sufficient to address any competition issues that are associated with the use of these technologies? If not, why not, and how should the current legal framework be modified?
  4. To what degree do any antitrust concerns raised by these technologies depend on the industry or type of use?

Other Policy Questions

  1. How are these technologies affecting competition, innovation, and consumer choices in the industries and business sectors in which they are used today? How might they do so in the future?
  2. How quickly are these technologies advancing? What are the implications of that pace of technological development from a policy perspective?
  3. How can regulators meet legitimate regulatory goals that may be raised in connection with these technologies without unduly hindering competition or innovation?
  4. Are there tensions between consumer protection and competition policy with respect to these technologies? If so, what are they, and how should they be addressed?
  5. What responsibility does a company utilizing these technologies bear for consumer injury arising from its use of these technologies? Can current laws and regulations address such injuries? Why or why not?

Comments can be submitted online and should be submitted no later than February 15, 2019. If any entity has provided funding for research, analysis, or commentary that is included in a submitted public comment, such funding and its source should be identified on the first page of the comment.