Following previous policy posts on terminology and popular discourse about AI, the focus today is on how the way we talk about automation shapes policy. “Unmanned Autonomous Vehicle (UAV)” is a term that justifiably creates fear in the general public, yet talk about a UAV usually misses the roles of humans and human decision making. Likewise, discussion of an “automated decision maker (ADM)” ignores the social and legal responsibility of those who design, manufacture, implement, and operate “autonomous” systems. The AI community has an important role to play in promoting correct and realistic use of concepts and issues in discussions of science and technology systems that increase automation. The concept of a “hybrid system” may be helpful here for understanding the potential and limitations of combinations of technologies and humans in AI and Autonomous Systems (AI/AS) that require less from humans over time.
Safe Design
In addition to avoiding confusion and managing expectations, design approaches and analyses of the performance of existing systems with automation are crucial to developing safe systems with which the public and policymakers can feel comfortable. In this regard, stakeholders should read information on the design of systems with automation components, such as the IEEE report “Ethically Aligned Design”, subtitled “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”. The report says of AI and Autonomous Systems (AI/AS), “We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.” See also Ben Shneiderman’s excellent summary of and comments on the report, the YouTube video of his Turing Institute Lecture on “Algorithmic Accountability: Design for Safety”, and his proposal for a National Algorithms Safety Board.
Advances in AI/AS Science and Technology
Another perspective on the automation issue is the need to increase the safety of systems through advances in science and technology. In a future blog, we will present the transcript of an interview with Dr. Harold Szu about the need for a next generation of AI that moves closer to brain-style computing and incorporates human behaviors into AI/AS systems. Dr. Szu was the founder, former president, and former governor of the International Neural Network Society, and he is acknowledged for outstanding contributions to artificial neural network (ANN) applications and scientific innovations.
Policy and Ethics
Over the summer of 2018, increased activity in Congress and state legislatures focused on understandings, accurate and otherwise, of “unmanned autonomous vehicles” and on what policies should be in place. The following examples are interesting both for possible interventions and for their use of AI/AS terminology:
House Energy & Commerce Committee’s press release: the SELF DRIVE Act.
CNBC Commentary by Reps. Bob Latta (R-OH) and Jan Schakowsky (D-IL).
Politico, 08/03/2018: “Trial lawyers speak out on Senate self-driving car bill”, by Brianna Gurciullo with help from Lauren Gardner.
“AV NON-STARTER: After being mum for months, the American Association for Justice said publicly Thursday that it has been pressing for the Senate’s self-driving car bill, S. 1885 (115) (definitions on p.42), to stipulate that companies can’t force arbitration, our Tanya Snyder reports for Pros. The trial lawyers group is calling for a provision to make sure ‘when a person, whether a passenger or pedestrian, is injured or killed by a driverless car, that person or their family is not forced into a secret arbitration proceeding,’ according to a statement. Senate Commerce Chairman John Thune (R-S.D.) has said that arbitration has been ‘a thorny spot’ in bill negotiations.”