Vehicle automation: safe design, scientific advances, and smart policy

Following previous policy posts on terminology and popular discourse about AI, the focus today is on the impact on policy of the way we talk about automation. “Unmanned autonomous vehicle (UAV)” is a term that justifiably creates fear in the general public, but talk about a UAV usually misses the roles of humans and human decision making. Likewise, discussions about an “automated decision maker (ADM)” ignore the social and legal responsibility of those who design, manufacture, implement, and operate “autonomous” systems. The AI community has an important role in promoting correct and realistic use of concepts and issues in discussions of science and technology systems that increase automation. The concept of a “hybrid system” may be helpful here for understanding the potential and limitations of combinations of technologies – and humans – in AI and Autonomous Systems (AI/AS) that require less from humans over time.

Safe Design

In addition to avoiding confusion and managing expectations, design approaches and analyses of the performance of existing systems with automation are crucial to developing safe systems with which the public and policymakers can feel comfortable. In this regard, stakeholders should read information on the design of systems with automation components, such as the IEEE report “Ethically Aligned Design,” subtitled “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.” The report says about AI and Autonomous Systems (AI/AS), “We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.” See also Ben Shneiderman’s excellent summary of and comments on the report, as well as the YouTube video of his Turing Lecture on “Algorithmic Accountability: Design for Safety.”

Advances in AI/AS Science and Technology

Another perspective on the automation issue is the need to increase the safety of systems through advances in science and technology. In a future blog post, we will present the transcript of an interview with Dr. Harold Szu about the need for a next generation of AI that moves closer to brain-style computing and incorporates human behaviors into AI/AS systems. Dr. Szu is the founder, a former president, and a former governor of the International Neural Network Society, and he is recognized for outstanding contributions to artificial neural network (ANN) applications and scientific innovations.

Policy and Ethics

Over the summer of 2018, increased activity in Congress and state legislatures focused on understandings, accurate and not, of “unmanned autonomous vehicles” and what policies should be in place. The following examples are interesting for possible interventions, but also for their use of AI/AS terminology:

House Energy & Commerce Committee’s press release: the SELF DRIVE Act.
CNBC Commentary by Reps. Bob Latta (R-OH) and Jan Schakowsky (D-IL).

Politico, 08/03/2018: “Trial lawyers speak out on Senate self-driving car bill,” by Brianna Gurciullo with help from Lauren Gardner.
“AV NON-STARTER: After being mum for months, the American Association for Justice said publicly Thursday that it has been pressing for the Senate’s self-driving car bill, S. 1885 (115) (definitions on p.42), to stipulate that companies can’t force arbitration, our Tanya Snyder reports for Pros. The trial lawyers group is calling for a provision to make sure ‘when a person, whether a passenger or pedestrian, is injured or killed by a driverless car, that person or their family is not forced into a secret arbitration proceeding,’ according to a statement. Senate Commerce Chairman John Thune (R-S.D.) has said that arbitration has been ‘a thorny spot’ in bill negotiations.”

Privacy Challenges for Election Policies

A CBS/AP article discusses the difficulty of social media companies’ efforts to prevent meddling in U.S. elections: “Facebook is spending heavily to prevent a repeat of the Russian interference that played out on its service in 2016. The social-media giant is bringing on thousands of human moderators and advanced artificial intelligence systems to weed out fake accounts and foreign propaganda campaigns.”

ACM Code of Ethics and USACM’s New Name

ACM Code of Ethics
Please note the message from ACM Headquarters and check the link below: “On Tuesday, July 17, ACM plans to announce the updated Code of Ethics and Professional Conduct. We would like your support in helping to reach as broad an audience of computing professionals as possible with this news. When the updated Code goes live at 10 a.m. EDT on July 17, it will be hosted at https://www.acm.org/code-of-ethics.
We encourage you to share the updated Code with your friends and colleagues at that time. If you use social media, please take part in the conversation around computing ethics using the hashtags #ACMCodeOfEthics and #IReadTheCode. And if you are not doing so already, please follow the @TheOfficialACM and @ACM_Ethics Twitter handles to share and engage with posts about the Code.  ACM also plans to host a Reddit AMA and Twitter chats on computing ethics in the weeks following this announcement. We will reach out to you again regarding these events when their details have been solidified.
Thank you in advance for helping to support and increase awareness of the ACM Code of Ethics and for promoting ethical conduct among computing professionals around the world.”

News From the ACM US Technology Policy Committee
The USACM has a new name. Please note the change and remember that SIGAI will continue to have a close relationship with the ACM US Technology Policy Committee. Here is a reminder of the purpose and goals: “The ACM US Technology Policy Committee is a leading independent and nonpartisan voice in addressing US public policy issues related to computing and information technology. The Committee regularly educates and informs Congress, the Administration, and the courts about significant developments in the computing field and how those developments affect public policy in the United States. The Committee provides guidance and expertise in varied areas, including algorithmic accountability, artificial intelligence, big data and analytics, privacy, security, accessibility, digital governance, intellectual property, voting systems, and tech law. As the internet is global, the ACM US Technology Policy Committee works with the other ACM policy entities on publications and projects related to cross-border issues, such as cybersecurity, encryption, cloud computing, the Internet of Things, and internet governance.”

The ACM US Technology Policy Committee’s New Leadership
ACM has named Prof. Jim Hendler as the new Chair of the ACM U.S. Technology Policy Committee (formerly USACM) under the new ACM Technology Policy Council. In addition to being a distinguished computer science professor at RPI, Jim has long been an active USACM member and has served as both a committee chair and an at-large representative. He is a great choice to guide the committee into the future within ACM’s new technology policy structure. Please add your individual congratulations to those of SIGAI Public Policy. Our congratulations and appreciation also go to outgoing Chair Stuart Shapiro for his outstanding leadership of USACM.

White House OSTP Petition

USACM and the Electronic Privacy Information Center (EPIC) have teamed up to petition the White House’s Office of Science and Technology Policy to construct and publicize a formal process by which the public might have input into the work of the recently named Select Committee on Artificial Intelligence. Several associations and currently about 75 individual professionals, many of them ACM members, have signed on to the letter. You may have received an email message about this recently from SIGAI.

The petition states that “The undersigned technical experts, legal scholars, and affiliated organizations formally request that the Office of Science and Technology Policy (OSTP) undertake a Request for Information (RFI) and solicit public comments so as to encourage meaningful public participation in the development of the nation’s policy for Artificial Intelligence. This request follows from the recent establishment of a Select Committee on Artificial Intelligence and a similar OSTP RFI that occurred in 2016.”

Any technical expert with a relevant background, irrespective of ACM affiliation, who is interested in signing the letter should e-mail Jeramie Scott <jscott@epic.org> and Adam Eisgrau <eisgrau@hq.acm.org> as soon as possible. The goal is to have 100 individual signers, and the organizers hope to send the petition to the White House shortly after the July 4th holiday. If you would like to be added to the letter, send your name, title, and the school, company, or other affiliation (for identification purposes only) that you would like listed.

Data Privacy

Data Privacy Policy – ACM and SIGAI Emerging Issue

An issue recently raised involves the data privacy of SIGAI and ACM members using EasyChair to submit articles for publication, including the AI Matters Newsletter. As part of entering a new submission through EasyChair, the following message appears:
“AI Matters, 2014-present, is an ACM conference. The age and gender fields are added by ACM. By providing the information requested, you will help ACM to better understand where it stands in terms of diversity to be able to focus on areas of improvement.
It is mandatory for the submitting author (but you can select “prefer not to submit”) and it is desirable that you fill it out for all authors.
This information will be deleted from EasyChair after the conference.”

To evaluate the likelihood of privacy protection, one should pay attention to the EasyChair Terms of Service, particularly Section 6 “Use of Personal Information”. More investigation may allow a better assessment of the level of risk if our members choose to enter personal information. Your Public Policy Officer is working with the other SIGAI officers to clarify the issues and make recommendations for possible changes in ACM policy.

Please send your views on this topic to SIGAI and contribute comments to this Blog.

Policy News Matters

At its annual meeting this week, the American Medical Association issued the statement “AMA Passes First Policy Recommendations on Augmented Intelligence,” adopting broad policy recommendations for health and technology stakeholders. The statement quotes AMA Board Member Jesse M. Ehrenfeld as follows: “As technology continues to advance and evolve, we have a unique opportunity to ensure that augmented intelligence is used to benefit patients, physicians, and the broad health care community. Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone. But we must forthrightly address challenges in the design, evaluation and implementation as this technology is increasingly integrated into physicians’ delivery of care to patients.”

AI Terminology Matters

In the daily news and social media, AI terms are part of the popular lexicon for better or for worse. AI technology is both praised and feared in different corners of society. Big data practitioners and even educators add confusion by misusing AI terms and concepts.

“Algorithm” and “machine learning” may be the most prevalent terms that are picked up in the popular dialogue, including in the important fields of ethics and policy. The ACM and SIGAI could have a critical educational role in the public sphere. In the area of policy, the correct use of AI terms and concepts is important for establishing credibility with the scientific community and for creating policy that addresses the real problems.

In recent weeks, interesting articles have appeared by writers with widely varying degrees of scientific expertise. The June issue of The Atlantic has an article by Henry Kissinger entitled “How the Enlightenment Ends,” with the thesis that society is not prepared for AI. While some of its understanding of AI concepts can be questioned, the conclusion is reasonable: “AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”

In May, The Atlantic published an article by Kevin Hartnett, representing the other extreme of scientific expertise, entitled “How a Pioneer of Machine Learning Became One of Its Sharpest Critics.” He writes about an interview with Judea Pearl on the current thinking Pearl presents, with co-author Dana Mackenzie, in The Book of Why: The New Science of Cause and Effect. The interview includes a criticism of deep learning research and an argument for a more fundamental approach.

Back to policy: I recently attended a DC event of the Center for Data Innovation on a proposed policy framework to create accountability in the use of algorithms. They have a report on the same topic. The event was another reminder of the diverse groups in public dialogue on critical issues for AI and of the need to bring policymakers and the scientific community together. SIGAI has a big role to play.

White House AI Summit

Updates and Reminders

AAAS Forum on Science & Technology Policy, Washington, D.C., June 21 – 22, 2018.

Potential revival of OTA: progress from the House Appropriations Subcommittee:
“Technology Assessment Study: The Committee has heard testimony on, and received dozens of requests advocating for, restoring funding to the Office of Technology Assessment (OTA).”

White House’s new artificial intelligence advisory committee

White House 2018 Summit on AI for American Industry

Background from the report:

“Artificial intelligence (AI) has tremendous potential to benefit the American people, and has already demonstrated immense value in enhancing our national security and growing our economy.

AI is quickly transforming American life and American business, improving how we diagnose and treat illnesses, grow our food, manufacture and deliver new products, manage our finances, power our homes, and travel from point A to point B.

On May 10, 2018, the White House hosted the Artificial Intelligence for American Industry summit, to discuss the promise of AI and the policies we will need to realize that promise for the American people and maintain U.S. leadership in the age of artificial intelligence.

‘Artificial intelligence holds tremendous potential as a tool to empower the American worker, drive growth in American industry, and improve the lives of the American people. Our free market approach to scientific discovery harnesses the combined strengths of government, industry, and academia, and uniquely positions us to leverage this technology for the betterment of our great nation.’
– Michael Kratsios, Deputy Assistant to the President for Technology Policy

The summit brought together over 100 senior government officials, technical experts from top academic institutions, heads of industrial research labs, and American business leaders who are adopting AI technologies to benefit their customers, workers, and shareholders.”

Issues addressed at the 2018 summit are as follows:

  • Support for the national AI R&D ecosystem – “free market approach to scientific discovery that harnesses the combined strengths of government, industry, and academia.”
  • American workforce that can take full advantage of the benefits of AI – “new types of jobs and demand for new technical skills across industries … efforts to prepare America for the jobs of the future, from a renewed focus on STEM education throughout childhood and beyond, to technical apprenticeships, re-skilling, and lifelong learning programs to better match America’s skills with the needs of industry.”
  • Barriers to AI innovation in the United States – included “need to promote awareness of AI so that the public can better understand how these technologies work and how they can benefit our daily lives.”
  • High-impact, sector-specific applications of AI – “novel ways industry leaders are using AI technologies to empower the American workforce, grow their businesses, and better serve their customers.”

See details in the Summary of the 2018 White House Summit on AI for American Industry.

Potential Revival of OTA

As a small agency within the Legislative Branch, the Office of Technology Assessment (OTA) provided the United States Congress with expert analyses of new technologies related to public policy until it was defunded and ceased operations in 1995. A Resolution expressing the non-binding “sense of Congress” that the agency and its funding should be revived was introduced in the House of Representatives last week by Reps. Bill Foster (D-IL) and Mark Takano (D-CA) (press release), and Sen. Ron Wyden (D-OR) is expected to introduce a parallel measure in the Senate. New coordinated efforts are also now underway among many groups to urge Congress to do exactly that.

Our colleagues at USACM have delivered letters to the leaders of the House and Senate Appropriations Committees supporting an inquiry into whether restoring OTA or its functions to the Legislative Branch would be advisable. The House Subcommittee met recently and voted to advance legislation funding the Legislative Branch for FY 2019 to the full House Appropriations Committee, but without addressing this issue. The full Committee’s meeting, at which an amendment to provide pilot funding for an inquiry into OTA-like services could be offered, is expected later in May. The Senate’s parallel Subcommittee and full Appropriations Committee are expected to act later this spring or early summer on the Legislative Branch’s FY19 funding bill; OTA-related amendments could be offered at either of their related business meetings.

Resources

2005 Report by the Congressional Research Service
Recent testimony by Zachary Graves of Washington’s R Street Institute
Letter from USACM to leaders in the House and Senate Appropriations Committees

Public Policy Opportunity

AAAS Forum on Science & Technology Policy, Washington, D.C., June 21 – 22, 2018.
From AAAS: “The annual AAAS Forum on Science and Technology Policy is the conference for people interested in public policy issues facing the science, engineering, and higher education communities. Since 1976, it has been the place where insiders go to learn what is happening and what is likely to happen in the coming year on the federal budget and the growing number of policy issues that affect researchers and their institutions.”

Bias in Elections

Upcoming Policy Event

AAAS Forum on Science & Technology Policy
Washington, D.C., June 21 – 22, 2018.
https://www.aaas.org/page/forum-science-technology-policy

Follow-up on the April 1 Policy Post: Experiments on Facebook Data

US organizations and individuals influence voters through posts on social media and through analysis (and misanalysis) of publicly available data. Experiments using Facebook data have demonstrated techniques that can be used to change elections (Nature 489, 295–298, 13 September 2012). In particular, the authors examined data from the 2010 US Congressional elections and showed how voting could be affected. They report “results from a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 US congressional elections. The results show that the messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people. Furthermore, the messages not only influenced the users who received them but also the users’ friends, and friends of friends.”

For more information and analysis, see Zoe Corbyn’s article “Facebook experiment boosts US voter turnout.”
https://www.nature.com/news/facebook-experiment-boosts-us-voter-turnout-1.11401

Facebook, Google, and Bias

Current events involving Facebook and the use of the data it collects and analyzes relate to issues addressed by the SIGAI and USACM working groups on algorithmic accountability, transparency, and bias. The players in this area of ethics and policy range from those who are unaware of the issues to those who intentionally use biased methods and systems to achieve organizational goals. Uses of customer data that are not transparent, or are difficult to discover, not only have negative impacts on individuals and society; they are also difficult to address because they are integral to the business models upon which companies are based.

A recent Forbes article, “Google’s DeepMind Has An Idea For Stopping Biased AI,” discusses research addressing AI systems that perpetuate human prejudices about race and gender – the problem that artificial intelligence trained on biased data may make biased decisions. One example cited in the article is facial recognition systems shown to have difficulty properly recognizing black women.

Machine-learning software is rapidly becoming widely accessible to developers across the world, many of whom are unaware of the dangers of using data that contain biases. The Forbes piece discusses the article “Path-Specific Counterfactual Fairness” by DeepMind researchers Silvia Chiappa and Thomas Gillam. Counterfactual fairness is a criterion under which a decision about an individual is considered fair if it would have been the same in a hypothetical world where the individual’s sensitive attributes, such as race or gender, were different. DeepMind has a new division, DeepMind Ethics & Society, that addresses this and other issues concerning the ethical and social impacts of AI technology.
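To make the counterfactual idea concrete, here is a minimal, hypothetical sketch of a counterfactual fairness check – not the path-specific method of Chiappa and Gillam, and all names and numbers below are invented for illustration. A toy hiring model that (wrongly) uses a sensitive attribute is audited by flipping that attribute, with everything else held fixed, and counting how often its decision changes.

```python
# Illustrative counterfactual fairness check (hypothetical model and data).
# A model is "counterfactually unfair" here if flipping a sensitive
# attribute, with all other inputs held fixed, changes its decision.

def biased_model(applicant):
    # Toy hiring score that improperly rewards one gender.
    score = 2.0 * applicant["skill"] + 1.5 * applicant["experience"]
    if applicant["gender"] == "male":   # the leaked bias
        score += 1.0
    return score >= 5.0                 # hire / no-hire decision

def counterfactual_flip(applicant, attribute, values):
    # Copy of the applicant with the sensitive attribute switched.
    other = dict(applicant)
    other[attribute] = values[1] if applicant[attribute] == values[0] else values[0]
    return other

def unfairness_rate(model, population, attribute, values):
    # Fraction of individuals whose decision changes under the flip.
    changed = sum(
        model(p) != model(counterfactual_flip(p, attribute, values))
        for p in population
    )
    return changed / len(population)

population = [
    {"skill": 2.0, "experience": 0.5, "gender": "female"},
    {"skill": 2.0, "experience": 0.5, "gender": "male"},
    {"skill": 1.0, "experience": 1.0, "gender": "female"},
    {"skill": 3.0, "experience": 2.0, "gender": "male"},
]

rate = unfairness_rate(biased_model, population, "gender", ("female", "male"))
print(f"decisions changed by flipping gender: {rate:.0%}")
# prints: decisions changed by flipping gender: 50%
```

A fair model would yield a rate of zero; the path-specific refinement discussed in the paper goes further, distinguishing causal paths from the sensitive attribute that are considered unfair from those that are not.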

The Forbes article quotes Kriti Sharma, an artificial intelligence consultant with Sage, the British enterprise software company, as follows: “Understanding the risk of bias in AI is not a problem that technologists can solve in a vacuum. We need collaboration between experts in anthropology, law, policy makers, business leaders to address the questions emerging technology will continue to ask of us. It is exciting to see increased academic research activity in AI fairness and accountability over the last 18 months, but in truth we aren’t seeing enough business leaders, companies applying AI, those who will eventually make AI mainstream in every aspect of our lives, take the same level of responsibility to create unbiased AI.”