AI Matters: our blog
AI Terminology Matters
In the daily news and social media, AI terms are part of the popular lexicon, for better or for worse. AI technology is both praised and feared in different corners of society, and big-data practitioners and even educators add to the confusion by misusing AI terms and concepts.
“Algorithm” and “machine learning” may be the terms most often picked up in the popular dialogue, including in the important fields of ethics and policy. The ACM and SIGAI could play a critical educational role in the public sphere. In the area of policy, the correct use of AI terms and concepts is important for establishing credibility with the scientific community and for creating policy that addresses the real problems.
In recent weeks, interesting articles have appeared from writers with widely varying degrees of scientific expertise. The June issue of The Atlantic has an article by Henry Kissinger, “How the Enlightenment Ends,” with the thesis that society is not prepared for AI. While some of its understanding of AI concepts can be questioned, the conclusion is reasonable: “AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”
In May, The Atlantic published an article from the other end of the expertise spectrum: “How a Pioneer of Machine Learning Became One of Its Sharpest Critics,” by Kevin Hartnett. Hartnett interviews Judea Pearl about the thinking behind his new book with Dana Mackenzie, The Book of Why: The New Science of Cause and Effect. The interview includes a criticism of deep-learning research and an argument for a more fundamental approach.
Back to policy: I recently attended a Center for Data Innovation event in DC on a proposed policy framework for creating accountability in the use of algorithms; the Center has published a report on the same topic. The event was another reminder of the diverse groups engaged in public dialogue on critical AI issues and of the need to bring policymakers and the scientific community together. SIGAI has a big role to play.
White House AI Summit
Updates and Reminders
AAAS Forum on Science & Technology Policy, Washington, D.C., June 21 – 22, 2018.
Progress on a potential revival of the OTA – from the House Appropriations Subcommittee:
“Technology Assessment Study: The Committee has heard testimony on, and received dozens of requests advocating for restoring funding to the Office of Technology Assessment (OTA).
New White House artificial intelligence advisory committee
White House 2018 Summit on AI for American Industry
Background from the report:
“Artificial intelligence (AI) has tremendous potential to benefit the American people, and has already demonstrated immense value in enhancing our national security and growing our economy.
AI is quickly transforming American life and American business, improving how we diagnose and treat illnesses, grow our food, manufacture and deliver new products, manage our finances, power our homes, and travel from point A to point B.
On May 10, 2018, the White House hosted the Artificial Intelligence for American Industry summit to discuss the promise of AI and the policies we will need to realize that promise for the American people and maintain U.S. leadership in the age of artificial intelligence.
‘Artificial intelligence holds tremendous potential as a tool to empower the American worker, drive growth in American industry, and improve the lives of the American people. Our free market approach to scientific discovery harnesses the combined strengths of government, industry, and academia, and uniquely positions us to leverage this technology for the betterment of our great nation.’
– Michael Kratsios, Deputy Assistant to the President for Technology Policy
The summit brought together over 100 senior government officials, technical experts from top academic institutions, heads of industrial research labs, and American business leaders who are adopting AI technologies to benefit their customers, workers, and shareholders.”
Issues addressed at the 2018 summit are as follows:
- Support for the national AI R&D ecosystem – “free market approach to scientific discovery that harnesses the combined strengths of government, industry, and academia.”
- American workforce that can take full advantage of the benefits of AI – “new types of jobs and demand for new technical skills across industries … efforts to prepare America for the jobs of the future, from a renewed focus on STEM education throughout childhood and beyond, to technical apprenticeships, re-skilling, and lifelong learning programs to better match America’s skills with the needs of industry.”
- Barriers to AI innovation in the United States – included “need to promote awareness of AI so that the public can better understand how these technologies work and how they can benefit our daily lives.”
- High-impact, sector-specific applications of AI – “novel ways industry leaders are using AI technologies to empower the American workforce, grow their businesses, and better serve their customers.”
See details in the Summary of the 2018 White House Summit on AI for American Industry.
Potential Revival of OTA
As a small agency within the Legislative Branch, the Office of Technology Assessment (OTA) provided the United States Congress with expert analyses of new technologies related to public policy until it was defunded and ceased operations in 1995. A resolution expressing the non-binding “sense of Congress” that the agency and its funding should be revived was introduced in the House of Representatives last week by Reps. Bill Foster (D-IL) and Mark Takano (D-CA) (press release), and Sen. Ron Wyden (D-OR) is expected to introduce a parallel bill in the Senate. Coordinated efforts are also now underway among many groups to urge Congress to do exactly that.
Our colleagues at USACM have delivered letters to the leaders of the House and Senate Appropriations Committees supporting an inquiry into whether restoring OTA or its functions to the Legislative Branch would be advisable. The House Subcommittee met recently and voted to advance legislation funding the Legislative Branch for FY 2019 to the full House Appropriations Committee, but without addressing this issue. The full Committee’s meeting, at which an amendment to provide pilot funding for an inquiry into OTA-like services could be offered, is expected later in May. The Senate’s parallel Subcommittee and full Appropriations Committee are expected to act later this spring or early summer on the Legislative Branch’s FY19 funding bill; OTA-related amendments could be offered at either of their related business meetings.
Resources
2005 Report by the Congressional Research Service
Recent testimony by Zachary Graves of Washington’s R Street Institute
Letter from USACM to leaders in the House and Senate Appropriations Committees
Public Policy Opportunity
AAAS Forum on Science & Technology Policy, Washington, D.C., June 21 – 22, 2018.
From AAAS: “The annual AAAS Forum on Science and Technology Policy is the conference for people interested in public policy issues facing the science, engineering, and higher education communities. Since 1976, it has been the place where insiders go to learn what is happening and what is likely to happen in the coming year on the federal budget and the growing number of policy issues that affect researchers and their institutions.”
Bias in Elections
Upcoming Policy Event
AAAS Forum on Science & Technology Policy
Washington, D.C., June 21 – 22, 2018.
https://www.aaas.org/page/forum-science-technology-policy
Follow-up on the April 1 Policy Post: Experiments on Facebook Data
US organizations and individuals influence voters through posts in social media and through analysis (and misanalysis) of publicly available data. A reported experiment used Facebook data to demonstrate techniques that can be used to change elections (Nature, volume 489, pages 295–298, 13 September 2012). Specifically, the authors studied the 2010 US congressional elections and showed how voting could be affected. They report “results from a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 US congressional elections. The results show that the messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people. Furthermore, the messages not only influenced the users who received them but also the users’ friends, and friends of friends.”
For more information and analysis, see Zoe Corbyn’s article “Facebook experiment boosts US voter turnout.”
https://www.nature.com/news/facebook-experiment-boosts-us-voter-turnout-1.11401
Facebook, Google, and Bias
Current events involving Facebook and the use of data it collects and analyzes relate to issues addressed by the SIGAI and USACM working groups on algorithmic accountability, transparency, and bias. The players in this area of ethics and policy range from those unaware of the issues to those who intentionally use biased methods and systems to achieve organizational goals. Uses of customer data that are not transparent, or are difficult to discover, not only harm individuals and society; they are also difficult to address because they are integral to the business models on which companies are built.
A recent Forbes article, “Google’s DeepMind Has An Idea For Stopping Biased AI,” discusses research addressing AI systems that spread human prejudices about race and gender – the issue that artificial intelligence trained with biased data may make biased decisions. One example cited in the article is facial recognition systems shown to have difficulty properly recognizing black women.
Machine-learning software is rapidly becoming widely accessible to developers across the world, many of whom are unaware of the dangers of using data that contain biases. The Forbes piece discusses the article “Path-Specific Counterfactual Fairness” by DeepMind researchers Silvia Chiappa and Thomas Gillam. Counterfactual fairness refers to decision-making methods for machines and ways that fairness might be determined automatically. DeepMind also has a new division, DeepMind Ethics & Society, that addresses this and other issues concerning the ethical and social impacts of AI technology.
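To make the danger concrete, here is a minimal sketch of how a model that naively learns from biased historical records reproduces that bias. The data and the hiring scenario are entirely invented for illustration; this is not DeepMind's method, just a toy demonstration that equally qualified groups can receive unequal outcomes when history itself is skewed:

```python
# Invented historical hiring records: (group, qualified, hired).
# Group B's qualified applicants were historically hired less often.
records = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

def hire_rate(group):
    """Rate at which *qualified* applicants from a group were hired.

    A naive model trained to imitate these records would simply
    replay this rate, baking the historical bias into its decisions.
    """
    qualified = [r for r in records if r[0] == group and r[1]]
    return sum(1 for r in qualified if r[2]) / len(qualified)

print(hire_rate("A"))  # 3 of 3 qualified group-A applicants hired -> 1.0
print(hire_rate("B"))  # 1 of 3 qualified group-B applicants hired -> 0.333...
```

Nothing in the code mentions prejudice, yet the disparity survives because the training data encode it; this is the gap that counterfactual-fairness methods aim to detect and correct.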
The Forbes article quotes Kriti Sharma, a consultant in artificial intelligence with Sage, the British enterprise software company, as follows: “Understanding the risk of bias in AI is not a problem that technologists can solve in a vacuum. We need collaboration between experts in anthropology, law, policy makers, business leaders to address the questions emerging technology will continue to ask of us. It is exciting to see increased academic research activity in AI fairness and accountability over the last 18 months, but in truth we aren’t seeing enough business leaders, companies applying AI, those who will eventually make AI mainstream in every aspect of our lives, take the same level of responsibility to create unbiased AI.”