Data Privacy

Data Privacy Policy – ACM and SIGAI Emerging Issue

An issue recently raised involves the data privacy of SIGAI and ACM members using EasyChair to submit articles for publication, including the AI Matters Newsletter. As part of entering a new submission through EasyChair, the following message appears:
“AI Matters, 2014-present, is an ACM conference. The age and gender fields are added by ACM. By providing the information requested, you will help ACM to better understand where it stands in terms of diversity to be able to focus on areas of improvement.
It is mandatory for the submitting author (but you can select “prefer not to submit”) and it is desirable that you fill it out for all authors.
This information will be deleted from EasyChair after the conference.”

To evaluate the likelihood of privacy protection, one should pay attention to the EasyChair Terms of Service, particularly Section 6 “Use of Personal Information”. More investigation may allow a better assessment of the level of risk if our members choose to enter personal information. Your Public Policy Officer is working with the other SIGAI officers to clarify the issues and make recommendations for possible changes in ACM policy.

Please send your views on this topic to SIGAI and contribute comments to this blog.

Policy News Matters

At its annual meeting this week, the American Medical Association adopted broad policy recommendations for health and technology stakeholders in a statement, “AMA Passes First Policy Recommendations on Augmented Intelligence.” The statement quotes AMA Board Member Jesse M. Ehrenfeld as follows: “As technology continues to advance and evolve, we have a unique opportunity to ensure that augmented intelligence is used to benefit patients, physicians, and the broad health care community. Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone. But we must forthrightly address challenges in the design, evaluation and implementation as this technology is increasingly integrated into physicians’ delivery of care to patients.”

AI Terminology Matters

In the daily news and social media, AI terms are part of the popular lexicon for better or for worse. AI technology is both praised and feared in different corners of society. Big data practitioners and even educators add confusion by misusing AI terms and concepts.

“Algorithm” and “machine learning” may be the most prevalent terms that are picked up in the popular dialogue, including in the important fields of ethics and policy. The ACM and SIGAI could have a critical educational role in the public sphere. In the area of policy, the correct use of AI terms and concepts is important for establishing credibility with the scientific community and for creating policy that addresses the real problems.

In recent weeks, interesting articles have appeared by writers diverse in their degree of scientific expertise. A June issue of The Atlantic has an article by Henry Kissinger entitled “How the Enlightenment Ends,” with the thesis that society is not prepared for AI. While some of the article’s understanding of AI concepts can be questioned, its conclusion is reasonable: “AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”

In May, The Atlantic had an article from the other end of the spectrum of scientific expertise, by Kevin Hartnett, entitled “How a Pioneer of Machine Learning Became One of Its Sharpest Critics.” Hartnett writes about an interview with Judea Pearl on his current thinking, presented with co-author Dana Mackenzie in The Book of Why: The New Science of Cause and Effect. The interview includes criticism of deep learning research and a call for a more fundamental approach.

Back to policy: I recently attended a DC event of the Center for Data Innovation on a proposed policy framework to create accountability in the use of algorithms. They have a report on the same topic. The event was another reminder of the diverse groups in public dialogue on critical issues for AI, and of the need to bring together policymakers and the scientific community. SIGAI has a big role to play.

White House AI Summit

Updates and Reminders

AAAS Forum on Science & Technology Policy, Washington, D.C., June 21 – 22, 2018.

Progress on a potential revival of OTA, from the House appropriations subcommittee:
“Technology Assessment Study: The Committee has heard testimony on, and received dozens of requests advocating for restoring funding to the Office of Technology Assessment (OTA).

White House new artificial intelligence advisory committee


White House 2018 Summit on AI for American Industry

Background from the report:

“Artificial intelligence (AI) has tremendous potential to benefit the American people, and has already demonstrated immense value in enhancing our national security and growing our economy.

AI is quickly transforming American life and American business, improving how we diagnose and treat illnesses, grow our food, manufacture and deliver new products, manage our finances, power our homes, and travel from point A to point B.

On May 10, 2018, the White House hosted the Artificial Intelligence for American Industry summit, to discuss the promise of AI and the policies we will need to realize that promise for the American people and maintain U.S. leadership in the age of artificial intelligence.

‘Artificial intelligence holds tremendous potential as a tool to empower the American worker, drive growth in American industry, and improve the lives of the American people. Our free market approach to scientific discovery harnesses the combined strengths of government, industry, and academia, and uniquely positions us to leverage this technology for the betterment of our great nation.’
– Michael Kratsios, Deputy Assistant to the President for Technology Policy

The summit brought together over 100 senior government officials, technical experts from top academic institutions, heads of industrial research labs, and American business leaders who are adopting AI technologies to benefit their customers, workers, and shareholders.”

Issues addressed at the 2018 summit are as follows:

  • Support for the national AI R&D ecosystem – “free market approach to scientific discovery that harnesses the combined strengths of government, industry, and academia.”
  • American workforce that can take full advantage of the benefits of AI – “new types of jobs and demand for new technical skills across industries … efforts to prepare America for the jobs of the future, from a renewed focus on STEM education throughout childhood and beyond, to technical apprenticeships, re-skilling, and lifelong learning programs to better match America’s skills with the needs of industry.”
  • Barriers to AI innovation in the United States – included “need to promote awareness of AI so that the public can better understand how these technologies work and how they can benefit our daily lives.”
  • High-impact, sector-specific applications of AI – “novel ways industry leaders are using AI technologies to empower the American workforce, grow their businesses, and better serve their customers.”

See details in the Summary of the 2018 White House Summit on AI for American Industry.

Potential Revival of OTA

As a small agency within the Legislative Branch, the Office of Technology Assessment (OTA) provided the United States Congress with expert analyses of new technologies related to public policy until it was defunded and ceased operations in 1995. A non-binding resolution expressing the “sense of Congress” that the agency and its funding should be revived was introduced in the House of Representatives last week by Reps. Bill Foster (D-IL) and Mark Takano (D-CA) (press release), and Sen. Ron Wyden (D-OR) is expected to introduce a parallel bill in the Senate. New coordinated efforts are also underway among many groups to urge Congress to do exactly that.

Our colleagues at USACM have delivered letters to the leaders of the House and Senate Appropriations Committees supporting an inquiry into whether restoring OTA or its functions to the Legislative Branch would be advisable. The House Subcommittee met recently and voted to advance legislation funding the Legislative Branch for FY 2019 to the full House Appropriations Committee, but without addressing this issue. The full Committee’s meeting, at which an amendment to provide pilot funding for an inquiry into OTA-like services could be offered, is expected later in May. The Senate’s parallel Subcommittee and full Appropriations Committee are expected to act later this spring or early summer on the Legislative Branch’s FY19 funding bill; OTA-related amendments could be offered at either of their related business meetings.

Resources

2005 Report by the Congressional Research Service
Recent testimony by Zachary Graves of Washington’s R Street Institute
Letter from USACM to leaders in the House and Senate Appropriations Committees

Public Policy Opportunity

AAAS Forum on Science & Technology Policy, Washington, D.C., June 21 – 22, 2018.
From AAAS: “The annual AAAS Forum on Science and Technology Policy is the conference for people interested in public policy issues facing the science, engineering, and higher education communities. Since 1976, it has been the place where insiders go to learn what is happening and what is likely to happen in the coming year on the federal budget and the growing number of policy issues that affect researchers and their institutions.”

Bias in Elections

Upcoming Policy Event

AAAS Forum on Science & Technology Policy
Washington, D.C., June 21 – 22, 2018.
https://www.aaas.org/page/forum-science-technology-policy

Follow-up on the April 1 Policy Post: Experiments on Facebook Data

US organizations and individuals influence voters through posts in social media and through analysis (and misanalysis) of publicly available data. A reported experiment used Facebook data to demonstrate techniques that can be used to change elections (Nature, volume 489, pages 295–298, 13 September 2012). In particular, the authors examined data from the 2010 US Congressional elections and showed measurable effects on voting. They report “results from a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 US congressional elections. The results show that the messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people. Furthermore, the messages not only influenced the users who received them but also the users’ friends, and friends of friends.”

For more information and analysis, see Zoe Corbyn’s article “Facebook experiment boosts US voter turnout.”
https://www.nature.com/news/facebook-experiment-boosts-us-voter-turnout-1.11401

Facebook, Google, and Bias

Current events involving Facebook and the use of data it collects and analyzes relate to issues addressed by SIGAI and USACM working groups on algorithmic accountability, transparency, and bias. The players in this area of ethics and policy range from those who are unaware of the issues to those who intentionally use biased methods and systems to achieve organizational goals. Uses of customer data that are not transparent, or are difficult to discover, not only have negative impacts on individuals and society; they are also difficult to address because they are integral to the business models on which companies are based.

A recent Forbes article, “Google’s DeepMind Has An Idea For Stopping Biased AI,” discusses research addressing AI systems that spread human prejudices about race and gender – the issue that artificial intelligence trained with biased data may make biased decisions. An example cited in the article is facial recognition systems shown to have difficulty properly recognizing black women.

Machine-learning software is rapidly becoming widely accessible to developers across the world, many of whom are unaware of the dangers of using data containing biases. The Forbes piece discusses an article, “Path-Specific Counterfactual Fairness,” by DeepMind researchers Silvia Chiappa and Thomas Gillam. Counterfactual fairness asks whether a model’s decision about an individual would have been the same in a counterfactual world in which the individual’s sensitive attributes were different, suggesting one way that fairness might be assessed automatically. DeepMind has a new division, DeepMind Ethics & Society, that addresses this and other issues concerning the ethical and social impacts of AI technology.
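As a toy illustration of the concept (not the DeepMind researchers’ method; the model, attributes, and thresholds below are entirely hypothetical), one can probe a decision model by flipping a sensitive attribute and checking whether the decision changes:

```python
# Toy counterfactual-fairness probe: flip a sensitive attribute and
# check whether the model's decision changes. Hypothetical model and data.

def loan_model(income, years_employed, gender):
    # A deliberately biased toy model: gender should not matter, but does.
    score = 0.5 * income + 2.0 * years_employed
    if gender == "female":
        score -= 5.0  # bias baked into the model
    return score >= 30.0

def counterfactual_flip(applicant):
    # Return the same applicant with the sensitive attribute flipped.
    flipped = dict(applicant)
    flipped["gender"] = "male" if applicant["gender"] == "female" else "female"
    return flipped

def is_counterfactually_unfair(applicant):
    # Unfair if the decision differs between the actual and flipped worlds.
    return loan_model(**applicant) != loan_model(**counterfactual_flip(applicant))

applicant = {"income": 60.0, "years_employed": 2.0, "gender": "female"}
print(is_counterfactually_unfair(applicant))  # prints True: the flip changes the decision
```

Note that this naive flip ignores causal structure; the path-specific formulation studied by Chiappa and Gillam additionally distinguishes the causal paths along which a sensitive attribute may legitimately or illegitimately influence the outcome.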

The Forbes article quotes Kriti Sharma, a consultant in artificial intelligence with Sage, the British enterprise software company, as follows: “Understanding the risk of bias in AI is not a problem that technologists can solve in a vacuum. We need collaboration between experts in anthropology, law, policy makers, business leaders to address the questions emerging technology will continue to ask of us. It is exciting to see increased academic research activity in AI fairness and accountability over the last 18 months, but in truth we aren’t seeing enough business leaders, companies applying AI, those who will eventually make AI mainstream in every aspect of our lives, take the same level of responsibility to create unbiased AI.”

News and SIGAI Webinar

News from USACM

Next week the USACM Council will be holding its annual in-person meeting in Washington, beginning with a reception on Wednesday, March 21st, from 5 to 7 p.m. at the Georgetown home of Law Committee Chair Andy Grosso. We cordially invite DC-area USACM members to join us. If you plan to attend, please RSVP to Adam Eisgrau <eisgrau@HQ.ACM.ORG>, who will provide further details.

Statement of the European Group on Ethics in Science and New Technologies on “Artificial Intelligence, Robotics and ‘Autonomous’ Systems,” published March 9:
http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
The statement calls for the EC to “launch a process that paves the way towards a common, internationally recognized ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems.”

President Donald Trump today tapped Obama-era deputy U.S. CTO Ed Felten to serve on the Privacy and Civil Liberties Oversight Board (https://www.pclob.gov/).

ACM SIGAI Learning Webinar “Advances in Socio-Behavioral Computing”

This live presentation was given on Thursday, March 15, by Tomek Strzalkowski, Director of the Institute for Informatics, Logics, and Security Studies and Professor at SUNY Albany. Plamen Petrov, Director of Cognitive Technology at KPMG LLP and Industry Liaison Officer of ACM SIGAI, and Rose Paradis, Data Scientist at Leidos Health and Life Sciences and SIGAI Secretary/Treasurer, moderated the question-and-answer session.

Slides are available here.

This talk presented ongoing research on computational modeling and understanding of social, behavioral, and cultural phenomena in multi-party interactions. The researchers discussed how various linguistic cues reveal the social dynamics of group interactions, based on a series of experiments conducted in virtual online chat rooms, and then showed that these dynamics generalize to other forms of communication, including traditional face-to-face discourse as well as large-scale online interaction via social media. They also showed how language compensates for the reduced-cue environment in which online interactions take place.

They described a two-tier analytic approach for detecting and classifying certain sociolinguistic behaviors exhibited by discourse participants, including topic control, task control, disagreement, and involvement, which serve as intermediate models from whose presence higher-level social roles and states, such as leadership and group cohesion, may be inferred. The initial phase of the work used a system of sociolinguistic tools called DSARMD (Detecting Social Actions and Roles in Multiparty Dialogue).

Several extensions of the basic DSARMD model move beyond recognition and understanding of social dynamics and attempt to quantify and measure the effects that sociolinguistic behaviors by individuals and groups have on other discourse participants. Potentially, autonomous artificial agents could be constructed that are capable of exerting influence and manipulating human behavior in certain situations. Such extended capabilities could be deployed to increase the accuracy of predicting online information cascades and persuasion campaigns, and even to defend against certain forms of social engineering attacks.

The model and tools presented in the Webinar are interesting to consider in the detection and assessment of algorithmic bias.

AAAI-18

The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) was held February 2–7, 2018, at the Hilton New Orleans Riverside. The AAAI/ACM Conference on AI, Ethics, and Society (AIES) was held at the beginning of AAAI-18. Developers and participants included members of SIGAI and USACM.

The AIES conference description follows: “As AI is becoming more pervasive in our life, its impact on society is more significant and concerns and issues are raised regarding aspects such as value alignment, data handling and bias, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as ethics, philosophy, economics, sociology, psychology, law, history, and politics. In order to address these issues in a scientific context, AAAI and ACM have joined forces to start this new conference.”

The full schedule for the AIES 2018 Conference is available at www.aies-conference.com. A panel relevant to our policy blog discussions, “Prioritizing Ethical Considerations in Intelligent and Autonomous Systems – Who Sets the Standards?”, was designed by our IEEE/ACM committee and will be covered in a future post.

Educational Policy for AI and an Uncertain Labor Market

In the next few blog posts, we will present information and generate discussion on policy issues at the intersection of AI, the future of the workforce, and educational systems. Because AI technology and applications are developing at such a rapid pace, current policies will likely be unable to address workforce needs sufficiently even in 2024 — the time frame for today’s middle school students to prepare for low-skill jobs and for beginning college students to prepare for higher-skilled work. Transparency in educational policies requires goal setting based on better data and insights into emerging technologies, likely changes in the labor market, and corresponding challenges to our educational systems. The topics and resources below will be the focus of future AI Policy posts.

Technology

IBM’s Jim Spohrer has an outstanding set of slides, “A Look Toward the Future,” incorporating his rich experience and current work on the anticipated impacts of new technology, with milestones every ten years through 2045. Radical developments in technology would challenge public policy in ways that are difficult to imagine, but current policymakers and the AI community need to try. Currently, AI systems exceed human capabilities in calculation and game playing, and approach human-level performance in data-driven speech and image recognition and in driverless vehicles. By 2024, large advances are likely in video understanding, episodic memory, and reasoning.

The roles of future workers will involve increasing collaboration with AI systems in the government and public sector, particularly with autonomous systems but also in the traditional areas of healthcare and education. Advances in human-technology collaboration also raise issues relevant to public policy, including privacy and algorithmic transparency. The increasing mix of AI with humans in ubiquitous public and private systems puts a new emphasis on education for understanding and anticipating challenges in communication and collaboration.

Workforce

Patterns for the future workforce in the age of autonomous systems and cognitive assistance are emerging. Please take a look at Andrew McAfee’s presentation at the recent Computing Research Summit. Also see the latest McKinsey report, Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation. Among other things, this quote from page 20 catches attention: “Automation represents both hope and challenge. The global economy needs the boost to productivity and growth that it will bring, especially at a time when aging populations are acting as a drag on GDP growth. Machines can take on work that is routine, dangerous, or dirty, and may allow us all to use our intrinsically human talents more fully. But to capture these benefits, societies will need to prepare for complex workforce transitions ahead. For policy makers, business leaders, and individual workers the world over, the task at hand is to prepare for a more automated future by emphasizing new skills, scaling up training, especially for midcareer workers, and ensuring robust economic growth.”

Education for the Future

An article in Education Week, “The Future of Work Is Uncertain, Schools Should Worry Now,” addresses the disruption of the labor market by automation and artificial intelligence and what K-12 educators and policymakers need to know. A recent report by the Bureau of Labor Statistics, “STEM Occupations: Past, Present, And Future,” is consistent with the idea that, even in STEM professions, workforce needs will shift away from programming and toward collaborating with cognitive-assistance systems and working in human-computer teams. Demand for STEM professionals will be for verifying, interpreting, and acting on machine outputs; designing and assembling larger systems with robotic and cognitive components; and dealing with ethics issues such as bias in systems and algorithmic transparency.

Recent and Current Events: CRA and IEEE

December is a busy month for AI Policy activities. This blog post is a summary of the important topics in which SIGAI members are involved. Subsequent Policy blog posts will cover these in more detail.  Meanwhile, we encourage you to read the information in this post and participate in the IEEE Standards Association December 18th online event on Policy for Artificial Intelligence.

Computing Research Association December 12, 2017
Summit on Technology and Jobs

The summit co-sponsors included ACM and ACM SIGAI. The overview is as follows:
“The goal of the summit was to put the issue of technology and jobs on the national agenda in an informed and deliberate manner. The summit brought together leading technologists, economists, and policy experts who offered their views on where technology is headed and what its impact may be, and on policy issues raised by these projections and possible policy responses. The summit was hosted by the Computing Research Association, as part of its mission to engage the computing research community to provide trusted, non-partisan input to policy thinkers and makers.”

I attended and will be writing about this important issue in the January 1 post. Please look at the livestream of the sessions at
https://livestream.com/accounts/11031579/events/7936961/videos/167138978
https://livestream.com/accounts/11031579/events/7936961/videos/167149704
https://livestream.com/accounts/11031579/events/7936961/videos/167155909

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

As reported in previous posts, members of SIGAI and USACM have been working closely with IEEE colleagues on ethics and policy issues.

The Global Initiative was launched in April of 2016 to move beyond the paranoia and the uncritical admiration regarding autonomous and intelligent technologies and to illustrate that aligning technology development and use with ethical values will help advance innovation while diminishing fear in the process. The goal of The IEEE Global Initiative is “to incorporate ethical aspects of human well-being that may not automatically be considered in the current design and manufacture of A/IS technologies and to reframe the notion of success so human progress can include the intentional prioritization of individual, community, and societal ethical values.”

Its mission statement is “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”

Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS) encourages technologists to prioritize ethical considerations in the creation of A/IS systems. EADv2 is being released as a Request For Input.  Details on how to submit public comments are available via The Initiative’s Submission Guidelines.

Download here: EADv2

Policy for Artificial Intelligence: The Power of Imaginaries

IEEE Standards Association (IEEE-SA) will present the third in a series of three free online events focused on Policy for Artificial Intelligence on December 18, 2017, at 12:00 p.m. EST.

Policy for Artificial Intelligence: The Power of Imaginaries will feature Konstantinos Karachalios (Managing Director, IEEE-SA; Member of IEEE Management Council); Nicolas Miailhe (Co-Founder and President, The Future Society; Senior Visiting Fellow, Program on Science, Technology and Society, Harvard Kennedy School; member of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems); and Cyrus Hodes, Director of the AI Initiative with The Future Society at Harvard Kennedy School. John C. Havens, Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, will moderate.

IEEE-SA: “Imaginaries are ‘collectively held, institutionally stabilized, and publicly performed visions of a desirable future, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’ (Jasanoff & Kim, from Dreamscapes of Modernity). If we want to have a positive future in regards to AI, we have to critically reflect upon our current imaginary in order to ‘imagine’ a new one, and the policy and principles we need to attain it.”
REGISTER TODAY