GenAI

(Note: This blog post was not created by a GenAI tool. A human brain gathered, organized, and summarized text from several sources to create the blog content.)

The uses of Generative AI (GenAI) systems — including fully automated ones — are raising red flags throughout the business, academic, and legal communities. The ACM Technology Policy Council, US Technology Policy Committee, and Europe Technology Policy Committee are on record with statements and principles addressing these technologies and associated issues.

Principles for the Development, Deployment, and Use of Generative AI Technologies (June 27, 2023)

Generative Artificial Intelligence (GenAI) is a broad term used to describe computing techniques and tools that can be used to create new content including text, speech and audio, images and video, computer code, and other digital artifacts. While such systems offer tremendous opportunities for benefits to society, they also pose very significant risks. The increasing power of GenAI systems, the speed of their evolution, the breadth of their application, and their potential to cause significant or even catastrophic harm mean that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.

This statement puts forward principles and recommendations for best practices in these and related areas, based on a technical understanding of GenAI systems. The first four principles address issues regarding limits of use, ownership, personal data control, and correctability. The next four were derived and adapted from the joint ACM Statement on Principles for Responsible Algorithmic Systems released in October 2022; they pertain to transparency, auditability and contestability, limiting environmental impacts, and security and privacy. This statement also reaffirms and includes five principles from the joint statement as originally formulated, and it has been informed by the January 2023 ACM TechBrief: Safer Algorithmic Systems. The instrumental principles, consistent with the ACM Code of Ethics, are intended to foster fair, accurate, and beneficial decision-making concerning generative and all other AI technologies.

The first set of generative AI advances rests on very large AI models trained on extremely large corpora of data. Text-oriented examples include BLOOM, Chinchilla, GPT-4, LaMDA, and OPT, as well as conversation-oriented models such as Bard, ChatGPT, and others. This is a rapidly evolving area, so this list of examples is by no means exhaustive. The principles advanced in this document are also certain to evolve in response to changing circumstances, technological capabilities, and societal norms.

Generative AI models and tools offer significant new opportunities for enhancing numerous online experiences and services, automating tasks normally done by humans, and assisting and enhancing human creativity. From another perspective, such models and tools also have raised significant concerns about multiple aspects of information and its use, including accuracy, disinformation, deception, data collection, ownership, attribution, accountability, transparency, bias, user control, confidentiality, privacy, and security. GenAI also raises important questions, including many about the replacement of human labor and jobs by AI-based machines and automation.

ACM TechBrief on GenAI (Summer 2023 | Issue 8)

This TechBrief focuses on the rapid commercialization of GenAI, which poses multiple large-scale risks to individuals, society, and the planet and requires a rapid, internationally coordinated response to mitigate. The TechBrief's conclusions for AI policy include end-to-end governance approaches that address risks "by design" and regulate at all stages of the design-to-deployment life cycle of AI products; governance mechanisms for GenAI technologies that address the entirety of their complex supply chains; and controls on actors that are proportionate to the scope and scale of the risks their products pose.

Development and Use of Systems to Detect Generative AI Content (under development)

The dramatic increase in the availability, proliferation, and use of GenAI technology in all sectors of society has created concomitant growing demand for systems that can reliably detect when a document, image, or audio file contains information produced in whole or in part by a generative AI system. Specifically, for example,

● educational institutions want systems that can reliably detect when college applications and student assignments were created with the assistance of generative AI systems;

● employers want systems that can detect the use of generative AI in job applications;

● media companies want systems that can distinguish human comments from responses generated by chatbots; and

● government agencies need systems that can distinguish letters and comments written by humans from those that were algorithmically generated.

Regardless of the demand, such systems are currently not reliably accurate or fair. No presently available detection technology is sufficiently dependable for exclusive support of critical, potentially life- and career-altering decisions. Accordingly, while AI detection systems may provide useful preliminary assessments, their outputs should not be accepted as proof of AI-generated content.

For additional resources, contact the ACM Technology Policy Office
1701 Pennsylvania Ave NW, Suite 200 Washington, DC 20006
+1 202.580.6555 acmpo@acm.org www.acm.org/publicpolicy

AI Policy Matters

As SIGAI Public Policy Officer I have developed links with other policy groups, particularly the ACM US Technology Policy Committee (USTPC). AI accounts for an expanding share of the technology policy arena, and as the new Chair of USTPC I plan to report on current resources and issues regularly through the AI Matters blog.

ACM and its US Technology Policy Committee are non-profit, non-lobbying, and entirely apolitical. The mission is simply to help policymakers and their staff, the science community, and the public understand all forms of computing technology so they can make technically informed decisions and recommendations. A short list of recent USTPC policy products on artificial intelligence includes our latest statements on Generative AI and cybersecurity.

Another ACM policy resource is the TechBrief series of short technical bulletins that present scientifically grounded perspectives on the impact of specific developments or applications of technology. Designed to complement ACM’s activities in the policy arena, TechBriefs aim primarily to inform rather than to advocate for specific policies. AI topics in recent and upcoming TechBriefs include AI and trust, AI and media disinformation, smart cities, safer AI systems, and generative AI.

Future AI Matters blog posts will focus on specific AI public policy projects and resources, and we look forward to blog discussions on these important topics. USTPC always seeks participation from the experts at SIGAI to help identify emerging issues, write policy statements, and present at hearings.

I welcome your ideas in messages to medsker@acm.org and participation in the blog discussions.

Recent and Upcoming Events

Brookings Webinar: Should the Government Play a Role in Reducing Algorithmic Bias?

On March 12, the Center for Technology Innovation at Brookings hosted a webinar on the role of government in identifying and reducing algorithmic biases (see video). Speakers discussed what is needed to prioritize fairness in machine-learning models and how to weed out artificial intelligence models that perpetuate discrimination. Questions included
How do the European Union, U.K., and U.S. differ in their approaches to bias and discrimination?
What lessons can they learn from each other?
Should approaches to AI bias be universally applied to ensure civil and human rights for protected groups?

The organizers observe that “policymakers and researchers throughout the world are considering strategies for reducing biased decisions made by machine-learning algorithms. To date, the U.K. has been the most forward in outlining a role for government in identifying and mitigating biases and their unintended consequences, especially decisions that impact marginalized populations. In the U.S., legislators and policymakers have focused on algorithmic accountability and the explanation of models to ensure fairness in predictive decision making.”

The moderator was Alex Engler, Rubenstein Fellow – Governance Studies.
Speakers and discussants were
Lara Macdonald and Ghazi Ahamat, Senior Policy Advisors – UK Centre for Data Ethics and Innovation;
Nicol Turner Lee, Brookings Senior Fellow – Governance Studies  and Director, Center for Technology Innovation; and
Adrian Weller, Programme Director for AI at the Alan Turing Institute

Algo2021 Conference to Be Held on April 29, 2021

University College London will present The Algo2021 Conference: Ecosystems of Excellence & Trust online, building upon the success of its inaugural 2020 conference. The conference will bring together all major stakeholders, including academia, the civil service, and industry, by showcasing cutting-edge developments, contemporary debates, and the perspectives of major players. The 2021 conference theme reflects the desire to promote public-good innovation. Sessions and topics include the following:
Machine Learning in Healthcare,
Trust and the Human-on-the-Loop,
Artificial Intelligence and Predictive Policing,
AI and Innovation in Healthcare Technologies,
AI in Learning and Education Technologies,
Building Communities of Excellence in AI, and
Human-AI and Ethics Issues.

Politico’s AI Online Summit on May 31, 2021

The 2021 Summit plans to dissect Europe’s AI legislative package, along with the impact of geopolitical tensions and tech regulation on issues such as data and privacy. The summit will convene top EU and national decision makers, opinion formers, and tech industry leaders.

“The European Commission will soon introduce legislation to govern the use of AI, acting on its aim to draw up rules for the technology sector over the next five years and on its legacy as the world’s leading regulator of digital privacy.  At the heart of the issue is the will to balance the need for rules with the desire to boost innovation, allowing the old continent to assert its digital sovereignty. On where the needle should be, opinions are divided – and the publication of the Commission’s draft proposal will not be the end of the discussion.”
Issues to be addressed are the following:
How rules may fit broader plans to build European tech platforms that compete globally with other regions;
How new requirements on algorithmic transparency might be viewed by regular people; and
What kind of implementation efforts will be required for startups, mid-size companies and big tech.

The Politico 4th edition of the AI Summit will address important questions in panel discussions, exclusive interviews, and interactive roundtable discussions. Top regulators, tech leaders, startups, and civil society stakeholders will examine the EU’s legislative framework on AI and data flow while tackling uncomfortable questions about people’s fundamental rights, misinformation, and international cooperation that will determine the future of AI in Europe and worldwide.

AI Future

HCAI for Policymakers

“Human-Centered AI” by Ben Shneiderman was recently published in Issues in Science and Technology 37, no. 2 (Winter 2021): 56–61. A timely observation is that artificial intelligence is clearly expanding to include human-centered issues, from ethics, explainability, and trust to applications such as user interfaces for self-driving cars. The importance of this fresh HCAI approach, which can enable more widespread use of AI in safe ways that promote human control, is underscored by the article’s appearance in the NAS journal Issues in Science and Technology. An implication of the article is that computer scientists should build devices to enhance and empower, not replace, humans.

HCAI as described by Prof. Shneiderman represents a radically different approach to systems design by imagining a different role for machines. Envisioning AI systems as comprising machines and people working together is a much different starting point than the assumption and goal of autonomous AI. In fact, a design process with this kind of forethought might even lead to a product not being developed, thus preventing future harm. One of the many interesting points in the NAS Issues article is the observation about the philosophical clash between two approaches to gaining knowledge about the world—Aristotle’s rationalism and Leonardo da Vinci’s empiricism—and the connection with the current perspective of AI developers: “The rationalist viewpoint, however, is dominant in the AI community. It leads researchers and developers to emphasize data-driven solutions based on algorithms.” Data science unfortunately often focuses on the rationalist approach without including the contributions from, and protection of, the human experience.

From the NAS article, HCAI is aligned with “the rise of the concept of design thinking, an approach to innovation that begins with empathy for users and pushes forward with humility about the limits of machines and people. Empathy enables designers to be sensitive to the confusion and frustration that users might have and the dangers to people when AI systems fail. Humility leads designers to recognize the inevitability of failure and inspires them to be always on the lookout for what wrongs are preventable.”

Policymakers need to “understand HCAI’s promise not only for our machines but for our lives. A good starting place is an appreciation of the two competing philosophies that have shaped the development of AI, and what those imply for the design of new technologies … comprehending these competing imperatives can provide a foundation for navigating the vast thicket of ethical dilemmas now arising in the machine-learning space.” An HCAI approach can incorporate creativity and innovation into AI systems by understanding and incorporating human insights about complexity into the design of AI systems and using machines to prepare data for taking advantage of human insight and experience. For many more details and enjoyable reading, go to https://issues.org/human-centered-ai/.

NSCAI Final Report

The National Security Commission on Artificial Intelligence (NSCAI) issued a final report. This bipartisan commission of 15 technologists, national security professionals, business executives, and academic leaders delivered an “uncomfortable message: America is not prepared to defend or compete in the AI era.” They discuss a “reality that demands comprehensive, whole-of-nation action.” The final report presents a strategy to “defend against AI threats, responsibly employ AI for national security, and win the broader technology competition for the sake of our prosperity, security, and welfare.”

The mandate of the NSCAI is to make recommendations to the President and Congress to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The 16 chapters of the Main Report contain many conclusions and recommendations, including “Blueprints for Action” with detailed steps for implementing them.

Data for AI: Interview with Eric Daimler

I recently spoke with Dr. Eric Daimler about how we can build on the framework he and his colleagues established during his tenure as a contributor to issues of AI policy in the White House during the Obama administration. Eric is the CEO of the MIT-spinout Conexus.com and holds a PhD in Computer Science from Carnegie Mellon University. Here are the interesting results of my interview with him. His ideas are important as part of the basis for ACM SIGAI Public Policy recommendations.

LRM: What are the main ways we should be addressing this issue of data for AI? 

EAD: To me there is one big re-framing from which we can approach this collection of issues, prioritizing data interoperability within a larger frame of AI as a total system. In the strict definition of AI, it is a learning algorithm. Most people know of subsets such as Machine Learning and subsets of that called Deep Learning. That doesn’t help the 99% who are not AI researchers. When I have spoken to non-researchers or even researchers who want to better appreciate the sensibilities of those needing to adopt their technology, I think of AI as the interactions that it has. There is the collection of the data, the transportation of the data, the analysis or planning (the traditional domain in which the definition most strictly fits), and the acting on the conclusions. That sense, plan, act framework works pretty well for most people.
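To make the sense, plan, act framing concrete, here is a minimal sketch of an AI system viewed as a whole pipeline rather than just a learning algorithm; the component functions and values are invented placeholders, not part of the interview.

```python
# Minimal sketch of the sense / plan / act framing: the "AI" (planning) step is only
# one stage in a larger system of data collection, transport, and action.
# All functions and values here are illustrative placeholders.

def sense() -> dict:
    """Collect and transport data (sensors, logs, user input, ...)."""
    return {"temperature_c": 31.5, "occupancy": 12}

def plan(observation: dict) -> str:
    """The 'learning algorithm' part: turn observations into a decision."""
    return "cool" if observation["temperature_c"] > 25 else "idle"

def act(decision: str) -> None:
    """Act on the conclusion in the real world."""
    print(f"actuator command: {decision}")

act(plan(sense()))  # the whole loop, not just the model, is the 'AI system'
```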

LRM: Before you explain just how we can do that, can you go ahead and define some of your important terms for our readers?

EAD: AI is often described as the economic engine of the future. But to realize that growth, we must think beyond AI to the whole system of data, and the rules and context that surround it: our data infrastructure (DI). Our DI supports not only our AI technology, but also our technical leadership more generally; it underpins COVID reporting, airline ticket bookings, social networking, and most if not all activity on the internet. From the unsuccessful launch of healthcare.gov, to the recent failure of Haven, to the months-long hack into hundreds of government databases, we have seen the consequences faulty DI can have. More data does not lead to better outcomes; improved DI does. 

Fortunately, we have the technology and foresight to prevent future disasters, if we act now. Because AI is fundamentally limited by the data that feeds it, to win the AI race, we must build the best DI. The new presidential administration can play a helpful role here, by defining standards and funding research into data technologies. Attention to the need for better DI will speed responsiveness to future crises (consider COVID data delays) and establish global technology leadership via standards and commerce. Investing in more robust DI will ensure that anomalies, like ones that would have helped us identify the Russia hack much sooner, will be evident, so we can prevent future malfeasance by foreign actors. The US needs to build better data infrastructure to remain competitive in AI.

LRM: So how might we go about prioritizing data interoperability?

EAD: In 2016, the Department of Commerce (DOC) discovered that on average, it took six months to onboard new suppliers to a midsize trucking company—because of issues with data interoperability. The entire American economy would benefit from encouraging more companies to establish semantic standards, internally and between companies, so that data can speak to other data. According to a DOC report in early 2020, the technology now exists for mismatched data to communicate more easily and data integrity to be guaranteed, thanks to a new area of math called Applied Category Theory (ACT). This should be made widely available.
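As a toy illustration of what "data speaking to other data" can mean in practice (this is a simple dictionary-based field mapping, not the Applied Category Theory machinery Daimler refers to, and the field names, units, and values are invented for this example):

```python
# Toy sketch of semantic interoperability: two suppliers describe the same shipment
# with different field names and units; an explicit, shared mapping into a canonical
# schema lets their data "speak to each other". All names here are illustrative.

supplier_a_record = {"shipment_id": "A-1001", "weight_lbs": 2200, "dest": "Pittsburgh"}
supplier_b_record = {"id": "B-77", "mass_kg": 998.0, "destination_city": "Pittsburgh"}

# Per-supplier mappings to a shared target schema (renames plus unit conversions).
TO_CANONICAL = {
    "supplier_a": {
        "shipment_id": ("shipment_id", lambda v: v),
        "weight_lbs": ("weight_kg", lambda v: v * 0.45359237),
        "dest": ("destination", lambda v: v),
    },
    "supplier_b": {
        "id": ("shipment_id", lambda v: v),
        "mass_kg": ("weight_kg", lambda v: v),
        "destination_city": ("destination", lambda v: v),
    },
}

def to_canonical(record: dict, supplier: str) -> dict:
    """Translate a supplier-specific record into the shared canonical schema."""
    mapping = TO_CANONICAL[supplier]
    out = {}
    for field, value in record.items():
        target_name, convert = mapping[field]
        out[target_name] = convert(value)
    return out

print(to_canonical(supplier_a_record, "supplier_a"))
print(to_canonical(supplier_b_record, "supplier_b"))
```

The point of the sketch is only that the mapping is explicit and shared: onboarding a new supplier becomes writing one mapping rather than months of ad hoc reconciliation.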

LRM: And what about enforcing data provenance? 

EAD: As data is transformed across platforms—including trendy cloud migrations—its lineage often gets lost. A decision denying your small business loan can and should be traceable back to the precise data the loan officer had at that time. There are traceability laws on the books, but they have been rarely enforced because up until now, the technology hasn’t been available to comply. That’s no longer an excuse. The fidelity of data and the models on top of them should be proven—down to the level of math—to have maintained integrity.
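A minimal sketch of what enforced provenance could look like, assuming a hypothetical lineage record attached to every transformation; the steps, field names, and thresholds are invented for illustration and do not describe any specific system:

```python
# Minimal sketch: carry a lineage log alongside data so a downstream decision
# (e.g., a loan denial) can be traced back to the exact inputs and transformations.
# All names, steps, and thresholds here are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(data) -> str:
    """Stable hash of the data at this point in the pipeline."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()[:12]

def apply_step(data, lineage, step_name, transform):
    """Apply a transformation and append a provenance record for it."""
    result = transform(data)
    lineage.append({
        "step": step_name,
        "at": datetime.now(timezone.utc).isoformat(),
        "input_hash": fingerprint(data),
        "output_hash": fingerprint(result),
    })
    return result, lineage

applicant = {"income": 42000, "credit_score": 640}
lineage = [{"step": "ingest", "at": datetime.now(timezone.utc).isoformat(),
            "output_hash": fingerprint(applicant)}]

applicant, lineage = apply_step(applicant, lineage, "normalize_income",
                                lambda d: {**d, "income": d["income"] / 1000})
applicant, lineage = apply_step(applicant, lineage, "score_model_v1",
                                lambda d: {**d, "approved": d["credit_score"] > 660})

print(json.dumps(lineage, indent=2))  # the audit trail behind the decision
```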

LRM: Speaking more generally, how can we start to lay the groundwork to reap the benefits of these advancements in data infrastructure? 

EAD: We need to formalize. When we built 20th century assembly lines, we established in advance where and how screws would be made; we did not ask the village blacksmith to fashion custom screws for every home repair. With AI, once we know what we want to have automated (and there are good reasons not to automate everything!), we should then define in advance how we want it to behave. As you read this, 18 million programmers are already formalizing rules across every aspect of technology. As an automated car approaches a crosswalk, should it slow down every time, or only if it senses a pedestrian? Questions like this one, across the whole economy, are best answered in a uniform way across manufacturers, based on standardized, formal, and socially accepted definitions of risk.
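As a toy example of the kind of advance formalization described here, the crosswalk question could be written down as an explicit, reviewable rule rather than left implicit in each manufacturer's code; the thresholds and types below are invented for illustration, not standards drawn from the interview.

```python
# Toy sketch: formalizing a behavior rule in advance so it can be standardized,
# reviewed, and tested across manufacturers. Thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CrosswalkObservation:
    pedestrian_detected: bool
    sensor_confidence: float  # 0.0 to 1.0
    speed_kph: float

CROSSWALK_SPEED_LIMIT_KPH = 20.0   # a standardized, socially agreed value (assumed)
MIN_CONFIDENCE_TO_TRUST = 0.8      # below this, behave conservatively

def crosswalk_target_speed(obs: CrosswalkObservation) -> float:
    """Return the target speed when approaching a crosswalk."""
    if obs.pedestrian_detected or obs.sensor_confidence < MIN_CONFIDENCE_TO_TRUST:
        return 0.0  # stop: a pedestrian is present, or the sensors cannot be trusted
    return min(obs.speed_kph, CROSSWALK_SPEED_LIMIT_KPH)  # always slow down

print(crosswalk_target_speed(CrosswalkObservation(False, 0.95, 50.0)))  # 20.0
print(crosswalk_target_speed(CrosswalkObservation(True, 0.95, 50.0)))   # 0.0
```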

LRM: In previous posts, I have discussed roles and responsibilities for change in the use of AI. Government regulation is of course important, but what roles do you see for AI tech companies, professional societies, and other entities in making the changes you recommend for DI and other aspects of data for AI?

EAD: What is different this time is the abruptness of change. When automation technologies work, they can be wildly disruptive. Sometimes this is very healthy (see: Schumpeter). I find that the “go fast and…” framework has its place, but in AI it can be destructive and invite resistance. That is what we have to watch out for. Only with responsible, coordinated action do we encourage adoption of these fantastic and magical technologies. Automation in software can be powerful. These processes need not be linked into sequences just because they can be. That is, just because some system can be automated does not mean that it should. Too often there is absolutism in AI deployments when what is called for in these discussions is nuance and context. For example, in digital advertising my concerns are around privacy, not physical safety. When I am subject to a plane’s autopilot, my priorities are reversed.

With my work in the US Federal Government, my bias remains against regulation as a first step. Shortly after my time with the Obama White House, I was grateful to participate with a diverse group for a couple of days at the Halcyon House in Washington, D.C. We created some principles for deploying AI to maximize adoption. We can build on these and rally around a sort of LEED-like standard for AI deployment.

Dr. Eric Daimler is CEO and Founder of Conexus and a board member of Petuum and WelWaze. He is a leading authority in robotics and artificial intelligence, with over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. Under the Obama Administration he served as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics. Eric has incubated, built, and led several technology companies recognized as pioneers in fields ranging from software systems to statistical arbitrage. He currently serves on the boards of WelWaze and Petuum, the largest AI investment by SoftBank’s Vision Fund. His newest venture, Conexus, is a groundbreaking solution for what is perhaps today’s biggest information technology problem: data deluge.

Eric’s extensive career across business, academia, and policy gives him a rare perspective on the next generation of AI. He believes information technology can dramatically improve our world, but that it demands our engagement: neither a utopia nor a dystopia is inevitable, and what matters is how we shape, and react to, its development. As a successful entrepreneur, Eric looks toward the next generation of AI as a system that creates a multi-tiered platform for fueling the development and adoption of emerging technology in industries that have traditionally been slow to adapt. As founder and CEO of Conexus, Eric is leading CQL, a patent-pending platform founded upon category theory, a revolution in mathematics, to help companies manage the overwhelming challenge of data integration and migration. A frequent speaker, lecturer, and commentator, he works to empower communities and citizens to leverage robotics and AI to build a more sustainable, secure, and prosperous future. His academic research has been at the intersection of AI, computational linguistics, and network science (graph theory), and has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon’s School of Computer Science, where he founded the university’s Entrepreneurial Management program and helped launch Carnegie Mellon’s Silicon Valley campus. He studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science.

Face Recognition and Bad Science

FR and Bad Science: Should some research not be done?

Facial recognition (FR) issues continue to appear in the news, as well as in scholarly journal articles, as FR systems are banned in some places and some research is shown to be bad science. AI researchers who try to associate facial-technology output with human characteristics are sometimes referred to as machine-assisted phrenologists. Problems with FR research have been demonstrated in machine learning work such as Steed and Caliskan's “A set of distinct facial traits learned by machines is not predictive of appearance bias in the wild.” Meanwhile, many examples of harmful products and misuses have been identified in areas such as criminality prediction and video interviewing, among many others. Some communities have considered bans on FR products.

Yet, journals and conferences continue to publish bad science in facial recognition.

Some people say the choice of research topics is up to the researchers, and that the public can choose not to use the products of their research. However, areas such as genetics, biomedicine, and cybersecurity R&D do have limits. Our professional computing societies can choose to disapprove of research areas that cause harm. Means of mitigating and preventing irresponsible research from entering the public space include:
– peer pressure on academic and corporate research and development;
– public policy, through laws and regulations;
– corporate and academic self-interest, since organizations’ bottom lines can suffer from bad publicity; and
– vigilance by journals about publishing papers that promulgate the misuse of FR.

A recent article by Matthew Hutson in The New Yorker discusses “Who should stop unethical AI.” He remarks that “Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science. Funding agencies might inquire about a project’s potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn’t rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the “possible long-range effects of applying knowledge gained in the research,” lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven’t traditionally been called upon to consider how a new invention might rend the social fabric.”

OSTP News

OSTP Launches National AI Initiative Office

The White House Office of Science and Technology Policy announced the establishment of the National Artificial Intelligence Initiative Office. As outlined in legislation, this Office will serve as the point of contact on Federal AI activities across the interagency, as well as with the private sector, academia, and other stakeholders. The Select Committee on Artificial Intelligence will oversee the National AI Initiative Office, and Dr. Lynne E. Parker, Deputy United States Chief Technology Officer, will serve as the Founding Director. As explained in Inside Tech Media, the newly enacted National Defense Authorization Act contains important provisions regarding the development and deployment of AI technologies, many of which build upon previous legislation introduced in the 116th Congress, including the establishment of the National AI Initiative Office.

White House Science Team

On January 15, key members of President-elect Biden’s science team were announced. The press release says “These diverse, deeply experienced scientists and experts will play a key role in shaping America’s future — and will prepare us to lead the world in the 21st century and beyond.” President-elect Joe Biden said, “Science will always be at the forefront of my administration — and these world-renowned scientists will ensure everything we do is grounded in science, facts, and the truth. Their trusted guidance will be essential as we come together to end this pandemic, bring our economy back, and pursue new breakthroughs to improve the quality of life of all Americans.”
He will nominate Dr. Eric Lander as Director of the OSTP and to serve as the Presidential Science Advisor. “The president-elect is elevating the role of science within the White House, including by designating the Presidential Science Advisor as a member of the Cabinet for the first time in history.”
Other key members are
Alondra Nelson, Ph.D., OSTP Deputy Director for Science and Society
Frances H. Arnold, Ph.D., Co-Chair of the President’s Council of Advisors on Science and Technology
Maria Zuber, Ph.D., Co-Chair of the President’s Council of Advisors on Science and Technology
Francis S. Collins, M.D., Ph.D., Director of the National Institutes of Health
Kei Koizumi, OSTP Chief of Staff
Narda Jones, OSTP Legislative Affairs Director

Policy-Related Article from AI and Ethics

Stix, C., Maas, M.M. Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy. AI Ethics (2021). https://doi.org/10.1007/s43681-020-00037-w

AI Policy Nuggets II

What Can Biden Do for Science?

A Science|Business Webcast presented a forum of public and private sector leaders discussing ideas about the need for the president-elect to convene world leaders to re-establish ‘rules of engagement’ on science.

Brookings Webinar on the Future of AI

“On November 17, 2020, the Brookings Institution Center for Technology Innovation hosted a webinar to discuss the future of AI, how it is being deployed, and the policy and legal issues being raised. Speakers explored ways to mitigate possible concerns and how to move forward safely, securely, and in a manner consistent with human values.”

Section 230 Update

Politico reports that “Trump for months has urged Congress to revoke industry legal shield Section 230, while its staunchest critics largely pushed to revamp it instead. But the president’s more drastic call for a total repeal — echoed by Biden for very different reasons — is gaining traction among Republicans in Washington. The NYT reported Thursday that White House chief of staff Mark Meadows has even offered Trump’s support for a must-pass annual defense spending bill if it includes such a repeal.”

The European AI Policy Conference

AI may be the most important digital innovation technology transforming industries around the world.
“Businesses in Europe are at the forefront of some of the latest advancements in the field, and European universities are home to the greatest concentration of AI researchers in the world. Every week, new case studies emerge showing the potential opportunities that can arise from greater use of the technology.” The European AI Policy Conference brings together leading voices in AI to discuss why European success in AI is important, how the EU compares to other world leaders today, and what steps European policymakers should take to be more competitive in AI. “The European AI Policy Conference is a high-level forum to connect stakeholders working to promote AI in Europe, showcase advances in AI, and promote AI policies supporting its development to EU policymakers and thought leaders.”

Policy Issues from AI and Ethics

The inaugural issue of the new journal AI and Ethics contains several articles relevant to AI and Public Policy.

Jelinek, T., Wallach, W. & Kerimi, D. “Policy brief: the creation of a G20 coordinating committee for the governance of artificial intelligence” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00019-y

This policy brief proposes a Group of Twenty (G20) coordinating committee for the governance of artificial intelligence (CCGAI) to plan and coordinate, on a multilateral level, the mitigation of AI risks. The G20 is the appropriate regime complex for such a metagovernance mechanism, given the involvement of the largest economies and their highest political representatives.

Gambelin, O. “Brave: what it means to be an AI Ethicist” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00020-5

This piece offers a preliminary definition of what it means to be an AI Ethicist, first examining the concept of an ethicist in the context of artificial intelligence, followed by exploring what responsibilities are added to the role in industry specifically, and ending on the fundamental characteristic that underlies it all: bravery.

Smith, P., Smith, L. “Artificial intelligence and disability: too much promise, yet too little substance?” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00004-5

Much has been written about the potential of artificial intelligence (AI) to support, and even transform, the lives of disabled people. Many individuals are benefiting, but what are the true limits of such tools? What are the ethics of allowing AI tools to suggest different courses of action, or aid in decision-making? And does AI offer too much promise for individuals? We draw conclusions as to how AI software and technology might best be developed in the future.

Coeckelbergh, M. “AI for climate: freedom, justice, and other ethical and political challenges” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00007-2

Artificial intelligence can and should help to build a greener, more sustainable world and to deal with climate change, but these opportunities also raise ethical and political issues that need to be addressed. This article discusses these issues, with a focus on problems concerning freedom and justice at a global level, and calls for responsible use of AI for climate in the light of these challenges.

Hickok, M. “Lessons learned from AI ethics principles for future actions” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00008-1

The use of AI systems has become significantly more prevalent in recent years, and concerns about how these systems collect, use, and process big data have also increased. To address these concerns and advocate for ethical and responsible AI development and implementation, NGOs, research centers, private companies, and governmental agencies have published more than 100 sets of AI ethics principles and guidelines. Lessons must be learned from the shortcomings of AI ethics principles to ensure that future investments, collaborations, standards, codes, and legislation reflect the diversity of voices and incorporate the experiences of those who are already impacted by AI.

Fall Nuggets

USTPC Panel on Section 230

On November 18 from 5:00 to 6:30 PM EST, experts from ACM’s US Technology Policy Committee (USTPC) will discuss the legal liability of Internet platforms such as Facebook and Twitter under Section 230 of the Communications Decency Act. The USTPC panelists are Andy Grosso (Moderator), Mark Rasch, Pam Samuelson, Richard M. Sherman, and Danny Weitzner.

Biden and Science

Participants in a Science|Business webcast urged that an international summit “should press leaders of the big industrial nations to open – or re-open – their research systems, while also ensuring that COVID-19 vaccines are freely available to everyone in the world.” Robert-Jan Smits, former director-general of the European Commission’s research and innovation directorate, said such a summit “would really show that senior leaders are turning the page.”

Center for Data Innovation On the EU Data Governance Act

“The European Commission is planning to release its Data Governance Act to facilitate data sharing within the EU. The goal is to increase data sharing among businesses, make more public-sector data available for reuse, and foster data sharing of personal data, including for ‘altruistic’ purposes. While the goals of the act are commendable, many of the specific policies outlined in a draft would create a new data localization requirement, undermine the EU’s commitments to digital free trade, and contradict its open data principles.”

AI Data

Confusion in the popular media about terms such as “algorithm” and about what constitutes AI technology causes critical misunderstandings among the public and policymakers. More importantly, the role of data is often ignored in ethical and operational considerations. Even if AI systems are perfectly built, low-quality and biased data can cause unintentional, and even intentional, hazards.

Language Models and Data

The generative pre-trained transformer GPT-3 is currently in the news. For example, James Vincent writes about GPT-3, which was created by OpenAI, in a July 30, 2020, article in The Verge. Language models, of which GPT-3 is the most prominent current example, raise amplified ethics issues for the products built on them. Inputs to these systems carry all the liabilities already discussed for machine learning and artificial neural network products. The dangers of bias and mistakes are raised in some writings but are likely not a focus for the wide range of enthusiastic product developers building on GPT-3. Language models suggest output sequences of words given an input sequence. Thus, samples of text from social media can be used to produce new text in the same style as the author, and potentially to influence public opinion. Cases have been found of language models promulgating incorrect grammar and misuse of terms learned from poor-quality inputs. An article by David Pereira includes examples of and comments on the use of GPT-3, and “GPT-3: an AI Game-Changer or an Environmental Disaster?” by John Naughton gives examples of and commentary on results from GPT-3.
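As a concrete, hedged illustration of the point that a language model simply suggests output sequences of words given an input sequence, the sketch below uses the openly available GPT-2 model from the Hugging Face transformers library as a stand-in for GPT-3, whose weights are not public; the model name, prompt, and sampling parameters are illustrative choices, not part of the original post.

```python
# Minimal sketch: a language model extends an input sequence with suggested tokens.
# GPT-2 is used here as a small, public stand-in for GPT-3.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small, openly available model

prompt = "Policymakers should understand that language models"
outputs = generator(
    prompt,
    max_length=40,          # total length including the prompt
    num_return_sequences=2, # two alternative continuations
    do_sample=True,         # sample instead of always taking the most likely token
    top_p=0.9,              # nucleus sampling; output quality depends on such choices
)

for out in outputs:
    print(out["generated_text"])
```

The continuations inherit whatever style and errors are present in the training data and the prompt, which is exactly the data-quality concern raised above.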

Data Governance

A possible meta solution for policymakers to keep up with technological advances is discussed by Alex Woodie in “AI Ethics and Data Governance: A Virtuous Cycle.”

He quotes James Cotton, who is the international director of the Data Management Centre of Excellence at Information Builders’ Amsterdam office: “as powerful as the AI technology is, it can’t be implemented in an ethical manner if the underlying data is poorly managed and badly governed. It’s critical to understand the relationship between data governance and AI ethics. One is foundational for the other. You can’t preach being ethical or using data in an ethical way if you don’t know what you have, where it came from, how it’s being used, or what it’s being used for.”