GenAI

(Note: This blog post was not created by a GenAI tool. A human brain gathered, organized, and summarized text from several sources to create the blog content.)

The uses of Generative AI (GenAI) systems — including fully automated ones — are raising red flags throughout the business, academic, and legal communities. The ACM Technology Policy Council, US Technology Policy Committee, and Europe Technology Policy Committee are on record with statements and principles addressing these technologies and associated issues.

Principles for the Development, Deployment, and Use of Generative AI Technologies (June 27, 2023)

Generative Artificial Intelligence (GenAI) is a broad term used to describe computing techniques and tools that can be used to create new content, including text, speech and audio, images and video, computer code, and other digital artifacts. While such systems offer tremendous opportunities to benefit society, they also pose very significant risks. The increasing power of GenAI systems, the speed of their evolution, the breadth of their application, and their potential to cause significant or even catastrophic harm mean that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.

This statement puts forward principles and recommendations for best practices in these and related areas based on a technical understanding of GenAI systems. The first four principles address limits of use, ownership, personal data control, and correctability. Four further principles, pertaining to transparency, auditability and contestability, limiting environmental impacts, and security and privacy, were derived and adapted from the joint ACM Statement on Principles for Responsible Algorithmic Systems released in October 2022. The statement also reaffirms and includes five principles from that joint statement as originally formulated, and it has been informed by the January 2023 ACM TechBrief: Safer Algorithmic Systems. These instrumental principles, consistent with the ACM Code of Ethics, are intended to foster fair, accurate, and beneficial decision-making concerning generative and all other AI technologies.

The first set of generative AI advances rests on very large AI models that are trained on extremely large corpora of data. Text-oriented examples include BLOOM, Chinchilla, GPT-4, LaMDA, and OPT, as well as conversation-oriented models such as Bard and ChatGPT. This is a rapidly evolving area, so this list of examples is by no means exhaustive. The principles advanced in this document are also certain to evolve in response to changing circumstances, technological capabilities, and societal norms.
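To make "generative" concrete, here is a minimal sketch of invoking such a model through the Hugging Face transformers library. It assumes the small open gpt2 checkpoint, chosen only because it is freely downloadable; the far larger models named above are accessed through their own interfaces.

    # A minimal sketch of text generation, assuming the Hugging Face
    # `transformers` library and the small open "gpt2" checkpoint.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # The model continues the prompt with newly generated tokens.
    result = generator("Generative AI systems can", max_new_tokens=30)
    print(result[0]["generated_text"])

The essential point the sketch illustrates is that the output is new content sampled from the model, not text retrieved from its training corpus.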

Generative AI models and tools offer significant new opportunities for enhancing numerous online experiences and services, automating tasks normally done by humans, and assisting and enhancing human creativity. At the same time, such models and tools have raised significant concerns about multiple aspects of information and its use, including accuracy, disinformation, deception, data collection, ownership, attribution, accountability, transparency, bias, user control, confidentiality, privacy, and security. GenAI also raises important questions, many of them about the replacement of human labor and jobs by AI-based machines and automation.

ACM TechBrief on GenAI (Summer 2023 | Issue 8)

This TechBrief focuses on the rapid commercialization of GenAI, which poses multiple large-scale risks to individuals, society, and the planet; mitigating those risks requires a rapid, internationally coordinated response. The TechBrief concludes that AI policy should incorporate end-to-end governance approaches that address risks "by design" and regulate every stage of the design-to-deployment life cycle of AI products; that governance mechanisms for GenAI technologies must address the entirety of their complex supply chains; and that actors should be subject to controls proportionate to the scope and scale of the risks their products pose.

Development and Use of Systems to Detect Generative AI Content (under development)

The dramatic increase in the availability, proliferation, and use of GenAI technology in all sectors of society has created a concomitant and growing demand for systems that can reliably detect when a document, image, or audio file contains information produced in whole or in part by a generative AI system. For example,

● educational institutions want systems that can reliably detect when college applications and student assignments were created with the assistance of generative AI systems;

● employers want systems that can detect the use of generative AI in job applications;

● media companies want systems that can distinguish human comments from responses generated by chatbots; and

● government agencies need systems that can distinguish letters and comments written by humans from responses that were algorithmically generated.

Regardless of the demand, such systems are not currently reliably accurate or fair. No presently available detection technology is dependable enough to serve as the sole basis for critical, potentially life- and career-altering decisions. Accordingly, while AI detection systems may provide useful preliminary assessments, their outputs should not be accepted as proof that content is AI-generated.
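To illustrate why such detectors are fragile, here is a hedged sketch of one common heuristic: scoring a text's perplexity under a language model and flagging unusually predictable text as possibly machine-generated. It assumes the Hugging Face transformers library and the gpt2 checkpoint, and the threshold is purely illustrative, not a validated decision boundary.

    # A sketch of a naive perplexity-based "AI text" detector, assuming the
    # Hugging Face `transformers` library and the "gpt2" checkpoint.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under GPT-2 (lower = more predictable)."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # With labels == input_ids, the model returns the mean
            # cross-entropy loss over the token sequence.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return torch.exp(loss).item()

    THRESHOLD = 40.0  # illustrative only; no fixed threshold is reliable

    def naive_ai_flag(text: str) -> bool:
        """Flags low-perplexity text as 'possibly machine-generated'."""
        return perplexity(text) < THRESHOLD

The sketch also shows why such heuristics fail: paraphrasing, formulaic human writing, or output from a different generator can push scores across any threshold in either direction, which is one reason these outputs should be treated as preliminary signals rather than proof.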

For additional resources, contact the ACM Technology Policy Office
1701 Pennsylvania Ave NW, Suite 200 Washington, DC 20006
+1 202.580.6555 acmpo@acm.org www.acm.org/publicpolicy
