The Alan Turing Institute Public Policy Programme
Among the many complexities of public policy making, the new world of AI and data science demands careful attention to ethics and safety when addressing complex, far-reaching challenges in the public domain. Data-driven and AI systems create opportunities that can produce both good and bad outcomes, so organizations responsible for delivering public services and shaping public policy need intentional processes and designs to keep those systems ethical and safe. A growing body of research develops comprehensive guidelines and techniques to help industry and government teams consider the full range of AI ethics and safety issues in their work. An excellent example is the Public Policy Programme at The Alan Turing Institute, directed by Dr. David Leslie [1]. Their work complements and supplements the Data Ethics Framework [2], a practical tool for use in any project's initiation phase; data ethics and AI ethics regularly overlap.
The Public Policy Programme describes AI Ethics as “a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies. These values, principles, and techniques are intended both to motivate morally acceptable practices and to prescribe the basic duties and obligations necessary to produce ethical, fair, and safe AI applications. The field of AI ethics has largely emerged as a response to the range of individual and societal harms that the misuse, abuse, poor design, or negative unintended consequences of AI systems may cause.”
They cite the following as some of the most consequential potential harms (a simple review-checklist sketch follows the list):
- Bias and Discrimination
- Denial of Individual Autonomy, Recourse, and Rights
- Non-transparent, Unexplainable, or Unjustifiable Outcomes
- Invasions of Privacy
- Isolation and Disintegration of Social Connection
- Unreliable, Unsafe, or Poor-Quality Outcomes
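To make the guide's call for intentional processes concrete, the sketch below encodes these six harm categories as a minimal pre-deployment review checklist. It is purely illustrative and not a tool from the Turing guide; the class, function, and mitigation text are our own hypothetical choices.

```python
from dataclasses import dataclass

# The six harm categories listed above, used as prompts in a
# pre-deployment review. Everything here is an illustrative sketch,
# not a process prescribed by the Turing guide.
HARM_CATEGORIES = [
    "Bias and discrimination",
    "Denial of individual autonomy, recourse, and rights",
    "Non-transparent, unexplainable, or unjustifiable outcomes",
    "Invasions of privacy",
    "Isolation and disintegration of social connection",
    "Unreliable, unsafe, or poor-quality outcomes",
]

@dataclass
class HarmReview:
    """One assessment of a single harm category for a project."""
    category: str
    risk_identified: bool
    mitigation: str = ""  # free-text description of the planned mitigation

def unmitigated_risks(reviews: list[HarmReview]) -> list[str]:
    """Categories flagged as risky but lacking a documented mitigation."""
    return [r.category for r in reviews
            if r.risk_identified and not r.mitigation.strip()]

# Usage: a team records one review per category and gates release on
# every flagged risk having a documented mitigation.
reviews = [HarmReview(c, risk_identified=False) for c in HARM_CATEGORIES]
reviews[0] = HarmReview(HARM_CATEGORIES[0], True,
                        "Disparate-impact testing on each release")
assert not unmitigated_risks(reviews), "release blocked: unmitigated harms"
```

A checklist like this is deliberately low-tech: its value is forcing each harm category to be considered and documented, not automating the ethical judgment itself.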
The Ethical Platform for the Responsible Delivery of an AI Project strives to enable the “ethical design and deployment of AI systems using a multidisciplinary team effort. It demands the active cooperation of all team members both in maintaining a deeply ingrained culture of responsibility and in executing a governance architecture that adopts ethically sound practices at every point in the innovation and implementation lifecycle.” The goal is to “unite an in-built culture of responsible innovation with a governance architecture that brings the values and principles of ethical, fair, and safe AI to life.”
[1] Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
[2] Data Ethics Framework (2018). https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework
Principled Artificial Intelligence
In January 2020, the Berkman Klein Center released a report by Jessica Fjeld and Adam Nagy, “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI”, which summarizes the contents of 36 AI principles documents.
This work acknowledges the surge of ethics- and human rights-based frameworks intended to guide the development and use of AI technologies. The authors organize these efforts around eight key thematic trends (a toy sketch of the mapping exercise follows the quoted summary below):
- Privacy
- Accountability
- Safety & security
- Transparency & explainability
- Fairness & non-discrimination
- Human control of technology
- Professional responsibility
- Promotion of human values
They report “our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus.”
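The mapping exercise behind the report can be pictured as a document-by-theme coverage matrix. The toy sketch below shows how one might tally what share of a corpus addresses each theme; the documents and coverage sets are invented placeholders, not the report's actual data or methodology.

```python
# The eight themes identified in the Berkman Klein report.
THEMES = [
    "Privacy", "Accountability", "Safety & security",
    "Transparency & explainability", "Fairness & non-discrimination",
    "Human control of technology", "Professional responsibility",
    "Promotion of human values",
]

# Hypothetical corpus: document name -> set of themes it addresses.
coverage = {
    "Doc A": {"Privacy", "Accountability", "Fairness & non-discrimination"},
    "Doc B": {"Privacy", "Transparency & explainability"},
    "Doc C": set(THEMES),  # a document touching all eight themes
}

# Share of documents addressing each theme: a crude proxy for how
# close the corpus comes to consensus on that theme.
for theme in THEMES:
    share = sum(theme in themes for themes in coverage.values()) / len(coverage)
    print(f"{theme:32s} {share:.0%} of documents")
```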
Human-Centered AI
Prof. Ben Shneiderman recently presented his extensive work “Human-Centered AI: Trusted, Reliable & Safe” at the University of Arizona’s NSF Workshop on “Assured Autonomy”. His research emphasizes human autonomy, as opposed to the popular notion of autonomous machines. His open-access paper, published in the International Journal of Human–Computer Interaction, quickly drew more than 3,200 downloads. The abstract reads:

“Well-designed technologies that offer high levels of human control and high levels of computer automation can increase human performance, leading to wider adoption. The Human-Centered Artificial Intelligence (HCAI) framework clarifies how to (1) design for high levels of human control and high levels of computer automation so as to increase human performance, (2) understand the situations in which full human control or full computer control are necessary, and (3) avoid the dangers of excessive human control or excessive computer control. The methods of HCAI are more likely to produce designs that are Reliable, Safe & Trustworthy (RST). Achieving these goals will dramatically increase human performance, while supporting human self-efficacy, mastery, creativity, and responsibility.”
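A key point of the HCAI framework is that human control and computer automation are two independent axes, not opposite ends of a single scale. The sketch below renders that two-dimensional view; the numeric thresholds and quadrant labels are our own illustrative assumptions, not Shneiderman's.

```python
# Illustrative rendering of the HCAI two-dimensional design space.
# Both inputs are scores in [0, 1]; the 0.5 cutoffs are arbitrary
# placeholders chosen only to make the quadrants visible.
def hcai_quadrant(human_control: float, automation: float) -> str:
    """Classify a design by its position on the two HCAI axes."""
    high_control = human_control >= 0.5
    high_automation = automation >= 0.5
    if high_control and high_automation:
        return "RST goal: high human control AND high automation"
    if high_automation:
        return "Danger zone: excessive computer control"
    if high_control:
        return "Danger zone: excessive human control"
    return "Low control, low automation (simple manual tools)"

print(hcai_quadrant(0.9, 0.9))  # the framework's target region
print(hcai_quadrant(0.1, 0.9))  # e.g., a fully autonomous system
```

The design choice worth noticing is the return type of the top-left branch: treating high control and high automation as jointly achievable is exactly what distinguishes HCAI from the traditional one-dimensional automation spectrum.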