In the SIGAI June blog posts, we covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA). This topic is being actively discussed online and in public presentations. An interesting development is an FAQ project by the USACM Algorithms Working Group, which aims “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” The FAQ could also help raise the profile of USACM’s work if stakeholders look to it for answers on the technical underpinnings of algorithms. The questions build on issues raised in the USACM-EUACM joint statement on ATA. The briefing materials will also support a forthcoming USACM policy event.
The FAQ is interesting in its own right, and an AI Matters blog discussion could be helpful to USACM and the ongoing evolution of the ATA issue. Please comment on this post so we can collect and share your input with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at firstname.lastname@example.org.
Below are the questions being discussed. The USACM Working Group will appreciate input from SIGAI. I hope you enjoy thinking about these questions and the ideas surrounding the issue of algorithmic transparency and accountability.
Current Questions in the DRAFT Working Document
Frequently Asked Questions
USACM Statement on Algorithmic Transparency and Accountability
Q: What is an algorithm?
Q: Can algorithms be explained? Why or why not? What are the challenges?
Q: What are the technical challenges associated with data inputs to an algorithm?
Q: What are machine learning models?
Q: What are neural networks?
Q: What are decision trees?
Q: How can we introduce checks and balances into the development and operation of software to make it impartial?
Q: When trying to introduce checks and balances, what is the impact of AI algorithms that are unable to export an explanation of their decisions?
Q: What lies ahead for algorithms?
Q: Who is the intended audience?
Q: Are these principles just for the US, or are they intended to be applied worldwide?
Q: Are these principles for government or corporations to follow?
Q: Where did you get the idea for this project?
Q: What kind of decisions are being made by computers today?
Q: Can you give examples of biased decisions made by computers?
Q: Why is there resistance to explaining the decisions made by computers?
Q: Who is responsible for biased decisions made with input from a machine learning algorithm?
Q: What are sources of bias in algorithmic decision making?
Q: What are some examples of the data sets used to train machine learning algorithms that contain bias?
Q: Human decision makers can be biased as well. Are decisions made by computers more or less biased?
Q: Can algorithms be biased even if they do not look at protected characteristics like race, gender, disability status, etc.?
Q: What are some examples of proprietary algorithms being used to make decisions of public interest?
Q: Are there other sets of principles in this area?
Q: Are there other organizations working in this area?
Q: Are there any academic courses in this area?
Your suggestions will be collected and sent to the USACM Algorithms Working Group, and you can also share your input directly with Cynthia Florentino, ACM Policy Analyst.