The previous SIGAI public policy post covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability. Since then, several developments have created opportunities for SIGAI members to discuss related topics. In particular, individuals and groups are calling for independent oversight measures that might mitigate the dangers of biased, faulty, and malicious algorithms. Transparency is especially important for data systems and algorithms that guide life-critical systems such as healthcare, air traffic control, and nuclear control rooms. Ben Shneiderman’s Turing lecture is highly recommended on this point: https://www.youtube.com/watch?v=UWuDgY8aHmU
A robust discussion on the SIGAI Public Policy blog would be a great way to explore ideas on oversight measures. We should also weigh in on fundamental questions such as those raised by Ed Felten in his recent article “What does it mean to ask for an ‘explainable’ algorithm?” He sets up an excellent framework for the discussion, and the comments on his article raise differing points of view that we should consider.
Felten says that “one of the standard critiques of using algorithms for decision-making about people, and especially for consequential decisions about access to housing, credit, education, and so on, is that the algorithms don’t provide an ‘explanation’ for their results or the results aren’t ‘interpretable.’ This is a serious issue, but discussions of it are often frustrating. The reason, I think, is that different people mean different things when they ask for an explanation of an algorithm’s results”. Felten discusses four types of explainability:
1. A claim of confidentiality (institutional/legal). Someone withholds relevant information about how a decision is made.
2. Complexity (barrier to big picture understanding). Details about the algorithm are difficult to explain, but the impact of the results on a person can still be understood.
3. Unreasonableness (results don’t make sense). The workings of the algorithm are clear and justified by statistical evidence, but what the results reveal about how the world works is puzzling or hard to accept.
4. Injustice (justification for designing the algorithm). Using the algorithm is unfair, unjust, or morally wrong.
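The second category is worth making concrete. As a minimal sketch (all feature names, weights, and the threshold below are invented for illustration, not drawn from any real scoring system): even when a model as a whole is too complex to summarize, it may still be possible to report each feature’s contribution to one person’s score, explaining the impact of the decision on that individual.

```python
# Hypothetical linear credit-scoring sketch illustrating a per-person
# "impact" explanation. Weights and features are illustrative only.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5

def decide(applicant):
    """Return (approved, explanation) for one applicant.

    The explanation maps each feature to its signed contribution to
    the score, so the applicant can see what drove the decision
    without needing to understand the full model.
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = decide({"income": 2.0, "debt": 1.0, "years_employed": 1.0})
# score = 0.8 - 0.6 + 0.2 = 0.4, below the threshold, so not approved;
# "why" shows that debt was the largest negative contribution.
```

A real system would be far more complex, but the same idea applies: the per-person explanation can be simple even when the model is not.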
In addition, SIGAI should provide input on the nature of AI systems and what it means to “explain” how decision-making AI technologies work: for example, the role of algorithms in supervised and unsupervised systems versus the choices of data and design options in creating an operational system.
Your comments are welcome. Also, please share what work you may be doing in the area of algorithmic transparency.