Our July 15th post summarized the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA) and introduced the ATA FAQ project by the USACM Algorithms Working Group. The Working Group's goal is “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” SIGAI has been asked to contribute expertise in developing content for the FAQ. Please comment on this post so we can collect and share insights with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.
This post focuses on the discussion of “algorithms” in the FAQ, and your feedback will be appreciated. Some of the input we have received is as follows:
“Q: What is an algorithm?
A: An algorithm is a set of well-defined steps that leads from inputs (data) to outputs (results). Today, algorithms are used in decision-making in education, access to credit, employment, and the criminal justice system. An algorithm can be compared to a recipe that runs the same way each time, automatically using the given input data. The input data is combined and passed through the same set of steps, and the output depends on the input data and on the set of steps that make up the algorithm.”
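To make the recipe analogy concrete, here is a small, purely hypothetical sketch in Python. The scoring steps, weights, and threshold are invented for illustration only; the point is that the same well-defined steps turn any given input data into an output every time.

```python
# A purely illustrative, hypothetical "recipe": fixed steps that map
# input data (an applicant's attributes) to an output (a decision).
# The weights and threshold below are made up, not a real scoring rule.

def score_applicant(income, years_employed, missed_payments):
    """Apply the same well-defined steps to any input and return a result."""
    score = 0
    score += min(income / 10_000, 50)     # step 1: cap the income contribution
    score += 5 * years_employed           # step 2: reward job stability
    score -= 20 * missed_payments         # step 3: penalize missed payments
    return "approve" if score >= 60 else "review"

# The same inputs always produce the same output:
print(score_applicant(income=45_000, years_employed=4, missed_payments=1))
```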
and
“Q: Can algorithms be explained? Why or why not? What are the challenges?
A: It is not always possible to interpret machine learning and algorithmic models. This is because a model may use an enormous volume of data in the process of figuring out the ideal approach. This, in turn, makes it hard to go back and trace how the algorithm arrived at a certain decision.”
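As one illustration of that gap, the minimal sketch below (assuming scikit-learn is installed, and using invented toy data) contrasts a small decision tree, whose learned rules can be printed and read, with a small neural network, whose learned parameters are arrays of numbers that do not directly explain any individual decision.

```python
# A minimal sketch of the interpretability gap, assuming scikit-learn is
# installed. The toy data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Toy inputs: [income in $10k, missed payments]; labels: 1 = approved.
X = [[3, 0], [5, 1], [2, 3], [8, 0], [1, 2], [6, 1]]
y = [1, 1, 0, 1, 0, 1]

# A small decision tree can be read back as explicit if/then rules...
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "missed_payments"]))

# ...whereas a neural network reduces to layers of learned weights,
# which do not directly explain why any one decision was made.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
print([w.shape for w in net.coefs_])  # just arrays of numeric coefficients
```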
This post raises an issue with the use of the term “algorithm” in the era of Big Data, in which the term “machine learning” has been absorbed into the fields of data analytics and data science. In the case of ATA issues, the AI community needs to give careful attention to definitions and concepts that enable a clear discourse on ATA policy.
A case in point, on which we welcome input from SIGAI members, is the central role of artificial neural networks (NNs) in machine learning and deep learning. In what sense is an NN algorithmic? Toward the goal of algorithmic transparency, what needs to be explained about how an NN works? From a policy perspective, what are the challenges in addressing the transparency of an NN component of a machine learning framework for audiences of varying technical backgrounds?
The mechanisms for training neural networks are algorithmic in the traditional sense of the word: they repeatedly apply a series of steps to adjust parameters, as in multilayer perceptron learning. These training algorithms operate the same way for every application in which input data is mapped to output results. For policymakers and end users of systems involving machine learning, only a high-level discussion and simplified diagrams are practical for “explaining” these NN algorithms.
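A minimal sketch of that traditional sense, written in plain Python with NumPy and using an arbitrary toy data set and learning rate, shows the fixed cycle of steps that is repeated to adjust parameters:

```python
# A minimal sketch of why NN training is "algorithmic": a fixed sequence of
# steps (forward pass, error measurement, parameter update) repeated for a
# set number of cycles. The toy data and learning rate are arbitrary
# illustrative choices, not a recommendation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy target rule

w = np.zeros(2)                               # parameters start at zero
b = 0.0
learning_rate = 0.1

for epoch in range(200):                      # repeat the same steps...
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))              # step 1: forward pass
    error = p - y                             # step 2: measure error
    w -= learning_rate * (X.T @ error) / len(y)   # step 3: adjust weights
    b -= learning_rate * error.mean()              #         and bias

print("learned weights:", w, "bias:", b)
```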
On the other hand, the design and implementation of applications involving NN-based machine learning are surely the real points of concern for issues of “algorithmic transparency”. In that regard, the “explanation” of a particular application could include a careful description of the problem to be solved and of the NN model chosen to solve it. Further, humans (for now) make the choices about the number and types of input items, the numbers of nodes and layers, the method for cleaning and normalizing input data, the error measure and number of training cycles, the procedure for independent testing, and the interpretation of results with realistic uncertainty estimates. The application development process is algorithmic in a general sense, but the more important point is that assumptions and biases enter the design and implementation of the NN. The choice of data, and its relevance and quality, is critically important to understanding the validity of a system involving machine learning. Thus, the transparency of NN algorithms in the technical sense might well be explained, but the transparency and biases of the model and of the implementation process are the aspects with serious policy consequences.
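To illustrate where those human choices sit, the hedged sketch below (again assuming scikit-learn, with invented data) marks each design decision in comments. Every concrete value is an assumption made for illustration, and each is a place where bias can enter the application.

```python
# A minimal sketch, assuming scikit-learn, of where human choices enter an
# NN-based application. Every value below (features, scaling, layer sizes,
# training cycles, test split, metric) is an illustrative assumption.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                   # choice: which inputs to include
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)     # toy target for illustration

# Choice: how to hold out data for independent testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),                           # choice: how to normalize inputs
    MLPClassifier(hidden_layer_sizes=(16, 8),   # choice: nodes and layers
                  max_iter=500,                 # choice: number of training cycles
                  random_state=0),
)
model.fit(X_train, y_train)

# Choice: which performance measure to report, and how to interpret it.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```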
We welcome your feedback!