In the August 1 post, I offered a more detailed view of “algorithm” in “Algorithmic Transparency”, particularly as it applies to machine learning software. The example involved systems built on neural networks, where algorithms in the technical sense are likely not the cause of concern, but the data used to train the system could lead to policy issues. “Predictive” algorithms in deployed systems, on the other hand, are potentially a problem and need to be transparent and explained: they are susceptible to unintentional — and intentional — human bias and misuse. Today’s post gives a particular example.
Predictive policing software, popular and useful in law enforcement offices, is particularly prone to issues of bias, accuracy, and misuse. The algorithms are written to determine propensity to commit a crime and where crime might occur. Policy concerns are related to skepticism about the efficacy and fairness of such systems, and thus accountability and transparency are very important.
As stated in Slate, “The Intercept published a set of documents from a two-day event in July hosted by the U.S. Immigration and Customs Enforcement’s Homeland Security Investigations division, where tech companies were invited to learn more about the kind of software ICE is looking to procure for its new ‘Extreme Vetting Initiative.’ According to the documents, ICE is in the market for a tool that it can use to predict the potential criminality of people who come into the country.” Further information on the Slate article is available here.
The AI community should help investigate algorithmic accountability and transparency in the case of predictive policing and the subsequent application of the algorithms to new areas. We should then discuss our SIGAI position and public policy.
Efforts such as investigating the algorithmic accountability and transparency in the case of predictive policing are certainly a necessary component of what SIGAI should be doing. But is it sufficient? Can we do more?
By the time Requests for Proposals (RFPs) are generated for the kinds of data mining and analytic software ICE is requesting, it’s often too late to influence the content, assumptions, and scope of the software and related tools.
When I see computer companies such as IBM, Red Hat, SAS, and Praescient Analytics listed as potential vendors for the development of such software, I immediately wonder whether there are any SIGAI members working at these organizations who might be able to offer insight, ethical considerations, and sober evaluation during the response phase for these kinds of RFPs. Likewise, in this particular case, I wonder whether there are any SIGAI members among the technical staff at ICE who might be in a position to offer insight, ethical considerations, and sober evaluation in the creation of the RFPs and RFQs before they are sent out.
SIGAI is a group of academic and industrial researchers, practitioners, software developers, end users, and students. In those capacities, shouldn’t we be able to have some involvement at both the RFP creation level and the RFP response level, as computer companies line up eager to provide services, as they did at the ICE two-day event?
It might not always be practical for the academic researchers in SIGAI to affect the RFP process, but our practitioners, software developers, and end users might have a chance, especially if they are employed at companies that respond to RFPs or implement the software and related tools.
As a member of an ACM professional chapter, a SIGAI member, an IEEE Computer Society member, and an AAAI member, I have shared my professional and research interests and profiles with each organization, so in my case this information is available to them. SIGAI should be able to identify those practitioner, software developer, and end-user members who might be interested in getting involved at the RFP level at their companies, organizations, or clients, sharing SIGAI positions and insight to inform the RFP creation or RFP response process. Similarly, Requests for Information (RFIs) and Requests for Quotes (RFQs) could potentially be informed by SIGAI members in a position to do so.
In many instances, once the RFPs, RFIs, and RFQs have been sent, vendors have responded, and money has been invested, the horse is out of the barn, and after-the-fact discussions on public policy become somewhat moot, or at least more arduous.
I know there is an ACM Code of Ethics, but is there an updated or specialized one for SIGAI? Such a code of ethics might guide SIGAI members who are part of software or process development efforts that require AI solutions. In this way our SIG could also help at the software or process creation level, prior to marketing, sales, and public distribution.
I realize this may raise some serious issues, but if we don’t attempt to tackle these questions now, then when? And if our SIG is not prepared to engage with such issues, then who will when it comes to AI?
The NEOACM 2017 AI Panel discussing this very topic is being live-streamed right now at:
https://youtu.be/AEN2AuIoi4M
September 14, 4:00 p.m.–8:00 p.m. EDT