In the next few blog posts, we will present information and generate discussion on policy issues at the intersection of AI, the future of the workforce, and educational systems. Because AI technology and applications are developing at such a rapid pace, current policies will likely be insufficient to meet workforce needs even in 2024 — the time frame in which today's middle school students will prepare for lower-skill jobs and today's beginning college students will prepare for higher-skilled work. Transparency in educational policies requires goal setting based on better data and insights into emerging technologies, likely changes in the labor market, and corresponding challenges to our educational systems. The topics and resources below will be the focus of future AI Policy posts.
IBM’s Jim Spohrer has an outstanding set of slides, “A Look Toward the Future”, incorporating his rich experience and current work on anticipated impacts of new technology, with milestones every ten years through 2045. Radical developments in technology would challenge public policy in ways that are difficult to imagine, but current policymakers and the AI community need to try. Currently, AI systems exceed human capabilities in calculation and game playing, and show near-human-level performance in data-driven speech and image recognition and in driverless vehicles. By 2024, large advances are likely in video understanding, episodic memory, and reasoning.
The roles of future workers will involve increasing collaboration with AI systems in the government and public sector, particularly with autonomous systems but also in traditional areas of healthcare and education. Advances in human-technology collaboration also lead to issues relevant to public policy, including privacy and algorithmic transparency. The increasing mix of AI with humans in ubiquitous public and private systems puts a new emphasis on education for understanding and anticipating challenges in communication and collaboration.
Patterns for the future workforce in the age of autonomous systems and cognitive assistance are emerging. Please take a look at Andrew McAfee’s presentation at the recent Computing Research Summit. Also, see the latest McKinsey Report — Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation. Among other things, this quote from page 20 catches attention: “Automation represents both hope and challenge. The global economy needs the boost to productivity and growth that it will bring, especially at a time when aging populations are acting as a drag on GDP growth. Machines can take on work that is routine, dangerous, or dirty, and may allow us all to use our intrinsically human talents more fully. But to capture these benefits, societies will need to prepare for complex workforce transitions ahead. For policy makers, business leaders, and individual workers the world over, the task at hand is to prepare for a more automated future by emphasizing new skills, scaling up training, especially for midcareer workers, and ensuring robust economic growth.”
Education for the Future
An article in Education Week “The Future of Work Is Uncertain, Schools Should Worry Now” addresses the issue of automation and artificial intelligence disrupting the labor market and what K-12 educators and policymakers need to know. A recent report by the Bureau of Labor Statistics “STEM Occupations: Past, Present, And Future” is consistent with the idea that even in STEM professions workforce needs will be less at programming levels and more in ways to collaborate with cognitive assistance systems and in human-computer teams. Demands for STEM professionals will be for verifying, interpreting, and acting on machine outputs; designing and assembling larger systems with robotic and cognitive components; and dealing with ethics issues such as bias in systems and algorithmic transparency.
Artificial Intelligence Journal:
FUNDING OPPORTUNITIES for PROMOTING AI RESEARCH
Deadline for proposals: extended to January 20th, 2018
The Artificial Intelligence Journal (AIJ) is one of the longest established and most respected journals in AI, and since it was founded in 1970, it has published many of the key papers in the field. The operation of the Editorial Board is supported financially through an arrangement with AIJ’s publisher, Elsevier. Through this arrangement, the AIJ editorial board is able to make available substantial funds (on the order of 230,000 Euros per annum) to support the promotion and dissemination of AI research. Most of these funds are made available through a series of competitive open calls (the remaining part of the budget is reserved for sponsorship of studentships for the annual IJCAI conference).
The current call has a deadline of January 20th, 2018 and a budget of 120,000 Euros.
Proposals should be submitted following the format and content guidelines, as well as submission instructions, that can be found on the AIJ web site:
(We posted this call before the above website had been updated; it should be updated soon, hopefully by the time you are reading this blog post. In the meantime, you can click here for the details.)
Welcome! This column is the fifth in our series profiling senior AI researchers. This month focuses on Dr. Ayanna Howard. In addition to our interview, Dr. Howard was recently interviewed by NPR, which created an animated video about how “Being Different Helped A NASA Roboticist Achieve Her Dream.”
Ayanna Howard’s Bio
Ayanna Howard, Ph.D. is Professor and Linda J. and Mark C. Smith Endowed Chair in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. As an educator, researcher, and innovator, Dr. Howard’s career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 200 peer-reviewed publications in a number of projects – from assistive robots in the home to AI-powered STEM apps for children with diverse learning needs. She has over 20 years of R&D experience covering a number of projects that have been supported by various agencies including: National Science Foundation, NewSchools Venture Fund, Procter and Gamble, NASA, and the Grammy Foundation. Dr. Howard received her B.S. in Engineering from Brown University, her M.S.E.E. from the University of Southern California, her M.B.A. from the Drucker Graduate School of Management, and her Ph.D. in Electrical Engineering from the University of Southern California. To date, her unique accomplishments have been highlighted through a number of awards and articles, including highlights in USA Today, Upscale, and TIME Magazine, as well as being named an MIT Technology Review top young innovator and recognized as one of the 23 most powerful women engineers in the world by Business Insider. In 2013, she also founded Zyrobotics, which is currently licensing technology derived from her research and has released its first suite of STEM educational products to engage children of all abilities. From 1993-2005, Dr. Howard was at NASA’s Jet Propulsion Laboratory. She has also served as the Associate Director of Research for the Georgia Tech Institute for Robotics and Intelligent Machines and as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech.
How did you become interested in Computer Science and AI?
I first became interested in robotics as a young, impressionable, middle school girl. My motivation was the television series The Bionic Woman – my goal in life, at that time, was to gain the skills necessary to build the bionic woman. I figured that I had to acquire combined skill sets in engineering and computer science in order to accomplish that goal. With respect to AI, I became interested after my junior year in college, when I was required to design my first neural network during my third NASA summer internship in 1992. I quickly saw that, if I could combine the power of AI with robotics, I could enable the ambitious dreams of my youth.
What was your most difficult professional decision and why?
The most difficult professional decision I had to make, in the past, was to leave NASA and pursue robotics research as an academic. The primary place I’d worked from 1990 until 2005 was NASA. I’d grown over those 15 years in my technical job positions from summer intern to computer scientist (after college graduation) to information systems engineer, robotics researcher, and then senior robotics researcher. And then, I was faced with the realization that, in order to push my ambitious goals in robotics, I needed more freedom to pursue robotics applications outside of space exploration. The difficulty was, I still enjoyed the space robotics research efforts I was leading at NASA, but I also felt a need to expand beyond my intellectual comfort zone.
What professional achievement are you most proud of?
The professional achievement I am proudest of is the founding of a startup company, Zyrobotics, which has commercialized educational products based on technology licensed from my lab at Georgia Tech. I’m most proud of this achievement because it allowed me to combine all of the hard-knock lessons I’ve learned in designing artificial intelligence algorithms, adaptive user interfaces, and human-robot interaction schemes with a real-world application that has large societal impact – that of engaging children of diverse abilities in STEM education, including coding.
What do you wish you had known as a Ph.D. student or early researcher?
As a Ph.D. student, I wish I had known that finding a social support group is just as important to your academic growth as finding an academic/research home. I consider myself a fairly stubborn person – I consider words of discouragement a challenge to prove others wrong. But psychological death by a thousand cuts (i.e. words of negativism) is a reality for many early researchers. A social support group helps to balance the negativism that others, sometimes unconsciously, subject you to.
What would you have chosen as your career if you hadn’t gone into CS?
If I hadn’t gone into the field of Robotics/AI, I would have chosen a career as a forensic scientist. I’ve always loved puzzles and in forensic science, as a career, I would have focused on solving life puzzles based on the physical evidence. The data doesn’t lie (although, as we know, you can bias the data so it seems to).
What is a “typical” day like for you?
Although I have no “typical” day – I can categorize my activities into five main buckets, in no priority order: 1) human-human interactions, 2) experiments and deployments, 3) writing (including emails), 4) life balance activities, and 5) thinking/research activities. Human-human interactions involve everything from meeting with my students to talking with special education teachers to one-on-one observations in the pediatric clinic. Experiments and deployments involve everything from running a participant study to evaluating the statistics associated with a study hypothesis. Writing involves reviewing my students’ publication drafts, writing proposals, and, of course, addressing email action items. Life-balance activities include achieving my daily exercise goals as well as ensuring I don’t miss any important family events. Finally, thinking/research activities cover anything related to coding up a new algorithm, consulting with my company, or jotting down a new research concept on a scrap of paper.
What is the most interesting project you are currently involved with?
The most interesting project that I currently lead involves an investigation into developing robot therapy interventions for young children with motor disabilities. For this project, we have developed an interactive therapy game called SuperPop VR that requires children to play within a virtual environment based on a therapist-designed protocol. A robot playmate interacts with each child during game play and provides both corrective and motivational feedback. An example of corrective feedback is when the robot physically shows the child how to interact with the game at the correct movement speed (as compared to a normative data profile). An example of motivational feedback is when the robot, through social interaction, encourages the child when they have accomplished their therapy exercise goal. We’ve currently deployed the system in pilot studies with children with Cerebral Palsy and have shown positive changes with respect to their kinematic outcome metrics. We’re pushing the state-of-the-art in this space by incorporating additional factors for enhancing long-term engagement through adaptation of both the therapy protocol and the robot behaviors.
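The feedback-selection logic described above can be pictured as a simple decision rule: compare the child’s movement to a normative profile, correct large deviations, and celebrate completed goals. The sketch below is purely illustrative — the function name, tolerance value, and messages are assumptions, not the actual SuperPop VR implementation.

```python
# Hypothetical sketch of the robot playmate's feedback selection.
# All names and thresholds are illustrative assumptions.

def select_feedback(movement_speed, normative_speed, goal_met, tolerance=0.15):
    """Choose a feedback mode for the robot playmate.

    movement_speed and normative_speed are in the same units;
    tolerance is the allowed relative deviation from the normative profile.
    """
    if goal_met:
        # Motivational feedback: praise the child for completing the exercise goal.
        return ("motivational", "Great job! You finished your exercise!")
    deviation = abs(movement_speed - normative_speed) / normative_speed
    if deviation > tolerance:
        # Corrective feedback: demonstrate the target movement speed.
        return ("corrective", "Let me show you the right speed.")
    # Within tolerance: no intervention needed this cycle.
    return ("none", None)
```

In a real system this rule would run once per game cycle, with the normative profile drawn from therapist-supplied data rather than a single scalar.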
How do you balance being involved in so many different aspects of the AI community?
In order for me to become involved in any new AI initiative and still maintain a healthy work-life balance, I ask myself – Is this initiative something that’s important to me and aligned with my value system; Can I provide a unique perspective to this initiative that would help to make a difference; Is it as important or more important than other initiatives I’m involved in; and Is there a current activity that I can replace so I have time to commit to the initiative now or in the near-future. If the answer is yes to all those questions, then I’m usually able to find an optimal balance of involvement in the different AI initiatives of interest.
What is your favorite CS or AI-related movie or book and why?
My favorite AI-related movie is The Matrix. What fascinates me about The Matrix is the symbiotic relationship that exists between humans and intelligent agents (both virtual and physical). One entity cannot seem to exist without the other. And operating in the physical world is much more difficult than operating in the virtual one, although most agents don’t realize that difference until they accept the decision to navigate in both types of worlds.
December is a busy month for AI Policy activities. This blog post is a summary of the important topics in which SIGAI members are involved. Subsequent Policy blog posts will cover these in more detail. Meanwhile, we encourage you to read the information in this post and participate in the IEEE Standards Association December 18th online event on Policy for Artificial Intelligence.
The summit co-sponsors included ACM and ACM SIGAI. The overview is as follows:
“The goal of the summit was to put the issue of technology and jobs on the national agenda in an informed and deliberate manner. The summit brought together leading technologists, economists, and policy experts who offered their views on where technology is headed and what its impact may be, and on policy issues raised by these projections and possible policy responses. The summit was hosted by the Computing Research Association, as part of its mission to engage the computing research community to provide trusted, non-partisan input to policy thinkers and makers.”
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
As reported in previous posts, members of SIGAI and USACM have been working closely with IEEE colleagues on ethics and policy issues.
The Global Initiative was launched in April of 2016 to move beyond the paranoia and the uncritical admiration regarding autonomous and intelligent technologies and to illustrate that aligning technology development and use with ethical values will help advance innovation while diminishing fear in the process. The goal of The IEEE Global Initiative is “to incorporate ethical aspects of human well-being that may not automatically be considered in the current design and manufacture of A/IS technologies and to reframe the notion of success so human progress can include the intentional prioritization of individual, community, and societal ethical values.”
The goal of the Global Initiative is “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”
Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS) encourages technologists to prioritize ethical considerations in the creation of A/IS systems. EADv2 is being released as a Request For Input. Details on how to submit public comments are available via The Initiative’s Submission Guidelines.
Policy for Artificial Intelligence: The Power of Imaginaries
IEEE Standards Association (IEEE-SA) will present the third in a series of three free online events focused on Policy for Artificial Intelligence on December 18, 2017, at 12:00 p.m. EST.
Policy for Artificial Intelligence: The Power of Imaginaries will feature Konstantinos Karachalios (Managing Director, IEEE-SA; Member of IEEE Management Council), Nicolas Miailhe (Co-Founder and President, The Future Society; Senior Visiting Fellow, Program on Science, Technology and Society, Harvard Kennedy School; and member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems), and Cyrus Hodes (Director of the AI Initiative with The Future Society at Harvard Kennedy School). John C. Havens, Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, will moderate.
IEEE-SA: “Imaginaries are, ‘collectively held, institutionally stabilized, and publicly performed visions of a desirable future, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’ (Jasanoff & Kim; from Dreamscapes of Modernity). If we want to have a positive future in regards to AI, we have to critically reflect upon our current imaginary in order to ‘imagine’ a new one, and the policy and principles we need to attain it.” REGISTER TODAY
Nominations are solicited for the 2018 ACM SIGAI Autonomous Agents Research Award. This award is made for excellence in research in the area of autonomous agents. It is intended to recognize researchers in autonomous agents whose current work is an important influence on the field. The award is an official ACM award, funded by an endowment created by ACM SIGAI from the proceeds of previous Autonomous Agents conferences. The recipient of the award will receive a monetary prize and a certificate, and will be invited to present a plenary talk at the AAMAS 2018 conference in Stockholm, Sweden.
Previous winners of the ACM SIGAI Autonomous Agents Research Award are: David Parkes (2017), Peter Stone (2016), Catherine Pelachaud (2015), Michael Wellman (2014), Jeff Rosenschein (2013), Moshe Tennenholtz (2012), Joe Halpern (2011), Jonathan Gratch and Stacy Marsella (2010), Manuela Veloso (2009), Yoav Shoham (2008), Sarit Kraus (2007), Michael Wooldridge (2006), Milind Tambe (2005), Makoto Yokoo (2004), Nicholas R. Jennings (2003), Katia Sycara (2002), and Tuomas Sandholm (2001). For more information on the award, see the Autonomous Agents Research Award page.
How to nominate
Anyone can make a nomination. Nominations should be made by email to the chair of the award committee, Jeff Rosenschein (firstname.lastname@example.org), and should consist of a short (< 1 page) statement that emphasizes not only the research contributions that the individual has made that merit the award but also how the individual’s current work is an important influence on the field.
NOTE: a candidate can only be considered for the award if they are explicitly nominated. If you believe that someone deserves the award, then NOMINATE THEM — don’t assume that somebody else will!
17 January 2018 — Deadline for nominations
7 February 2018 — Announcement of the 2018 winner
10-15 July 2018 — AAMAS-2018 conference in Stockholm
This year’s Fall Symposium Series (November 9-11) provided updates and insights on advances in research and technology, including resources for discussion of AI policy issues. The symposia addressed topics in human-robot interaction, cognitive assistance in government and public sectors, military applications, human-robot collaboration, and a standard model of the mind. An important theme for public policy was the advances and questions on human-AI collaboration.
The cognitive assistance sessions this year focused on government and public sector applications, particularly autonomous systems, healthcare, and education. Human-technology collaboration advances involved discussions of issues relevant to public policy, including privacy and algorithmic transparency. The increasing mix of AI with humans in ubiquitous public and private systems was the subject of discussions about new technological developments and the need for understanding and anticipating challenges for communication and collaboration. Particular issues were on jobs and de-skilling of the workforce, credit and blame when AI applications work or fail, and the role of humans with autonomous systems.
IBM’s Jim Spohrer made an outstanding presentation “A Look Toward the Future”, incorporating his rich experience and current work on anticipated impacts of new technology. His slides are well worth studying, especially for the role of hardware in game-changing technologies with likely milestones every ten years through 2045. Radical developments in technology would challenge public policy in ways that are difficult to imagine, but current policymakers and the AI community need to try.
Particular takeaways, and anticipated subjects for future blogs, are about the importance of likely far-reaching research and applications on public policy. The degree and nature of cognitive collaboration with machines, the future of jobs, new demands on educational systems as cognitive assistance becomes deep and pervasive, and the anticipated radical changes in AI capabilities put the challenges to public policy in a new perspective. AI researchers and developers need to partner with social scientists to anticipate communication and societal issues as human-machine collaboration accelerates, both in system development teams and in the new workforce.
Some recommended topics for thinking about AI technology and policy are the following:
– Jim Spohrer’s slideshare
– Noriko Arai’s TED talk on the Todai Robot
– Humans, Robotics, and the Future of Manufacturing
– New education systems and the future of work
– Computing education: Coding vs. learning to use systems
– Smart phone app “Seeing AI”
– AAAI for information related to science policy issues
The membership of USACM will be voting soon to elect at-large representatives to the USACM Council, with terms starting January 1st. At-large Council members whose terms expire this December 31st are Jean Camp, Simson Garfinkel, and Jonathan Smith. If you are a member of USACM and are interested in serving on USACM Council, please contact a member of the nominations committee. If someone is in line with what you think USACM should be doing, then please nominate that person. Only those who have been USACM members for at least one year as of January 1, 2018, are eligible. The deadline for having a slate of candidates is November 13th.
ACM Policy Award
Consider nominating someone for this award, which is given in alternate years; the inaugural award has yet to be made because insufficient nominations were received the first time around. “The ACM Policy Award was established in 2014 to recognize an individual or small group that had a significant positive impact on the formation or execution of public policy affecting computing or the computing community. This can be for education, service, or leadership in a technology position; for establishing an innovative program in policy education or advice; for building the community or community resources in technology policy; or other notable policy activity. The award is accompanied by a $10,000 prize.” Further information and instructions are available at http://awards.acm.org/policy/nominations.
The award can recognize one or more of the following:
– Contributions to policy while working in a policy position
– Distinguished service on and contributions to policy issues
– Advanced scholarly work that has impacted policy
The deadline for nominations is January 15, 2018.
Missed Opportunities — Federal Science Policy Offices
I reached out to people who might know of prospects for the current Administration to make important policy position appointments.
Not much to report:
1. The Administration has yet to nominate a Director for the White House Office of Science and Technology Policy (OSTP). OSTP director traditionally serves as the president’s science adviser.
2. The Office of the Chief Technology Officer is also vacant. In the past, the CTO team helped shape Federal policies, initiatives, capacity, and investments that support the mission of harnessing the power of technology. It also worked to anticipate and guard against the consequences that can accompany new discoveries and technologies.
3. The U.S. Department of Agriculture’s chief scientist nominee, Sam Clovis, recently withdrew his name from consideration. Clovis is a climate change denier with no training in science, food, or agriculture. For months, scientists, activists, and a broad coalition of groups have come together to demand that the Senate reject his nomination.
What do recent deployments of AI mean to the public or the average citizen? Will AI be a transparent technology, invisible at the public policy level? Is it too late to address the moral, ethical, and economic issues introduced by the commercialization of AI?
On September 14, 2017, the NEOACM (Northeast Ohio ACM) Professional chapter held the “We come in peace 2” AI panel, hosted by the McDonough Museum of Fine Art in Youngstown, Ohio. The members of the panel were: Doug McCollough, CIO of Dublin, Ohio; Dr. Shiqi Zhang, AI and Robotics Researcher at Cleveland State University; Andrew Konya, Co-founder and CEO of Remesh, a Cleveland-based AI company; Dr. Jay Ramanathan, Executive Director of Arthapedia.zone; Paul Carlson, Intelligent Community Strategist for Columbus, Ohio; and Dr. Mark Vopat, Professor of Political Philosophy and Applied Ethics at Youngstown State University. Our moderator was Nikola Danaylov, author of the best-selling book “Conversations with the Future: 21 Visions for the 21st Century”.
The goal of the panel was to discuss the latent consequences, both positive and negative, of recent AI-based technologies that are being deployed and reaching the general public. The scope ranged from the ethics and policy that must be considered as smart cities come online to the impact of robotics and decision-making technologies in law enforcement. The panel visited subject matter as diverse as cognitive computing and agent belief. While the focus originally started out on AI deployments in cities in the state of Ohio, it became clear that most of the issues were universal in nature. The panel started at 6:00 p.m. EDT and was just getting warmed up when we had to bring it to a close at 8:00 p.m. EDT. There just wasn’t time to get to all of the questions, or to do justice to all of the issues and topics that were introduced during the panel. There was a burning desire to continue the conversation and debate. So after a discussion with some of our fellow ACM members at SIGAI and the AI panelists, we’ve decided to carry over some of that discussion to an AI Matters blog, in hopes that we can engage the broader AI community and have a more flexible format with ample time and space. Some of the highlights from the AI panel can be found at:
The plan is to tackle some of the subject matter in this blog and to handle other aspects in webinar form. We hope that our fellow SIGAI members will feel free to contribute to this conversation as it develops, providing questions, insights, suggestions, and direction. The moderator Nikola Danaylov and the panelists have all agreed to participate in this blog, so if it goes anything like the panel discussion, “hold on to your seats”! We want to dive into questions such as: What does this recent incarnation of “Artificial Intelligence” mean to the public or to the average citizen? What impact will it have on infrastructure and the economy? From a commercialization perspective, has “AI” been displaced by machine learning and data science? If AI and machine learning become transparent technologies, will it be possible to regulate their impact on society? Is it already too late to stop any potential negative impact of AI-based technologies? And I for one am looking forward to a continuation of the discussion of just what constitutes agent beliefs, where they come from, and how agent belief systems will be dealt with at the public policy or commercialization level. And then again, perhaps even these are the wrong questions to be asking if our concern is the public good. We hope you join us as we attempt to deal with these questions and more.
Current Chair NEOACM Professional Chapter
The new joint ACM/IEEE group met recently via conference calls to explore the idea of proposing a session at the 2018 RightsCon in Toronto on a topic of mutual interest to the two organizations’ ethics and policy members. Your SIGAI members Simson Garfinkel, Sven Koenig, Nick Mattei, and Larry Medsker are participating in the group. Stuart Shapiro, Chair of the ACM US Public Policy Council, is representing ACM. Members from IEEE include John C. Havens, Executive Director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and Dr. Ansgar Koene of the University of Nottingham, working group chair for the IEEE Standard on Algorithm Bias Considerations.
The group meets again soon to propose a panel in the area of bias and algorithmic accountability. SIGAI members are welcome to nominate panel members and volunteer. SIGAI members are also encouraged to contribute ideas that could focus the discussion and meet the following RightsCon goals:
– including speakers from a diverse range of backgrounds
– addressing an important challenge to human rights in the digital age
– engaging participants in a way that inspires real-world outcomes
(e.g., new policy approaches and innovative technology solutions)
– introducing new voices, new concepts, and a fresh take on an issue
– having the potential to encourage cross-sector collaborations
– using an innovative format to present the idea and generate outcomes
The call for proposals mentions “Artificial Intelligence, Automation, and Algorithmic Accountability” as one of its program “buckets”. RightsCon is accepting presentation proposals until November 24, 2017. Sessions will be organized into 16 program buckets, covering topics including Digital Security and Encryption; Artificial Intelligence, Automation, and Algorithmic Accountability; and Misinformation, Journalism, and the Future of Online Media.
Computing Community Consortium says it “has hosted dozens of research visioning workshops to imagine, discuss, and debate the future of computing and its role in addressing societal needs. The second CCC Computing Research symposium draws these topics into a program designed to illuminate current and future trends in computing and the potential for computing to address national challenges.”
You may also want to check out the CCC Blog at http://www.cccblog.org/ for policy issues of common interest for SIGAI members.
Following the USACM Panel at the National Press Club, attendees discussed ways to bring together the voices of ACM and IEEE on Algorithmic Transparency and Accountability. One opportunity is at RightsCon Toronto, May 16-18, 2018. The call for proposals mentions “Artificial Intelligence, Automation, and Algorithmic Accountability” as one of its program “buckets”. RightsCon is accepting proposals for presentations until November 24, 2017. Sessions will be organized into 16 program buckets, covering topics ranging from Digital Security and Encryption and Artificial Intelligence, Automation, and Algorithmic Accountability to Misinformation, Journalism, and the Future of Online Media.
A new initiative is Local Champions at RightsCon Toronto, which features leading voices in Canada’s digital rights landscape. The organizers plan to draw on these champions for thought leadership, program guidance, and topic identification to ensure that the most pressing issues are represented at RightsCon.
Dr. Koene also shared information about the IEEE P7001 Working Group on the IEEE Standard on Transparency of Autonomous Systems http://sites.ieee.org/sagroups-7001/. This working group is chaired by Prof. Alan Winfield who is also very interested in the idea of data recorders, like airplane ‘black boxes’, to provide insight into behavior of autonomous vehicles for accident investigation. http://www.cems.uwe.ac.uk/~a-winfield/
Please share additional opportunities for SIGAI members to join with other groups working on issues in algorithmic transparency and accountability. We welcome also your comments on the many AI applications and technologies that should be included in our focus on public policy.