{"id":616,"date":"2021-03-16T02:03:16","date_gmt":"2021-03-16T02:03:16","guid":{"rendered":"http:\/\/sigai.acm.org\/aimatters\/blog\/?p=616"},"modified":"2021-03-16T02:17:06","modified_gmt":"2021-03-16T02:17:06","slug":"recent-and-upcoming-events","status":"publish","type":"post","link":"https:\/\/sigai.acm.org\/aimatters\/blog\/2021\/03\/16\/recent-and-upcoming-events\/","title":{"rendered":"Recent and Upcoming Events"},"content":{"rendered":"\n<p><strong>Brookings Webinar: Should the Government Play a Role in Reducing Algorithmic Bias?<\/strong><\/p>\n\n\n\n<p>On March 12, the Center for Technology Innovation at Brookings hosted a webinar on the role of government in identifying and reducing algorithmic biases (<a href=\"https:\/\/www.brookings.edu\/events\/should-the-government-play-a-role-in-reducing-algorithmic-bias\/?utm_campaign=Events%3A%20Governance%20Studies&amp;utm_medium=email&amp;utm_content=115213969&amp;utm_source=hs_email\">see video<\/a>). Speakers discussed what is needed to prioritize fairness in machine-learning models and how to weed out artificial intelligence models that perpetuate discrimination. Questions included the following:<br>How do the European Union, U.K., and U.S. differ in their approaches to bias and discrimination?<br>What lessons can they learn from each other?<br>Should approaches to AI bias be universally applied to ensure civil and human rights for protected groups?<\/p>\n\n\n\n<p>The organizers observe that \u201cpolicymakers and researchers throughout the world are considering strategies for reducing biased decisions made by machine-learning algorithms. To date, the U.K. has been the most forward in outlining a role for government in identifying and mitigating biases and their unintended consequences, especially decisions that impact marginalized populations. 
In the U.S., legislators and policymakers have focused on algorithmic accountability and the explanation of models to ensure fairness in predictive decision making.\u201d<\/p>\n\n\n\n<p>The moderator was Alex Engler, Rubenstein Fellow &#8211;&nbsp;Governance Studies.<br>Speakers and discussants were:<br>Lara Macdonald and Ghazi Ahamat, Senior Policy Advisors \u2013&nbsp;<a href=\"https:\/\/cdei.blog.gov.uk\/author\/lara-macdonald\/\">UK Centre for Data Ethics and Innovation<\/a>;<br>Nicol Turner Lee, Brookings Senior Fellow &#8211;&nbsp;<a href=\"https:\/\/www.brookings.edu\/program\/governance-studies\/\">Governance Studies<\/a>&nbsp;and Director,&nbsp;<a href=\"https:\/\/www.brookings.edu\/center\/center-for-technology-innovation\/\">Center for Technology Innovation<\/a>; and<br>Adrian Weller, Programme Director for <a href=\"https:\/\/www.turing.ac.uk\/research\/research-areas\/artificial-intelligence\">AI at the Alan Turing Institute<\/a>.<\/p>\n\n\n\n<p><strong>Algo2021 Conference to Be Held on April 29, 2021<\/strong><\/p>\n\n\n\n<p>University College London will present (online) the <a href=\"https:\/\/www.thealgo.co\/welcome\">Algo2021 Conference<\/a>: Ecosystems of Excellence &amp; Trust, building on the success of its inaugural 2020 conference. The conference will bring together all major stakeholders \u2013 academia, the civil service, and industry \u2013 showcasing cutting-edge developments, contemporary debates, and the perspectives of major players. The 2021 conference theme reflects the desire to promote innovation for the public good. 
<a href=\"https:\/\/www.thealgo.co\/agenda-2\">Sessions and topics<\/a> include the following:<br>Machine Learning in Healthcare,<br>Trust and the Human-on-the-Loop,<br>Artificial Intelligence and Predictive Policing,<br>AI and Innovation in Healthcare Technologies,<br>AI in Learning and Education Technologies,<br>Building Communities of Excellence in AI, and<br>Human-AI and Ethics Issues.<\/p>\n\n\n\n<p><strong>Politico\u2019s AI Online Summit on May 31, 2021<\/strong><\/p>\n\n\n\n<p>The <a href=\"https:\/\/www.politico.eu\/ai-summit\/\">2021 Summit<\/a> will dissect Europe\u2019s AI legislative package, along with the impact of geopolitical tensions and tech regulation on issues such as data and privacy. The summit will convene top EU and national decision makers, opinion formers, and tech industry leaders.<\/p>\n\n\n\n<p>\u201cThe European Commission will soon introduce legislation to govern the use of AI, acting on its aim to draw up rules for the technology sector over the next five years and on its legacy as the world\u2019s leading regulator of digital privacy. At the heart of the issue is the will to balance the need for rules with the desire to boost innovation, allowing the old continent to assert its digital sovereignty. On where the needle should be, opinions are divided \u2013 and the publication of the Commission\u2019s draft proposal will not be the end of the discussion.\u201d<br>Issues to be addressed include the following:<br>How rules may fit broader plans to build European tech platforms that compete globally with other regions;<br>How new requirements on algorithmic transparency might be viewed by ordinary people; and<br>What kind of implementation efforts will be required of startups, mid-size companies, and big tech.<br><br>This 4<sup>th<\/sup>&nbsp;edition of Politico\u2019s AI Summit will address these questions in panel discussions, exclusive interviews, and interactive roundtables. 
Top regulators, tech leaders, startups, and civil society stakeholders will examine the EU\u2019s legislative framework on AI and data flow while tackling uncomfortable questions about people\u2019s fundamental rights, misinformation, and international cooperation that will determine the future of AI in Europe and worldwide. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>Brookings Webinar: Should the Government Play a Role in Reducing Algorithmic Bias? On March 12, the Center for Technology Innovation at Brookings hosted a webinar on the role of government in identifying and reducing algorithmic biases (see video). Speakers discussed what is needed to prioritize fairness in machine-learning models and how to weed out artificial &hellip; <a href=\"https:\/\/sigai.acm.org\/aimatters\/blog\/2021\/03\/16\/recent-and-upcoming-events\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Recent and Upcoming Events&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[11,4],"tags":[],"_links":{"self":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/616"}],"collection":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/comments?post=616"}],"version-history":[{"count":4,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/616\/revisions"}],"predecessor-version":[{"id":621,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/616\/revisions\/621"}],"wp:attachment":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-
json\/wp\/v2\/media?parent=616"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/categories?post=616"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/tags?post=616"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}