{"id":498,"date":"2019-03-18T03:22:07","date_gmt":"2019-03-18T03:22:07","guid":{"rendered":"http:\/\/sigai.acm.org\/aimatters\/blog\/?p=461"},"modified":"2019-03-18T03:22:07","modified_gmt":"2019-03-18T03:22:07","slug":"openai","status":"publish","type":"post","link":"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/","title":{"rendered":"OpenAI"},"content":{"rendered":"\n<p>A recent controversy erupted over <a href=\"https:\/\/crunchbase.com\/organization\/openai\">OpenAI<\/a>\u2019s new version of\ntheir language model for generating well-written next words of text based on\nunsupervised analysis of large samples of writing. Their announcement and\ndecision not to follow open-source practices raises interesting policy issues\nabout regulation and self-regulation of AI products. OpenAI, a non-profit AI\nresearch company founded by Elon Musk and others, <a href=\"https:\/\/openai.com\/blog\/better-language-models\/\">announced<\/a> on\nFebruary 14, 2019, that \u201cWe\u2019ve trained a large-scale unsupervised language\nmodel which generates coherent paragraphs of text, achieves state-of-the-art\nperformance on many language modeling benchmarks, and performs rudimentary\nreading comprehension, machine translation, question answering, and\nsummarization\u2014all without task-specific&nbsp;training.\u201d<\/p>\n\n\n\n<p>The reactions to the announcement followed from the decision behind the following statement in the release: \u201cDue to our concerns about malicious applications of the technology, we are not releasing the trained model. 
As an experiment in responsible disclosure, we are instead releasing a much\u00a0smaller model\u00a0for researchers to experiment with, as well as a\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/d4mucfpksywv.cloudfront.net\/better-language-models\/language_models_are_unsupervised_multitask_learners.pdf\" target=\"_blank\">technical\u00a0paper<\/a>.\u201d<\/p>\n\n\n\n<p>Among the many reactions are articles from <a href=\"https:\/\/techcrunch.com\/2019\/02\/17\/openai-text-generator-dangerous\/\">TechCrunch.com<\/a> and <a href=\"https:\/\/www.wired.com\/story\/ai-text-generator-too-dangerous-to-make-public\/\">Wired<\/a>. The Electronic Frontier Foundation has an <a href=\"https:\/\/www.eff.org\/deeplinks\/2019\/03\/openais-recent-announcement-what-went-wrong-and-how-it-could-be-better\">analysis<\/a> of the manner of the release (letting journalists know first) and concludes, \u201cwhen an otherwise respected research entity like OpenAI makes a unilateral decision to go against the trend of full release, it endangers the open publication norms that currently prevail in language understanding research.\u201d<\/p>\n\n\n\n<p>This issue exemplifies questions raised previously in our Public Policy blog about who, if anyone, should regulate AI developments and products that have potential negative impacts on society. Do we rely on self-regulation or require governmental regulations? What if the U.S. has regulations and other countries do not? Would a clearinghouse approach put profit-based pressure on developers and corporations? Can the open source movement be successful without regulatory assistance?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A recent controversy erupted over OpenAI\u2019s new version of their language model for generating well-written next words of text based on unsupervised analysis of large samples of writing. 
Their announcement and decision not to follow open-source practices raise interesting policy issues about regulation and self-regulation of AI products. OpenAI, a non-profit AI research company founded [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[12],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>OpenAI - ACM SIGAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OpenAI - ACM SIGAI\" \/>\n<meta property=\"og:description\" content=\"A recent controversy erupted over OpenAI\u2019s new version of their language model for generating well-written next words of text based on unsupervised analysis of large samples of writing. Their announcement and decision not to follow open-source practices raise interesting policy issues about regulation and self-regulation of AI products. 
OpenAI, a non-profit AI research company founded [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/\" \/>\n<meta property=\"og:site_name\" content=\"ACM SIGAI\" \/>\n<meta property=\"article:published_time\" content=\"2019-03-18T03:22:07+00:00\" \/>\n<meta name=\"author\" content=\"Larry Medsker\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Larry Medsker\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"1 minute\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/\",\"url\":\"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/\",\"name\":\"OpenAI - ACM SIGAI\",\"isPartOf\":{\"@id\":\"https:\/\/sigai.acm.org\/main\/#website\"},\"datePublished\":\"2019-03-18T03:22:07+00:00\",\"author\":{\"@id\":\"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb\"},\"breadcrumb\":{\"@id\":\"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/sigai.acm.org\/main\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"OpenAI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/sigai.acm.org\/main\/#website\",\"url\":\"https:\/\/sigai.acm.org\/main\/\",\"name\":\"ACM SIGAI\",\"description\":\"ACM Special Interest Group on Artificial 
Intelligence\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/sigai.acm.org\/main\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb\",\"name\":\"Larry Medsker\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g\",\"caption\":\"Larry Medsker\"},\"url\":\"https:\/\/sigai.acm.org\/main\/author\/larrym\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"OpenAI - ACM SIGAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/","og_locale":"en_US","og_type":"article","og_title":"OpenAI - ACM SIGAI","og_description":"A recent controversy erupted over OpenAI\u2019s new version of their language model for generating well-written next words of text based on unsupervised analysis of large samples of writing. Their announcement and decision not to follow open-source practices raises interesting policy issues about regulation and self-regulation of AI products. OpenAI, a non-profit AI research company founded [&hellip;]","og_url":"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/","og_site_name":"ACM SIGAI","article_published_time":"2019-03-18T03:22:07+00:00","author":"Larry Medsker","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Larry Medsker","Est. 
reading time":"1 minute"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/","url":"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/","name":"OpenAI - ACM SIGAI","isPartOf":{"@id":"https:\/\/sigai.acm.org\/main\/#website"},"datePublished":"2019-03-18T03:22:07+00:00","author":{"@id":"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb"},"breadcrumb":{"@id":"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/sigai.acm.org\/main\/2019\/03\/18\/openai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/sigai.acm.org\/main\/"},{"@type":"ListItem","position":2,"name":"OpenAI"}]},{"@type":"WebSite","@id":"https:\/\/sigai.acm.org\/main\/#website","url":"https:\/\/sigai.acm.org\/main\/","name":"ACM SIGAI","description":"ACM Special Interest Group on Artificial Intelligence","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/sigai.acm.org\/main\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb","name":"Larry Medsker","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g","caption":"Larry 
Medsker"},"url":"https:\/\/sigai.acm.org\/main\/author\/larrym\/"}]}},"_links":{"self":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/posts\/498"}],"collection":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/comments?post=498"}],"version-history":[{"count":0,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/posts\/498\/revisions"}],"wp:attachment":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/media?parent=498"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/categories?post=498"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/tags?post=498"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}