News and SIGAI Webinar

News from USACM

Next week the USACM Council will hold its annual in-person meeting in Washington, beginning with a reception on Wednesday, March 21st, from 5 to 7, at the Georgetown home of Law Committee Chair Andy Grosso. We cordially invite DC-area USACM members to join us. If you plan to attend, please RSVP to Adam Eisgrau <eisgrau@HQ.ACM.ORG>, who will provide further details.

Statement of the European Group on Ethics in Science and New Technologies on “Artificial Intelligence, Robotics and ‘Autonomous’ Systems,” published March 9:
http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
The statement calls for the EC to “launch a process that paves the way towards a common, internationally recognized ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems.”

President Donald Trump today tapped Obama-era deputy U.S. CTO Ed Felten to serve on the Privacy and Civil Liberties Oversight Board (https://www.pclob.gov/).

ACM SIGAI Learning Webinar “Advances in Socio-Behavioral Computing”

This live presentation was given on Thursday, March 15, by Tomek Strzalkowski, Director of the Institute for Informatics, Logics, and Security Studies and Professor at SUNY Albany. Plamen Petrov, Director of Cognitive Technology at KPMG LLP and Industry Liaison Officer of ACM SIGAI, and Rose Paradis, Data Scientist at Leidos Health and Life Sciences and SIGAI Secretary/Treasurer, moderated the question-and-answer session.

Slides are available here.

The talk presented ongoing research on computational modeling and understanding of social, behavioral, and cultural phenomena in multi-party interactions. Strzalkowski discussed how various linguistic cues reveal the social dynamics of group interactions, drawing on a series of experiments conducted in virtual online chat rooms, and then showed that these dynamics generalize to other forms of communication, including traditional face-to-face discourse as well as large-scale online interaction via social media. He also showed how language compensates for the reduced-cue environment in which online interactions take place.

He described a two-tier analytic approach for detecting and classifying certain sociolinguistic behaviors exhibited by discourse participants, including topic control, task control, disagreement, and involvement. These behaviors serve as intermediate models from which higher-level social roles and states, such as leadership and group cohesion, may be inferred. The initial phase of the work produced a system of sociolinguistic tools called DSARMD (Detecting Social Actions and Roles in Multiparty Dialogue).
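To make the two-tier idea concrete, here is a minimal, hypothetical sketch in Python: tier 1 scores low-level sociolinguistic behaviors per speaker from a dialogue transcript, and tier 2 infers a higher-level role (here, the likely group leader) from those scores. The cue words, the turn-share approximation of topic control, and the leader heuristic are all illustrative assumptions for exposition, not the actual DSARMD implementation.

```python
# Illustrative two-tier analysis of multiparty dialogue.
# Tier 1: per-speaker behavior scores; Tier 2: role inference.
# All heuristics below are toy stand-ins, not the DSARMD system.
from collections import defaultdict

def tier1_behavior_scores(turns):
    """turns: list of (speaker, utterance) pairs.
    Returns per-speaker scores for two illustrative behaviors:
    topic control (approximated by share of turns taken) and
    disagreement (approximated by simple cue words)."""
    counts = defaultdict(lambda: {"turns": 0, "disagree": 0})
    disagree_cues = {"no", "but", "disagree", "wrong"}
    for speaker, utterance in turns:
        counts[speaker]["turns"] += 1
        words = {w.strip(".,!?") for w in utterance.lower().split()}
        if words & disagree_cues:
            counts[speaker]["disagree"] += 1
    total = len(turns)
    return {s: {"topic_control": c["turns"] / total,
                "disagreement": c["disagree"] / c["turns"]}
            for s, c in counts.items()}

def tier2_infer_leader(scores):
    """Infer the most likely leader as the speaker with the highest
    topic-control score (a deliberately simple stand-in for the
    role models inferred from behavior presence)."""
    return max(scores, key=lambda s: scores[s]["topic_control"])

if __name__ == "__main__":
    turns = [("ann", "let's start with the budget"),
             ("bob", "no, the schedule first"),
             ("ann", "fine, schedule first, then budget"),
             ("ann", "bob, can you summarize the schedule?")]
    scores = tier1_behavior_scores(turns)
    print(tier2_infer_leader(scores))  # ann holds the most turns
```

In the real work, tier 1 would rest on much richer linguistic cue models, and tier 2 on statistical inference over many behaviors, but the layering (behaviors first, roles second) is the point of the sketch.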

Several extensions of the basic DSARMD model move beyond recognition and understanding of social dynamics and attempt to quantify and measure the effects that the sociolinguistic behaviors of individuals and groups have on other discourse participants. Potentially, autonomous artificial agents could be constructed that are capable of exerting influence and manipulating human behavior in certain situations. Such extended capabilities could be deployed to predict online information cascades and persuasion campaigns more accurately, and even to defend against certain forms of social-engineering attacks.

The model and tools presented in the Webinar are interesting to consider in the detection and assessment of algorithmic bias.
