3rd Workshop on Abusive Language Online (ALW3)

ACL 2019 (Florence, Italy), August 01 or 02, 2019

Submission deadline: April 25, 2019

Website: https://sites.google.com/view/alw3

Submission link: https://www.softconf.com/ACL2019/ALW3/

Overview

Interaction amongst users on social networking platforms can enable constructive and insightful conversations and civic participation; however, on many sites that encourage user interaction, verbal abuse has become commonplace, leading to negative outcomes such as cyberbullying, hate speech, and scapegoating. In online contexts, aggressive behavior may be more frequent than in face-to-face interaction, which can poison the social climates within online communities. The last few years have seen a surge in such abusive online behavior, leaving governments, social media platforms, and individuals struggling to deal with the consequences.

For instance, in 2015, Twitter’s CEO publicly admitted that online abuse on the platform was driving users away, and in some cases even forcing them to leave their homes. More recently, Facebook, Twitter, YouTube, and Microsoft pledged to remove hate speech from their platforms within 24 hours in accordance with the EU Commission’s code of conduct, and they face fines of up to €50M in Germany if they systematically fail to do so. While governance demands the ability to respond quickly and at scale, we do not yet have effective human or technical processes that can address this need. Abusive language can be extremely subtle and highly context dependent. We are therefore challenged to develop scalable computational methods that can reliably and efficiently detect and mitigate the use of abusive language online within variable and evolving contexts.

As a field that works directly with the computational analysis of language, NLP (Natural Language Processing) is in a unique position to address this problem, and recent years have seen a growing number of papers on abusive language in the computational linguistics community. Abusive language is not a stable or simple target: misclassifying regular conversation as abusive can severely impact users’ freedom of expression and reputation, while misclassifying abusive conversations as unproblematic maintains the status quo of online communities as unsafe environments. Clearly, there is still a great deal of work to be done in this area. More practically, as research into detecting abusive language is still in its infancy, the research community has yet to agree upon a suitable typology of abusive content or upon standards and metrics for proper evaluation; this is an area where research in media studies, rhetorical analysis, and cultural analysis can offer many insights.

In this third edition of the workshop, we continue to emphasize the computational detection of abusive language as informed by interdisciplinary scholarship and community experience. We invite paper submissions describing unpublished work from relevant fields including, but not limited to: natural language processing, law, psychology, network analysis, gender and women’s studies, and critical race theory.

Paper Topics

We invite long and short papers on any of the following general topics:

Topics related to developing computational models and systems:

  • NLP models and methods for detecting abusive language online, including, but not limited to, hate speech and cyberbullying
  • Application of NLP tools to analyze social media content and other large data sets
  • NLP models for cross-lingual abusive language detection
  • Computational models for multi-modal abuse detection
  • Development of corpora and annotation guidelines
  • Critical algorithm studies with a focus on abusive language moderation technology
  • Human-Computer Interaction for abusive language detection systems
  • Best practices for using NLP techniques in watchdog settings

Topics related to legal, social, and policy considerations of abusive language online:

  • The social and personal consequences of being the target of abusive language and targeting others with abusive language
  • Assessment of current non-NLP methods of addressing abusive language
  • Legal ramifications of measures taken against abusive language use
  • Social implications of monitoring and moderating unacceptable content
  • Considerations of implemented and proposed policies for dealing with abusive language online and the technological means of dealing with it.

In addition, this one-day workshop will include

  1. a multidisciplinary panel discussion and
  2. a forum for plenary discussion of the issues that researchers and practitioners face in their work on abusive language detection

We seek to have a greater focus on policy aspects of online abuse through invited speakers and panels.

Unshared task

To encourage focused contributions, we invite researchers to consider using one or more of the following datasets in their experiments:

  • StackOverflow Offensive Comments
  • Twitter Data Set [Waseem and Hovy, NAACL 2016]
  • German Twitter Data Set [Ross et al. NLP4CMC 2016]
  • Greek News Data Set [Pavlopoulos et al., EMNLP 2017]
  • Wikimedia Toxicity Data Set [Wulczyn et al., WWW 2017]
  • SFU Opinion and Comment Corpus [Kolhatkar et al., In Review]
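
As a purely illustrative aid (not part of the call), the sketch below shows one way a baseline experiment on such a dataset might look: TF-IDF bag-of-words features fed to a logistic regression classifier in scikit-learn. The file name tweets.csv and its text/label columns are placeholders, not the actual format of any of the corpora listed above.

    # Minimal baseline sketch: TF-IDF features + logistic regression.
    # "tweets.csv" with "text" and "label" columns is a hypothetical file;
    # substitute whichever of the datasets above you actually use.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("tweets.csv")  # placeholder path
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["label"], test_size=0.2, random_state=0,
        stratify=df["label"])

    # Word and bigram features; drop terms seen in fewer than 2 documents.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(X_train), y_train)

    # Report per-class precision, recall, and F1 on the held-out split.
    print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))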

Submission Information

We will be using the ACL 2019 Submission Guidelines. Authors are invited to submit a full paper of up to 8 pages of content with up to 2 additional pages for references. We also invite short papers of up to 4 pages of content with up to 2 additional pages for references.

Accepted papers will be given an additional page of content to address reviewer comments. We also invite papers that describe systems. If you would like to present a demo in addition to presenting the paper, please make sure to select either “full paper + demo” or “short paper + demo” under “Submission Category” on the START submission page.

Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author’s identity, e.g., “We previously showed (Smith, 1991) …”, should be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”.

We have also included a conflict-of-interest section in the submission form. You should mark all potential reviewers who have been authors on the paper, who are from the same research group or institution, or who have seen versions of this paper or discussed it with you.

We will be using the START conference system to manage submissions:

https://www.softconf.com/ACL2019/ALW3/

Important Dates

Submission due: April 25, 2019

Author Notification: May 17, 2019

Camera Ready: May 29, 2019

Workshop Date: Aug 01 or 02, 2019

Organizing Committee

  • Vinodkumar Prabhakaran, Stanford University
  • Sarah T. Roberts, University of California, Los Angeles
  • Joel Tetreault, Grammarly
  • Zeerak Waseem, University of Sheffield

Program Committee/Reviewers

  • Swati Agarwal, BITS Pilani, Goa Campus, India
  • Mark Alfano, Delft University of Technology, Netherlands
  • Hind Almerekhi, Qatar Foundation,  HBKU, Qatar
  • Jisun An, Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar
  • Ion Androutsopoulos, Athens University of Economics and Business, Greece
  • Pinkesh Badjatiya, IIIT Hyderabad, India
  • Alistair Baron, Lancaster University, United Kingdom
  • Elizabeth Belding, UC Santa Barbara, United States
  • Joachim Bingel, University of Copenhagen, Denmark
  • Kalina Bontcheva, University of Sheffield, UK
  • Houda Bouamor, Fortia Financial Solutions, France
  • Peter Bourgonje, DFKI GmbH, Germany
  • Pedro Calais, UFMG, Brazil
  • Bogdan Carbunar, FIU, USA
  • Eshwar Chandrasekharan, Georgia Tech, USA
  • Jonathan Chang, Cornell University, United States
  • Aron Culotta, Illinois Institute of Technology, USA
  • Mona Diab, Amazon AWS AI, United States
  • Lucas Dixon, Google/Jigsaw, France
  • Nemanja Djuric, Uber ATG, United States
  • Jacob Eisenstein, Facebook AI Research, USA
  • May ElSherif, UC Santa Barbara, United States
  • Elisabetta Fersini, University of Milano-Bicocca, Italy
  • Darja Fiser, University of Ljubljana, Slovenia
  • Lucie Flekova, Alexa AI, Germany
  • Maya Indira Ganesh, Leuphana University, Lueneburg, Germany
  • Lei Gao, DMAI, USA
  • Sara Garza, Universidad Autonoma de Nuevo Leon, Mexico
  • Ryan Georgi, University of Washington, United States
  • Tanton Gibbs, Facebook, United States
  • Lee Gillam, University of Surrey, UK
  • Jen Golbeck, University of Maryland, USA
  • Erica Greene, Canopy, USA
  • Seda Gurses, KU Leuven, Belgium
  • Alex Hanna, Google, United States
  • Claire Hardaker, Lancaster University, United Kingdom
  • Christopher Homan, RIT, United States
  • Manoel Horta Ribeiro, UFMG, Brazil
  • Hossein Hosseini, University of Washington, United States
  • Veronique Hoste, Ghent University, Belgium
  • Dirk Hovy, Bocconi University, Italy
  • Ruihong Huang, Texas A&M University, USA
  • Dan Jurafsky, Stanford, USA
  • Anna Kasunic, Carnegie Mellon University, USA
  • Geoff Kaufman, Carnegie Mellon University, Human-Computer Interaction Institute, USA
  • George Kennedy, Intel, United States
  • Neza Kogovsek Salamon, Peace Institute, Slovenia
  • Vivek Kulkarni, UC Santa Barbara, United States
  • Haewoon Kwak, Qatar Computing Research Institute, Qatar
  • Els Lefever, LT3, Ghent University, Belgium
  • Shuhua Liu, Arcada University of Applied Sciences, Finland
  • Walid Magdy, University of Edinburgh, UK
  • Prodromos Malakasiotis, Athens University of Economics and Business, Greece
  • Shervin Malmasi, Harvard Medical School, USA
  • Diana Maynard, University of Sheffield, United Kingdom
  • Mainack Mondal, University of Chicago, USA
  • Manuel Montes-y-Gómez, INAOE, Mexico
  • Smruthi Mukund, Amazon, United States
  • Courtney Napoles, Grammarly, USA
  • Chikashi Nobata, Apple Inc., United States
  • Lilja Øvrelid, University of Oslo, Norway
  • Viviana Patti, University of Turin, Italy
  • Umashanthi Pavalanathan, Georgia Tech, United States
  • John (Ioannis) Pavlopoulos, Athens University of Economics and Business, Greece
  • Christopher Potts, Stanford Linguistics, USA
  • Daniel Preotiuc-Pietro, Bloomberg LP, United States
  • Michal Ptaszynski, Kitami Institute of Technology, Japan
  • Jacek Pyżalski, Adam Mickiewicz University in Poznan, Poland
  • Georg Rehm, DFKI GmbH, Germany
  • Carolyn Rose, Carnegie Mellon University, United States
  • Björn Ross, University of Duisburg-Essen, Germany
  • Paolo Rosso, Universitat Politècnica de València, Spain
  • Masoud Rouhizadeh, Johns Hopkins University, USA
  • Niloofar Safi Samghabadi, University of Houston, USA
  • Tyler Schnoebelen, Integrate.ai, United States
  • Alexandra Schofield, Cornell, United States
  • Qinlan Shen, Carnegie Mellon University, United States
  • Utpal Kumar Sikdar, Flytxt, India
  • Jeffrey Sorensen, Jigsaw (Google), USA
  • Maite Taboada, Simon Fraser University, Canada
  • Sajedul Talukder, Florida International University, USA
  • Linnet Taylor, Tilburg University, Netherlands
  • Anna Vartapetiance, University of Surrey / Securium LTD, United Kingdom
  • Rob Voigt, Stanford University, United States
  • Ingmar Weber, Qatar Computing Research Institute, Qatar
  • Michael Wojatzki, LTL, University of Duisburg-Essen, Germany
  • Torsten Zesch, University of Duisburg-Essen, Germany
