PhD Student (M/F): Hate Speech, Toxicity and Opinion Detection and Analysis in Social Media Conversations

Contract type: Fixed-term contract (CDD)

Renewable contract: Yes

Required degree level: Master's degree or equivalent (Bac + 5)

Position: PhD student

Context and benefits of the position

The PhD will be supervised by Chloé Clavel at Inria (Inria Paris centre) in the ALMAnaCH project-team (http://almanach.inria.fr/index-en.html) and by Jean-Philippe Cointet (Medialab, Sciences Po). It will be funded by the ANR Sinnet project.

Assignment

This research project investigates the influence of sociological variation in a paradigmatic NLP task: the measurement of hate speech and toxicity in online discussions about political conflict. Hate speech can be defined as any communication that expresses hate or encourages violence towards a person or a target group based on some characteristic such as race, religion, ethnicity, gender, nationality, or sexual orientation. Toxic language covers a wider range of offensive discourse, including prosecutable discourse (hate speech, threats, abuse, and defamation) but also other "socially unacceptable discourse" such as insults and obscenities. By using large-scale data sources from social media and media platforms (primarily Reddit), we aim to detect and measure hate and toxicity in online discourse and understand their role in reinforcing divergent perspectives and fueling societal divisions.

Whether language use expresses toxicity or constitutes hate speech depends on multiple contextual factors, including the status relations of participants in conversation; the organizational context where conversation occurs; the political goals that people bring into social exchanges; and multiple cultural logics that mark any episode of communication as normatively appropriate (or not) in its time and place. Moreover, definitions and measures of toxicity and hate speech themselves have become crucial resources in digitally mediated political conflict, and exploiting these ambiguities to veil expressions of hatred has become an increasingly popular discursive strategy online. These factors make it difficult to detect and measure toxicity and hate speech automatically: the same phrase may carry substantially different meanings in different settings, and seemingly innocuous phrases or terms may signal hate to a canny audience, but text datasets rarely contain the contextual information needed to infer these forms of semantic variation.

The primary objective of the project is to automatically extract semantic and emotional information that indicates an effort to propagate hate online. We propose integrating information about the speakers' social and political context to understand how toxic language and hate speech are constructed in different ways depending on the position of the speaker and listener. Leveraging the conversational structure of Reddit data, the project will examine the impact of conflicting stances on discourse dynamics by comparing the dynamics of emotions, the use of toxic language, and related semantic choices present in homophilous (like-minded) versus heterophilous (differing) conversations.

The project outcomes will further the understanding of online opinions and discourse dynamics in the context of political conflict. The project will explore state-of-the-art natural language processing models for hate speech classification (both supervised machine learning and generative approaches using large language models (LLMs) with appropriate prompting) and deepen our understanding of the trade-offs of these models. The project contributions will also include original datasets and annotation guides, evaluation protocols, and benchmarks of model biases. By conducting an in-depth analysis of model biases in the context of a paradigmatic NLP problem (hate speech and toxicity detection), this project will advance methods for identifying, describing, and mitigating biases in sensitive classification tasks generally.
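As a purely illustrative sketch of the generative approach mentioned above, a zero-shot LLM classification setup might assemble a prompt from a comment and its conversational context and map the model's free-text answer back onto a closed label set. The label set, prompt wording, and helper names below are assumptions for illustration only, not the project's actual annotation scheme or protocol.

```python
# Minimal, hypothetical sketch of zero-shot LLM prompting for toxicity
# classification. Labels and prompt template are illustrative assumptions,
# not the project's actual annotation scheme.

LABELS = ["non-toxic", "toxic", "hate speech"]

def build_prompt(comment: str, context: str = "") -> str:
    """Assemble a classification prompt; conversational context (e.g. the
    parent Reddit comment) can be included, since toxicity judgments are
    context-dependent."""
    context_part = f"Context (parent comment): {context}\n" if context else ""
    return (
        "Classify the following social media comment as one of: "
        + ", ".join(LABELS) + ".\n"
        + context_part
        + f"Comment: {comment}\n"
        + "Answer with the label only."
    )

def parse_label(llm_output: str) -> str:
    """Map a raw model response back onto the closed label set."""
    text = llm_output.strip().lower()
    for label in LABELS:
        if label in text:
            return label
    return "unknown"  # fall back when the model answers off-format
```

In practice the prompt would be sent to an LLM API and the returned text normalized with a parser like the one above; supervised baselines (e.g. fine-tuned transformer classifiers) could then be compared against it under a common evaluation protocol.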


Main activities

The candidate’s main activities will include:

  • keeping up to date with related work through regular reading
  • carrying out research on the topic outlined above: developing new ideas, positioning them with respect to related work, and validating the methodology through experiments and analysis
  • presenting the work both internally to colleagues and externally in the form of conference/journal/workshop papers and the final PhD thesis
  • interacting and exchanging with colleagues on NLP topics

The PhD position is fully funded for 3 years, starting 1 November 2024.

Skills

Candidates should have strong programming skills (Python), experience with neural networks, and an interest in natural language processing. A good level of written and spoken English and French is required.

Benefits

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off (RTT, full-time basis) + possibility of exceptional leave (e.g. sick children, moving house)
  • Possibility of teleworking and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports benefits (Association de gestion des œuvres sociales d'Inria)
  • Access to vocational training
  • Social security coverage