A Neuro-Symbolic Approach to Deal with Implicit Arguments

Contract type : Internship

Level of qualifications required : Graduate degree or equivalent

Function : Research Internship

Level of experience : Recently graduated

About the research centre or Inria department

The Inria center at Université Côte d'Azur includes 42 research teams and 9 support services. The center’s staff (about 500 people) is made up of scientists of different nationalities, engineers, technicians and administrative staff. The teams are mainly located on the university campuses of Sophia Antipolis and Nice as well as Montpellier, in close collaboration with research and higher education laboratories and establishments (Université Côte d'Azur, CNRS, INRAE, INSERM ...), but also with the regional economic players.

With a presence in the fields of computational neuroscience and biology, data science and modeling, software engineering and certification, as well as collaborative robotics, the Inria Centre at Université Côte d'Azur is a major player in terms of scientific excellence through its results and collaborations at both European and international levels.

Context

General presentation of the Master 2 internship (duration = 6 months):

  • In the context of a partnership between University College London, IRIT in Toulouse and Inria in Sophia Antipolis, the aim is to develop a learning model dedicated to assessing the explainability quality of an argument.
  • Depending on the results obtained, this research will be submitted to a top-ranking journal in artificial intelligence.
  • In addition, if the student performs well, this research topic could be continued as a PhD thesis, subject to funding.

General Scientific Context:

This research lies in the field of Artificial Intelligence (AI), and more particularly in Argumentation. A variety of research questions are explored in AI-Argumentation, either in:

  1. symbolic AI frameworks, ranging from knowledge representation (addressing challenges such as contradictory information, incompleteness, uncertainty, temporality, etc.) to automatic and explainable reasoning; or in
  2. connectionist AI frameworks, whose task is to classify or generate different kinds of information (e.g., the premise/claim structure of an argument, the stance of an argument, whether the argument is fallacious, what type of fallacy it is, or the relations between arguments: either positive relations, such as support, or negative relations, such as contradiction).

Argumentation systems are powerful theories and tools for representing and managing contradictory information in an explainable way. They can be highly valuable, for example, in understanding and analyzing political debates, providing decision support to doctors when generating and evaluating medical diagnoses, or assisting judges in evaluating different legal defenses in a court of law.

 

Specific Context:
This internship is based on a recent paper proposing to tackle a new and so far unstudied research question: how to evaluate the quality of a decoded argument with respect to an initial implicit argument.
Here is the abstract of the article to be investigated:
An argument can be seen as a pair consisting of a set of premises and a claim supported by them. Arguments used by humans are often enthymemes, i.e., some premises are implicit. To better understand, evaluate, and compare enthymemes, it is essential to decode them, i.e., to find the missing premises. Many enthymeme decodings are possible. We need to distinguish between reasonable decodings and unreasonable ones. However, there is currently no research in the literature on “How to evaluate decodings?”. To pave the way and achieve this goal, we introduce seven criteria related to decoding, based on different research areas. Then, we introduce the notion of criterion measure, the objective of which is to evaluate a decoding with regard to a certain criterion. Since such measures need to be validated, we introduce several desirable properties for them, called axioms. Another main contribution of the paper is the construction of certain criterion measures that are validated by our axioms. Such measures can be used to identify the best enthymeme decodings.
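To make these notions concrete, here is a minimal Python sketch of how an argument, an enthymeme, and a decoding could be represented; the `Argument` dataclass and the example sentences are hypothetical illustrations of the definitions above, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    premises: frozenset  # set of premises (here, plain strings)
    claim: str           # the claim the premises are meant to support

# An enthymeme: the premise linking "researcher" to "happy" is left implicit.
E = Argument(frozenset({"Bob is a researcher"}), "Bob is happy")

# One possible decoding: the same claim, with the missing premise made explicit.
D1 = Argument(
    frozenset({"Bob is a researcher", "Researchers are happy"}),
    "Bob is happy",
)

# The premises a decoding adds are exactly the implicit material it recovers.
print(D1.premises - E.premises)  # frozenset({'Researchers are happy'})
```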


In the figure below*, we illustrate the process carried out in our approach with an example: from an enthymeme and a set of decodings, we evaluate each aspect individually (e.g., minimality, inference, similarity, granularity, weight) for each enthymeme/decoding pair, then aggregate all these values to obtain a global quality score.

* http://www-sop.inria.fr/members/Victor.David/schema_stage_Arg_Implicit.jpg

In this example, we can see that the enthymeme (implicit argument) E contains different pieces of information that can justify why Bob is happy. We then have three decodings justifying why Bob is happy for different reasons (D1 uses the fact that Bob is a researcher, D2 that Bob gives and receives love, while D3 justifies his happiness by his wealth). Furthermore, each piece of information, expressed in logic, carries a probability that can be learned from data, and we use these probabilities to find the most faithful explanation of what is implied in an enthymeme.
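As a toy illustration of that last point, the sketch below scores each decoding by the product of its premise probabilities and keeps the highest-scoring one. The probability values and the independence assumption behind the product are invented for the example; they are not the faithfulness measure defined in the paper.

```python
import math

# Hypothetical probabilities learned from data, one per piece of information.
prob = {
    "Bob is a researcher": 0.9,
    "Researchers are happy": 0.6,
    "Bob gives and receives love": 0.8,
    "People who give and receive love are happy": 0.9,
    "Bob is rich": 0.3,
    "Rich people are happy": 0.5,
}

def joint_score(premises):
    """Naive faithfulness proxy: product of premise probabilities
    (assumes independence, purely for illustration)."""
    return math.prod(prob[p] for p in premises)

decodings = {
    "D1": ["Bob is a researcher", "Researchers are happy"],
    "D2": ["Bob gives and receives love",
           "People who give and receive love are happy"],
    "D3": ["Bob is rich", "Rich people are happy"],
}

best = max(decodings, key=lambda d: joint_score(decodings[d]))
print(best)  # D2 (0.8 * 0.9 = 0.72, versus 0.54 for D1 and 0.15 for D3)
```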

To sum up, to assess the quality of an enthymeme/decoding pair (i.e., to evaluate whether the decoding makes explicit what is implied in the enthymeme), we need to:

  1. Define or use measures that evaluate whether a decoding satisfies a criterion (we have a total of seven criteria to check). Note that all these criterion measures can be parameterized depending on a user’s preferences (for example, whether a detailed or concise explanation is preferred).
  2. Once all these measures have been defined and computed, aggregate their values into a final quality score between 0 and 1. Here again, there are different aggregation strategies with different properties, depending on the user’s preferences (for example, some criteria may be mandatory while others are optional, so the aggregation may react differently when a criterion is evaluated at 0); a sketch of one such aggregation follows this list.
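The following sketch illustrates step 2 under simple assumptions: per-criterion scores already in [0, 1], uniform weights standing in for user preferences, and a veto rule for mandatory criteria. The criterion names come from the figure above; the aggregation strategy itself is just one hypothetical choice among those the internship would study.

```python
def aggregate(scores, weights, mandatory):
    """Combine per-criterion scores in [0, 1] into one quality score.

    scores:    {criterion: value in [0, 1]}
    weights:   {criterion: relative importance, summing to 1}
    mandatory: criteria whose score of 0 vetoes the whole decoding
    """
    if any(scores[c] == 0.0 for c in mandatory):
        return 0.0  # a failed mandatory criterion zeroes the final score
    return sum(weights[c] * scores[c] for c in scores)  # weighted average

scores = {"minimality": 0.8, "inference": 1.0, "similarity": 0.6,
          "granularity": 0.7, "weight": 0.9}
weights = {c: 1 / len(scores) for c in scores}  # uniform user preferences
print(round(aggregate(scores, weights, mandatory={"inference"}), 2))  # 0.8
```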

Assignment

Assignments:

With the support of Victor DAVID, Jonathan BEN-NAIM, and Anthony HUNTER, the hired candidate will develop a dataset and a learning model.

Building on a recent paper (http://arxiv.org/abs/2411.04555) that formally defines a methodology with multiple parameters to accurately evaluate the quality of an argument decoding an implicit argument, we aim to extend this theoretical research into a practical experiment. In this experiment, we intend to use learning models to identify the optimal parameterizations based on evaluations provided by humans.
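One very simple way to instantiate this idea is to fit the aggregation weights by least squares against human quality judgments, as sketched below. The data matrix, the scores, and the linear aggregation model are all assumptions made for illustration; the actual experiment would choose its parameterization and learning model during the internship.

```python
import numpy as np

# Hypothetical dataset: one row per enthymeme/decoding pair,
# one column per criterion measure value (5 criteria here).
X = np.array([[0.8, 1.0, 0.6, 0.7, 0.9],
              [0.4, 1.0, 0.9, 0.5, 0.6],
              [0.9, 0.0, 0.7, 0.8, 0.4]])
y = np.array([0.80, 0.65, 0.20])  # human quality judgments in [0, 1]

# Fit aggregation weights so the aggregated score best matches the humans.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.clip(X @ w, 0.0, 1.0))  # predicted quality scores per pair
```

In practice one would add constraints (e.g., non-negative weights summing to 1) and far more annotated pairs, but the principle, tuning the measures' parameters against human evaluations, stays the same.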

For further reading on the general scientific context (AI-Argumentation), consider the following references:

  • For an overview of argumentation research and practical applications, see: Atkinson, Katie, Pietro Baroni, Massimiliano Giacomin, Anthony Hunter, Henry Prakken, Chris Reed, Guillermo Simari, Matthias Thimm, and Serena Villata. “Towards Artificial Argumentation.” AI Magazine 38, no. 3 (2017): 25–36.
  • Research on connectionist AI argumentation is often termed “argument mining.” For more information, see: Lawrence, John, and Chris Reed. “Argument Mining: A Survey.” Computational Linguistics 45, no. 4 (2020): 765–818.

 


Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Contribution to mutual insurance (subject to conditions)

Remuneration

Traineeship grant depending on attendance hours.