- A3.2. Knowledge
- A3.2.1. Knowledge bases
- A3.2.2. Knowledge extraction, cleaning
- A3.2.4. Semantic Web
- A3.2.5. Ontologies
- A3.2.6. Linked data
- A6.1.3. Discrete Modeling (multi-agent, people centered)
- A7.2. Logic in Computer Science
- A9. Artificial intelligence
- A9.1. Knowledge
- A9.9. Distributed AI, Multi-agent
- B8.5. Smart society
- B9. Society and Knowledge
- B9.5.1. Computer science
- B9.7.2. Open data
- B9.8. Reproducibility
1 Team members, visitors, external collaborators
- Jérôme Euzenat [Team leader, Inria, Senior Researcher, HDR]
- Manuel Atencia Arcas [Univ Grenoble Alpes, Associate Professor]
- Jérôme David [Univ Grenoble Alpes, Associate Professor]
PhD Students
- Yasser Bourahla [Inria]
- Alban Flandin [Univ Grenoble Alpes, from Nov 2020, MIAI]
- Khadija Jradeh [Univ Grenoble Alpes, Elker]
- Andreas Kalaitzakis [Univ Grenoble Alpes, from Oct 2020, MIAI]
- Line van den Berg [Univ Grenoble Alpes]
Interns and Apprentices
- Khadidja Mehdi [Inria, from Sep 2020]
- Julia Di Toro [Inria, from Oct 2020]
2 Overall objectives
Human beings are apparently able to communicate knowledge. However, it is impossible for us to know if we share the same representation of knowledge.
mOeX addresses the evolution of knowledge representations in individuals and populations. We deal with software agents and formal knowledge representation. The ambition of the mOeX project is to answer, in particular, the following questions:
- How do agent populations adapt their knowledge representation to their environment and to other populations?
- How must this knowledge evolve when the environment changes and new populations are encountered?
- How can agents preserve knowledge diversity and is this diversity beneficial?
We study them chiefly in a well-controlled computer science context.
For that purpose, we combine knowledge representation and cultural evolution methods. The former provides formal models of knowledge; the latter provides a well-defined framework for studying situated evolution.
We consider knowledge as a culture and study the global properties of local adaptation operators applied by populations of agents by jointly:
- experimentally testing the properties of adaptation operators in various situations using experimental cultural evolution, and
- theoretically determining such properties by modelling how operators shape knowledge representation.
We aim at acquiring a precise understanding of knowledge evolution through the consideration of a wide range of situations, representations and adaptation operators.
In addition, we continue to investigate RDF data interlinking with link keys, a way to link entities from different data sets.
3 Research program
3.1 Knowledge representation semantics
We work with semantically defined knowledge representation languages (like description logics, conceptual graphs and object-based languages). Their semantics is usually defined within model theory, initially developed for logic.
We consider a language as a set of syntactically defined expressions (often inductively defined by applying constructors over other expressions). A representation (o) is a set of such expressions; it may also be called an ontology. An interpretation function (I) is inductively defined over the structure of the language, mapping expressions to a structure called the domain of interpretation (D). This expresses the construction of the “meaning” of an expression as a function of its components. A formula is satisfied by an interpretation if it fulfils a condition (in general, being interpreted over a particular subset of the domain). A model of a set of expressions is an interpretation satisfying all the expressions. A set of expressions is said to be consistent if it has at least one model, and inconsistent otherwise. An expression (δ) is then a consequence of a set of expressions (o) if it is satisfied by all models of o (noted o ⊨ δ).
The languages dedicated to the semantic web (RDF and OWL) follow that approach [14]. RDF is a knowledge representation language dedicated to the description of resources; OWL is designed for expressing ontologies: it describes concepts and relations that can be used within RDF.
A computer must determine if a particular expression (taken as a query, for instance) is a consequence of a set of axioms (a knowledge base). For that purpose, it uses programs, called provers, that can be based on the processing of a set of inference rules, on the construction of models, or on procedural programming. These programs are able to deduce theorems (noted ⊢). They are said to be sound if they only find theorems that are indeed consequences, and complete if they find all consequences as theorems.
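These notions can be made concrete with a minimal, hand-rolled sketch of ours (not the team's software): for propositional logic, a prover that checks consequence by enumerating every interpretation is sound and complete by construction, because it tests the definition of consequence directly.

```python
from itertools import product

# Expressions are nested tuples: ("var", "p"), ("not", e),
# ("and", e1, e2), ("or", e1, e2), ("implies", e1, e2).

def satisfies(interpretation, expr):
    """Satisfaction, defined inductively over the structure of the language."""
    op = expr[0]
    if op == "var":
        return interpretation[expr[1]]
    if op == "not":
        return not satisfies(interpretation, expr[1])
    if op == "and":
        return all(satisfies(interpretation, e) for e in expr[1:])
    if op == "or":
        return any(satisfies(interpretation, e) for e in expr[1:])
    if op == "implies":
        return (not satisfies(interpretation, expr[1])) or satisfies(interpretation, expr[2])
    raise ValueError(f"unknown constructor: {op}")

def variables(expr, acc):
    """Collect the propositional variables occurring in expr."""
    if expr[0] == "var":
        acc.add(expr[1])
    else:
        for sub in expr[1:]:
            variables(sub, acc)
    return acc

def is_consequence(kb, query):
    """True iff every model of kb (interpretation satisfying all of kb)
    also satisfies query."""
    vs = set()
    for e in list(kb) + [query]:
        variables(e, vs)
    vs = sorted(vs)
    for values in product([False, True], repeat=len(vs)):
        interpretation = dict(zip(vs, values))
        if all(satisfies(interpretation, e) for e in kb) \
                and not satisfies(interpretation, query):
            return False  # counter-model found: not a consequence
    return True

kb = [("implies", ("var", "bird"), ("var", "flies")), ("var", "bird")]
print(is_consequence(kb, ("var", "flies")))  # True
```

For the expressive languages actually used (description logics), such exhaustive enumeration is impossible; real provers rely on techniques such as the tableau methods discussed in §6.2.2.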
3.2 Data interlinking with link keys
Vast amounts of RDF data are made available on the web by various institutions providing overlapping information. To be fully exploited, different representations of the same object across data sets, often using different ontologies, have to be identified. When different vocabularies are used for describing data, it is necessary to identify the concepts they define. This task is called ontology matching and its result is an alignment, i.e. a set of correspondences relating entities of two different ontologies by a particular relation (which may be equivalence, subsumption, disjointness, etc.) [4, 8].
At the data level, data interlinking is the process of generating links identifying the same resource described in two data sets. Parallel to ontology matching, from two data sets it generates a link set, made of pairs of resource identifiers.
A link key is a structure ⟨Eq, In, ⟨c, c′⟩⟩ stating that whenever an instance of class c has the same values as an instance of class c′ for each pair of properties in Eq, and they share at least one value for each pair of properties in In, then the two instances denote the same entity. More precisely, its components are:
- Eq and In, two sets of pairs of property expressions;
- ⟨c, c′⟩, a pair of class expressions (or a correspondence).
Such a link key holds if and only if, for any pair of resources belonging to the classes in correspondence, such that the values of their properties in the equality set are pairwise equal and the values of those in the intersection set pairwise intersect, the resources are the same. Link keys can then be used for finding equal individuals across two data sets and generating the corresponding owl:sameAs links. Link keys take into account the non-functionality of RDF data and have to deal with non-literal values. In particular, they may use arbitrary properties and class expressions. This renders their discovery and use difficult.
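The following sketch (invented toy data and names, and one simplified reading where the equality properties must have equal value sets) illustrates how a link key ⟨Eq, In, ⟨c, c′⟩⟩ generates links over two small, non-functional data sets:

```python
# Properties map each resource to a *set* of values,
# reflecting the non-functionality of RDF.
dataset1 = {
    "b1": {"class": "Book", "title": {"Madame Bovary"},
           "author": {"G. Flaubert", "Gustave Flaubert"}},
    "b2": {"class": "Book", "title": {"Salammbo"},
           "author": {"Gustave Flaubert"}},
}
dataset2 = {
    "l1": {"class": "Livre", "titre": {"Madame Bovary"},
           "auteur": {"Gustave Flaubert", "G. Flaubert"}},
    "l2": {"class": "Livre", "titre": {"Bouvard et Pecuchet"},
           "auteur": {"Gustave Flaubert"}},
}

def link_key_links(d1, d2, eq, intersect, classes):
    """Generate <r1, r2> links for resource pairs satisfying the link key."""
    c1, c2 = classes
    links = []
    for r1, desc1 in d1.items():
        for r2, desc2 in d2.items():
            if desc1["class"] != c1 or desc2["class"] != c2:
                continue
            # Eq: equal value sets for each property pair.
            same = all(desc1.get(p, set()) == desc2.get(q, set()) for p, q in eq)
            # In: at least one shared value for each property pair.
            share = all(desc1.get(p, set()) & desc2.get(q, set()) for p, q in intersect)
            if same and share:
                links.append((r1, r2))
    return links

# Equal title values, and at least one shared author value:
print(link_key_links(dataset1, dataset2,
                     eq=[("title", "titre")],
                     intersect=[("author", "auteur")],
                     classes=("Book", "Livre")))
# -> [('b1', 'l1')]
```

Only b1 and l1 are linked: b2 and l2 share an author but have different titles, so the link key correctly keeps them apart.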
3.3 Experimental cultural knowledge evolution
Cultural evolution considers how culture spreads and evolves in human societies [23]. It applies a generalised version of the theory of evolution to culture. In computer science, cultural evolution experiments are performed through multi-agent simulation: a society of agents adapts its culture through a precisely defined protocol [19]. Agents repeatedly and randomly perform a specific task, called a game, and their evolution is monitored. The aim is to discover experimentally which states the agents reach and the properties of these states.
Experimental cultural evolution has been successfully and convincingly applied to the evolution of natural languages [25, 24]. Agents play language games and adjust their vocabulary and grammar as soon as they fail to communicate properly, i.e. they misuse a term or do not behave in the expected way. This approach has proved able to model many such games in a systematic framework and to provide convincing explanations of linguistic phenomena. Such experiments have shown how agents can agree on a colour coding system or a grammatical case system.
Work has recently been developed for evolving alignments between ontologies. It can be used to repair alignments better than blind logical repair [22], to create alignments based on entity descriptions [17], to learn alignments from dialogues framed in interaction protocols [18, 21], to correct alignments until no error remains [20, 3], or to start with no alignment at all [2]. Each study provides new insights and opens perspectives.
We adapt this experimental strategy to knowledge representation [3]. Agents use their knowledge, shared or private, to play games and, in case of failure, they use adaptation operators to modify it. We monitor the evolution of agent knowledge with respect to the agents' ability to perform the game (success rate) and with respect to the properties satisfied by the resulting knowledge itself. Such properties may, for instance, be:
- Agents converge to a common knowledge representation (a convergence property).
- Agents converge towards different but compatible (logically consistent) knowledge (a logical epistemic property), or towards closer knowledge (a metric epistemic property).
- Under the threat of a changing environment, agents whose operators preserve diverse knowledge recover faster from the changes than agents whose operators converge towards a single representation (a differential property under environment change).
Our goal is to determine which operators are suitable for achieving desired properties in the context of a particular game.
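The protocol above (repeated random pairwise games, an adaptation operator applied on failure, and monitoring of the success rate and of convergence) can be sketched as follows. This is a minimal caricature of ours, not Lazy Lavender: agents' "knowledge" is a single invented classification threshold, and the adaptation operator simply nudges the hearer's threshold towards the speaker's decision.

```python
import random

random.seed(1)
OBJECTS = list(range(20))

class Agent:
    """Each agent classifies objects with a private threshold (toy 'knowledge')."""
    def __init__(self):
        self.threshold = random.randint(0, 19)

    def classify(self, obj):
        return obj >= self.threshold

    def adapt(self, obj, expected):
        # Adaptation operator: move the threshold just enough so that
        # obj is classified as 'expected' next time.
        if expected and obj < self.threshold:
            self.threshold = obj
        elif not expected and obj >= self.threshold:
            self.threshold = obj + 1

def play(agents, rounds=2000):
    """Repeatedly draw a random pair and a random object; on disagreement,
    the hearer adapts to the speaker. Returns the per-round success record."""
    successes = []
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        obj = random.choice(OBJECTS)
        agree = speaker.classify(obj) == hearer.classify(obj)
        successes.append(agree)
        if not agree:
            hearer.adapt(obj, speaker.classify(obj))
    return successes

agents = [Agent() for _ in range(10)]
successes = play(agents)
early = sum(successes[:200]) / 200
late = sum(successes[-200:]) / 200
print(f"success rate: {early:.2f} -> {late:.2f}")
# Convergence property: how many distinct thresholds remain?
print(len({a.threshold for a in agents}))
```

Monitoring `successes` over time exhibits the success-rate measure, and counting distinct thresholds at the end probes the convergence property discussed above.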
4 Application domains
Our work on data interlinking targets linked data offered in RDF on the web. It has found applications in thesaurus and bibliographic data interlinking (see previous years' reports).
mOeX's work on cultural knowledge evolution is not directly applied and rather aims at extracting general principles of knowledge evolution. However, we foresee its potential impact in the long term in fields such as smart cities, the internet of things or social robotics in which the knowledge acquired by autonomous agents will have to be adapted to changing situations.
5 New software and platforms
5.1 New software
5.1.1 Lazy Lavender
- Name: Lazy Lavender
- Keywords: Reproducibility, Multi-agent, Simulation
- Scientific Description: Lazy Lavender supports mOeX's research on simulating knowledge evolution. It is not a general-purpose simulator; however, it features some methodological innovations in terms of facilitating the publication, recording, and replaying of experiments.
- Functional Description: Lazy Lavender is a simulation environment for cultural knowledge evolution, i.e. for running randomised experiments with agents adjusting their knowledge while attempting to communicate. It can generate detailed reports and data from the experiments, together with directions to repeat them. Lazy Lavender evolves continuously and does not have stable releases; instead, git hashes determine which version is used in a simulation.
- News of the Year: In 2020, we introduced the capability to describe full factorial experiment plans. We also developed agent capability to learn decision trees, extract ontologies from these decision trees and adapt their ontologies through interacting.
- URL: https://gitlab.inria.fr/moex/lazylav/
- Publications: hal-01661140, hal-01661139, hal-01180916
- Authors: Jérôme Euzenat, Irina Dragoste
- Contact: Jérôme Euzenat
- Participants: Jérôme Euzenat, Yasser Bourahla, Iris Lohja, Fatme Danash, Irina Dragoste
5.1.2 Alignment API
- Keywords: Ontologies, Alignment, Ontology engineering, Knowledge representation
- Scientific Description: The API itself is a Java description of tools for accessing the common format. It defines five main interfaces (OntologyNetwork, Alignment, Cell, Relation and Evaluator).
We provide an implementation of this API which can be used for producing transformations, rules or bridge axioms independently from the algorithm that produced the alignment. The implementation features:
- a base implementation of the interfaces with all useful facilities;
- a library of sample matchers;
- a library of renderers (XSLT, RDF, SKOS, SWRL, OWL, C-OWL, SPARQL);
- a library of evaluators (various generalisations of precision/recall, precision/recall graphs);
- a flexible test generation framework for generating evaluation data sets;
- a library of wrappers for several ontology APIs;
- a parser for the format.
To instantiate the API, it suffices to refine the base implementation by implementing the align() method. The new implementation then benefits from all the services already implemented in the base implementation.
- Functional Description: Using ontologies is the privileged way to achieve interoperability among heterogeneous systems within the Semantic web. However, as the ontologies underlying two systems are not necessarily compatible, they may in turn need to be reconciled. Ontology reconciliation requires most of the time to find the correspondences between entities (e.g. classes, objects, properties) occurring in the ontologies. We call a set of such correspondences an alignment.
See release notes.
This is the last release made from the gforge svn repository. The Alignment API is now hosted on gitlab and versioned with git. It may well be the last formal release; clone from the repository instead.
The Alignment API compiles in Java 11 (jars are still compiled in Java 8).
- News of the Year: Link keys are now fully supported by the EDOAL language. In particular, the API can transform them into SPARQL queries.
- URL: https://moex.gitlabpages.inria.fr/alignapi/
- Publications: hal-00825931, hal-00781018
- Authors: Nicolas Guillouet, Jérôme David, Maria-Elena Rosoiu, Jérôme Euzenat, Chan Le Duc, Jérôme Pierson
- Contact: Jérôme Euzenat
- Participants: Armen Inants, Chan Le Duc, Jérôme David, Jérôme Euzenat, Jérôme Pierson, Luz Maria Priego-Roche, Nicolas Guillouet
5.1.3 LinkEx
- Name: LinkEx
- Keywords: LOD - Linked open data, Data interlinking, Formal concept analysis
- Functional Description: LinkEx implements link key candidate extraction with our initial algorithms, formal concept analysis, or pattern structures. It can extract link key expressions with inverse and composed properties, and generate compound link keys. Extracted link key expressions may be evaluated using various measures, including our discriminability and coverage measures; they can also be evaluated against an input link sample. The set of candidates can be rendered in the Alignment API's EDOAL language or in dot.
- URL: https://gitlab.inria.fr/moex/linkex
- Publications: hal-02168775, hal-01179166
- Author: Jérôme David
- Contact: Jérôme David
6 New results
6.1 Cultural knowledge evolution
In 2020, we obtained results on cultural knowledge evolution applied to ontologies, as opposed to alignments. Moreover, we introduced learning of the ontologies within the protocol.
Finally, we published and extended our theoretical study of cultural alignment repair through dynamic epistemic logic. This led to new work on signature awareness in dynamic epistemic logic.
6.1.1 Evolving learned ontologies
Participants: Manuel Atencia, Yasser Bourahla, Jérôme Euzenat.
So far, our experiments in cultural knowledge evolution dealt with adapting alignments. However, agent knowledge is primarily represented in their ontologies, which may also be adapted. In order to study ontology evolution, we designed a two-stage experiment in which (1) agents learn ontologies based on examples of decision making, and (2) they then interactively compare their decisions on different objects and adapt their ontologies when they disagree. This framework may be instantiated in many ways. We used decision tree induction as the learning method and an approximation of ontology accuracy to guide adaptation.
In this scenario, fundamental questions arise: Do agents achieve successful interaction (increasingly consensual decisions)? Can this process improve knowledge correctness? Do all agents end up with the same ontology? We showed that agents indeed reduce interaction failure, most of the time they improve the accuracy of their knowledge about the environment, and they do not necessarily opt for the same ontology.
This work is part of the PhD thesis of Yasser Bourahla.
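The two-stage set-up can be caricatured as follows. This is a simplified sketch of ours, not the paper's exact protocol: the "decision tree" is a depth-1 stump over an invented one-dimensional environment, and the adaptation operator simply memorises the contested example and re-learns.

```python
import random

random.seed(2)
TRUE_BOUNDARY = 12          # the environment's hidden classification rule
OBJECTS = list(range(24))

def label(obj):
    return obj >= TRUE_BOUNDARY

def learn_stump(examples):
    """Stage 1: toy decision-tree induction -- pick the threshold
    minimising training error."""
    best_t, best_err = 0, float("inf")
    for t in range(25):
        err = sum((obj >= t) != lab for obj, lab in examples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

class Agent:
    def __init__(self):
        # Each agent sees its own small sample of labelled objects.
        self.examples = [(o, label(o)) for o in random.sample(OBJECTS, 6)]
        self.threshold = learn_stump(self.examples)

    def decide(self, obj):
        return obj >= self.threshold

def accuracy(agent):
    """Knowledge correctness with respect to the environment."""
    return sum(agent.decide(o) == label(o) for o in OBJECTS) / len(OBJECTS)

agents = [Agent() for _ in range(8)]
before = sum(map(accuracy, agents)) / len(agents)
# Stage 2: agents compare decisions; the disagreeing hearer adapts
# by recording the speaker's decision and re-learning.
for _ in range(3000):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    if speaker.decide(obj) != hearer.decide(obj):
        hearer.examples.append((obj, speaker.decide(obj)))
        hearer.threshold = learn_stump(hearer.examples)
after = sum(map(accuracy, agents)) / len(agents)
print(f"mean accuracy: {before:.2f} -> {after:.2f}")
```

In this toy version, agreement drives the population towards consensual decisions; whether accuracy with respect to the environment also improves depends on the population, which is precisely the kind of question the experiments above address.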
6.1.2 Modelling cultural evolution in dynamic epistemic logic
Participants: Manuel Atencia, Jérôme Euzenat, Line van den Berg.
In the Alignment Repair Game (ARG) [3], cultural knowledge evolution is achieved via adaptation operators. ARG was evaluated experimentally, and the experiments showed that agents converge towards successful communication and improve their alignments. However, the logical properties of such operators had not been established. We explored how closely these operators relate to logical dynamics. For that purpose, we developed DEOL, a variant of dynamic epistemic logic capturing the dynamics of the cultural alignment repair game. Ontologies are modelled as knowledge, and alignments as beliefs, in a variant of plausibility-based dynamic epistemic logic. The dynamics of the game is achieved through (public) announcements of the game outcome, and the adaptation operators are defined through conservative upgrades, i.e. modalities that transform models by reordering worlds according to plausibility. This framework allows us [10]: (1) to express the ontologies and alignments used, (2) to model the ARG adaptation operators through announcements and conservative upgrades, and (3) to formally establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
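A conservative upgrade can be illustrated on a toy plausibility order. The sketch below is ours, not the DEOL machinery: worlds are listed from most to least plausible (a strict order, so no ties), and upgrading with a proposition promotes only the most plausible world satisfying it, leaving the rest of the order untouched; belief is what holds in the most plausible world.

```python
def conservative_upgrade(order, proposition):
    """order: list of worlds, most plausible first; proposition: predicate.
    Promote the most plausible proposition-world to the top."""
    satisfying = [w for w in order if proposition(w)]
    if not satisfying:
        return list(order)  # nothing to promote
    best = satisfying[0]
    return [best] + [w for w in order if w != best]

# Worlds named by the atoms true in them; initially "pq" is believed.
worlds = ["pq", "p", "q", ""]
print(conservative_upgrade(worlds, lambda w: "q" not in w))
# -> ['p', 'pq', 'q', '']: the agent now believes p and not q.
```

A radical upgrade would instead move all satisfying worlds above all others; it is reorderings of this kind that model the ARG adaptation operators in the framework above.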
In the DEOL modelling, agents are aware of the vocabulary that the other agents may use (we call this public signature awareness). However, assuming that agents are fully aware of each other's signatures prevents them from adapting their vocabularies to newly gained information, whether obtained from the environment or learned through agent communication. Therefore, this is not realistic for open multi-agent systems. We have proposed a novel way to model awareness with partial valuations [11]. Partial dynamic epistemic logic allows agents to use their own vocabularies to reason and talk about the world. These vocabularies may be extended through a new modality for raising awareness. We gave a first account of the dynamics of raising awareness in this framework. We also started investigating an associated forgetting operator [12].
This work is part of the PhD thesis of Line van den Berg. Line also published her master's work on unreliable gossip [13].
6.2 Link keys
Link key exploration continued along two directions (§3.2):
- Extracting link keys;
- Reasoning with link keys.
6.2.1 Link key extraction with relational concept analysis
Participants: Manuel Atencia, Jérôme David, Jérôme Euzenat.
We recently showed that it is possible to encode the link key extraction problem in relational concept analysis (RCA) so as to extract the optimal link keys even in the presence of cyclic dependencies [5].
We also used pattern structures, an extension of formal concept analysis (FCA) with ordered structures, to reformulate the simple link key extraction problem. The strategies previously used to automatically discover link keys are not able to select the pair of classes to which a link key candidate applies. Indeed, a link key may be relevant for one pair of classes and irrelevant for another, and discovering link keys one pair of classes at a time may be computationally expensive if all pairs have to be considered. To overcome this limitation, we have introduced a specific and original pattern structure in which link keys can be discovered in one pass, while specifying the pair of classes associated with each link key; this focuses the discovery process and allows more flexibility [9]. The approach has been implemented in the LinkEx tool with a modified version of the AddIntent algorithm.
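The extraction-and-scoring idea can be sketched naively (invented toy data; simplified versions of the discriminability and coverage measures mentioned in §5.1.3, not LinkEx's actual algorithms): enumerate single-property equality candidates and score the link set each one generates.

```python
d1 = {"a1": {"titre": {"X"}}, "a2": {"titre": {"Y"}}, "a3": {"titre": {"Y"}}}
d2 = {"b1": {"title": {"X"}}, "b2": {"title": {"Y"}}}

def links_for(eq_pair):
    """Links generated by requiring equal value sets for one property pair."""
    p, q = eq_pair
    return {(r1, r2)
            for r1, v1 in d1.items() for r2, v2 in d2.items()
            if v1.get(p) == v2.get(q)}

def discriminability(links):
    """1.0 when the link set is one-to-one (simplified measure)."""
    if not links:
        return 0.0
    return min(len({r1 for r1, _ in links}),
               len({r2 for _, r2 in links})) / len(links)

def coverage(links):
    """Proportion of instances of both data sets involved in some link."""
    covered = {r1 for r1, _ in links} | {r2 for _, r2 in links}
    return len(covered) / (len(d1) + len(d2))

L = links_for(("titre", "title"))
print(sorted(L), discriminability(L), coverage(L))
```

Here the candidate links every instance (full coverage) but maps both a2 and a3 to b2, so its discriminability is only 2/3: the trade-off between the two measures is what guides candidate selection.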
6.2.2 An ExpTime tableau algorithm for the description logic ALC with link keys
Participants: Manuel Atencia, Jérôme Euzenat, Khadija Jradeh.
Link keys can be thought of as axioms in a description logic. As such, they can contribute to infer ABox axioms, such as links, terminological axioms or other link keys. This has important practical applications such as link key inference and link key consistency and redundancy checking.
We previously extended the standard tableau method designed for the description logic ALC to support reasoning with link keys, and proved that it terminates and is sound and complete, with 2ExpTime complexity. This year, we designed a new algorithm based on compressed tableau techniques and proved that it terminates and is sound and complete, with ExpTime complexity.
This work is part of the PhD thesis of Khadija Jradeh, co-supervised by Chan Le Duc (LIMICS).
7 Partnerships and cooperations
7.1 European Initiatives
7.1.1 FP7 & H2020 Projects
- Program: H2020 ICT-48
- Project acronym: Tailor
- Project title: Foundations of Trustworthy AI integrating Learning, Optimisation and Reasoning
- Web site: https://liu.se/en/research/tailor
- Duration: 2020 - 2023
- Coordinator: Linköping University
- Involvement within the Université Grenoble Alpes partner
- Participants: Manuel Atencia Arcas, Jérôme David, Jérôme Euzenat
- Other partners: INRIA
- Abstract: The purpose of TAILOR is to build the capacity of providing the scientific foundations for Trustworthy AI in Europe by developing a network of research excellence centres leveraging and combining learning, optimisation and reasoning.
7.2 National Initiatives
7.2.1 ANR Elker
- Program: ANR-PRC
- Project acronym: Elker
- Project title: Extending link keys: extraction and reasoning
- Web site: https://project.inria.fr/elker/
- Duration: October 2017 - September 2021
- Coordinator: LIG/Manuel Atencia
- Participants: Manuel Atencia Arcas, Jérôme David, Jérôme Euzenat
- Other partners: INRIA Lorraine, Université de Vincennes+Université Paris 13
- Abstract: The goal of Elker is to extend the foundations and algorithms of link keys (see §3.2) in two complementary ways: extracting link keys automatically from datasets and reasoning with link keys.
7.2.2 ANR MIAI
- Program: ANR-3IA
- Project acronym: Miai
- Project title: Multidisciplinary institute in artificial intelligence
- Web site: https://miai.univ-grenoble-alpes.fr
- Duration: July 2019 - December 2023
- Coordinator: Université Grenoble Alpes
- Participants: Manuel Atencia Arcas, Jérôme David, Jérôme Euzenat
- Abstract: The MIAI Knowledge communication and evolution chair aims at understanding and developing mechanisms for seamlessly improving knowledge. It studies the evolution of knowledge in a society of people and AI systems by applying evolution theory to knowledge representation.
7.2.3 PEPS RegleX-LD
- Program: Projets Exploratoires Premier Soutien (CNRS, INS2I)
- Project acronym: RegleX-LD
- Project title: Découverte de règles expressives de correspondances complexes et de liage de données
- Duration: January 2019 – December 2019 (extended to December 2021)
- Coordinator: IRIT/Cássia Trojahn
- Participants: Manuel Atencia Arcas, Jérôme David, Jérôme Euzenat
- Other partners: IRIT Toulouse, INRA Paris, LRI Orsay
- Abstract: RegleX-LD aims at discovering expressive ontology correspondences and data interlinking patterns using unsupervised or weakly supervised methods.
8 Dissemination
8.1 Promoting Scientific Activities
8.1.1 Scientific Events: Organisation
Member of the Organizing Committees
- Jérôme Euzenat was an organiser of the 15th Ontology Matching workshop of the 20th ISWC, held online in 2020 (with Pavel Shvaiko, Ernesto Jiménez Ruiz, Cássia Trojahn dos Santos and Oktie Hassanzadeh) [15]
8.1.2 Scientific Events: Selection
Chair of Conference Program Committees
- Jérôme Euzenat was “sister conference” co-chairperson (with Juanzi Li) of the “International semantic web conference (ISWC)” 2020.
Member of the Conference Program Committees
- Manuel Atencia and Jérôme David were programme committee members of the “International Joint Conference on Artificial Intelligence (IJCAI)”.
- Manuel Atencia and Jérôme Euzenat were programme committee members of the “European Conference on Artificial Intelligence (ECAI)”.
- Jérôme David and Jérôme Euzenat were programme committee members of the “Web Conference (WWW)”.
- Jérôme David was a programme committee member of the “International Semantic Web Conference (ISWC)”.
- Jérôme David was a programme committee member of the “European Semantic Web Conference (ESWC)”.
- Jérôme Euzenat was a programme committee member of the “International Conference on Conceptual Structures (ICCS)”.
- Jérôme Euzenat was a programme committee member of the “International Conference on Knowledge Engineering and Knowledge Management (EKAW)”.
- Jérôme Euzenat was a programme committee member of the “Journées Françaises d'intelligence artificielle fondamentale (JIAF)”.
- Jérôme David was a programme committee member of “Extraction et Gestion des Connaissances (EGC)”.
8.1.3 Journal
Member of the Editorial Boards
- Jérôme Euzenat is a member of the editorial board of the Journal of web semantics (area editor), the Journal on data semantics (associate editor) and the Semantic web journal.
Reviewer - Reviewing Activities
- Manuel Atencia was a reviewer for Autonomous agents and multi-agent systems.
- Jérôme David was a reviewer for Applied ontology and Artificial intelligence.
- Jérôme Euzenat was a reviewer for Knowledge and information systems.
8.1.4 Invited Talks
- Jérôme David gave a talk on “Several link keys are better than one, or Extracting disjunctions of link key candidates” at the Journées Raisonner sur les Données (RoD), 2020-07-06.
8.1.5 Leadership within the Scientific Community
- Jérôme Euzenat is a member of the scientific council of the CNRS GDR on Formal and Algorithmic Aspects of Artificial Intelligence.
- Jérôme Euzenat is an EurAI fellow.
- Jérôme David is a member of the board of the “Extraction et Gestion des Connaissances” (Knowledge extraction and management) conference series.
8.1.6 Scientific Expertise
- Jérôme Euzenat was a member of the evaluation panel for the Emmy Noether independent junior research groups in the field of artificial intelligence methods of the Deutsche Forschungsgemeinschaft (DFG).
8.1.7 Research Administration
- Jérôme Euzenat is a member of the COS (Scientific Orientation Committee) of Inria Grenoble Rhône-Alpes.
8.2 Teaching - Supervision - Juries
- Jérôme David is coordinator of the master “Mathématiques et informatique appliquées aux sciences humaines et sociales” (Univ. Grenoble Alpes);
- Manuel Atencia is co-head of the 2nd year of the master “Mathématiques et informatique appliquées aux sciences humaines et sociales” (Univ. Grenoble Alpes);
- Manuel Atencia is coordinator of the “Web, Informatique et Connaissance” option of the M2 “Mathématiques et informatique appliquées aux sciences humaines et sociales” (Univ. Grenoble Alpes);
- Licence: Jérôme David, Algorithmique et programmation par objets, 70h/y, L2 MIASHS, UGA, France
- Licence: Jérôme David, Système, L3 MIASHS, 18h/y, UGA, France
- Licence: Manuel Atencia, Introduction aux technologies du Web, 60h/y, L3 MIASHS, UGA, France
- Master: Jérôme David, Programmation Java 2, 30h/y, M1 MIASHS, UGA, France
- Master: Jérôme David, JavaEE, 30h/y, M2 MIASHS, UGA, France
- Master: Jérôme David, Web sémantique, 3h/y, M2 MIASHS, UGA, France
- Master: Manuel Atencia, Formats de données du web, 30h/y, M1 MIASHS, UGA, France
- Master: Manuel Atencia, Introduction à la programmation web, 42h/y, M1 MIASHS, UGA, France
- Master: Manuel Atencia, Intelligence artificielle, 7.5h/y, M1 MIASHS, UGA, France
- Master: Manuel Atencia, Web sémantique, 27h/y, M2 MIASHS, UGA, France
- Master: Jérôme David, Stage de programmation, 10h/y, M2 MIASHS, UGA, France
- Master: Jérôme Euzenat, Semantics of distributed knowledge, 22.5h/y, M2R MoSIG, UGA, France
- Nacira Abbas, “Link key extraction and relational concept analysis”, in progress since 2018-10-01 (Jérôme David and Amedeo Napoli)
- Khadija Jradeh, “Reasoning with link keys”, in progress since 2018-10-01 (Manuel Atencia and Chan Le Duc)
- Line van den Berg, “Knowledge Evolution in Agent Populations”, in progress since 2018-10-01 (Manuel Atencia and Jérôme Euzenat)
- Yasser Bourahla, “Evolving ontologies through communication”, in progress since 2019-10-01 (Manuel Atencia and Jérôme Euzenat)
- Andreas Kalaitzakis, “Effects of collaboration and specialisation on agent knowledge evolution”, in progress since 2020-10-01 (Jérôme Euzenat)
- Alban Flandin, “The benefits of forgetting knowledge”, in progress since 2020-11-01 (Jérôme Euzenat and Yves Demazeau)
- Manuel Atencia was a member of the PhD panel of Kemo Adrian (Universitat Autònoma de Barcelona): A computational model for mutual intelligibility in argumentation-based multi-agent systems.
- Jérôme Euzenat was a reviewer and panel member for the computer science habilitation (HDR) of Konstantin Todorov (Université de Montpellier): Towards a web of structured knowledge: methods, applications and perspectives.
8.3 Popularization
We are developing mediation material for explaining to the general public what knowledge representation is and how it may evolve. Its main goal is to show children that the same individuals may be classified in different and evolving ways, and that it is possible to communicate such classifications without expressing them. For that purpose, we have designed a card game called Class? which allows players to guess the hidden ontology of another player. In 2020, we started designing a more progressive session allowing participants not just to play the game, but to understand how to classify, learn decision trees, and find ontology alignments.
9 Scientific production
9.1 Major publications
- 1 inproceedings 'Data interlinking through robust link key extraction'. Proc. 21st European conference on artificial intelligence (ECAI), Praha (CZ), Amsterdam (NL): IOS Press, 2014, 15-20. URL: ftp://ftp.inrialpes.fr/pub/exmo/publications/atencia2014b.pdf
- 2 inproceedings 'Crafting ontology alignments from scratch through agent communication'. PRIMA 2017: Principles and Practice of Multi-Agent Systems, Nice (FR), Springer Verlag, 2017, 245-262
- 3 inproceedings 'Interaction-based ontology alignment repair with expansion and relaxation'. Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne (VIC AU), 2017, 185-191
- 4 book 'Ontology matching'. Heidelberg (DE): Springer-Verlag, 2013. URL: http://book.ontologymatching.org
9.2 Publications of the year
International peer-reviewed conferences
Scientific book chapters
Edition (books, proceedings, special issue of a journal)
9.3 Cited publications
- 17 inproceedings 'Aligning experientially grounded ontologies using language games'. Proc. 4th international workshop on graph structure for knowledge representation, Buenos Aires (AR), 2015, 15-31
- 18 article 'An interaction-based approach to semantic alignment'. Journal of web semantics 13(1), 2012, 131-147
- 19 article 'The dissemination of culture: a model with local convergence and global polarization'. Journal of conflict resolution 41(2), 1997, 203-226
- 20 inproceedings 'Attuning ontology alignments to semantically heterogeneous multi-agent interactions'. Proc. 22nd European conference on artificial intelligence (ECAI), The Hague (NL), 2016, 871-879
- 21 article 'Vocabulary alignment in openly specified interactions'. Journal of artificial intelligence research 68, 2020, 69-107
- 22 inproceedings 'First experiments in cultural alignment repair (extended version)'. Proc. ESWC 2014 satellite events revised selected papers, Lecture notes in computer science 8798, 2014, 115-130
- 23 article 'Towards a unified science of cultural evolution'. Behavioral and brain sciences 29(4), 2006, 329-383
- 24 book 'The evolution of grounded spatial language'. Berlin (DE): Language Science Press, 2016
- 25 book L. Steels 'Experiments in cultural language evolution'. Amsterdam (NL): John Benjamins, 2012