The goal of DANTE is to develop **novel models, algorithms and methods to analyse the dynamics of large-scale networks** (*e.g.* social networks, technological networks such as the Web and hyperlinks, articles and co-citations, email exchanges, economic relations, bacteria/virus propagation in human networks...). Large datasets describing such networks are nowadays more accessible due to the emergence of online activities and new data-collection techniques. These advances provide us with an unprecedented avalanche of large data sets recording the digital footprints of millions of entities (*e.g.* individuals, computers, documents, stocks, memes) and their temporal interactions.

Our main challenge is to propose **generic methodologies and concepts for developing relevant formal tools to model and analyse the dynamics and evolution of such networks, that is, to formalise the dynamic properties of both the structural and temporal interactions of network entities/relations**:

**Ask** questions relevant to the application domains, so as to learn something new about those domains instead of merely playing with powerful computers on huge data sets.

**Access** and collect data with adapted and efficient tools. This includes a reflexive step on the biases of the collected data and their relation to real activities in the application domain.

**Model** the dynamics of networks by analysing their structural and temporal properties jointly, inventing original approaches that combine graph theory with signal processing. A key point is to capture temporal features in the data, which may reveal meaningful insights into the evolution of the networks. Subsequently, the aim is to infer the observed structural and temporal features from the model processes, and to match the emerging statistical properties (probability densities, dependencies, conditionals) to characterise the dynamical behaviour of the targeted systems (*e.g.*, non-stationarity, scaling laws, burstiness...).

**Interpret** the results and make the knowledge robust and useful, in order to control, optimise and (re)act on the network structure itself and on the exchange/interaction protocols, so as to tune the performance of the global system.

The challenge is to solve a major scientific puzzle, common to several application domains (*e.g.*, sociology, information technology, epidemiology) and central in network science: how to understand the causality between the evolution of macro-structures and individuals, at local and global scales?

We propose a novel model for representing finite discrete Time-Varying Graphs (TVGs).
The major application of such a model is for the modelling and representation of dynamic networks.
In our proposed model, an edge is able to connect a node

We show that the contiguity and linearity of cographs on n vertices are both O(log n). Moreover, we show that this bound is tight for contiguity, as there exists a family of cographs on n vertices whose contiguity is Omega(log n). We also provide an Omega(log n / log log n) lower bound on the maximum linearity of cographs on n vertices. As a by-product of our proofs, we obtain a min-max theorem, of interest in its own right, stating the equality between the rank of a tree and the minimum height of one of its path partitions. (See )

The parameters governing the diffusion and mutation of nosocomial bacterial strains are still not completely understood today. The macroscopic mechanisms at play during diffusion are poorly known, as opposed to the microscopic mechanisms, which are well understood. At the scale of a hospital, this is a complex system that needs to be simplified and modelled before an epidemiological study of the whole system. We aim at answering the question of whether there exists a correlation between the contact graph (a dynamic network) and the microbiological diffusion of strains of the Staphylococcus aureus bacterium. For that purpose, the research project MOSAR (Mastering hOSpital Antimicrobial Resistance) and the i-Bird group (Individual Based Investigation of Resistance Dissemination) designed a large-scale experiment carried out at the Hospital of Berck-sur-Mer (France). Our work focuses on comparing the diffusion of selected strains to the results obtained with wavelets on the aggregated contact graph, the strains being selected so that they show a clear diffusion over time. We study the correlation between the spatial diffusion of the wavelets and the spatio-temporal diffusion of those strains.

IEEE 802.11 is implemented in many wireless networks, including multi-hop networks where communications between nodes are conveyed along a chain. We present a modelling framework to evaluate the performance of flows conveyed through such a chain. Our framework is based on a hierarchical modelling composed of two levels. The lower level is dedicated to the modelling of each node, while the upper level matches the actual topology of the chain. Our approach can handle different topologies, takes into account Bit Error Rate and can be applied to multi-hop flows with rates ranging from light to heavy workloads. We assess the ability of our model to evaluate loss rate, throughput, and end-to-end delay experienced by flows on a simple scenario, where the number of nodes is limited to three. Numerical results show that our model accurately approximates the performance of flows with a relative error typically less than 10%.

The paper "Hurst Exponent IntraPartum Fetal Heart Rate: Impact of Decelerations" was granted the best paper award at the 26th IEEE International Symposium on Computer-Based Medical Systems (CBMS).

Indeed, their dynamics are typically characterised by non-standard and intricate statistical properties, such as non-stationarity, long-range memory effects, and intricate space and time correlations.

Analysing, modelling, and even defining adapted concepts for dynamic graphs is at the heart of DANTE. This is a largely open question that has to be answered while keeping a balance between specificity (solutions triggered by specific data sets) and generality (universal approaches disconnected from social realities). We will tackle this challenge from a graph-based signal processing perspective involving signal analysts and computer scientists, together with experts of the application data domain. One can distinguish two different issues in this challenge, one related to the graph-based organisation of the data and the other to the time dependency that naturally exists in the dynamic graph object. In both cases, a number of contributions can be found in the literature, albeit in different contexts. In our application domain, high-dimensional data "naturally reside" on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs.

As for the first point, adapting well-founded signal processing techniques to data represented as graphs is an emerging, yet quickly developing field which has already received key contributions. Some of them are very general and delineate ambitious programmes aimed at defining universal, generally unsupervised methods for exploring high-dimensional data sets and processing them. This is the case, for instance, of the "diffusion wavelets" and "diffusion maps" pushed forward at Yale and Duke. Others are more traditionally connected with standard signal processing concepts, in the spirit of elaborating new methodologies via some bridging between networks and time series, see, *e.g.*, ( and references therein). Other viewpoints can be found as well, including multi-resolution Markov models, Bayesian networks or distributed processing over sensor networks. Such approaches can be particularly successful for handling static graphs and unveiling aspects of their organisation in terms of dependencies between nodes, grouping, etc. Incorporating possible time dependencies within the whole picture, however, calls for the addition of an extra dimension to the problem, "as it would be the case when switching from one image to a video sequence", a situation for which one can imagine taking advantage of the whole body of knowledge attached to non-stationary signal processing.
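
As an illustration of the graph-based signal processing viewpoint, the sketch below (hypothetical, not tied to any specific cited work) computes a graph Fourier transform by projecting a vertex signal onto the eigenbasis of the graph Laplacian; a constant ("smooth") signal concentrates all its energy on the zero-frequency mode.

```python
import numpy as np

def graph_fourier(signal, adjacency):
    """Decompose a vertex signal over the eigenbasis of the graph Laplacian."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # Eigenvectors of the symmetric Laplacian play the role of Fourier modes;
    # eigenvalues order them from "smooth" (low) to "oscillating" (high).
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    return eigvals, eigvecs.T @ signal  # spectral coefficients

# Path graph on 4 vertices carrying a constant signal: all spectral
# energy lands on the zero eigenvalue (the "DC" graph frequency).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 1.0, 1.0, 1.0])
freqs, coeffs = graph_fourier(x, A)
```

The same projection underlies graph wavelet constructions, which replace the global eigenbasis by localised, multi-scale atoms.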

We need to focus on the intrinsic properties of evolving/dynamic complex networks. New notions (as opposed to classical static graph properties) have to be introduced: the rate of vertex or link appearance and disappearance, and the duration of link presence or absence. Moreover, more specific properties related to the dynamics have to be defined, and these are somewhat tied to the way the dynamic graph is modelled.
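
A minimal sketch of how such notions can be computed from raw data; the `(t_start, t_end, u, v)` contact-interval format is an assumption made for illustration:

```python
from collections import defaultdict

def link_statistics(contacts):
    """Basic temporal statistics of a dynamic graph given as a list of
    (t_start, t_end, u, v) contact intervals (hypothetical input format)."""
    durations = defaultdict(float)   # total presence time per link
    count = defaultdict(int)         # number of distinct contacts per link
    for t0, t1, u, v in contacts:
        link = tuple(sorted((u, v)))
        durations[link] += t1 - t0   # accumulate link presence duration
        count[link] += 1             # one more appearance of this link
    return dict(durations), dict(count)

contacts = [(0, 5, "a", "b"), (7, 9, "b", "a"), (2, 3, "b", "c")]
dur, cnt = link_statistics(contacts)
```

From such per-link counts and durations, appearance/disappearance rates follow by dividing by the observation window.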

Through the systematic analysis and characterisation of static network representations of many different systems, researchers from several disciplines have unveiled complex topologies and heterogeneous structures, with connectivity patterns statistically characterised by heavy tails and large fluctuations, scale-free properties and non-trivial correlations such as high clustering and hierarchical ordering.
A large amount of work has been devoted to the development of new tools for the statistical characterisation and modelling of networks, in order to identify their most relevant properties and to understand which growth mechanisms could lead to these properties. Most of those contributions have focused on static graphs or on dynamic processes (*e.g.* diffusion) occurring on static graphs.
This has called forth a major effort in developing the methodology to characterise the topology and temporal behaviour of complex networks, to describe the observed structural and temporal heterogeneities, to detect and measure emerging community structures, to see how the functionality of networks determines their evolving structure, and to determine what kinds of correlations play a role in their dynamics.

The challenge is now to extend this kind of statistical characterisation to dynamic graphs. In such graphs, links are temporal events, called contacts, which can be either punctual or last for some period of time. Because of the complexity of this analysis, the temporal dimension of the network is often ignored or only roughly considered. Fully taking the dynamics of the links into account is therefore a crucial and highly challenging issue.

Another powerful approach to modelling time-varying graphs is via activity-driven network models. In this case, the bottom-line assumption concerns only the distribution of activity rates of the interacting entities. The activity rate is realistically broadly distributed and refers to the probability that an entity becomes active and creates a connection with another entity within a unit time step. Even the generic model is already capable of recovering some realistic features of the emerging graph, but its main advantage is to provide a general framework to study various types of correlations present in real temporal networks. By synthesising such correlations (*e.g.* memory effects, preferential attachment, triangular closing mechanisms, ...) from the real data, we are able to extend the general mechanism and obtain a temporal network model which exhibits selected realistic features in a controlled way. This can be used to study the effect of selected correlations on the evolution of the emerging structure and its co-evolution with ongoing processes such as spreading phenomena, synchronisation, evolution of consensus, random walks, etc. This approach also allows the development of control and immunisation strategies that fully take into account the temporal nature of the underlying network.
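
The basic mechanism can be sketched in a few lines; the activity values and the number of links `m` below are illustrative and not taken from any cited study:

```python
import random

def activity_driven_step(activities, m=2, rng=random):
    """One step of the basic activity-driven model: each node i becomes
    active with probability a_i, then links to m uniformly chosen targets."""
    n = len(activities)
    edges = set()
    for i, a in enumerate(activities):
        if rng.random() < a:                 # node i fires this time step
            others = [j for j in range(n) if j != i]
            for j in rng.sample(others, m):  # m random partners
                edges.add(tuple(sorted((i, j))))
    return edges

random.seed(1)
acts = [0.9, 0.1, 0.1, 0.1, 0.1]   # one very active node, four rarely active
snapshot = activity_driven_step(acts)
```

Memory effects or triangular closure would be added by biasing the target choice towards previously contacted nodes or neighbours of neighbours.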

First, the dynamic network object itself triggers original algorithmic questions. These mainly concern distributed algorithms that should be designed and deployed to efficiently measure the object itself and get an accurate view of its dynamic behaviour. Such distributed measurements should be "transparent", that is, they should not introduce bias, or at least any bias should be controllable and correctable. This problem is encountered in all distributed metrology measurements / distributed probes: P2P, sensor networks, wireless networks, QoS routing... This question naturally raises the intrinsic notion of adaptation and control of the dynamic network itself, since autonomous networks and traffic-aware routing are becoming crucial.

A case in point for dynamic networks are communication networks, which are known to potentially undergo high dynamicity.
The dynamicity exhibited by these networks results from several factors including, for instance, changes in the topology and varying workload conditions.
Although most implemented protocols and existing solutions in the literature can cope with dynamic behaviour, they operate identically whatever the actual properties of the dynamicity.
For instance, parameters of the routing protocols (*e.g.* hello packet transmission frequency) or routing methods (*e.g.* reactive / proactive) are commonly held constant regardless of node mobility.
Similarly, the algorithms ruling CSMA/CA (*e.g.* the size of the contention window) are tuned identically and do not change according to the actual workload and observed topology.

Dynamicity in computer networks tends to affect a large number of performance parameters (if not all) coming from various layers (viz. physical, link, routing and transport). To find out which ones matter the most for our intended purpose, we expect to rely on the tools developed within the two former axes. These quantities should capture and characterise the actual network dynamicity. Our goal is to take advantage of this information in order to refine existing protocols, or even to propose new solutions. More precisely, we will attempt to associate "fundamental" changes occurring in the underlying graph of a network (reported through graph-based signal tools) with the quantitative performance metrics that matter to networking applications and end-users. We expect to rely on available testbeds such as SensLAB and FIT to experiment with our solutions and ultimately validate our approach.

In parallel to the advances in modern medicine, health sciences and public health policy, epidemic models aided by computer simulations and information technologies offer an increasingly important tool for the understanding of transmission dynamics and of epidemic patterns. The increased computational power and use of Information and Communication Technologies make feasible sophisticated modelling approaches augmented by detailed in vivo data sets, and allow the study of a variety of possible scenarios and control strategies, helping and supporting the decision process at the scientific, medical and public health levels. The research conducted in the DANTE project finds direct applications in the domain of LSH, since modelling approaches crucially depend on our ability to describe the interactions of individuals in the population. In the MOSAR project we are collaborating with the team of Pr. Didier Guillemot (Inserm / Institut Pasteur / Université de Versailles). Within the TUBEXPO and ARIBO projects, we are collaborating with Pr. Jean-Christophe Lucet (Professor at Université Paris VII and hospital practitioner, APHP).

In the last ten years, the study of complex networks has received an important boost from large interdisciplinary efforts aimed at their analysis and characterisation. Two main points explain this intense activity: on the one hand, many systems coming from very different disciplines (from biology to computer science) have a convenient representation in terms of graphs; on the other hand, the ever-increasing availability of large data sets and computing power has allowed their storage and manipulation. Many maps have emerged, describing networks of practical interest in social science, critical infrastructures, networking, and biology. The DANTE project targets the study of dynamically evolving networks, both from the point of view of their structure and of the dynamics of processes taking place on them.

As outcomes of the ANR SensLAB project and of the Inria ADTs SensTOOLS and SensAS, several pieces of software (from low-level drivers to OSes) were delivered and made available to the research community. The main goal is to lower the cost of developing and deploying a large-scale wireless sensor network application. All software is gathered on the SensLAB web site (http://www.senslab.info/), where one can find:

low-level C drivers for all hardware components;

ports of the main OSes, mainly TinyOS, FreeRTOS and Contiki;

ports and development of higher-level libraries such as routing and localisation.

Queueing models, steady-state solution, online tool, web interface
Online tool: http://

This tool aims at providing an ergonomic web-based interface to promote the use of our proposed solutions to numerically solve classical queueing systems.
This tool was launched in 2011 and presented at the conference. It attracts hundreds of visitors scattered across the world each month.
Its initial implementation only includes the solution for a queue with multiple servers, general arrivals, exponential services and a possibly finite buffer.
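
For illustration, a minimal solver for the Markovian special case (Poisson arrivals, i.e. an M/M/c/N queue) can be written directly from the birth-death balance equations; the online tool itself covers more general arrival processes, which this sketch does not.

```python
def mmcn_steady_state(lam, mu, c, N):
    """Steady-state probabilities of an M/M/c/N queue (birth-death chain).
    lam: arrival rate, mu: per-server service rate, c: servers, N: capacity."""
    # Detailed balance gives p_k = p_{k-1} * lam / (min(k, c) * mu).
    p = [1.0]
    for k in range(1, N + 1):
        p.append(p[-1] * lam / (min(k, c) * mu))
    total = sum(p)                      # normalise so probabilities sum to 1
    return [x / total for x in p]

# Two servers, capacity 4, load rho = lam / (c * mu) = 0.5.
probs = mmcn_steady_state(lam=1.0, mu=1.0, c=2, N=4)
```

Performance metrics (mean queue length, blocking probability) then follow by summing over `probs`.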

This contribution is part of the PhD work of S. Roy (Dec. 2010 – March 2014) on probabilistic resource management in the context of highly volatile workloads. We proposed a Markovian model that can reproduce the workload volatility occurring in real-life Video on Demand (VoD) systems. We derived an original MCMC-based identification procedure to calibrate the model on real data. We assessed the accuracy of the proposed procedure in terms of bias and variance through several numerical experiments, and we compared its outcome with a former ad-hoc method that we had designed. We also compared the performance of our approach to that of other existing models by examining the goodness-of-fit of the steady-state distribution and of the autocorrelation function of real workload traces. Results show that the combination of our model and its MCMC-based calibration clearly outperforms the existing state of the art. (See , )
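
As a toy illustration of MCMC-based calibration (not the actual procedure used in the thesis), a random-walk Metropolis sampler can recover the rate of a simple Poisson count model from synthetic data:

```python
import math
import random

def log_post(lam, data):
    """Log-posterior of a Poisson rate with a flat prior on lam > 0
    (a toy stand-in for the full workload model's likelihood)."""
    if lam <= 0:
        return float("-inf")
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in data)

def metropolis(data, n_iter=5000, step=0.5, seed=0):
    rng = random.Random(seed)
    lam, samples = 1.0, []
    for _ in range(n_iter):
        cand = lam + rng.gauss(0, step)      # symmetric random-walk proposal
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < log_post(cand, data) - log_post(lam, data):
            lam = cand
        samples.append(lam)
    return samples

data = [3, 4, 2, 5, 3, 4, 3]                 # synthetic "arrival counts"
samples = metropolis(data)
estimate = sum(samples[1000:]) / len(samples[1000:])   # discard burn-in
```

The same accept/reject skeleton applies when the likelihood is that of a richer Markovian workload model.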

This contribution is part of the PhD work of M. Sokol (EPI MAESTRO, Oct. 2009 – May 2014), co-supervised with K. Avrachenkov and Ph. Nain, on the classification of content and users in peer-to-peer networks using graph-based semi-supervised learning methods. Semi-supervised learning methods constitute a category of machine learning methods which use labelled points together with unlabelled data to tune the classifier. The main idea of semi-supervised methods is based on the assumption that the classification function should change smoothly over a similarity graph, which represents relations among data points. This idea can be expressed using kernels on graphs, such as the graph Laplacian. Different semi-supervised learning methods have different kernels, which reflect how the underlying similarity graph influences the classification results. In a recent work, we analysed a general family of semi-supervised methods, provided insights about the differences among the methods and gave recommendations for the choice of the kernel parameters and labelled points. In particular, it appeared that it was preferable to choose a kernel based on the properties of the labelled points. We illustrated our general theoretical conclusions with an analytically tractable characteristic example, a clustered preferential attachment model, and the classification of content in P2P networks. (See )
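
A minimal sketch of one member of this family of methods, standard label propagation with the random-walk kernel D⁻¹A; the example graph and parameters are illustrative, not taken from the cited work:

```python
import numpy as np

def label_propagation(adjacency, labels, alpha=0.8, n_iter=100):
    """Graph-based semi-supervised classification: iterate
    F <- alpha * D^{-1} A F + (1 - alpha) * Y, one standard kernel choice
    among the family of methods discussed in the text."""
    A = np.asarray(adjacency, dtype=float)
    D_inv = np.diag(1.0 / A.sum(axis=1))     # inverse degree matrix
    Y = np.asarray(labels, dtype=float)      # one-hot rows; zero if unlabelled
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * D_inv @ A @ F + (1 - alpha) * Y
    return F.argmax(axis=1)                  # predicted class per node

# Two triangles joined by a single edge; one labelled point per cluster.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1          # node 0 labelled class 0
Y[5, 1] = 1          # node 5 labelled class 1
classes = label_propagation(A, Y)
```

Each cluster inherits the label of its single labelled node, since the classification function varies smoothly over the similarity graph.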

Intrapartum fetal heart rate monitoring constitutes an important stake aimed at early acidosis detection. Measuring heart rate variability is often considered a powerful tool to assess the intrapartum health status of the fetus and has been envisaged using various techniques. In the present contribution, the power of scale invariance parameters, such as the Hurst exponent and the global regularity exponent, estimated from wavelet coefficients of intrapartum fetal heart rate time series, to evaluate the health status of fetuses is quantified on a case-study database constituted at a French academic hospital in Lyon. Notably, the ability of such parameters to discriminate subjects incorrectly classified as abnormal according to FIGO rules is discussed. Also, the impact of the occurrence of decelerations identified as complicated by obstetricians on the values taken by the Hurst parameter is investigated in detail. (See )
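
The wavelet-based estimation principle can be sketched as follows: for a process with stationary increments, the variance of detail coefficients scales as 2^{j(2H-1)} across octaves j, so H is read off a log-log regression. This toy implementation uses a Haar transform and white noise (for which H = 0.5 is expected); it is not the estimator used in the study.

```python
import math
import random

def haar_details(x):
    """One level of the orthonormal Haar transform: approximations, details."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def hurst_wavelet(x, levels=5):
    """Estimate H from the slope of log2(detail variance) vs. octave j,
    using slope = 2H - 1 for a process with stationary increments."""
    logs, scales = [], []
    a = list(x)
    for j in range(1, levels + 1):
        a, d = haar_details(a)               # recurse on approximations
        var = sum(c * c for c in d) / len(d)
        logs.append(math.log2(var))
        scales.append(j)
    # Ordinary least-squares slope of logs against scales.
    n, sx, sy = len(scales), sum(scales), sum(logs)
    sxx = sum(s * s for s in scales)
    sxy = sum(s * l for s, l in zip(scales, logs))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (slope + 1) / 2

random.seed(42)
noise = [random.gauss(0, 1) for _ in range(2 ** 14)]
H = hurst_wavelet(noise)   # white noise: detail variance is flat, H near 0.5
```

Longer-memory signals (e.g. heart rate series) tilt the log-variance line upward, yielding H above 0.5.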

We propose an adaptation of the collision probability used in the available bandwidth measurement designed for Mobile Ad hoc Networks (MANETs) and used in ABE. We propose a new ABE+ that includes a new function to estimate the probability of losses. This function has been specially designed for Vehicular Ad hoc Networks, to suit the high mobility and variable density of vehicular environments. In this new solution, we consider not only the packet size, but also other metrics, such as the density and speed of the nodes. We include the ABE+ algorithm in the forwarding decisions of the GBSR-B protocol, which is an improvement of the well-known GPSR protocol. Finally, through simulations, we compare the performance of our new ABE+ to that of the original ABE. These results show that ABE+ coupled with GBSR-B achieves a good trade-off in terms of packet losses and packet end-to-end delay. (See )

This contribution stems from a long-standing collaboration with Pr. Brandwajn (UCSC), devoted to innovative numerical solutions of classical queueing systems.
Many real-life systems can be modelled using classical queueing systems whose exact solution exhibits combinatorial growth as the number of servers and/or phases increases.
To circumvent this complexity issue, we propose to use a reduced state description in which the state of only one server is represented explicitly, while the other servers are accounted for through their rate of completions. The accuracy of the resulting approximation is generally good and, moreover, tends to improve as the number of servers in the system increases. Its computational complexity in terms of the number of states grows only linearly in the number of servers and phases. (See )

Wireless mesh networks offer a simple and inexpensive way to deploy a wireless-based infrastructure network.
They are particularly suitable when the network is deployed temporarily, as with substitution networks (studied in the ANR RESCUE project).
In order to ensure a high capacity, the mesh nodes may be equipped with several 802.11 network interfaces.
The classical approach to assigning 802.11 channels to these interfaces aims to minimise global interference, *i.e.* to minimise the conflict graph.
Our proposition is twofold.
We define a new benefit function that describes the network capacity rather than interference/conflicts.
We also derive an efficient algorithm that maximises this function.
Simulation results show that the proposed function is very close to the measured end-to-end throughputs, empirically showing that it is the right function to optimise.
Moreover, the channel assignment algorithm based on this optimisation yields a significant throughput increase compared to classical approaches.
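
For contrast, the classical interference-minimising baseline mentioned above can be sketched as a greedy assignment over a conflict graph; the link names and channel numbers below are purely illustrative.

```python
def greedy_channel_assignment(links, conflicts, channels):
    """Greedily give each link the channel that conflicts least with the
    links already assigned (the classical baseline that the capacity-driven
    benefit function is contrasted with)."""
    assignment = {}
    for link in links:
        best, best_cost = None, None
        for ch in channels:
            # Cost: interfering links already placed on this channel.
            cost = sum(1 for other in conflicts.get(link, ())
                       if assignment.get(other) == ch)
            if best_cost is None or cost < best_cost:
                best, best_cost = ch, cost
        assignment[link] = best
    return assignment

links = ["l1", "l2", "l3"]                       # a 3-link chain
conflicts = {"l1": ["l2"], "l2": ["l1", "l3"], "l3": ["l2"]}
assign = greedy_channel_assignment(links, conflicts, channels=[1, 6])
```

A capacity-driven variant would replace the conflict count by the benefit function described above.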

We consider the problem of aggregating a temporal contact series into a series of graphs. This consists in slicing time into windows of equal length and forming, for each window, the graph of the contacts that occurred within it. The length chosen for the windows has a great impact on the properties of the resulting graph series. The key question that arises is then: how should one choose the length of the aggregation windows? In the master internship of Yannick Léo (spring 2013), we designed a method to do so, using the occupation rate of paths in the graph series. We applied this method to several real-world data sets and obtained very good results. The method has also greatly benefited from a new notion of shortest dynamic paths that we developed during the master internship of Pierre-Alain Scribot (spring 2013).
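
The aggregation step itself can be sketched as follows, assuming punctual contacts given as `(t, u, v)` triples (an illustrative input format):

```python
from collections import defaultdict

def aggregate(contacts, window, horizon):
    """Slice [0, horizon) into windows of equal length and build, for each
    window, the graph of the contacts occurring within it."""
    graphs = defaultdict(set)
    for t, u, v in contacts:
        if 0 <= t < horizon:
            graphs[int(t // window)].add(tuple(sorted((u, v))))
    n_windows = int(horizon // window)
    return [graphs[w] for w in range(n_windows)]   # one edge set per window

contacts = [(0.5, "a", "b"), (1.2, "b", "c"), (1.9, "a", "b"), (3.4, "c", "a")]
series = aggregate(contacts, window=1.0, horizon=4.0)
```

Sweeping `window` and measuring the occupation rate of paths across the resulting series is what guides the choice of aggregation length.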

We analysed a huge and very precise trace of contact data collected over 6 months on the entire population of a rehabilitation hospital. We investigated the graph structure of the average daily contact network and unveiled striking properties of this structure in the considered hospital, such as a very strong introversion of services, the key role of the contacts between patients and staff in connecting those introverted services together, and very different daily contact patterns between patients and staff. The methodology we designed to conduct these analyses is very general and can be applied to any dynamic complex network whose nodes are classified into groups. These results are part of Lucie Martinet's PhD thesis.

We designed the first linear-time algorithm for computing the prime decomposition of a digraph G with regard to the Cartesian product. A remarkable feature of our solution is that it computes the decomposition of G from the decomposition of its underlying undirected graph, for which a linear-time algorithm already exists. This allows our algorithm to remain conceptually very simple and, in addition, it provides new insight into the connections between the directed and undirected versions of the Cartesian product of graphs.

We consider a graph parameter called *contiguity*, which aims at encoding a graph by a linear ordering of its vertices. The purpose is to obtain a very compact encoding of a graph that still answers neighbourhood queries on the graph (*i.e.* listing the neighbours of a given vertex) in optimal time. This makes it possible to deal with very large graph instances by loading them entirely into memory, without penalising the running time of the algorithms treating those instances. We designed a linear-time algorithm for computing a constant-ratio approximation of the contiguity of an arbitrary cograph. Our algorithm does not only give an approximation of the parameter, but also provides an encoding of the cograph realising this value.

A bilateral contract has been signed between the DANTE Inria team and ACT750 to formalise their collaboration in the context of churn prediction.

A bilateral contract has been signed between the DANTE Inria team and KRDS to formalise their collaboration in the context of Facebook marketing / cascade analysis.

A bilateral contract has been signed between the DANTE Inria team and HiKoB to formalise their collaboration in the context of the Equipex FIT (Future Internet of Things). FIT is one of 52 winning projects of the Equipex research grant programme. It will set up a competitive and innovative experimental facility that brings France to the forefront of Future Internet research. FIT benefits from a €5.8 million grant from the French government and runs from 22.02.2011 to 31.12.2019. Its main ambition is to create a first-class facility to promote experimentally driven research and to facilitate the emergence of the Internet of the future.

Network Science

The main scientific objectives of network science are:

to design efficient tools for measuring specific properties of large scale complex networks and their dynamics;

to propose accurate graph and dynamics models (*e.g.*, generators of random graph fulfilling measured properties);

to use this knowledge from an algorithmic perspective, for instance to improve the QoS of routing schemes, the speed of information spreading, the selection of a target audience for advertisements, etc.

The ADR will focus on:

Network sampling

Epidemics in networks

Search in networks

Clustering of networks

Detecting network central nodes

Network evolution and anomaly detection

Equipex FIT (Future Internet of Things). FIT is one of 52 winning projects of the Equipex research grant programme. It will set up a competitive and innovative experimental facility that brings France to the forefront of Future Internet research. FIT benefits from a €5.8 million grant from the French government and runs from 22.02.2011 to 31.12.2019. The main ambition is to create a first-class facility to promote experimentally driven research and to facilitate the emergence of the Internet of the future.

As proposed by initiatives in Europe and worldwide, enabling an open, general-purpose, and sustainable large-scale shared experimental facility will foster the emergence of the Future Internet. There is an increasing demand among researchers and production system architects to federate testbed resources from multiple autonomous organisations into a seamless/ubiquitous resource pool, thereby giving users standard interfaces for accessing the widely distributed and diverse collection of resources they need to conduct their experiments. The F-Lab project builds on a leading prototype for such a facility: the OneLab federation of testbeds. OneLab pioneered the concept of testbed federation, providing a federation model that has been proven through a durable interconnection between its flagship testbed PlanetLab Europe (PLE) and the global PlanetLab infrastructure, pooling over five hundred sites around the world. One key objective of F-Lab is to further develop an understanding of what it means for autonomous organisations operating heterogeneous testbeds to federate their computation, storage and network resources, including defining terminology, establishing universal design principles, and identifying candidate federation strategies. On the operational side, F-Lab will enhance OneLab with the contribution of the unique sensor network testbed SensLAB and of LTE-based cellular systems. In doing so, F-Lab continues the expansion of OneLab's capabilities through federation with an established set of heterogeneous testbeds with high international visibility and value for users, developing the federation concept in the process, and playing a major role in the federation of national and international testbeds. F-Lab will also develop tools to conduct end-to-end experiments using the OneLab facility enriched with SensLAB and LTE.

F-Lab is a unique opportunity for the French community to play a stronger role in the design of federation systems, a topic of growing interest; for the SensLAB testbed to reach international visibility and use; and for pioneering testbeds on LTE technology.

ANR RESCUE started in December 2010:
Access and metropolitan networks are much more limited in capacity than core networks. While the latter operate in over-provisioning mode, access and metropolitan networks may experience high overload due to traffic evolution or failures. In wired networks, some failures (but not all) are handled by rerouting the traffic through a backup network already in place. In developed countries, backup networks are adopted wherever possible (note that this is generally not the case for the links between end users and their local DSLAM). Such a redundant strategy may not be possible in emerging countries because of cost issues. When dedicated backup networks are not available, some operators use their 3G infrastructure to recover from some specific failures; although such an alternative helps avoid a full network outage, it is a costly solution. Furthermore, 3G coverage is still mainly concentrated in metropolitan zones. When no backup network is available, it would be interesting to deploy, for a limited time corresponding to the duration of the problem (*i.e.*, failure or traffic overload), a substitution network to help the base network keep providing services to users.

In the RESCUE project (2010-2013), we investigate both the underlying mechanisms and the deployment of a substitution network composed of a fleet of steerable wireless mobile routers. Unlike many projects and other scientific works that consider mobility as a drawback, in RESCUE we use the controlled mobility of the substitution network to help the base network reduce contention, or to create an alternative network in case of failure. The advantages of an on-the-fly substitution network are manifold: reusability and cost reduction; deployability; adaptability.

The RESCUE project addresses both the theoretical and the practical aspects of the deployment of a substitution network. From a theoretical point of view, we will propose a two-tiered architecture including the base network and the substitution network. This architecture will describe the deployment procedures of the mobile routing devices, the communication stack, the protocols, and the services. The design of this architecture will take into account some constraints such as quality of service and energy consumption (since mobile devices are autonomous), as we want the substitution network to provide more than a best effort service. From a practical point of view, we will provide a proof of concept, the architecture linked to this concept, and the
necessary tools (*e.g.*, traffic monitoring, protocols) to validate the concept and mechanisms of on-the-fly substitution networks. Last but not least, we will validate the proposed system both in laboratory testbeds and in a real-usage scenario.

ANR PETAFLOW (Appel Blanc International) started in March 2010 and ended in October 2013. It is a collaborative project between the GIPSA Lab (Grenoble), MOAIS (Inria Grenoble), DANTE (Inria Grenoble), the University of Osaka (the Cybermedia Center and the Department of Information Networking) and the University of Kyoto (Visualisation Laboratory).

The aim of this collaboration was to propose network solutions to guarantee the Quality of Service (in terms of reliability level and of transfer delay properties) of a high-speed, long-distance connection used in an interactive, high-performance computing application. Another specific feature of this application was the peta-scale volume of the data involved, corresponding to upper-airway flow modelling.

ANR CONTINT CODDDE, accepted in December 2013: a collaborative project between the Complex Networks team at LIP6/UPMC, Linkfluence, and Inria DANTE. The CODDDE project addresses critical research issues in the study of real-world complex networks:

How do these networks evolve over time?

How does information spread on these networks?

How can we detect and predict anomalies in these networks?

In order to answer these questions, an essential feature of complex networks will be exploited: the existence of a community structure among the nodes of these networks. Complex networks are indeed composed of densely connected groups of nodes that are only loosely connected to one another.

The CODDDE project will therefore propose new community detection algorithms that track the evolution of complex networks, in particular with regard to diffusion phenomena and anomaly detection.

These algorithms and methodologies will be applied to and validated on a real-world online social network consisting of more than 10,000 blogs and French media outlets, crawled daily since 2009 (the dataset comprises all published articles and the links between them).
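The community structure exploited above is commonly quantified with Newman's modularity Q, which compares the fraction of intra-community edges with its expectation under a degree-preserving null model. The sketch below is a generic, illustrative implementation on a toy graph of our own making; it is not one of the algorithms developed in CODDDE.

```python
def modularity(adj, part):
    """Newman's modularity Q for an undirected graph, given as an
    adjacency dict of sets, and a node -> community assignment."""
    two_m = sum(len(nbrs) for nbrs in adj.values())   # 2m: each edge counted twice
    deg = {v: len(adj[v]) for v in adj}
    q = 0.0
    for v in adj:
        for u in adj:
            if part[v] == part[u]:
                a_uv = 1.0 if u in adj[v] else 0.0    # adjacency matrix entry
                q += a_uv - deg[v] * deg[u] / two_m   # observed minus null model
    return q / two_m

# Toy graph: two 4-cliques joined by a single bridge edge (3-4).
adj = {i: set() for i in range(8)}
for grp in (range(0, 4), range(4, 8)):
    for a in grp:
        for b in grp:
            if a != b:
                adj[a].add(b)
adj[3].add(4)
adj[4].add(3)

good = {v: v // 4 for v in adj}   # partition into the two cliques
flat = {v: 0 for v in adj}        # everything in one community

print(round(modularity(adj, good), 4))   # 0.4231: strong community structure
print(round(modularity(adj, flat), 4))   # 0.0: trivial partition carries no signal
```

A high Q for the two-clique partition versus zero for the trivial one illustrates why modularity-style quality functions can serve both community detection and anomaly detection: a sudden drop of Q over time flags a structural change.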

ANR FETUSES: The goals of this ANR project consist in the development of statistical signal processing tools dedicated to per-partum fetal heart rate characterisation and acidosis detection. They notably include the design of (*e.g.*, data-driven) algorithms to separate the data into trend (decelerations induced by contractions) and fluctuation (cardiac variability) components, together with a clinical collaboration with the *Hôpital Femme-Mère-Enfant* in Bron (Lyon). FETUSES started in January 2012.

ANR INFRA DISCO (DIstributed SDN COntrollers for rich and elastic network services) project: the DANTE team will explore the way SDN (Software-Defined Networking) can change network monitoring, control and planning, as well as the abstract description of network resources for the optimisation of services. More specifically, the team will address the issues regarding the positioning of SDN controllers within the network, and the implementation of an admission control mechanism that can manage IP traffic prioritisation.

LNCC - Laboratório Nacional de Computação Científica (several collaborations, *e.g.*, STIC AMSUD and Inria/FAPERJ)

Vietnam Academy of Science and Technology, Vietnam (collaboration via a CNRS PEPS)

Department of Mathematics/Naxys, University of Namur, Belgium (Student exchanges)

Department of Biomedical Engineering and Computational Science, Aalto University, Finland

DANTE is part of a FAPERJ/Inria collaborative project: Complex Dynamic Networks (CoDyN). The collaboration is with the Mechanisms and ARchitectures for TeleINformatics (MARTIN) team.

Artur Ziviani and Klaus Wehmuth from LNCC spent several weeks at IXXI.

Dr. Gerardo Iñiguez from Aalto University (Finland) spent time in the DANTE team and was hosted by IXXI.

**Arashpreet Singh Mor**, master student from the Indian Institute of Technology Delhi (India), did his M1 internship with DANTE from May to August 2013.

**Thibaud Trolliet**, L3 student at the Physics Department of ENS Lyon, did a two-month internship with the DANTE team (June-July 2013).

**Anh Ha Pham The**, Master student at IFI (Institut de la Francophonie pour l'Informatique, Hanoi, Vietnam), did his M2 internship with DANTE from May to November 2013.

Christophe Crespelle, 2 months in January-February 2013, Vietnam Institute for Advanced Study in Mathematics (VIASM), Hanoi.

Christophe Crespelle, 1 month in June-July 2013, Institute of Mathematics, Vietnam Academy of Science and Technology, Hanoi.

Éric Fleury visited the team of Jose Ignacio Alvarez-Hamelin in Buenos Aires, Argentina, in collaboration with Artur Ziviani.

Éric Fleury is President of the expert committee for the ANR INFRA call.

Éric Fleury is Co-chair of the Networking group ResCom of the CNRS GDR ASR. He is also a member of the scientific committee of the GDR ASR.

Éric Fleury is on the steering committee of IXXI – the Rhône-Alpes Complex Systems Institute.

Éric Fleury has been an expert for the Fund for Scientific Research - FNRS.

Éric Fleury was a PC member of the 4th Workshop on Complex Networks (CompleNet 2013).

Éric Fleury is Vice-Chairman of the projects committee of the Inria Grenoble Rhône-Alpes research center.

Éric Fleury was president of the AERES visiting committee of the PRISM laboratory.

Paulo Gonçalves is the scientific correspondent for International Relations at Inria Grenoble - Rhône-Alpes.

Paulo Gonçalves is the scientific correspondent for International Relations of the Computer Science Department at ENS Lyon.

Paulo Gonçalves is an officer of the local liaison board of EURASIP.

Paulo Gonçalves was an organiser of the 4th Workshop on High Speed Network and Computing Environments (COMPSAC 2013, Kyoto).

Paulo Gonçalves is a PC member of IWCMC - TRAC 2014.

Thomas Begin is a PC member of ACM PE-WASUN 2013.

Thomas Begin was an expert for the ANR INFRA programme.

Isabelle Guérin Lassous is a member of the editorial board of: Computer Communications (Elsevier), Ad Hoc Networks (Elsevier) and Discrete Mathematics & Theoretical Computer Science.

Isabelle Guérin Lassous is program co-chair of ACM PE-WASUN 2013.

Isabelle Guérin Lassous is a member of the following program committees in 2013: IPDPS, MSWiM, ITC, ICC, Globecom, ISCC, IWCMC.

Isabelle Guérin Lassous is a member of the CNRS National Committee for section 06 (Computer Science).

Christophe Crespelle is on the steering committee of IXXI – the Rhône-Alpes Complex Systems Institute.

Christophe Crespelle was a PC member of AlgoTel 2013.

Christophe Crespelle was an expert for the ANR JCJC - SIMI 2 programme (Information Science and Applications).

Master : Responsible for the teaching axis "Models and Optimization for Emergent Infrastructure", M1/M2 of the Computer Science Department at ENS Lyon *(Informatique fondamentale)*

Eng. school : Signal processing, lab classes (4th year), CPE Lyon, France

Thomas Begin has been an Assistant Professor in the Computer Science department at Université Claude Bernard Lyon 1 since 2009. He mostly lectures at the University, though he also teaches at ENS Lyon.

Licence : "Networks" (L3), University Lyon 1, France

Master : "Networking" (M1), University Lyon 1, France

Master : "Advanced Networks" (M2), University Lyon 1, France

Master : "Computer Networks" (M1), ENS de Lyon, France

Master 2 Computer Science, University Lyon 1: responsible for the *Réseaux* (Networks) speciality.

Professor at the Computer Science department of University Lyon 1, teaching in Master 1 and Master 2: networking, quality of service, wireless networks, multimedia networking applications.

Christophe Crespelle has been an Assistant Professor in the Computer Science department at Université Claude Bernard Lyon 1 (UCBL) since 2010. He mostly lectures at UCBL, though he also teaches at ENS Lyon.

Master : "Calculability and Complexity" (M1), UCBL, France

Master : "Network Security Architecture" (M2), UCBL, France

Master : "Security" (M2), UCBL, France

Master : "Future Networks" (M2), UCBL, France

Master : "Complex Networks" (M2), ENS Lyon, France

Éric Fleury is a full professor at ENS de Lyon. He was the head of the Computer Science Department until July 2013. ENS de Lyon is one of the four Écoles normales supérieures in France.

Licence : "Introduction to Algorithms" (L3), ENS de Lyon, France

PhD in progress : Lucie Martinet, iBird: Individual Based Investigation of Resistance Dissemination, September 2011, Éric Fleury & Christophe Crespelle

PhD in progress : Benjamin Girault, Wavelets and dynamic interaction graphs: temporal and spatial scales, September 2012, Éric Fleury & Paulo Gonçalves

PhD in progress : Thiago Abreu, Integration of Traffic Awareness in Substitution Networks, March 2011, Isabelle Guérin Lassous & Thomas Begin

PhD in progress : Roy Shubhabrata, Measurements in the framework of Virtual Networks Development, October 2010, Paulo Gonçalves & Thomas Begin

PhD in progress: Marina Sokol, Clustering and learning techniques for traffic / users classification, October 2010, Philippe Nain (Inria MAESTRO) and Paulo Gonçalves. M. Sokol has interrupted her PhD and is currently on maternity leave (until October 2013).

PhD in progress: Elie Rotenberg, Complex Network Metrology, September 2010, Matthieu Latapy and Christophe Crespelle

PhD in progress: Anh Tuan Giang, Modeling and Improving the Capacity of Vehicular Ad hoc Networks, April 2011, Anthony Busson (registered at University Paris XI).

PhD in progress: Sabrina Naimi, Mobility metrics in wireless mobile networks, September 2010, Véronique Vèque and Anthony Busson (registered at University Paris XI).

PhD in progress : Laurent Reynaud, Optimized mobility strategy in wireless networks for reliability and energy consumption, March 2013, Isabelle Guérin Lassous.

PhD in progress : Yannick Léo, Diffusion Processes and Community Structures in Dynamic Complex Networks, September 2013, Éric Fleury & Christophe Crespelle

Paulo Gonçalves was a member of the Ph.D. jury of Maude Pasquier (Université de Grenoble, Inria).

Isabelle Guérin Lassous was a member of the following Ph.D. juries: Frédéric Besse (Université de Toulouse, ISAE, reviewer), Ichrak Amdouni (Université Pierre et Marie Curie, Inria, reviewer), Muhammad Yousaf (University of Engineering and Technology, Taxila, Pakistan, reviewer), Nicolas Gouvy (Université Lille 1, reviewer), Ana Bildea (Université de Grenoble, reviewer), Karen Miranda (Université Lille 1, president).

Isabelle Guérin Lassous was a member of the following HDR jury: Céline Robardet (Université Lyon 1, INSA de Lyon, member).

Éric Fleury was a member of the following Ph.D. juries: Tony Ducrocq (Université des Sciences et Technologies de Lille, reviewer); Robin Lamarche-Perrin (Université de Grenoble, reviewer); Anh Dung Nguyen (Université de Toulouse, ISAE, reviewer).