SequeL means “Sequential Learning”. As such, SequeL focuses on the task of learning in artificial systems (either hardware or software) that gather information over time. Such systems are named *(learning) agents* (or learning machines) in the following.
These data may be used to estimate some parameters of a model, which in turn, may be used for selecting actions in order to perform some long-term optimization task.

For the purpose of model building, the agent needs to represent information collected so far in some compact form and use it to process newly available data.

The acquired data may result from an observation process of an agent in interaction with its environment (the data thus represent a perception). This is the case when the agent makes decisions (in order to attain a certain objective) that impact the environment, and thus the observation process itself.

Hence, in SequeL, the term **sequential** refers to two aspects:

The **sequential acquisition of data**, from which a model is learned (supervised and unsupervised learning),

the **sequential decision making task**, based on the learned model (reinforcement learning).

Examples of sequential learning problems include:

**Prediction** tasks deal with the prediction of some response given a certain set of observations of input variables and responses. New sample points keep on being observed.

**Clustering** tasks deal with grouping objects that arrive as a flow. The (unknown) number of clusters typically evolves over time, as new objects are observed.

**Control** tasks deal with learning a policy for some system that has to be optimized. We do not assume the availability of a model of the system to be controlled.

In all these cases, we mostly assume that the process can be considered stationary for at least a certain amount of time, and slowly evolving.

We wish to have anytime algorithms, that is, at any moment, a prediction may be required or an action may be selected, making full use, and hopefully the best use, of the experience already gathered by the learning agent.

The perception of the environment by the learning agent (using its sensors) is generally neither the best one to make a prediction, nor to take a decision (we deal with Partially Observable Markov Decision Problems). The perception therefore has to be mapped in some way to a better, more relevant, state (or input) space.

Finally, an important issue of prediction regards its evaluation: how wrong may we be when we perform a prediction? For real systems to be controlled, this issue cannot simply be left unanswered.

To sum up, in SequeL, the main issues regard:

the learning of a model: we focus on models that map some input space to an output space,

the observation to state mapping,

the choice of the action to perform (in the case of a sequential decision problem),

the performance guarantees,

the implementation of usable algorithms,

all that being understood in a *sequential* framework.

In 2013, Crazy Stone won the 6th edition of the UEC Cup and the first edition of the Denseisen. Crazy Stone is a Go-playing program developed by Rémi Coulom since 2005, based on the Monte Carlo Tree Search method. The UEC Cup is the most important international computer-Go competition, organized yearly by the University of Electro-Communications in Tokyo, Japan. The Denseisen is a match between the winner of the UEC Cup and a top Japanese professional Go player. This year Crazy Stone won a game with 4 stones of handicap against 9-dan professional player Yoshio Ishida.

The International Machine Learning Society selected SequeL to organize the 32nd International Conference on Machine Learning (ICML) in 2015 in Lille. ICML is the most important conference in the field of machine learning.

SequeL is primarily grounded on two domains:

the problem of decision under uncertainty,

statistical analysis and statistical learning, which provide the general concepts and tools to solve this problem.

To help the reader who is unfamiliar with these questions, we briefly present key ideas below.

The phrase “decision under uncertainty” refers to the problem of making decisions when we have full knowledge of neither the situation nor the consequences of the decisions, or when those consequences are non-deterministic.

We introduce two specific sub-domains, namely Markov decision processes, which model sequential decision problems, and bandit problems.

Sequential decision processes occupy the heart of the SequeL project; a detailed presentation of this problem may be found in Puterman's book.

A Markov Decision Process (MDP) is defined as the tuple $(X, A, p, r)$, where $X$ is the state space, $A$ the action space, $p$ the transition kernel, and $r$ the reward function.

In the MDP $(X, A, p, r)$, at each time step $t$ the agent observes the current state $x_t \in X$, selects an action $a_t \in A$, receives the reward $r(x_t, a_t)$, and the process moves to the next state $x_{t+1}$ drawn from $p(\cdot \mid x_t, a_t)$.

The history of the process up to time $t$ is the sequence $(x_0, a_0, x_1, a_1, \ldots, x_t)$; the Markov property states that the distribution of the next state depends on the past only through the current state and action.

We move from an MD process to an MD problem by formulating the goal of the agent, that is, what the sought policy $\pi$ (a mapping from states to actions) should optimize. A standard criterion is the expected discounted sum of rewards, which defines the value function

$$V^\pi(x) = \mathbb{E}\Big[\sum_{t \geq 0} \gamma^t\, r(x_t, a_t) \,\Big|\, x_0 = x,\ \pi\Big],$$

where $\gamma \in [0, 1)$ is a discount factor and the expectation is taken over the trajectories generated by following $\pi$.

In order to maximize a given functional in a sequential framework, one usually applies Dynamic Programming (DP), which introduces the optimal value function $V^*(x) = \sup_\pi V^\pi(x)$.

We say that a policy $\pi$ is optimal if it achieves the best possible value in every state, *i.e.*, if $V^\pi = V^*$.

We say that a (deterministic stationary) policy $\pi$ is greedy with respect to a value function $V$ if, in every state, it selects an action maximizing the one-step look-ahead value:

$$\pi(x) \in \arg\max_{a \in A} \Big[ r(x, a) + \gamma \sum_{x' \in X} p(x' \mid x, a)\, V(x') \Big],$$

where the sum runs over the possible successor states.

The goal of Reinforcement Learning (RL), as well as that of dynamic programming, is to design an optimal policy (or a good approximation of it).

The well-known Dynamic Programming equation (also called the Bellman equation) provides a relation between the optimal value function at a state and the optimal value function at its successor states:

$$V^*(x) = \max_{a \in A} \Big[ r(x, a) + \gamma \sum_{x' \in X} p(x' \mid x, a)\, V^*(x') \Big].$$

The benefit of introducing this concept of optimal value function relies on the property that, from the optimal value function $V^*$, one immediately derives an optimal policy by acting greedily with respect to it.
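The Bellman equation directly yields the classical value-iteration algorithm. The sketch below runs it on a made-up two-state, two-action MDP (all transition probabilities and rewards are illustrative, not taken from the text):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Compute V* and a greedy policy for a finite MDP.

    P[a, x, y] : probability of moving from state x to y under action a.
    R[x, a]    : reward for taking action a in state x.
    Iterates V <- max_a [ R(x,a) + gamma * sum_y P(y|x,a) V(y) ].
    """
    n_states = R.shape[0]
    V = np.zeros(n_states)
    while True:
        # Q[x, a] = r(x, a) + gamma * expected next-state value
        Q = R + gamma * np.einsum("axy,y->xa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal value, greedy policy
        V = V_new

# A made-up 2-state, 2-action MDP for illustration.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # transitions under action 0
              [[0.5, 0.5], [0.3, 0.7]]])  # transitions under action 1
R = np.array([[1.0, 0.0],   # rewards in state 0 for actions 0, 1
              [0.0, 2.0]])  # rewards in state 1 for actions 0, 1
V_star, policy = value_iteration(P, R)
```

Because the Bellman operator is a $\gamma$-contraction, the iteration converges to $V^*$ regardless of the starting point.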

In short, we would like to mention that most of the reinforcement learning methods developed so far are built on one (or both) of the two following approaches:

Bellman's dynamic programming approach, based on the introduction of the value function. It consists in learning a “good” approximation of the optimal value function, and then using it to derive a greedy policy w.r.t. this approximation. The hope (well justified in several cases) is that the performance of this greedy policy is close to optimal whenever the approximation of the value function is accurate. **Approximate dynamic programming** addresses the problem of estimating performance bounds (*e.g.*, the loss in performance resulting from using an approximation of the value function instead of the optimal one).

Pontryagin's maximum principle approach, based on sensitivity analysis of the performance measure w.r.t. some control parameters. This approach, also called **direct policy search** in the reinforcement learning community, aims at directly finding a good feedback control law in a parameterized policy space without trying to approximate the value function. The method consists in estimating the so-called **policy gradient**, *i.e.*, the sensitivity of the performance measure (the value function) w.r.t. some parameters of the current policy. The idea is that an optimal control problem is replaced by a parametric optimization problem in the space of parameterized policies. Deriving a policy gradient estimate then allows performing a stochastic gradient method in order to search for a locally optimal parametric policy.
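As a toy illustration of direct policy search, the sketch below performs REINFORCE-style stochastic gradient ascent on a softmax policy for a one-step, two-action problem. The reward means, learning rate, and running-average baseline are illustrative choices for the example, not a method taken from the text:

```python
import math
import random

def policy_gradient_bandit(mean_rewards, steps=5000, lr=0.1, seed=0):
    """Stochastic gradient ascent on a softmax policy pi(a) = softmax(theta)[a].

    The policy gradient is estimated from sampled actions, using
    grad log pi(a)_i = 1{i == a} - pi(i), with a running-average
    reward baseline to reduce the variance of the estimate.
    """
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    baseline = 0.0
    for _ in range(steps):
        z = [math.exp(t) for t in theta]
        total = sum(z)
        probs = [v / total for v in z]
        a = 0 if rng.random() < probs[0] else 1      # sample from pi
        reward = mean_rewards[a] + rng.gauss(0.0, 0.1)
        advantage = reward - baseline
        baseline += 0.01 * (reward - baseline)
        for i in range(2):
            theta[i] += lr * ((1.0 if i == a else 0.0) - probs[i]) * advantage
    return theta

# Arm 1 has a higher mean reward, so its parameter should grow larger.
theta = policy_gradient_bandit([0.2, 1.0])
```

After training, the policy concentrates its probability mass on the better action, which is exactly the local search in policy space described above.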

Finally, many extensions of Markov decision processes exist, among which Partially Observable MDPs (POMDPs), the case where the current state does not contain all the information required to decide with certainty on the best action.

Bandit problems illustrate the fundamental difficulty of decision making in the face of uncertainty: A decision maker must choose between what seems to be the best choice (“exploit”), or to test (“explore”) some alternative, hoping to discover a choice that beats the current best choice.

The classical example of a bandit problem is deciding what treatment to give each patient in a clinical trial when the effectiveness of the treatments is initially unknown and the patients arrive sequentially. These bandit problems became popular with the seminal paper, after which they have found applications in diverse fields, such as control, economics, statistics, or learning theory.

Formally, a K-armed bandit problem is defined by K reward distributions, one per arm, with unknown means; at each time step, the decision maker pulls an arm and receives a reward drawn from the corresponding distribution. Performance is measured by the regret, *i.e.*, the loss with respect to the best fixed strategy in hindsight, when the arm giving the highest expected reward is pulled all the time.

The name “bandit” comes from imagining a gambler playing with K slot machines. The gambler can pull the arm of any of the machines, which produces a random payoff as a result: when arm k is pulled, the random payoff is drawn from the distribution associated with k. Since the payoff distributions are initially unknown, the gambler must use exploratory actions to learn the utility of the individual arms. However, exploration has to be carefully controlled since excessive exploration may lead to unnecessary losses. Hence, to play well, the gambler must carefully balance exploration and exploitation. Auer *et al.* introduced the algorithm UCB (Upper Confidence Bounds), which follows what is now called the “optimism in the face of uncertainty” principle. Their algorithm works by computing upper confidence bounds for all the arms and then choosing the arm with the highest such bound. They proved that the expected regret of their algorithm increases at most at a logarithmic rate with the number of trials, and that the algorithm achieves the smallest possible regret up to some sub-logarithmic factor (for the considered family of distributions).
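The UCB rule just described fits in a few lines. The sketch below uses two made-up Bernoulli arms and an illustrative horizon:

```python
import math
import random

def ucb1(reward_fns, horizon=2000, seed=0):
    """UCB1: pull each arm once, then pull the arm maximizing
    empirical_mean_k + sqrt(2 ln t / n_k)
    (optimism in the face of uncertainty)."""
    rng = random.Random(seed)
    K = len(reward_fns)
    counts = [0] * K
    sums = [0.0] * K
    for t in range(1, horizon + 1):
        if t <= K:
            k = t - 1   # initialization: try every arm once
        else:
            k = max(range(K), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = reward_fns[k](rng)
        counts[k] += 1
        sums[k] += r
    return counts

# Two made-up Bernoulli arms with means 0.3 and 0.6.
arms = [lambda rng: float(rng.random() < 0.3),
        lambda rng: float(rng.random() < 0.6)]
counts = ucb1(arms)
```

Over the horizon, the better arm is pulled far more often, while the suboptimal arm is still sampled enough to keep its confidence bound tight.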

Many of the problems of machine learning can be seen as extensions of classical problems of mathematical statistics to their (extremely) non-parametric and model-free cases. Other machine learning problems are founded on such statistical problems. Statistical problems of sequential learning are mainly those that are concerned with the analysis of time series. These problems are as follows.

Given a series of observations, the problem is to predict the next outcome, or the probabilities of the possible next outcomes, before it is revealed.

Alternatively, rather than making some assumptions on the data, one can change the goal: the predicted probabilities should be asymptotically as good as those given by the best reference predictor from a certain pre-defined set.
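A standard way to be competitive with the best predictor in a reference set is the exponentially weighted average forecaster. The sketch below, with an illustrative learning rate and two made-up constant experts, aggregates the experts' probability forecasts of a binary outcome:

```python
import math

def exp_weighted_forecaster(expert_preds, outcomes, eta=0.5):
    """Aggregate a finite set of reference predictors ("experts").

    Each expert predicts a probability for the next binary outcome;
    expert weights decay exponentially with cumulative squared loss,
    so the aggregate tracks the best expert asymptotically.
    """
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    forecasts = []
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        forecasts.append(sum(w * p for w, p in zip(weights, preds)) / total)
        # multiplicative update: penalize each expert by its loss
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
    return forecasts

# Toy run: expert 0 always says 0.9, expert 1 always says 0.1;
# the outcomes are all 1, so the aggregate should drift toward expert 0.
expert_preds = [[0.9, 0.1]] * 50
outcomes = [1] * 50
fs = exp_weighted_forecaster(expert_preds, outcomes)
```

The first forecast is the uniform average of the experts; as losses accumulate, the forecast converges to the prediction of the better expert.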

Another dimension of complexity in this problem concerns the nature of the observations themselves.

Given a series of observations, the problem is to estimate some characteristics of the process distribution that generates them.

The problem of hypothesis testing can also be studied in its general formulation: given two (abstract) hypotheses about the mechanism generating the data, decide which one holds based on the observed series.

A stochastic process is generating the data. At some point, the process distribution changes. In the “offline” situation, the statistician observes the resulting sequence of outcomes and has to estimate the point or points at which the change(s) occurred. In the online setting, the goal is to detect the change as quickly as possible.
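As a concrete example of online change detection, a one-sided CUSUM statistic for an upward mean shift can be sketched as follows (the drift and threshold values are illustrative, not calibrated):

```python
def cusum_detect(xs, drift=0.5, threshold=5.0):
    """One-sided CUSUM for detecting an upward shift of the mean.

    Accumulates deviations of the observations above `drift`; an alarm
    is raised as soon as the cumulative statistic exceeds `threshold`.
    Returns the alarm index, or None if no change is declared.
    """
    s = 0.0
    for t, x in enumerate(xs):
        s = max(0.0, s + x - drift)  # reset at 0 keeps pre-change s small
        if s > threshold:
            return t
    return None

# Toy stream: mean 0 for 50 steps, then mean 2 (deterministic for clarity).
stream = [0.0] * 50 + [2.0] * 50
alarm = cusum_detect(stream)
```

On this deterministic stream, the statistic grows by 1.5 per step after the change, so the alarm fires a few steps after index 50. Such parametric, mean-shift detectors are exactly the kind of method that the non-parametric work described here aims to go beyond.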

These are classical problems in mathematical statistics, and probably among the last remaining statistical problems not adequately addressed by machine learning methods. The reason is perhaps that the problem is rather challenging. Thus, most methods available so far are parametric methods concerning piece-wise constant distributions, where the change in distribution is associated with a change in the mean. However, many applications, including DNA analysis and the analysis of (user) behaviour data, fail to comply with these kinds of assumptions. Thus, our goal here is to provide completely non-parametric methods allowing for any kind of change in the time-series distribution.

The problem of clustering, while being a classical problem of mathematical statistics, belongs to the realm of unsupervised learning. For time series, this problem can be formulated as follows: given several time-series samples, group them so that those generated by the same process distribution end up in the same cluster.

The online version of the problem allows for the number of observed time series to grow with time, in general, in an arbitrary manner.

Semi-supervised learning (SSL) is a field of machine learning that studies learning from both labeled and unlabeled examples. This learning paradigm is extremely useful for solving real-world problems, where data is often abundant but the resources to label them are limited.

Furthermore, *online* SSL is suitable for adaptive machine learning
systems.
In the classification case, learning is viewed as a repeated game against a potentially adversarial nature. At each step, nature presents an example and the learner predicts a label for it.

The challenge of the game is that we only exceptionally observe the true label; most examples remain unlabeled.

Before detailing some issues in these fields, let us recall the definitions of a few terms.

**Machine learning** refers to a system capable of the autonomous acquisition and integration of knowledge. This capacity to learn from experience, analytical observation, and other means results in a system that can continuously self-improve and thereby offer increased efficiency and effectiveness.

**Statistical learning** is an approach to machine intelligence that is based on statistical modeling of data. With a statistical model in hand, one applies probability theory and decision theory to obtain an algorithm. This is opposed to using training data merely to select among different algorithms, or to using heuristics/“common sense” to design an algorithm.

**Bayesian learning** applies to data that can be seen as observations in the more general meaning of the term. These data may come not only from classical sensors but also from any *device* recording information. From an operational point of view, as in statistical learning, uncertainty about the data is modeled by a probability measure, thus defining the so-called likelihood functions. The likelihood functions depend upon parameters defining the state of the world we focus on for decision purposes. Within the Bayesian framework, the uncertainty about these parameters is also modeled by probability measures: the priors, which are subjective probabilities. Using probability theory and decision theory, one then defines new algorithms to estimate the parameters of interest and/or the associated decisions.

Generally speaking, a kernel function is a function that maps a pair of points to a real value. Typically, this value is a measure of similarity between the two points. Assuming a few properties on it, the kernel function implicitly defines a dot product in some function space. This formal property, along with several others, has ensured the strong appeal of these methods over the last ten years in the field of function approximation. Many classical algorithms have been “kernelized”, that is, restated in a much more general way than their original formulation. Kernels also implicitly induce a representation of the data in a certain “suitable” space where the problem to solve (classification, regression, ...) is expected to be simpler (non-linearity turns into linearity).
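As an illustration of a “kernelized” algorithm, the sketch below implements kernel ridge regression with a Gaussian kernel; the bandwidth, regularization value, and toy data are illustrative choices:

```python
import numpy as np

def kernel_ridge_fit_predict(X, y, X_new, bandwidth=1.0, lam=1e-3):
    """Kernel ridge regression with a Gaussian (RBF) kernel.

    The kernel implicitly defines a dot product in a function space;
    the prediction is a linear combination of kernel evaluations at
    the training points: f(x) = sum_i alpha_i k(x_i, x).
    """
    def gram(A, B):
        # squared Euclidean distances between all pairs of rows
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    K = gram(X, X)
    # (K + lam I) alpha = y : regularized least squares in the RKHS
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return gram(X_new, X) @ alpha

# Toy 1-d regression: recover a sine from sampled points.
X = np.linspace(0, 3, 30)[:, None]
y = np.sin(X[:, 0])
preds = kernel_ridge_fit_predict(X, y, X, bandwidth=0.5)
```

Note that the learning problem is linear in the weights `alpha` even though the fitted function is highly non-linear in the input, which is precisely the “non-linearity turns into linearity” effect mentioned above.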

The fundamental tools used in SequeL come from the field of statistical learning. We briefly present the most important ones for us to date, namely kernel-based non parametric function approximation and non parametric Bayesian models.

In statistics in general, and in applied mathematics, the approximation of a multi-dimensional real function given some samples is a well-known problem (known as regression, interpolation, or function approximation, ...). Regressing a function from data is a key ingredient of our research, or at the least, a basic component of most of our algorithms. In the context of sequential learning, we have to regress a function while data samples are obtained one at a time, under the constraint that a prediction can be made at any step of the acquisition process. In sequential decision problems, we typically have to learn a value function, or a policy.

Many methods have been proposed for this purpose. We are looking for suitable ones to cope with the problems we wish to solve. In reinforcement learning, the value function may have areas where the gradient is large; these are areas where the approximation is difficult, while these are also the areas where the accuracy of the approximation should be maximal to obtain a good policy (and where, otherwise, a bad choice of action may imply catastrophic consequences).

We particularly favor non parametric methods since they make few assumptions about the function to learn. In particular, we have strong interests in kernel-based methods and in non parametric Bayesian models.

Numerous problems may be solved efficiently by a Bayesian approach. The use of Monte-Carlo methods allows us to handle non-linear, as well as non-Gaussian, problems. In their standard form, they require the formulation of probability densities in a parametric form. For instance, it is common usage to use a Gaussian likelihood, because it is handy. However, in some applications such as Bayesian filtering, or blind deconvolution, the choice of a parametric form for the density of the noise is often arbitrary. If this choice is wrong, it may have dramatic consequences on the estimation quality. To overcome this shortcoming, one possible approach is to consider that this density must also be estimated from data. A general Bayesian approach then consists in defining a probabilistic space associated with the possible outcomes of the *object* to be estimated. Applied to density estimation, it means that we need to define a probability measure on the probability density of the noise: such a measure is called a *random measure*. The classical Bayesian inference procedures can then be used. This approach being by nature non parametric, the associated framework is called *Non Parametric Bayesian*.

In particular, mixtures of Dirichlet processes provide a very powerful formalism. Dirichlet processes are a possible random measure, and mixtures of Dirichlet processes are an extension of well-known finite mixture models. Given a mixture density $f(x) = \int f(x \mid \theta)\, dG(\theta)$,

where $G$ is the (unknown) mixing distribution, the non parametric Bayesian approach places a Dirichlet process prior $G \sim \mathrm{DP}(\alpha, G_0)$ on $G$, with concentration parameter $\alpha > 0$ and base distribution $G_0$.

Given a set of observations, the estimation of the parameters of a mixture of Dirichlet processes is performed by way of a Monte Carlo Markov Chain (MCMC) algorithm. Dirichlet process mixtures are also widely used in clustering problems. Once the parameters of a mixture are estimated, they can be interpreted as the parameters of a specific cluster defining a class as well. Dirichlet processes are well known within the machine learning community and their potential in statistical signal processing still needs to be developed.
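As a small illustration of how a Dirichlet process random measure can be simulated, the sketch below draws a truncated set of mixture weights via the stick-breaking construction (the truncation level and concentration value are illustrative):

```python
import random

def stick_breaking_weights(alpha, n_atoms, seed=0):
    """Stick-breaking construction of Dirichlet process weights.

    Draw v_k ~ Beta(1, alpha) and set w_k = v_k * prod_{j<k} (1 - v_j),
    i.e. each atom takes a Beta fraction of the remaining "stick".
    Truncated to `n_atoms` atoms for this sketch.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

w = stick_breaking_weights(alpha=2.0, n_atoms=100)
```

Pairing each weight with an atom drawn from the base distribution $G_0$ gives a (truncated) draw of the random measure $G$; larger values of `alpha` spread the mass over more atoms, i.e. over more mixture components.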

In the general multi-sensor multi-target Bayesian framework, an unknown (and possibly varying) number of targets is observed through noisy measurements; the natural representations of the multi-target state and of the collected observations are *sets* and not vectors.

The random finite set theory provides a powerful framework to deal with these issues. Mahler's work on finite sets statistics (FISST) provides a mathematical framework to build multi-object densities and derive the Bayesian rules for state prediction and state estimation. Randomness on the object number and on their states is encapsulated into random finite sets (RFS), namely a multi-target (state) set, *i.e.*, a finite set of elements representing the target states, and a measurement set, *i.e.*, a collection of measurements delivered by the sensors.

SequeL aims at solving problems of prediction, as well as problems of optimal and adaptive control. As such, the application domains are very numerous.

The application domains have been organized as follows:

adaptive control,

signal processing and functional prediction,

medical applications,

web mining,

computer games.

Adaptive control is an important application of the research being done in SequeL. Reinforcement learning (RL) precisely aims at controlling the behavior of systems and may be used in situations with more or less information available. Of course, the more information, the better, in which case methods of (approximate) dynamic programming may be used. But reinforcement learning can also handle situations where the dynamics of the system is unknown, situations where the system is partially observable, and non stationary situations. Indeed, in these cases, the behavior is learned by interacting with the environment and thus naturally adapts to changes of the environment. Furthermore, the adaptive system may also take advantage of expert knowledge when available.

Clearly, the spectrum of potential applications is very wide: whenever an agent (a human, a robot, a virtual agent) has to make a decision, in particular when it lacks some of the information needed to make that decision, the problem enters the scope of our activities. To exemplify the potential applications, let us cite:

game software: in the 1990s, RL was the basis of a very successful Backgammon program, TD-Gammon, that learned to play at an expert level basically by playing a very large number of games against itself. Today, various games are studied with RL techniques.

many optimization problems that are closely related to operations research, but take into account the uncertainty and stochasticity of the environment: job-shop scheduling, cellular phone frequency allocation, and resource allocation in general;

we can also foresee that some progress may be made by using RL to design adaptive conversational agents, or system-level as well as application-level operating systems that adapt to their users' habits.

More generally, these ideas fall into what adaptive control may bring to human beings, in making their life simpler, by being embedded in an environment that is made to help them, an idea phrased as “ambient intelligence”.

The sensor management problem consists in determining the best way to task several sensors when each sensor has many modes and search patterns. In the detection/tracking applications, the tasks assigned to a sensor management system are for instance:

detect targets,

track the targets in the case of a moving target and/or a smart target (a smart target can change its behavior when it detects that it is under analysis),

combine all the detections in order to track each moving target,

dynamically allocate the sensors in order to achieve the previous three tasks in an optimal way. The allocation of sensors, and their modes, thus defines the action space of the underlying Markov decision problem.

In the more general situation, some sensors may be localized at the same place while others are dispatched over a given volume. Tasking a sensor may include, at each moment, such choices as where to point and/or what mode to use. Tasking a group of sensors includes the tasking of each individual sensor but also the choice of collaborating sensors subgroups. Of course, the sensor management problem is related to an objective. In general, sensors must balance complex trade-offs between achieving mission goals such as detecting new targets, tracking existing targets, and identifying existing targets. The word “target” is used here in its most general meaning, and the potential applications are not restricted to military applications. Whatever the underlying application, the sensor management problem consists in choosing at each time an action within the set of available actions.

Sequential decision processes are also very well known in economics. They may be used as decision aid tools, to help in the design of social policies, or in the siting of plants.

Applications of sequential learning in the field of signal processing are also very numerous. A signal is naturally sequential as it flows. It usually comes from the recording of the output of sensors, but the recording of any sequence of numbers may be considered a signal, like the evolution of stock-exchange rates with respect to time and/or place, the number of consumers at a mall entrance, or the number of connections to a web site. Signal processing has several objectives: predict, estimate, remove noise, characterize, or classify. The signal is often considered as sequential: we want to predict, estimate, or classify a value (or a feature) at the next time step given the data observed so far.

Signals may be processed in several ways. One of the best-known ways is time-frequency analysis, in which the frequencies of each signal are analyzed with respect to time. This concept has been generalized to the time-scale analysis obtained by a wavelet transform. Both analyses are based on the projection of the original signal onto a well-chosen function basis. Signal processing is also closely related to the probability field, as the uncertainty inherent to many signals leads to considering them as stochastic processes: the Bayesian framework is actually one of the main frameworks within which signals are processed for many purposes. It is worth noting that Bayesian analysis can be used jointly with a time-frequency or a wavelet analysis. However, alternatives like belief functions have come up in recent years. Belief functions were introduced by Dempster a few decades ago and have been successfully used in the past few years in fields where, for many years, probability had no alternative, such as classification. Belief functions can be viewed as a generalization of probabilities which can capture both imprecision and uncertainty. Belief functions are also closely related to data fusion.

One of the initial motivations of multi-armed bandit theory stems from clinical trials, in which one studies the effects of different treatments while maximizing the improvement of the patients' health.

Medical health-care, and in particular patient management, is today one of the most important applications of sequential decision making. This is because the treatment of complex health problems is typically sequential: a physician repeatedly observes the current state of the patient and makes decisions in order to improve the health condition, as measured for example by *qalys* (quality-adjusted life years).

Moreover, machine learning methods may be used in at least two ways in neuroscience:

since machine learning methods rely heavily on statistics, they may be used, as in any other (experimental) scientific domain, to analyse experimental data,

dealing with inductive learning, that is, the ability to generalize from facts, which is considered one of the basic components of “intelligence”, machine learning may be considered a model of learning in living beings. In particular, temporal difference methods for reinforcement learning have strong ties with various concepts of psychology (Thorndike's law of effect and the Rescorla-Wagner law, to name the two best known).

We work on news/ad recommendation. These online learning algorithms have become critically important over the last few years due to these major applications. After designing a new algorithm, it is critical to be able to evaluate it without plugging it into the real application, in order to protect the user experience and/or the company's revenue. To do this, people used to build simulators of user behavior and try to achieve good performance against them. However, designing such a simulator is probably much more difficult than designing the algorithm itself! Another common evaluation practice is to ignore the exploration/exploitation dilemma (also known as the “cold start” problem for recommender systems). Lately, data-driven methods have been developed. We are working on building an automatic replay methodology with some theoretical guarantees. This work also exhibits a strong link with the choice of the number of contexts to use in a recommender system, with respect to its audience.

Another point is that web sites must forecast Web page views in order to plan computer resource allocation and estimate upcoming revenue and advertising growth. In this work, we focus on extracting trends and seasonal patterns from page-view series. We investigate Holt-Winters/ARIMA-like procedures and some regularized models for making short-term predictions (3-6 weeks) with respect to logged data of several big media websites. We work on news-event-related webpages, and we feel that this kind of time series deserves particular attention. Self-similarity is found to exist at multiple time scales of network traffic, and can be exploited for prediction. In particular, Web page views exhibit strong, occasional impulsive changes. The impulses cause large prediction errors long after their occurrences, but can sometimes be anticipated (*e.g.*, elections, sport events, editorial changes, holidays) in order to improve accuracy. It also seems that some promising models could arise from using global trend shifts in the population.
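A minimal additive Holt-Winters smoother of the kind investigated here can be sketched as follows (the smoothing constants and the toy weekly series are illustrative, not fitted values):

```python
def holt_winters_additive(xs, season_len, alpha=0.3, beta=0.05, gamma=0.2):
    """Additive Holt-Winters smoothing: level + trend + seasonal terms.

    Returns one-step-ahead forecasts for each time step after the first
    full season; the first season initializes level and seasonal terms.
    """
    level = sum(xs[:season_len]) / season_len
    trend = 0.0
    season = [x - level for x in xs[:season_len]]
    forecasts = []
    for t in range(season_len, len(xs)):
        s = season[t % season_len]
        forecasts.append(level + trend + s)   # one-step-ahead forecast
        x = xs[t]
        # update level, trend, and the seasonal term for this phase
        new_level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % season_len] = gamma * (x - new_level) + (1 - gamma) * s
        level = new_level
    return forecasts

# Toy weekly page-view pattern (made-up numbers), repeated for 8 weeks.
week = [100, 120, 115, 110, 130, 80, 60]
series = week * 8
fc = holt_winters_additive(series, season_len=7)
```

On a perfectly periodic series the forecasts match the observations exactly; on real page-view data, the level and trend terms track slow drifts while the seasonal terms absorb the weekly pattern.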

The problem of artificial intelligence in games consists in choosing actions of players in order to produce artificial opponents. Most games can be formalized as Markov decision problems, so they can be approached with reinforcement learning.

In particular, SequeL was a pioneer of Monte Carlo Tree Search, a technique that obtained spectacular successes in the game of Go. Other application domains include the game of poker and the Japanese card game of hanafuda.

**Crazy Stone** is a top-level Go-playing program that has been developed by Rémi Coulom since 2005. Crazy Stone won several major international Go tournaments in the past. In 2013, a new version was released in Japan. This new version won the 6th edition of the UEC Cup (the most important international computer-Go tournament). It also won the first edition of the Denseisen, by winning a 4-stone handicap game against 9-dan professional player Yoshio Ishida. It is distributed as a commercial product by Unbalance Corporation (Japan). 6-month work in 2013. URL: http://remi.coulom.free.fr/CrazyStone/

**Kifu Snap** is an Android image-recognition app. It can automatically recognize a Go board from a picture, and analyze it with Crazy Stone. It was released on Google Play in November 2013. 6-month work in 2013. URL: http://remi.coulom.free.fr/kifu-snap/

*Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model*

We consider the problem of learning the optimal action-value function in discounted-reward Markov decision processes (MDPs). We prove new PAC bounds on the sample complexity of two well-known model-based reinforcement learning (RL) algorithms in the presence of a generative model of the MDP: value iteration and policy iteration. The first result indicates that for an MDP with N state-action pairs and discount factor $\gamma$, the sample complexity of these algorithms matches the lower bound up to logarithmic factors, hence they are minimax-optimal.

*Regret Bounds for Reinforcement Learning with Policy Advice*

In some reinforcement learning problems an agent may be provided with a set of input policies, perhaps learned from prior experience or provided by advisors. We present a reinforcement learning with policy advice (RLPA) algorithm which leverages this input set and learns to use the best policy in the set for the reinforcement learning task at hand. We prove that RLPA has sub-linear regret with respect to the best policy in the input set.

*Optimistic planning for belief-augmented Markov decision processes*

This paper presents the Bayesian Optimistic Planning (BOP) algorithm, a novel model-based Bayesian reinforcement learning approach. BOP extends the planning approach of the Optimistic Planning for Markov Decision Processes (OP-MDP) algorithm [10], [9] to contexts where the transition model of the MDP is initially unknown and progressively learned through interactions within the environment. The knowledge about the unknown MDP is represented with a probability distribution over all possible transition models using Dirichlet distributions, and the BOP algorithm plans in the belief-augmented state space constructed by concatenating the original state vector with the current posterior distribution over transition models. We show that BOP becomes Bayesian optimal when the budget parameter increases to infinity. Preliminary empirical validations show promising performance.

*Aggregating optimistic planning trees for solving Markov decision processes*

This paper addresses the problem of online planning in Markov decision processes using a generative model and under a budget constraint. We propose a new algorithm, ASOP, which is based on the construction of a forest of single successor state planning trees, where each tree corresponds to a random realization of the stochastic environment. The trees are explored using a "safe" optimistic planning strategy which combines the optimistic principle (in order to explore the most promising part of the search space first) and a safety principle (which guarantees a certain amount of uniform exploration). In the decision-making step of the algorithm, the individual trees are aggregated and an immediate action is recommended. We provide a finite-sample analysis and discuss the trade-off between the principles of optimism and safety. We report numerical results on a benchmark problem showing that ASOP performs as well as state-of-the-art optimistic planning algorithms.

*
Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning
*

We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order

*
Competing with an Infinite Set of Models in Reinforcement Learning
*

We consider a reinforcement learning setting where the learner also has to deal with the problem of finding a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to have small regret with respect to any Markov model in the set. For this setting, recently the BLB algorithm has been proposed, which achieves regret of order

*
A review of optimistic planning in Markov decision processes
*

We review a class of online planning algorithms for deterministic and stochastic optimal control problems, modeled as Markov decision processes. At each discrete time step, these algorithms maximize the predicted value of planning policies from the current state, and apply the first action of the best policy found. An overall receding-horizon algorithm results, which can also be seen as a type of model-predictive control. The space of planning policies is explored optimistically, focusing on areas with largest upper bounds on the value - or upper confidence bounds, in the stochastic case. The resulting optimistic planning framework integrates several types of optimism previously used in planning, optimization, and reinforcement learning, in order to obtain several intuitive algorithms with good performance guarantees. We describe in detail three recent such algorithms, outline the theoretical guarantees on their performance, and illustrate their behavior in a numerical example.

*
Automatic motor task selection via a bandit algorithm for a brain-controlled button
*

Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing 'BCI illiteracy'.

*
Kullback-Leibler Upper Confidence Bounds for Optimal Sequential Allocation
*

We consider optimal sequential allocation in the context of the so-called stochastic multi-armed bandit model. We describe a generic index policy, in the sense of Gittins (1979), based on upper confidence bounds of the arm payoffs computed using the Kullback-Leibler divergence. We consider two classes of distributions for which instances of this general idea are analyzed: The kl-UCB algorithm is designed for one-parameter exponential families and the empirical KL-UCB algorithm for bounded and finitely supported distributions. Our main contribution is a unified finite-time analysis of the regret of these algorithms that asymptotically matches the lower bounds of Lai and Robbins (1985) and Burnetas and Katehakis (1996), respectively. We also investigate the behavior of these algorithms when used with general bounded rewards, showing in particular that they provide significant improvements over the state-of-the-art.
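For the Bernoulli case, the kl-UCB index is easy to sketch: the index of an arm is the largest mean q whose KL divergence from the empirical mean stays below an exploration level, and since the divergence is increasing in q above the mean, it can be found by bisection. A minimal sketch, with the exploration level `log(t) + c*log(log(t))` following the common kl-UCB convention (exact tuning may differ from the paper):

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, c=0.0):
    """Largest q >= mean such that
    pulls * kl(mean, q) <= log(t) + c * log(log(t)),
    found by bisection over [mean, 1]."""
    bound = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / pulls
    lo, hi = mean, 1.0
    for _ in range(50):  # bisection to high precision
        mid = (lo + hi) / 2
        if kl_bernoulli(mean, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo
```

As expected, the index shrinks toward the empirical mean as the arm is pulled more often, which is what drives the logarithmic regret.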

*
Sequential Transfer in Multi-armed Bandit with Finite Set of Models
*

Learning from prior tasks and transferring that experience to improve future performance is critical for building lifelong learning agents. Although results in supervised and reinforcement learning show that transfer may significantly improve the learning performance, most of the literature on transfer is focused on batch learning tasks. In this paper we study the problem of *sequential transfer in online learning*, notably in the multi–armed bandit framework, where the objective is to minimize the total regret over a sequence of tasks by transferring knowledge from prior tasks. Under the assumption that the tasks are drawn from a stationary distribution over a finite set of models, we introduce a novel bandit algorithm based on a method-of-moments approach for estimating the possible tasks and derive regret bounds for it. Finally, we report preliminary empirical results confirming the theoretical findings.

*
Optimizing P300-speller sequences by RIP-ping groups apart
*

So far P300-speller design has put very little emphasis on the design of optimized flash patterns, a surprising fact given the importance of the sequence of flashes on the selection outcome. Previous work in this domain has consisted in studying consecutive flashes, to prevent the same letter or its neighbors from flashing consecutively. To this effect, the flashing letters form more random groups than the original row-column sequences for the P300 paradigm, but the groups remain fixed across repetitions. This has several important consequences, among which a lack of discrepancy between the scores of the different letters. The new approach proposed in this paper accumulates evidence for individual elements, and optimizes the sequences by relaxing the constraint that letters should belong to fixed groups across repetitions. The method is inspired by the theory of Restricted Isometry Property matrices in Compressed Sensing, and it can be applied to any display grid size, and for any target flash frequency. This leads to P300 sequences which are shown here to perform significantly better than the state of the art, in simulations and online tests.

*
Stochastic Simultaneous Optimistic Optimization
*

We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (Kleinberg et al., 2008; Bubeck et al., 2011a) our algorithm does not require the knowledge of this semi-metric. Our algorithm, StoSOO, follows an optimistic strategy to iteratively construct upper confidence bounds over the hierarchical partitions of the function domain to decide which point to sample next. A finite-time analysis of StoSOO shows that it performs almost as well as the best specifically-tuned algorithms even though the local smoothness of the function is not known.

*
Toward optimal stratification for stratified Monte-Carlo integration
*

We consider the problem of adaptive stratified sampling for Monte Carlo integration of a noisy function, given a finite budget n of noisy evaluations of the function. We tackle in this paper the problem of adapting to the function both the number of samples in each stratum and the partition itself. More precisely, it is interesting to refine the partition of the domain in areas where the noise or the variations of the function are very heterogeneous. On the other hand, a (too) refined stratification is not optimal. Indeed, the more refined the stratification, the more difficult it is to adjust the allocation of the samples to the stratification, i.e., to sample more points where the noise or variations of the function are larger. We provide in this paper an algorithm that selects online, among a large class of partitions, the partition that provides the optimal trade-off, and allocates the samples almost optimally on this partition.

*
Thompson sampling for one-dimensional exponential family bandits
*

Thompson Sampling has been shown to perform well in many complex bandit models; however, the theoretical guarantees available for the parametric multi-armed bandit are still limited to the Bernoulli case. Here we extend them by proving asymptotic optimality of the algorithm using the Jeffreys prior for 1-dimensional exponential family bandits. Our proof builds on previous work, but also makes extensive use of closed forms for the Kullback-Leibler divergence and the Fisher information (and thus the Jeffreys prior) available in an exponential family. This allows us to give a finite-time exponential concentration inequality for posterior distributions on exponential families that may be of interest in its own right. Moreover, our analysis covers some distributions for which no optimistic algorithm has yet been proposed, including heavy-tailed exponential families.

*
Finite-Time Analysis of Kernelised Contextual Bandits
*

We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.
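A hedged sketch of a kernelised UCB score of this kind: with kernel ridge regression on past (context, reward) pairs, the index of a candidate context is its posterior mean plus a multiple of its posterior width in the RKHS. The RBF kernel and the parameter names (`lam`, `eta`, `gamma`) are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two context vectors."""
    d = np.asarray(x) - np.asarray(y)
    return np.exp(-gamma * d.dot(d))

def kernel_ucb_scores(contexts, X, y, lam=1.0, eta=1.0, gamma=1.0):
    """UCB score for each candidate context, given past contexts X and
    rewards y: kernel-ridge posterior mean + eta * posterior width."""
    K = np.array([[rbf(a, b, gamma) for b in X] for a in X])
    inv = np.linalg.inv(K + lam * np.eye(len(X)))
    scores = []
    for c in contexts:
        k = np.array([rbf(c, b, gamma) for b in X])
        mean = k @ inv @ np.array(y)
        width = np.sqrt(max(rbf(c, c, gamma) - k @ inv @ k, 0.0))
        scores.append(mean + eta * width)
    return scores
```

With the linear kernel this recovers a LinUCB-style index, consistent with the paper's remark that the regret bound matches the contextual linear bandit lower bound.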

*
From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Optimization and Planning
*

This work covers several aspects of the optimism in the face of uncertainty principle applied to large scale optimization problems under a finite numerical budget. The initial motivation for the research reported here originated from the empirical success of the so-called Monte-Carlo Tree Search method popularized in computer go and further extended to many other games as well as optimization and planning problems. Our objective is to contribute to the development of theoretical foundations of the field by characterizing the complexity of the underlying optimization problems and designing efficient algorithms with performance guarantees. The main idea presented here is that it is possible to decompose a complex decision making problem (such as an optimization problem in a large search space) into a sequence of elementary decisions, where each decision of the sequence is solved using a (stochastic) multi-armed bandit (a simple mathematical model for decision making in stochastic environments). This so-called hierarchical bandit approach (where the reward observed by a bandit in the hierarchy is itself the return of another bandit at a deeper level) possesses the nice feature of starting the exploration with a quasi-uniform sampling of the space and then focusing progressively on the most promising areas, at different scales, according to the evaluations observed so far, eventually performing a local search around the global optima of the function. The performance of the method is assessed in terms of the optimality of the returned solution as a function of the number of function evaluations. Our main contribution to the field of function optimization is a class of hierarchical optimistic algorithms designed for general search spaces (such as metric spaces, trees, graphs, Euclidean spaces, ...) with different algorithmic instantiations depending on whether the evaluations are noisy or noiseless and whether some measure of the "smoothness" of the function is known or unknown. The performance of the algorithms depends on the local behavior of the function around its global optima, expressed in terms of the quantity of near-optimal states measured with some metric. If this local smoothness of the function is known, then one can design very efficient optimization algorithms (with convergence rate independent of the space dimension); when it is not known, we can build adaptive techniques that can, in some cases, perform almost as well as when it is known.
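A minimal instance of this optimistic principle with known smoothness is deterministic optimistic optimization on [0, 1]: maintain a partition into intervals, score each interval by the upper bound f(mid) + L·width/2 implied by a Lipschitz constant L, and always split the highest-scoring interval. A sketch assuming noiseless evaluations and a valid Lipschitz constant supplied by the caller:

```python
def doo_maximize(f, lipschitz, n_evals):
    """Deterministic optimistic optimization on [0, 1]: repeatedly split
    the interval whose optimistic bound f(mid) + L * width / 2 is
    largest. Assumes f is Lipschitz with the given constant."""
    # Each cell: (left endpoint, right endpoint, f at the cell midpoint).
    cells = [(0.0, 1.0, f(0.5))]
    evals = 1
    while evals + 2 <= n_evals:
        # Expand the cell with the highest optimistic upper bound.
        a, b, fm = max(cells, key=lambda c: c[2] + lipschitz * (c[1] - c[0]) / 2)
        cells.remove((a, b, fm))
        m = (a + b) / 2
        cells.append((a, m, f((a + m) / 2)))
        cells.append((m, b, f((m + b) / 2)))
        evals += 2
    # Return the best evaluated midpoint and its value.
    a, b, fm = max(cells, key=lambda c: c[2])
    return (a + b) / 2, fm
```

The exploration starts quasi-uniform and concentrates around the maximizer at finer and finer scales, exactly the behavior described in the abstract.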

*
Nonparametric multiple change point estimation in highly dependent time series
*

Given a heterogeneous time-series sample, it is required to find the points in time (called change points) where the probability distribution generating the data has changed. The data are assumed to have been generated by arbitrary, unknown, stationary ergodic distributions. No modeling, independence, or mixing assumptions are made. A novel, computationally efficient, nonparametric method is proposed and shown to be asymptotically consistent in this general framework; the theoretical results are complemented with experimental evaluations.
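The paper's method handles arbitrary stationary ergodic processes; as a much simplified stand-in, a CUSUM-style estimator for a single mean shift already shows the shape of such procedures: score every admissible split by the weighted gap between the statistics of the two segments and return the argmax. The mean-based score below is an illustrative simplification of the paper's distributional distance:

```python
import statistics

def single_change_point(xs, min_seg=5):
    """CUSUM-style estimator for a single change point: pick the split
    maximizing the weighted gap between the two segment means. A
    mean-shift stand-in for the fully nonparametric method."""
    n = len(xs)
    best_t, best_score = None, -1.0
    for t in range(min_seg, n - min_seg + 1):
        left = statistics.fmean(xs[:t])
        right = statistics.fmean(xs[t:])
        # sqrt(t (n - t) / n) balances the two segment lengths.
        score = abs(left - right) * (t * (n - t) / n) ** 0.5
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

Extending this shape to multiple change points and to distributional (rather than mean) changes is exactly where the paper's contribution lies.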

*
A Binary-Classification-Based Metric between Time-Series Distributions and Its Use in Statistical and Learning Problems
*

A metric between time-series distributions is proposed that can be evaluated using binary classification methods, which were originally developed to work on i.i.d. data. It is shown how this metric can be used for solving statistical problems that are seemingly unrelated to classification and concern highly dependent time series. Specifically, the problems of time-series clustering, homogeneity testing and the three-sample problem are addressed. Universal consistency of the resulting algorithms is proven under most general assumptions. The theoretical results are illustrated with experiments on synthetic and real-world data.

*
Learning from a Single Labeled Face and a Stream of Unlabeled Data
*

Face recognition from a single image per person is a challenging problem because the training sample is extremely small. We consider a variation of this problem. In our problem, we recognize only one person, and there are no labeled data for any other person. This setting naturally arises in authentication on personal computers and mobile devices, and poses additional challenges because it lacks negative examples. We formalize our problem as one-class classification, and propose and analyze an algorithm that learns a non-parametric model of the face from a single labeled image and a stream of unlabeled data. In many domains, for instance when a person interacts with a computer with a camera, unlabeled data are abundant and easy to utilize. This is the first paper that investigates how these data can help in learning better models in the single-image-per-person setting. Our method is evaluated on a dataset of 43 people and we show that these people can be recognized 90% of the time at nearly zero false positives. This recall is 25+% higher than the recall of our best performing baseline. Finally, we conduct a comprehensive sensitivity analysis of our algorithm and provide a guideline for setting its parameters in practice.

*
Unsupervised model-free representation learning
*

Numerous control and learning problems face the situation where sequences of high-dimensional highly dependent data are available, but no or little feedback is provided to the learner. In such situations it may be useful to find a concise representation of the input signal, that would preserve as much as possible of the relevant information. In this work we are interested in the problems where the relevant information is in the time-series dependence. Thus, the problem can be formalized as follows. Given a series of observations

*
Time-series information and learning
*

Given a time series

*
Learning a common dictionary over a sensor network
*

We consider the problem of distributed dictionary learning, where a set of nodes is required to collectively learn a common dictionary from noisy measurements. This approach may be useful in several contexts including sensor networks. Diffusion cooperation schemes have been proposed to solve the distributed linear regression problem. In this work we focus on a diffusion-based adaptive dictionary learning strategy: each node records independent observations and cooperates with its neighbors by sharing its local dictionary. The resulting algorithm corresponds to a distributed alternate optimization. Beyond dictionary learning, this strategy could be adapted to many matrix factorization problems in various settings. We illustrate its efficiency on some numerical experiments.

*
Distributed dictionary learning over a sensor network
*

We consider the problem of distributed dictionary learning, where a set of nodes is required to collectively learn a common dictionary from noisy measurements. This approach may be useful in several contexts including sensor networks. Diffusion cooperation schemes have been proposed to solve the distributed linear regression problem. In this work we focus on a diffusion-based adaptive dictionary learning strategy: each node records observations and cooperates with its neighbors by sharing its local dictionary. The resulting algorithm corresponds to a distributed block coordinate descent (alternate optimization). Beyond dictionary learning, this strategy could be adapted to many matrix factorization problems and generalized to various settings. This article presents our approach and illustrates its efficiency on some numerical examples.
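One diffusion iteration of this kind can be sketched as follows: each node takes a gradient step on its local reconstruction error with the codes held fixed (one half of the alternate optimization), then averages its dictionary with those of its neighbors (the combine step). The step size, network layout, and fixed codes are illustrative assumptions:

```python
import numpy as np

def diffusion_dictionary_step(dicts, data, codes, neighbors, mu=0.1):
    """One diffusion iteration: adapt (local gradient step on
    ||X - D A||^2 / 2 with codes A fixed), then combine (average the
    dictionary with the neighbors' dictionaries)."""
    adapted = []
    for D, X, A in zip(dicts, data, codes):
        grad = (D @ A - X) @ A.T   # gradient of the local quadratic loss
        adapted.append(D - mu * grad)
    combined = []
    for i, nbrs in enumerate(neighbors):
        stack = [adapted[i]] + [adapted[j] for j in nbrs]
        combined.append(sum(stack) / len(stack))
    return combined
```

When all local data are generated from one common dictionary, repeated adapt-and-combine steps drive every node's estimate toward it, which is the cooperation effect the article demonstrates numerically.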

*
Outlier detection for patient monitoring and alerting
*

We develop and evaluate a data-driven approach for detecting unusual (anomalous) patient-management decisions using past patient cases stored in electronic health records (EHRs). Our hypothesis is that a patient-management decision that is unusual with respect to past patient care may be due to an error and that it is worthwhile to generate an alert if such a decision is encountered. We evaluate this hypothesis using data obtained from EHRs of 4486 post-cardiac surgical patients and a subset of 222 alerts generated from the data. We base the evaluation on the opinions of a panel of experts. The results of the study support our hypothesis that the outlier-based alerting can lead to promising true alert rates. We observed true alert rates that ranged from 25% to 66% for a variety of patient-management actions, with 66% corresponding to the strongest outliers.

*
A confidence-set approach to signal denoising
*

The problem of filtering of finite-alphabet stationary ergodic time series is considered. A method for constructing a confidence set for the (unknown) signal is proposed, such that the resulting set has the following properties. First, it includes the unknown signal with probability γ, where γ is a parameter supplied to the filter. Second, the size of the confidence sets grows exponentially with a rate that is asymptotically equal to the conditional entropy of the signal given the data. Moreover, it is shown that this rate is optimal. We also show that the described construction of the confidence set can be applied to the case where the signal is corrupted by an erasure channel with unknown statistics.

*
Quantification adaptative pour la stéganalyse d'images texturées
*

We seek to improve the performance of a steganalysis scheme (i.e., the detection of hidden messages) on textured images. The steganography scheme under study modifies certain pixels of the image by a +/-1 perturbation, and the steganalysis scheme uses features built from the empirical conditional probability of differences between 4 neighboring pixels. In its original version, the steganalysis is not very effective on textured images, and this work explores several quantization techniques: first a larger quantization step, then scalar or vector adaptive quantization. The adaptive quantization cells are generated using K-means or a "balanced" K-means, so that each cell quantizes approximately the same number of samples. We obtain a maximal classification gain of 3% for a uniform quantization step of 3. Using the balanced K-means algorithm on [-18,18], the gain over the baseline version is 4.7%.

*
Cost-sensitive Multiclass Classification Risk Bounds
*

A commonly used approach to multiclass classification is to replace the 0-1 loss with a convex surrogate so as to make empirical risk minimization computationally tractable. Previous work has uncovered necessary and sufficient conditions for the consistency of the resulting procedures. In this paper, we strengthen these results by showing how the 0-1 excess loss of a predictor can be upper bounded as a function of the excess loss of the predictor measured using the convex surrogate. The bound is developed for the case of cost-sensitive multiclass classification and a convex surrogate loss that goes back to the work of Lee, Lin and Wahba. The bounds are as easy to calculate as in binary classification. Furthermore, we also show that our analysis extends to the analysis of the recently introduced "Simplex Coding" scheme.

*
Approximate Dynamic Programming Finally Performs Well in the Game of Tetris
*

Tetris is a video game that has been widely used as a benchmark for various optimization techniques including approximate dynamic programming (ADP) algorithms. A look at the literature of this game shows that while ADP algorithms that have been (almost) entirely based on approximating the value function (value function based) have performed poorly in Tetris, the methods that search directly in the space of policies by learning the policy parameters using an optimization black box, such as the cross entropy (CE) method, have achieved the best reported results. This makes us conjecture that Tetris is a game in which good policies are easier to represent, and thus to learn, than their corresponding value functions. So, in order to obtain a good performance with ADP, we should use ADP algorithms that search in a policy space, instead of the more traditional ones that search in a value function space. In this paper, we put our conjecture to test by applying such an ADP algorithm, called classification-based modified policy iteration (CBMPI), to the game of Tetris. Our experimental results show that for the first time an ADP algorithm, namely CBMPI, obtains the best results reported in the literature for Tetris in both small

*
A Generalized Kernel Approach to Structured Output Learning
*

We study the problem of structured output learning from a regression perspective. We first provide a general formulation of the kernel dependency estimation (KDE) problem using operator-valued kernels. We show that some of the existing formulations of this problem are special cases of our framework. We then propose a covariance-based operator-valued kernel that allows us to take into account the structure of the kernel feature space. This kernel operates on the output space and encodes the interactions between the outputs without any reference to the input space. To address this issue, we introduce a variant of our KDE method based on the conditional covariance operator that in addition to the correlation between the outputs takes into account the effects of the input variables. Finally, we evaluate the performance of our KDE approach using both covariance and conditional covariance kernels on two structured output problems, and compare it to the state-of-the-art kernel-based structured output regression methods.

*
Gossip-based distributed stochastic bandit algorithms
*

The multi-armed bandit problem has attracted remarkable attention in the machine learning community and many efficient algorithms have been proposed to handle the so-called exploitation-exploration dilemma in various bandit setups. At the same time, significantly less effort has been devoted to adapting bandit algorithms to particular architectures, such as sensor networks, multi-core machines, or peer-to-peer (P2P) environments, which could potentially speed up their convergence. Our goal is to adapt stochastic bandit algorithms to P2P networks. In our setup, the same set of arms is available in each peer. In every iteration each peer can pull one arm independently of the other peers, and then some limited communication is possible with a few random other peers. As our main result, we show that our adaptation achieves a linear speedup in terms of the number of peers participating in the network. More precisely, we show that the probability of playing a suboptimal arm at a peer in iteration t=

*
Sur quelques problèmes non-supervisés impliquant des séries temporelles hautement dépendantes
*

This thesis is devoted to the theoretical analysis of unsupervised problems involving highly dependent time series. More specifically, we address the two fundamental problems of change point estimation and time-series clustering. These problems are considered in an extremely general framework where the data are generated by stationary ergodic stochastic processes. This is one of the weakest assumptions in statistics: it subsumes not only the modeling and parametric assumptions common in the scientific literature, but also the classical assumptions of independence, bounded memory, or mixing. In particular, no restriction is placed on the form or nature of the dependencies, so that the samples may be arbitrarily dependent. For each problem addressed, we propose novel nonparametric methods and prove that they are asymptotically consistent in this framework. For change point estimation, asymptotic consistency refers to the algorithm's ability to produce change point estimates that are asymptotically arbitrarily close to the true change points. A clustering algorithm, on the other hand, is asymptotically consistent if the clustering it produces, restricted to each batch of sequences, eventually and consistently coincides with the target clustering. We show that the proposed algorithms can be implemented efficiently, and we complement our theoretical results with experimental evaluations. Statistical analysis in the stationary ergodic framework is extremely difficult. In general, rates of convergence are provably impossible to obtain. Consequently, for two samples generated independently by stationary ergodic processes, it is provably impossible to distinguish the case where the samples are generated by the same process from the case where they are generated by different processes. This implies that problems such as time-series clustering without knowing the number of clusters, or change point estimation without knowing the number of change points, cannot admit consistent solutions. A challenging task is therefore to discover formulations of the problems that do admit consistent solutions in this general framework. The main contribution of this thesis is to demonstrate (by construction) that, despite these theoretical impossibility results, natural formulations of the problems under consideration exist and admit consistent solutions in this general framework. This includes showing that the correct number of change points can be found without resorting to stronger assumptions on the stochastic processes. It follows that, in this formulation, the change point problem can be reduced to time-series clustering. The results presented in this work lay the theoretical foundations for the analysis of sequential data in a much broader range of applications.

*
Actor-Critic Algorithms for Risk-Sensitive MDPs
*

In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in rewards in addition to maximizing a standard criterion. Variance-related risk measures are among the most common risk-sensitive criteria in finance and operations research. However, optimizing many such criteria is known to be a hard problem. In this paper, we consider both discounted and average reward Markov decision processes. For each formulation, we first define a measure of variability for a policy, which in turn gives us a set of risk-sensitive criteria to optimize. For each of these criteria, we derive a formula for computing its gradient. We then devise actor-critic algorithms for estimating the gradient and updating the policy parameters in the ascent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in a traffic signal control application.

*
Bayesian Policy Gradient and Actor-Critic Algorithms
*

Policy gradient methods are reinforcement learning algorithms that adapt a parameterized policy by following a performance gradient estimate. Many conventional policy gradient methods use Monte-Carlo techniques to estimate this gradient. The policy is improved by adjusting the parameters in the direction of the gradient estimate. Since Monte-Carlo methods tend to have high variance, a large number of samples is required to attain accurate estimates, resulting in slow convergence. In this paper, we first propose a Bayesian framework for policy gradient, based on modeling the policy gradient as a Gaussian process. This reduces the number of samples needed to obtain accurate gradient estimates. Moreover, estimates of the natural gradient as well as a measure of the uncertainty in the gradient estimates, namely, the gradient covariance, are provided at little extra cost. Since the proposed Bayesian framework considers system trajectories as its basic observable unit, it does not require the dynamics within each trajectory to be of any special form, and thus, can be easily extended to partially observable problems. On the downside, it cannot take advantage of the Markov property when the system is Markovian. To address this issue, we then extend our Bayesian policy gradient framework to actor-critic algorithms and present a new actor-critic learning model in which a Bayesian class of non-parametric critics, based on Gaussian process temporal difference learning, is used. Such critics model the action-value function as a Gaussian process, allowing Bayes' rule to be used in computing the posterior distribution over action-value functions, conditioned on the observed data. Appropriate choices of the policy parameterization and of the prior covariance (kernel) between action-values allow us to obtain closed-form expressions for the posterior distribution of the gradient of the expected return with respect to the policy parameters.
We perform detailed experimental comparisons of the proposed Bayesian policy gradient and actor-critic algorithms with classic Monte-Carlo based policy gradient methods, as well as with each other, on a number of reinforcement learning problems.
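As a point of comparison, the conventional Monte-Carlo estimator discussed above can be sketched in a few lines. The following is a minimal illustration on a one-step problem with a softmax policy over two actions; the rewards and parameterization are hypothetical assumptions for illustration, and the Bayesian framework of the paper replaces exactly this sample average with a Gaussian-process posterior over the gradient:

```python
import numpy as np

def mc_policy_gradient(theta, n_samples, rng):
    """Likelihood-ratio (REINFORCE-style) Monte-Carlo gradient estimate
    for a softmax policy over two actions with fixed rewards.
    Illustrative only: the Bayesian approach places a GP prior on this
    gradient to reduce the number of samples needed."""
    rewards = np.array([1.0, 0.0])          # hypothetical per-action rewards
    p = np.exp(theta) / np.exp(theta).sum() # softmax policy probabilities
    grads = []
    for _ in range(n_samples):
        a = rng.choice(2, p=p)
        score = -p.copy()                   # d/dtheta log pi(a | theta)
        score[a] += 1.0
        grads.append(score * rewards[a])    # likelihood-ratio sample
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
g_small = mc_policy_gradient(np.zeros(2), 10, rng)      # noisy estimate
g_large = mc_policy_gradient(np.zeros(2), 10_000, rng)  # near [0.25, -0.25]
```

At `theta = 0` the true gradient is `[0.25, -0.25]`; the small-sample estimate scatters widely around it, which is the high-variance behavior the Bayesian framework is designed to mitigate.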

Deezer, 2013-2014

A research project started in June 2013 in collaboration with the Deezer company. The goal is to build a system that automatically recommends music to users. This goal extends the bandit setting to the collaborative filtering problem.

Nuukik, 2013-2014

Nuukik is a start-up from Hub Innovation in Lille. It offers a recommender system for e-commerce based on matrix factorization. We worked with them specifically on the cold-start problem (*i.e.* when you have absolutely no data on a product or a customer). This led to promising results and allowed us to close the gap between bandits and matrix factorization. This work led to a patent submission in December 2013.

TBS, 2012-2013

A research project started in September 2012 in collaboration with the TBS company. The goal is to understand and predict the audience of news-related websites. The audience of these websites tends to be stationary once the context is taken into account. The main goal is to separate the effect of the context (big events, elections, ...) from the impact of the editorial policies of the news websites. This work is based on data originating from major French media websites and also involves mining web trends (as Google Trends and Google Flu do). The algorithms used mix methods from time-series prediction (ARIMA and MARSS models) and machine learning (L1 penalization, SVM).

Squoring Technologies, 2011-2014

Boris Baldassari was hired by Squoring Technologies (Toulouse) as a PhD student in May 2011. He works on the use of machine learning to improve the quality of the software development process. During his first year as a PhD student, Boris investigated the existing norms and measures of quality of the software development process. He also dedicated some time to gathering relevant datasets, made of either the sequence of source code releases over a multi-year period, or all the versions stored in a version-control repository (svn or alike). Information from mailing lists (bugs, support, ...) may also be part of these datasets. Machine learning tools capable of dealing with this sort of data have also been investigated, and the goals that may be reached in this endeavor have been made precise.

INTEL Corp., 2013 - 2014

This is a research project on Algorithmic Determination of IoT Edge Analytics Requirements. We are attempting to solve the problem of how to automatically predict the system requirements for edge node analytics in the Internet of Things (IoT). We envision that a flexible, extensible system of edge analytics can be created for IoT management; however, edge nodes can be very different in terms of system requirements: processing capability, wireless communication, security/cryptography, guaranteed responsiveness, guaranteed quality of service, and on-board memory. One of the challenges of managing a heterogeneous Internet of Things is determining the system requirements at each edge node in the network.

We suggest exploiting the opportunity to automatically customize large-scale IoT systems comprising heterogeneous edge nodes, allowing flexible and scalable component and firmware (SoC) systems to be matched to the individual needs of enterprise- or government-level IoT customers. We propose using large-scale sequential decision learning algorithms, particularly contextual bandit models, to automatically determine the system requirements for edge analytics. These algorithms have an adaptive property that allows for the addition of new nodes and the re-evaluation of existing nodes under dynamic and potentially adversarial conditions.
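The contextual bandit idea mentioned above can be illustrated with a minimal LinUCB-style sketch, where arms would stand for candidate hardware configurations and contexts for edge-node feature vectors. Everything here (class name, dimensions, reward model) is an illustrative assumption, not the project's actual algorithm:

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB-style contextual bandit: one ridge-regression model
    per arm; select the arm with the highest upper confidence bound."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # ridge Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # reward-weighted sums

    def select(self, x):
        ucbs = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # per-arm estimate
            ucbs.append(x @ theta + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(ucbs))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy environment: two arms whose payoffs depend linearly on the context.
rng = np.random.default_rng(1)
true_theta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
bandit = LinUCB(n_arms=2, dim=2)
correct = 0
for t in range(500):
    x = rng.random(2)
    arm = bandit.select(x)
    reward = float(true_theta[arm] @ x + 0.1 * rng.normal())
    bandit.update(arm, x, reward)
    correct += int(arm == int(np.argmax([th @ x for th in true_theta])))
```

The adaptivity mentioned in the text shows up naturally here: a new node is just a new context vector, and the confidence term re-triggers exploration when an arm's statistics become stale.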

*Title*: Learning Algorithms, Models and sPArse
representations for structured DAta

*Type*: National Research Agency (ANR-09-EMER-007)

*Coordinator*: Inria Lille – Nord Europe (Mostrare)

*Others partners*: Laboratoire d'Informatique
Fondamentale de Marseille; Laboratoire Hubert Curien à Saint
Etienne; Laboratoire d'Informatique de Paris 6.

*Web site*: http://

*Duration*: ends mid-2014

*Abstract*: Lampada is a fundamental research project
on machine learning and structured data. Lampada focuses on
scaling learning algorithms to handle large sets of complex
data. The main challenges are 1) high-dimensional learning
problems, 2) large sets of data, and 3) dynamics of data. We
consider evolving data whose representation involves both structure
and content information; these are typically large sequences, trees,
and graphs. The main application domains are the Web 2.0, social
networks, and biological data.

The project proposes to study formal representations of such data together with incremental or sequential machine learning methods and similarity learning methods.

The representation research topic includes condensed data representation, sampling, prototype selection and representation of streams of data. Machine learning methods include edit distance learning, reinforcement learning and incremental methods, density estimation of structured data and learning on streams.

*Activity Report*:

Philippe Preux has collaborated with Ludovic Denoyer and Gabriel Dulac-Arnold from LIP'6 to investigate further the idea of datum-wise representation, introduced in 2011.

Mohammad Ghavamzadeh and Philippe Preux have collaborated with Hachem Kadri on an operator-based approach for structured output .

Daniil Ryabko has developed a theory for unsupervised learning of time-series dependence, where the time series either come from a stationary environment or result from interaction with a Markovian environment with a continuous state space. Daniil Ryabko and Jeremie Mary have developed methods for using binary classification methods for solving various unsupervised learning problems about time series.

*Title*: Brain computer co-adaptation for better interfaces

*Type*: National Research Agency (ANR-09-EMER-002)

*Coordinator*: Maureen Clerc

*Other Partners*: Inria Odyssee project (Maureen
Clerc), the INSERM U821 team (Olivier Bertrand), the Laboratory of
Neurobiology of Cognition (CNRS) (Boris Burle) and the laboratory
of Analysis, topology and probabilities (CNRS and University of
Provence) (Bruno Torresani).

*Web site*:
https://

*Duration*: 2009-2014

*Abstract*: The aim of Co-Adapt is to propose new
directions for BCI design, by modeling explicitly the
co-adaptation taking place between the user and the system. The
goal of CoAdapt is to study the co-adaptation between a user and a
BCI system in the course of training and operation. The quality of
the interface will be judged according to several criteria
(reliability, learning curve, error correction, bit rate). BCI
will be considered under a joint perspective: the user's and the
system's. From the user's brain activity, features must be
extracted, and translated into commands to drive the BCI
system. From the point of view of the system, it is important to
devise adaptive learning strategies, because the brain activity is
not stable in time. How to adapt the features in the course of BCI
operation is a difficult and important topic of research. We will
investigate Reinforcement Learning (RL) techniques to address the
above questions.

*Activity Report*:
*Activity Report*:
The performance of a BCI can vary greatly across users, but also depends on the tasks used, making appropriate task selection an important issue. We developed an adaptive algorithm, UCB-classif, based on stochastic bandit theory. This shortens the training stage, thereby allowing the exploration of a greater variety of tasks. By not wasting time on inefficient tasks and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. See and https://
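The bandit principle behind UCB-classif can be illustrated with plain UCB1 over a few hypothetical tasks with Bernoulli success rates; the actual UCB-classif algorithm uses different confidence bounds and a task-elimination scheme, but the sample-allocation behavior is the same in spirit:

```python
import math
import random

def ucb1(means, horizon, rng):
    """Plain UCB1 over a set of 'tasks' with Bernoulli success rates.
    A generic sketch of the bandit principle behind UCB-classif:
    concentrate trials on the most promising tasks."""
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    for t in range(horizon):
        if t < n:
            arm = t          # initialization: try every task once
        else:
            arm = max(range(n), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t + 1) / counts[i]))
        # Bernoulli 'success' of one training trial on this task
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

rng = random.Random(42)
counts = ucb1([0.3, 0.5, 0.8], horizon=2000, rng=rng)
# Most trials end up allocated to the most promising task (index 2)
```

In the BCI setting, each "arm" would be a candidate mental task and each "reward" a classification success during training, so the session budget is not wasted on tasks the user performs poorly.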

*Title*: Multifractal Analysis and Applications to
Signal and Image Processing

*Type*: National Research Agency

*Coordinator*: Univ. Paris-Est-Créteil (S. Jaffard)

*Duration*: 2011-2015

*Other Partners*: Univ. Paris-Est Créteil,
Univ. Sciences et Technologies de Lille and Inria (Lille), ENST
(Télécom ParisTech), Univ. Blaise Pascal (Clermont-Ferrand), and
Univ. Bretagne Sud (Vannes), Statistical Signal Processing group
at the Physics Department at the Ecole Normale Supérieure de
Lyon, one researcher from the Math. Department of Institut
National des Sciences Appliquees de Lyon and two researchers from
the Laboratoire d'Analyse, Topologie et Probabilités (LAPT) of
Aix-Marseille University.

*Abstract*: Multifractal analysis refers to two
concepts of different natures: On the theoretical side, it
corresponds to pointwise singularity characterization and
fractional dimension determination ; on the applied side, it is
associated with scale invariance characterization, involving a
family of parameters, the scaling function, used in
classification or model selection. Following the seminal ideas of
Parisi and Frisch in the mid-80s, these two components are usually
related by a Legendre transform, stemming from a heuristic
argument relying on large deviation and statistical thermodynamics
principles: The multifractal formalism. This led to new
theoretical approaches for the study of singularities of
functions and measures, as well as efficient tools for
classification and model selection, which allowed settling
longstanding issues (*e.g.*, concerning the modeling of fully
developed turbulence). Though this formalism has been shown to
hold for large classes of functions of widely different origins,
the generality of its level of validity remains an open
issue. Despite its popularity in applications, the interactions
between theoretical developments and applications are
unsatisfactory. Its use in image processing for instance is still
in its infancy. This is partly due to the discrepancy between the
theoretical contributions mostly grounded in functional analysis
and geometric measure theory, and applications naturally implying
a stochastic or statistical framework. The AMATIS project aims at
addressing these issues, by proposing a consistent and documented
framework combining different theoretical approaches and bridging
the gap towards applications. To that end, it will both address a
number of challenging theoretical issues and devote significant
efforts to elaborating a web platform with software and
documentation. It will combine the efforts of mathematicians with
those of physicists and experts in signal and image
processing. Dissemination among and interactions between
scientific fields are also intended via the organization of summer
schools and workshops.

*Activity Report*: a collaboration with P. Bas (CR
CNRS, LAGIS) deals with the steganalysis of textured
images. While steganography aims at hiding a message within some
support, *e.g.* a numerical image, steganalysis aims at detecting
the presence or not of any hidden message in the
support. Steganalysis involves two main tasks: first identify
relevant features which may be sensitive to the presence of a
hidden message, then use supervised classification to build a
detector. While the steganalysis of usual images has been well
studied, the case of textured images, for which multifractal
models may be relevant, is much more difficult. Indeed, textured
images have a rich and disordered content, which favors hiding
information in an imperceptible manner. An 8-month Master-level
student internship in 2012 led us to consider a fundamental question:
steganalysis usually proceeds by classification based on histograms
of features (bag of words). We consider the problem of optimizing the
bins of such histograms with respect to the performance of the
classifier. We have shown that a balanced version of K-means, which
fills each cell equally, yields an efficient quantization in this
respect .
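In one dimension, the "fill each cell equally" idea reduces to equal-frequency binning at empirical quantiles, which the following sketch illustrates; the actual method is a balanced K-means on multivariate features, so this is only an analogy:

```python
import numpy as np

def balanced_bins(values, n_bins):
    """Equal-frequency quantization: bin edges at empirical quantiles,
    so every histogram cell receives the same number of samples.
    A 1-D analogue of the 'balanced K-means' idea; the actual method
    operates on multivariate feature vectors."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    edges = np.quantile(values, qs)
    return np.searchsorted(edges, values, side="right")  # bin index per sample

rng = np.random.default_rng(7)
x = rng.exponential(size=1000)                # skewed feature distribution
labels = balanced_bins(x, 4)
occupancy = np.bincount(labels, minlength=4)  # -> array([250, 250, 250, 250])
```

Compared with equal-width bins, which would leave the tail cells of a skewed feature nearly empty, each cell here contributes equally to the histogram fed to the classifier.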

Laboratoire de Mathématiques d'Orsay, France.

Mylène Maïda *Collaborator*

Ph. Preux has collaborated with M. Maïda and co-advised a student of the École Centrale de Lille. The motivation of this collaboration is the study of random matrices and the potential use of this theory in machine learning.

LIF - CMI - Université de Provence.

Julien Audiffren *Collaborator*

M. Valko, A. Lazaric, and M. Ghavamzadeh work with Julien on Semi-Supervised Apprenticeship Learning. We have recently developed a maximum entropy algorithm that outperforms the approach without unlabeled data.

Laboratoire Lagrange, Université de Nice, France.

Cédric Richard *Collaborator*

We have collaborated on the topic of *dictionary learning over a sensor network*, and have published two conference papers and .

Laboratoire de Mécanique de Lille, Université de Lille 1, France.

Jean-Philippe Laval *Collaborator*

We co-supervise a starting PhD student (Linh Van Nguyen) on the topic of *high resolution field reconstruction from low resolution measurements in turbulent flows*.

Biophotonics team at the Interdisciplinary Research Institute (IRI), Villeneuve d'Ascq, France.

Aymeric Leray *Collaborator*

We have co-supervised an intern student (Pierre Pfennig, 2 months) on the topic of *quantitative guarantees of a super resolution method via concentration inequalities*. A paper is submitted to ICASSP 2014.

LAGIS, Ecole Centrale Lille - Université de Lille 1, France.

Patrick Bas *Collaborator*

We have a collaboration on the topic of *adaptive quantization to optimize classification from histograms of features, with an application to the steganalysis of textured images*.

Type: COOPERATION

Defi: Composing Learning for Artificial Cognitive Systems

Instrument: Specific Targeted Research Project

Objectif: Cognitive Systems and Robotics

Duration: March 2011 - February 2015

Coordinator: University College London

Partner:

Centre for Computational Statistics and Machine Learning, University College London (United Kingdom)

Department of Computer Science, University of Bristol (United Kingdom)

Department of Computer Science, Royal Holloway, University of London (United Kingdom)

SNN Machine Learning, Radboud Universiteit Nijmegen (The Netherlands)

Institut für Softwaretechnik und Theoretische Informatik, TU Berlin (Germany)

University of Leoben (Austria)

Computer Science Department, Technische Universitaet Darmstadt (Germany)

Inria contact: Rémi MUNOS

Website: COMPLACS

Abstract: One of the aspirations of machine learning is to develop intelligent systems that can address a wide variety of control problems of many different types. However, although the community has developed successful technologies for many individual problems, these technologies have not previously been integrated into a unified framework. As a result, the technology used to specify, solve and analyse one control problem typically cannot be reused on a different problem. The community has fragmented into a diverse set of specialists with particular solutions to particular problems. The purpose of this project is to develop a unified toolkit for intelligent control in many different problem areas. This toolkit will incorporate many of the most successful approaches to a variety of important control problems within a single framework, including bandit problems, Markov Decision Processes (MDPs), Partially Observable MDPs (POMDPs), continuous stochastic control, and multi-agent systems. In addition, the toolkit will provide methods for the automatic construction of representations and capabilities, which can then be applied to any of these problem types. Finally, the toolkit will provide a generic interface to specifying problems and analysing performance, by mapping intuitive, human-understandable goals into machine-understandable objectives, and by mapping algorithm performance and regret back into human-understandable terms.

Alexandra Carpentier: University of Cambridge (UK).

Michal Valko collaborates with Alexandra on extreme event detection (such as network intrusion) with limited allocation capabilities.

Prof. Marcello Restelli and Prof. Nicola Gatti: Politecnico di Milano (Italy).

A. Lazaric continued his collaboration on transfer in reinforcement learning, which is leading to an extended version of last year's work on transfer of samples in MDPs. Furthermore, we are going to submit an extended version of an application of multi-arm bandits in a strategic environment such as sponsored search auctions.

*Inria principal investigator*: Mohammad Ghavamzadeh and Rémi Munos

*Institution*: McGill university (Canada)

*Laboratory*: Reasoning and Learning Lab

*Principal investigator*:

Prof. Joelle Pineau *Collaborator*

Prof. Doina Precup *Collaborator*

Amir massoud Farahmand *Collaborator*

*Duration*: January 2013 - January 2015

Ronald Ortner and Peter Auer: Montanuniversität Leoben (Austria).

Reinforcement learning (RL) deals with the problem of interacting with an unknown stochastic environment that occasionally provides rewards, with the goal of maximizing the cumulative reward. The problem is well-understood when the unknown environment is a finite-state Markov process. This collaboration is centered around reducing the general RL problem to this case.

In particular, the following problems are considered: representation learning, learning in continuous-state environments, bandit problems with dependent arms, and pure exploration in bandit problems. On each of these problems we have successfully collaborated in the past, and we plan to sustain this collaboration, possibly extending its scope.

eHarmony Research, California.

Václav Petříček *Collaborator*

Michal Valko has started to collaborate with eHarmony on sequential decision making for online dating and offline evaluation.

University of Alberta, Edmonton, Alberta, Canada.

Csaba Szepesvári and Bernardo Avila Pires *Collaborator*

We have been collaborating on the topic of *risk bounds in cost-sensitive multiclass classification* this
year. We have an accepted paper at
ICML.

Technion - Israel Institute of Technology, Haifa, Israel.

Odalric-Ambrym Maillard *Collaborator*

Daniil Ryabko has worked with Odalric Maillard on representation learning for reinforcement learning problems. It led to a paper in AISTATS .

School of Computer Science, Carnegie Mellon University, USA.

Prof. Emma Brunskill *Collaborator*

Mohammad Gheshlaghi Azar, PhD *Collaborator*

A. Lazaric started a profitable collaboration on transfer in multi-arm bandit and reinforcement learning which led to two publications at ECML and NIPS. We are currently working on extensions of the previous algorithms and development of novel regret minimisation algorithms in non-iid settings.

Technicolor Research, Palo Alto.

Branislav Kveton *Collaborator*

Michal Valko and Rémi Munos worked with Branislav on Spectral Bandits, aimed at entertainment content recommendation. Michal continued the ongoing research on online semi-supervised learning and this year delivered an algorithm for the challenging single-picture-per-person setting . Victor Gabillon spent 6 months at Technicolor as an intern to work on sequential learning with submodularity, which resulted in one accepted paper at NIPS and two submissions to ICML.

Daniele Calandriello, student at Politecnico di Milano, Italy

Period: since April 2013.

He is working with A. Lazaric on multi-task reinforcement learning.

Rémi Munos, since July 2013, Microsoft Research New England, USA

Mohammad Ghavamzadeh, since November 2013, Adobe Research, San Jose, CA

Victor Gabillon visited Technicolor research lab, Palo Alto, from March to September 2013.

Azadeh Khaleghi visited Walt Disney Animation Studios, Burbank, from March to September 2013.

**Crazy Stone** won the 6th edition of the UEC Cup (the most important international computer-Go tournament). It also won the first edition of the Densei-sen, by winning a 4-stone handicap game against 9-dan professional player Yoshio Ishida.

**Alexandra Carpentier** obtained an AFIA *ex-aequo* accessit for her PhD (the French machine learning/artificial intelligence second prize).

Tutorial by Rémi Munos at AAAI 2013: From Bandits to Monte Carlo Tree Search: The optimistic principle applied to Optimization and Planning.

*Philippe Preux* and Marc Tommasi were the main
organizers of the Conférence sur l'Apprentissage Automatique
(CAP'13).

*Rémi Munos* was the main organizer of the 8th edition of a workshop, together with *Marta Soare*, *Raphael Fonteneau*, *Michal Valko* and *Alessandro Lazaric*.

*Rémi Munos* was co-chair of the Algorithmic Learning Theory conference (ALT 2013), in Singapore.

Daniil Ryabko gave a talk entitled “Time-series information and unsupervised representation learning” at the SMILE seminar in Paris.

Michal Valko gave a talk “Sequential Face Recognition with Minimal Feedback”, which was the opening talk of the series named 30 minutes of Science, a new format at Inria Lille to support intra-center collaboration.

Rémi Munos gave a course (6 hours) at the Summer School Netadis in Hillerod, Denmark in September 2013.

Rémi Munos was invited to give a talk at CMU in November 2013.

Alessandro Lazaric was invited to give a talk at CMU in March 2013.

Pierre Chainais gave a talk "Learning a common dictionary over a sensor network" at the GDR Phénix - ISIS workshop on "Analysis and inference for networks" in Paris in November 2013.

Pierre Chainais gave a tutorial talk on "Multifractal analysis of images and applications" at the "Groupe Image" of the company TOTAL in Paris La Défense on September 11, 2013.

Jérémie Mary gave an invited talk "Recommendation system from a bandit perspective" at the GDR "Estimation et traitement statistique en grande dimension" on May 16, 2013, at Télécom ParisTech.

Jérémie Mary gave an invited talk "Bandit point of view on recommenders" at the Large-scale Online Learning and Decision Making Workshop, Cumberland Lodge, Windsor, UK, in September 2013.

Jérémie Mary gave an invited talk on recommender systems at the "Journées rencontres AFIA/IHM" in May 2013.

**Participation in the program committees of international conferences**

International Conference on Pattern Recognition Applications and Methods (ICPRAM 2013)

Algorithmic Learning Theory (ALT 2013)

AAAI Conference on Artificial Intelligence (AAAI 2013)

European Workshop on Reinforcement Learning (EWRL 2013)

Annual Conference on Neural Information Processing Systems (NIPS 2013)

International Conference on Artificial Intelligence and Statistics (AISTATS 2013)

European Conference on Machine Learning (ECML 2013)

International Conference on Machine Learning (ICML 2013 and 2014)

International Conference on Uncertainty in Artificial Intelligence (UAI 2013)

French Conference on Planning, Decision-making, and Learning in Control Systems (JFPDA 2013)

IEEE FUSION 2013

IEEE Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2013)

ICML workshop “Prediction with Sequential Models”

**International journal and conference reviewing activities** *(in addition to the conferences in which we belong to the PC)*

IEEE Transactions on Image Processing

Journal of Statistical Physics

Digital Signal Processing

IEEE Transactions on Information Theory

IEEE Statistical Signal Processing SSP'2013

European Signal Processing Conference EUSIPCO 2013

10th International Conference on Sampling Theory and Applications (SampTA 2013)

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013 & 2014)

Annual Conference on Neural Information Processing Systems (NIPS 2013)

International Conference on Machine Learning (ICML 2013)

European Conference on Machine Learning (ECML 2013)

Uncertainty in Artificial Intelligence (UAI 2013)

Machine Learning Journal (MLJ)

Journal of Machine Learning Research (JMLR)

Journal of Artificial Intelligence Research (JAIR)

IEEE Transactions on Automatic Control (TAC)

IEEE Transactions of Signal Processing

Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS)

Mathematics of Operations Research (MOR)

*M. Ghavamzadeh* is an Editorial Board member of the Machine Learning Journal (MLJ, 2011-present).

*M. Ghavamzadeh* is a Steering Committee member of the European Workshop on Reinforcement Learning (EWRL, 2011-present).

*P. Preux*, *R. Gaudel* and *J. Mary* are experts for *Crédit Impôt Recherche* (CIR).

*E. Duflos* is a project proposal reviewer for ANR.

*R. Munos* is a Member of the Belgium Commission Evaluation F.R.S-FNRS, 2013.

*R. Munos* was Vice-President of the Comité des Projets (project committee) at Inria Lille-Nord Europe, until July 2013.

*D. Ryabko* is a member of COST-GTRI committee at Inria.

*D. Ryabko* is a general advisor at Inria Lille.

*E. Duflos* is Director of Research of Ecole Centrale de Lille since September 2011.

*E. Duflos* is the Head of the Signal and Image Team of LAGIS (UMR CNRS 8219).

*R. Gaudel* is board member of LIFL.

*R. Gaudel* manages the proml mailing list, which gathers French-speaking researchers from the machine learning community.

*P. Chainais* is a member of the administration council of GRETSI, the French association of researchers in signal and image processing.

*P. Chainais* is co-responsible for the action "Machine Learning" of the GDR ISIS, which gathers French researchers in signal and image processing at the national level.

Ecole Centrale de Lille: *P. Chainais*, “Machine Learning”, 36 hours, 3rd year.

Ecole Centrale de Lille: *P. Chainais*, “Wavelets and Applications”, 24 hours, 2nd year.

Ecole Centrale de Lille: *P. Chainais*, “Introduction to Matlab”, 16 hours, 3rd year.

Ecole Centrale de Lille: *P. Chainais*, “Signal processing”, 22 hours, 1st year.

Ecole Centrale de Lille: *P. Chainais*, “Data Compression”, 16 hours, 2nd year.

Ecole Centrale de Lille: *Ph. Preux*, “Data Data Data Data”, 2 hours, 3rd year.

*P. Chainais* is Responsible for a new 3rd year program called Decision making & Data analysis.

Master: *O. Pietquin*, “Decision under uncertainty”, 46 hours, M2, Master in Computer Science, Université de Lille 1.

Master: *A. Lazaric*, “Introduction to Reinforcement Learning”, 30h eq. TD, M2, Master “Mathematiques, Vision, Apprentissage”, ENS Cachan.

Master: *R. Gaudel*, “Data Mining”, 30h eq. TD, M2, Université Lille 3.

Master: *R. Gaudel*, “Web Mining”, 32h eq. TD, M2, Université Lille 3.

Master: *R. Gaudel*, “Algorithmic”, 19h eq. TD, M2, Université Lille 3.

Master: *Ph. Preux*, “Mathematics, Computer Science, and Modeling”, M1 of psychology, Université de Lille 3.

Master: *Ph. Preux*, “Algorithms and programming in Python”, M1 MIASHS, Université de Lille 3.

Licence: *Ph. Preux*, “Algorithms and programming in Python”, L3 MIASHS, Université de Lille 3.

Licence: *R. Gaudel*, “Programming”,

Licence: *R. Gaudel*, “Logic”,

Licence: *R. Gaudel*, “Information and Communication Technologies”,

Licence: *R. Gaudel*, “Artificial Intelligence”,

Licence: *R. Gaudel*, “C2i”, 25h eq. TD, L1-3, Université Lille 3.

Licence: *J. Mary*, “C2i”, 25h eq. TD, L1-3, Université Lille 3.

Master: *J. Mary*, “Programming and Data Analysis in R”, 24h eq. TD, M1, Université de Lille 3, France.

Master: *J. Mary*, “Advanced Web Programming”, 24h eq. TD, M2, Université de Lille 3, France.

Master: *J. Mary*, “Object-Oriented Programming and Design Patterns”, 48h eq. TD, M2, Université de Lille 3, France.

Master: *J. Mary*, “Algorithmics”, 12h eq. TD, M1, Université de Lille 3, France.

Master (3rd year of Engineering School): *J. Mary*, “Machine Learning with R”, 16 hours, M2, option “Data Analysis and Decision”, Ecole Centrale de Lille, France.

Master (3rd year of Engineering School): *E. Duflos*, “Advanced Estimation”, 20 hours, M2, option “Data Analysis and Decision”, Ecole Centrale de Lille, France.

Master (3rd year of Engineering School): *E. Duflos*, “Multi-Object Filtering”, 16 hours, M2, option “Data Analysis and Decision”, Ecole Centrale de Lille, France.

PhD: *Azadeh Khaleghi*, “On Some Unsupervised Problems Involving Highly Dependent Time Series”, Nov. 2013, Université de Lille 1, advisor: D. Ryabko.

PhD in progress: *Boris Baldassari*, *Machine Learning and Software Development*, since May 2011, advisor: Ph. Preux.

PhD in progress: *Gabriel Dulac-Arnold*, *A
General Sequential Model for Constrained Classification*, since
Oct. 2011, advisor: Ph. Preux, L. Denoyer, P. Gallinari.

PhD in progress: *Victor Gabillon*, “Active Learning
in Classification-based Policy Iteration”, since Sep. 2009,
advisor: Ph. Preux, M. Ghavamzadeh.

PhD in progress: *Frédéric Guillou*, “Sequential
Recommender System”, since Oct. 2013, advisor: Ph. Preux,
J. Mary, R. Gaudel.

PhD in progress: *Vincenzo Musco*, “Topology and evolution of software graphs”, since Oct. 2013, advisors: P. Preux, M. Monperrus.

PhD in progress: *Olivier Nicol*, “Data-driven
evaluation of Contextual Bandit algorithms and applications to
Dynamic Recommendation”, since Nov. 2010, advisor: Ph. Preux, J. Mary.

PhD in progress: *Adrien Hoarau*, “Multi-arm Bandit
Theory”, since Oct. 2012, advisor: R. Munos.

PhD in progress: *Tomáš Kocák*,
“Sequential Learning with Similarities”, since Oct. 2013,
advisor: R. Munos, M. Valko

PhD in progress: *Emilie Kaufmann*, “Bayesian
Bandits”, since Oct. 2011, advisor: R. Munos, O. Cappé,
A. Garivier.

PhD in progress: *Amir Sani*, “Learning under uncertainty”, since Oct. 2011, advisors: R. Munos, A. Lazaric.

PhD in progress: *Marta Soare*, “Pure Exploration in
Multi-arm Bandit”, since Oct. 2012, advisor: R. Munos,
A. Lazaric.

PhD in progress: *Hong Phuong Dang*,
*Bayesian non parametric methods for dictionary learning and inverse problems*,
since Oct. 2013, advisor: P. Chainais.

PhD in progress: *Linh Van Nguyen*,
*High resolution reconstruction from low resolution measurements of velocity fields in turbulent flows*,
since Oct. 2013, advisors: P. Chainais & J.-P. Laval (Laboratoire de Mécanique de Lille).

Member of the recruitment committee for an assistant professor position at Université de Lille 3: R. Gaudel, Ph. Preux

Member of the recruitment committee for an assistant professor position at Université de Lille 1: P. Chainais

Member of the recruitment committee for a professor position at Université de Paris 6: Ph. Preux

Member of the jury DR2 Inria 2013: R. Munos

Member of the jury CR2 Rocquencourt Inria 2013: R. Munos

“Small or big (data), make it sequentially!”, J. Mary, Ph. Preux, invited talk at Euratechnologies, March 2013.

Inria published an article about the face recognition work of Michal Valko:
http://

Jérémie Mary was highlighted on TV and on the Inria website: “you are how you browse”: http://