TONUS started in January 2014. It is a project-team of the Inria Nancy - Grand Est center, located at the mathematics institute (IRMA) of the University of Strasbourg.

The International Thermonuclear Experimental Reactor (ITER) is a large-scale scientific experiment that aims to demonstrate that it is possible to produce energy from fusion, by confining a very hot hydrogen plasma inside a toroidal chamber called a tokamak. In addition to physics and technology research, tokamak design also requires mathematical modeling and numerical simulations on supercomputers.

The objective of the TONUS project is to deal with such mathematical and computing issues. We are mainly interested in kinetic and gyrokinetic simulations of collisionless plasmas. In the TONUS project-team we work on the development of new numerical methods devoted to such simulations. We investigate several classical plasma models, and study new reduced models and new numerical schemes adapted to these models. We implement our methods in two software projects: Selalib and SCHNAPS.

We have strong relations with the CEA-IRFM team and participate in the development of their gyrokinetic simulation software GYSELA. We are involved in two Inria Project Labs, devoted respectively to tokamak mathematical modeling and to high-performance computing on future exascale supercomputers. We also collaborate with a small company in Strasbourg specialized in numerical software for applied electromagnetics.

Finally, our subjects of interest lie at the intersection of mathematics, computer science, high-performance computing, physics, and practical applications.

The fundamental model for plasma physics is the coupled Vlasov-Maxwell kinetic model: the Vlasov equation describes the distribution function of particles (ions and electrons), while the Maxwell equations describe the electromagnetic field. In some applications it may be necessary to take relativistic particles into account, which leads to considering the relativistic Vlasov equation; generally, however, tokamak plasmas are assumed to be non-relativistic. The distribution function of particles depends on seven variables (three for space, three for velocity, and one for time), which makes simulations computationally very demanding.

To these equations we must add several types of source terms and boundary conditions for representing the walls of the tokamak, the applied electromagnetic field that confines the plasma, fuel injection, collision effects, etc.

Tokamak plasmas possess particular features, which require developing specialized theoretical and numerical tools.

Because the magnetic field is strong, the particles rotate very fast around the magnetic field lines. Fully resolving this motion would require a prohibitive amount of computation. It is therefore necessary to develop models in which the cyclotron frequency tends to infinity, in order to obtain tractable calculations. The resulting model, called a gyrokinetic model, reduces the dimensionality of the problem. Such models are implemented in GYSELA and Selalib. They require averaging the acting fields over a rotation period along the particle trajectories. This averaging, called the gyroaverage, requires specific discretizations.

The tokamak and its magnetic fields have a very particular geometry. Some authors have proposed returning to intrinsic geometric formulations of the Vlasov-Maxwell system in order to build better gyrokinetic models and adapted numerical schemes. This involves sophisticated tools of differential geometry: differential forms, symplectic manifolds, and Hamiltonian geometry.

In addition to theoretical modeling tools, it is necessary to develop numerical schemes adapted to kinetic and gyrokinetic models. Three kinds of methods are studied in TONUS: Particle-In-Cell (PIC) methods, semi-Lagrangian methods, and fully Eulerian approaches.

In most phenomena where oscillations are present, we can establish a
three-model hierarchy:

The Strasbourg team has long and recognized experience in numerical methods for Vlasov-type equations. We specialize in both particle and phase-space solvers for the Vlasov equation: Particle-In-Cell (PIC) methods and semi-Lagrangian methods. We also have a longstanding collaboration with CEA Cadarache on the development of the GYSELA software for gyrokinetic tokamak plasmas.

The Vlasov and gyrokinetic models are partial differential equations that express the transport of the distribution function in phase space. In the original Vlasov case, the phase space is the six-dimensional position-velocity space. For the gyrokinetic model, the phase space is five-dimensional, because instead of three velocity components we consider only the velocity parallel to the magnetic field and the gyrokinetic angular velocity.

A few years ago, Eric Sonnendrücker and his collaborators introduced a new family of methods for solving transport equations in phase space: the semi-Lagrangian methods. The principle is to solve the equation on a grid of the phase space. The grid points are transported with the flow of the transport equation for one time step and then interpolated back onto the initial grid. The method is thus a mix of Lagrangian particle methods and Eulerian methods. The characteristics can be solved forward or backward in time, leading to the Forward Semi-Lagrangian (FSL) or Backward Semi-Lagrangian (BSL) schemes. Conservative schemes based on the same idea, called Conservative Semi-Lagrangian (CSL) schemes, can also be developed.
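The BSL idea can be sketched in a few lines of Python for 1D constant advection (a toy illustration only; the actual Selalib and GYSELA implementations are far more elaborate and written in Fortran):

```python
import numpy as np

# Backward semi-Lagrangian (BSL) sketch for 1D constant advection
# f_t + a f_x = 0 on a periodic grid: follow each grid point's characteristic
# backward over one time step and interpolate f there (linear interpolation).
def bsl_step(f, a, dx, dt):
    n = f.size
    x = np.arange(n) * dx
    xb = (x - a * dt) % (n * dx)       # feet of the characteristics
    j = np.floor(xb / dx).astype(int)
    w = xb / dx - j                    # linear interpolation weights
    return (1.0 - w) * f[j % n] + w * f[(j + 1) % n]

# Transport a Gaussian bump over one full period: it returns close to its
# initial position, up to the diffusion introduced by linear interpolation.
n, a, dt = 128, 1.0, 0.01
dx = 1.0 / n
x = np.arange(n) * dx
f0 = np.exp(-100 * (x - 0.5) ** 2)
f = f0.copy()
for _ in range(100):                   # 100 steps of dt = 0.01 -> distance 1
    f = bsl_step(f, a, dx, dt)
```

In practice, high-order (e.g. cubic spline) interpolation is used instead of the linear one above, precisely to limit this numerical diffusion.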

GYSELA is a 5D full gyrokinetic code for the simulation of core turbulence, based on a classical backward semi-Lagrangian (BSL) scheme and developed at CEA Cadarache in collaboration with our team. Although GYSELA was carefully designed to be conservative at lowest order, it is not exactly conservative. This can be an issue when the simulation is under-resolved, which always happens in turbulence simulations because of the roll-up of vortices.

Historically, PIC methods have been very popular for solving the Vlasov equations. They allow solving the equations in phase space at a relatively low cost. The main drawback of the method is that, due to its random aspect, it produces significant numerical noise, which has to be controlled in some way, for instance by regularization of the particles or by divergence-correction techniques in the Maxwell solver. We have longstanding experience with PIC methods and have started implementing them in Selalib. An important aspect is adapting the method to new multicore computers; see the work by Crestetto and Helluy.
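The structure of an electrostatic PIC cycle (deposit, field solve, gather, push) can be sketched as follows. This is a hedged toy in normalized units, not the Selalib implementation; real codes add noise control, particle sorting, proper leapfrog initialization, and diagnostics:

```python
import numpy as np

# Minimal 1D electrostatic PIC sketch (periodic domain, normalized units):
# cloud-in-cell (CIC) deposition, spectral Poisson solve, leapfrog push.
rng = np.random.default_rng(0)
ng, npart, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
xp = rng.uniform(0, L, npart)          # electron positions
vp = 0.1 * np.sin(xp)                  # small velocity perturbation
qm = -1.0                              # electron charge-to-mass ratio
weight = L / npart                     # each particle carries density L/npart

def particle_efield(xp):
    j = np.floor(xp / dx).astype(int) % ng
    w = xp / dx - np.floor(xp / dx)    # CIC weights
    dep = np.bincount(j, (1 - w) * weight, ng) \
        + np.bincount((j + 1) % ng, w * weight, ng)
    rho = 1.0 - dep / dx               # uniform ion background minus electrons
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rhok = np.fft.fft(rho)
    Ek = np.zeros_like(rhok)
    Ek[1:] = -1j * rhok[1:] / k[1:]    # solve div E = rho in Fourier space
    E = np.real(np.fft.ifft(Ek))
    return (1 - w) * E[j] + w * E[(j + 1) % ng]   # gather, same weights

for _ in range(100):                   # leapfrog time loop
    vp += qm * particle_efield(xp) * dt
    xp = (xp + vp * dt) % L
```

Using the same CIC weights for deposition and gather avoids a spurious self-force; the statistical noise mentioned above is visible here as fluctuations scaling like one over the square root of the number of particles per cell.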

As already said, kinetic plasma simulations are computationally very intensive because of the gyrokinetic turbulence. In some situations, it is possible to make assumptions on the shape of the distribution function that simplify the model. In this way we obtain a family of fluid or reduced models.

Assuming that the distribution function has a Maxwellian shape, for instance, we obtain the MagnetoHydroDynamic (MHD) model. It is physically valid only in some parts of the tokamak (at the edges, for instance). Fluid models are generally derived under the hypothesis that collisions between particles are strong. Fine collision models are mainly investigated by other partners of the IPL (Inria Project Lab) FRATRES. In our approach we do not assume that the collisions are strong; instead we try to adapt the representation of the distribution function according to its shape, keeping the kinetic effects. The reduction is not necessarily a consequence of collisional effects: even without collisions, the plasma may still relax to an equilibrium state over sufficiently long time scales (Landau damping effect). Recently, a team at the Institute of Plasma Physics (IPP) in Garching carried out a statistical analysis of the 5D distribution functions obtained from gyrokinetic tokamak simulations. They found that the fluctuations are much higher in the space directions than in the velocity directions (see Figure).

This indicates that the approximation of the distribution function could require fewer data while still achieving a good representation, even in the collisionless regime.
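To fix ideas on the moment reduction behind the fluid models mentioned above, here is a toy Python check (notation ours, unrelated to GYSELA) that the fluid quantities are velocity moments of a 1D Maxwellian:

```python
import numpy as np

# Density, mean velocity and temperature are the first velocity moments of the
# distribution function; for a Maxwellian they recover its parameters exactly.
v = np.linspace(-8.0, 8.0, 2001)
dv = v[1] - v[0]
n0, u0, T0 = 1.0, 0.5, 2.0
f = n0 / np.sqrt(2 * np.pi * T0) * np.exp(-(v - u0) ** 2 / (2 * T0))

n = f.sum() * dv                         # zeroth moment  -> n0
u = (v * f).sum() * dv / n               # first moment   -> u0
T = ((v - u) ** 2 * f).sum() * dv / n    # centered second moment -> T0
```

A fluid (e.g. MHD) model evolves only such moments; the reduced-model approach described below instead keeps a richer, adaptive representation of f.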

Our approach is different from the fluid approximation. In what follows we call this the “reduced model” approach. A reduced model is a model where the explicit dependency on the velocity variable is removed. In a more mathematical way, we consider that in some regions of the plasma, it is possible to exhibit a (preferably small) set of parameters

In this case it is sufficient to solve for

Several approaches are possible: waterbag approximations, velocity-space transforms, *etc.*

An experiment from the 1960s exhibits in a spectacular way the reversible nature of the Vlasov equations. When two perturbations are applied to a plasma at different times, the plasma at first seems to damp and reach an equilibrium. But the information of the perturbations is still there, “hidden” in the high-frequency microscopic oscillations of the distribution function. At a later time a resonance occurs and the plasma produces an echo. The time at which the echo occurs can be computed (see Villani

More practically, this experiment and its theoretical framework show that it is interesting to represent the distribution function by an expansion on an orthonormal basis of oscillating functions in the velocity variables. This representation allows a better control of the energy transfer between the low and high frequencies in the velocity direction, and thus leads to more relevant numerical methods. This kind of approach is studied for instance by Eliasson with a Fourier expansion.

Over long time scales, filamentation phenomena produce high-frequency oscillations in velocity space that numerical schemes cannot resolve. For stability purposes, most numerical schemes contain dissipation mechanisms that may affect the precision of the finest oscillations that can be resolved.
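This filamentation can be illustrated on the simplest possible case (a toy with our own notation): for free streaming f_t + v f_x = 0 with a single spatial mode exp(ikx), the exact solution g(v,t) = g0(v) exp(-ikvt) oscillates in v at a frequency that grows linearly in time, so any fixed velocity grid is eventually under-resolved:

```python
import numpy as np

# Free streaming with one spatial mode: f(x,v,t) = g(v,t) exp(i k x) gives
# g_t + i k v g = 0, hence g(v,t) = g0(v) exp(-i k v t). The Fourier-in-v
# spectrum of g simply translates at speed k toward higher frequencies.
k = 2.0
v = np.linspace(-10.0, 10.0, 1024, endpoint=False)
g0 = np.exp(-v ** 2 / 2)

def peak_velocity_frequency(t):
    g = g0 * np.exp(-1j * k * v * t)
    eta = 2 * np.pi * np.fft.fftfreq(v.size, d=v[1] - v[0])
    return float(eta[np.argmax(np.abs(np.fft.fft(g)))])
# the dominant velocity frequency is eta ~ -k t: filamentation at a linear rate
```

A spectral representation in velocity makes this drift explicit, which is why the transform approaches above allow a better control of the energy transfer toward high velocity frequencies.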

Another trend in scientific computing is to optimize the computation time through adaptive modeling. This approach consists in applying locally, in the computational domain, the most efficient model, according to an error indicator. In tokamak simulations this kind of approach could be very efficient, if we are able to choose locally the best intermediate kinetic-fluid model as the computation runs. This field of research is very promising. It requires developing a clever hierarchy of models, rigorous error indicators, a versatile software architecture, and algorithms adapted to new multicore computers.

As previously indicated, an efficient method for solving the reduced models is the Discontinuous Galerkin (DG) approach. It can be made of arbitrary order. It requires limiters when applied to nonlinear PDEs, as occur for instance in fluid mechanics. But the reduced models that we intend to derive are essentially linear; the nonlinearity is concentrated in a few coupling source terms.

In addition, when written in a special set of variables called the entropy variables, this method has nice properties concerning the entropy dissipation of the model. It opens the door to constructing numerical schemes with good conservation properties and no entropy dissipation, as already done for other systems of PDEs.

A precise resolution of the electromagnetic fields is essential for proper plasma simulation. It is therefore important to use efficient solvers for the Maxwell system and its asymptotic limits: the Poisson equation and magnetostatics.

The proper coupling of the electromagnetic solver with the Vlasov solver is also crucial for ensuring conservation properties and stability of the simulation.

Finally, plasma physics involves very different time scales. It is thus very important to develop implicit Maxwell solvers and Asymptotic-Preserving (AP) schemes in order to obtain good behavior over long time scales.

The coupling of the Maxwell equations to the Vlasov solver requires some precautions. The most important is to control the charge conservation errors, which are related to the divergence conditions on the electric and magnetic fields. We generally use divergence-correction tools for hyperbolic systems, presented for instance in (and the references therein).

As already pointed out, a tokamak plasma presents several different space and time scales. It is not possible in practice to solve the initial Vlasov-Maxwell model. It is first necessary to establish asymptotic models by letting some parameters (such as the Larmor frequency or the speed of light) tend to infinity. This is the case for the electromagnetic solver, which requires implicit time solvers in order to efficiently capture the stationary state, the solution of the magnetic induction equation, or the Poisson equation.

The search for alternative energy sources is a major issue for the future. Among them, controlled thermonuclear fusion in a hot hydrogen plasma is a promising possibility. The principle is to confine the plasma in a toroidal chamber, called a tokamak, and to reach the temperatures necessary to sustain nuclear fusion reactions. The International Thermonuclear Experimental Reactor (ITER) is a tokamak under construction in Cadarache, France, the result of a joint decision by an international consortium comprising the European Union, Canada, the USA, Japan, Russia, South Korea, India, and China. ITER is a huge project: as of today, its budget is estimated at 20 billion euros. The first plasma shot is planned for 2020 and the first deuterium-tritium operation for 2027.

Many technical and conceptual difficulties have to be overcome before the actual exploitation of fusion energy. Consequently, much research has been carried out around magnetically confined fusion. Among these studies, it is important to carry out computer simulations of the burning plasma. Thus, mathematicians and computer scientists are also needed in the design of ITER. The reliability and the precision of numerical simulations allow a better understanding of the physical phenomena and thus would lead to better designs. TONUS's main involvement is in such research.

The temperatures required to attain fusion are very high, of the order of a hundred million degrees. It is thus imperative to prevent the plasma from touching the tokamak inner walls. This confinement is obtained thanks to intense magnetic fields. The magnetic field is created by poloidal coils, which generate the toroidal component of the field. The toroidal plasma current also induces a poloidal component of the magnetic field that twists the magnetic field lines. The twisting is very important for the stability of the plasma. The idea goes back to research by Tamm and Sakharov, two Russian physicists, in the 1950s. Other devices are essential for the proper operation of the tokamak: a divertor for collecting the escaping particles, microwave heating for reaching higher temperatures, a fuel injector for sustaining the fusion reactions, toroidal coils for controlling instabilities, etc.

The software and numerical methods that we develop can also be applied to other fields of physics or of engineering.

For instance, we collaborate with the company AxesSim in Strasbourg on the development of efficient Discontinuous Galerkin (DG) solvers on hybrid computers. The applications are electromagnetic simulations for the design of antennas and electronic devices, and for aircraft electromagnetic compatibility.

The acoustic design of large rooms requires very large numerical simulations. It is not always possible to solve the full wave equation, and many reduced acoustic models have been developed. A popular model consists in considering "acoustic" particles moving at the speed of sound. The resulting Partial Differential Equation (PDE) is very similar to the Vlasov equation. The same modeling is used in radiation theory. We have started to work on the reduction of the acoustic particle model and realized that our reduction approach applies perfectly to this situation. A new PhD with CEREMA (Centre d'études et d'expertise sur les risques, l'environnement, la mobilité et l'aménagement) started in October 2015 (thesis of Pierre Gerhard). The objective is to investigate the model reduction and to implement the resulting acoustic model in our DG solver.

SCHNAPS: Solveur pour les lois de Conservation Hyperboliques Non-linéaires Appliqué aux PlasmaS

Scientific Description

Future computers will be made of a collection of thousands of interconnected multicore processors. Globally, such a machine appears as a classical distributed-memory MIMD machine. But at a lower level, each multicore processor is itself made of a shared-memory MIMD unit (a few classical CPU cores) and a SIMD unit (a GPU or Xeon Phi). When designing new algorithms, it is important to adapt them to this kind of architecture. In practice, we use the MPI library to manage the coarse-grain parallelism, while the OpenCL library operates the fine-grain parallelism efficiently.

For several years we have invested in scientific computing on GPUs, using the open standard OpenCL (Open Computing Language). We were recently awarded a prize in the international AMD OpenCL innovation challenge, thanks to an OpenCL two-dimensional Vlasov-Maxwell solver that runs entirely on a GPU. OpenCL is a very interesting tool because it is an open standard now available on almost all brands of multicore processors and GPUs: the same parallel program can run on a GPU or a multicore processor without modification. However, OpenCL programs are quite complicated to construct; for instance, it is difficult to distribute computations or memory operations efficiently over the different available accelerators. StarPU http://

Because of the envisaged applications, which may be either academic or commercial, it is necessary to design a modular framework. The kernel of the library is made of generic parallel algorithms for solving conservation laws. The parallelism can be both fine-grained (oriented towards GPUs and multicore processors) and coarse-grained (oriented towards GPU clusters). Separate modules manage the meshes and some specific applications. With our partner AxesSim, we also develop a specific C++ version of SCHNAPS for electromagnetic applications.

Since the middle of the year, a specific version of SCHNAPS (called KIRSCH, for KInetic Representation of SCHnaps) has been developed to handle Lattice Boltzmann schemes for MHD and fluid simulations.

Functional Description

SCHNAPS and KIRSCH are, respectively, a generic Discontinuous Galerkin solver and an implicit Lattice Boltzmann solver, written in C and based on the OpenCL, MPI, and StarPU frameworks.

Partner: AxesSim

Contact: Philippe Helluy

Keywords: Plasma physics - Semi-Lagrangian method - PIC - Parallel computing - Plasma turbulence

Scientific Description

The objective of the Selalib project (SEmi-LAgrangian LIBrary) is to develop a well-designed, organized and documented library implementing several numerical methods for kinetic models of plasma physics. Its ultimate goal is to produce gyrokinetic simulations.

Another objective of the library is to provide physicists with easy-to-use gyrokinetic solvers, based on the semi-Lagrangian techniques developed by Eric Sonnendrücker and his collaborators in the past CALVI project. The new models and schemes from TONUS are also intended to be incorporated into Selalib.

Functional Description

Selalib is a collection of modules conceived to aid in the development of plasma physics simulations, particularly in the study of turbulence in fusion plasmas. Selalib offers basic capabilities from general and mathematical utilities and modules to aid in parallelization, up to pre-packaged simulations.

Partners: Max Planck Institute - Garching - IRMA, Université de Strasbourg - IRMAR, Université Rennes 1 - LJLL, Université Paris 6

Contact: Michel Mehrenberger

Scientific Description

The JOREK code is one of the most important MHD codes in Europe. This code, written 15 years ago, simulates the MHD instabilities that appear in tokamaks. Using this code, physicists have obtained important results. However, to run larger and more complex test cases, the numerical methods used must be extended.

The DJANGO code was created in 2014 with a twofold aim: to provide a numerical library to implement, test, and validate new numerical methods for MHD, fluid mechanics, and electromagnetic equations in the finite element context, and to prepare the future new JOREK code. DJANGO is a 2D-3D code based on implicit time schemes and an IsoGeometric approach (B-splines, Bézier curves) for the spatial discretization.

Functional Description

DJANGO is an implicit finite element solver written in Fortran 2008 with a basic MPI framework.

Authors:

Ahmed Ratnani (Max Planck Institute of Plasma Physics, Garching, Germany), Boniface Nkonga (University of Nice and Inria Sophia-Antipolis, France), Emmanuel Franck (Inria Nancy - Grand Est, TONUS team)

Contributors:

Mustafa Gaja, Jalal Lakhlili, Matthias Hoelzl and Eric Sonnendrücker (Max Planck Institute of Plasma Physics, Garching, Germany), Ayoub Iaagoubi (ADT Inria Nice), Hervé Guillard (University of Nice and Inria Sophia-Antipolis, France), Virginie Grandgirard, Guillaume Latu (CEA Cadarache, France)

Year 2016:

Between 2015 and 2016 the code was partially rewritten in Fortran 2008 to prepare the implementation of new methods (compatible finite element spaces, 3D B-spline meshes). The different models (hyperbolic, parabolic, and elliptic) introduced in the previous version of the code were rewritten and validated. We are now beginning to introduce the Maxwell equations for the coupling with kinetic equations, as well as the nonlinear fluid models (a first step toward MHD simulations). A large optimization and parallelization effort was made on matrix assembly, and new preconditioners for elliptic models were introduced.

Partners: Max Planck Institute - Garching - IRMA, Université de Strasbourg - Inria Sophia-Antipolis

Contact: Emmanuel Franck

The finite element code JOREK currently uses a classical implicit solver for the reduced MHD model, coupled with a block-Jacobi preconditioner. For the future full MHD code we propose to change the time solver, so as to reduce memory consumption and improve robustness. During this year two directions have been followed. The first is based on the classical physics-based preconditioning proposed by L. Chacon. We have generalized this method by rewriting the preconditioner as a splitting scheme that separates the advection terms from the acoustic part, and by generalizing the splitting algorithm. We obtain different variants with different advantages. These splitting schemes have been tested on simplified models and are currently being tested on the Euler equations. The second direction is to use a relaxation scheme, which rewrites a nonlinear system as a linear hyperbolic system (larger than the original one) plus a nonlinear local source term. Using a splitting scheme, we obtain a very simple method: in the first step we solve independent linear transport problems, and in the second step we perform some nonlinear projections. With good parallelism and good solvers for the transport subproblems, the algorithm is very efficient compared to the classical one.

The different algorithms used to discretize the MHD equations in time, or to design preconditioners, rely on solvers for many elliptic operators such as the Laplacian. For high-order finite elements like B-splines, classical multigrid methods are not very efficient: the number of iterations needed to converge increases strongly with the polynomial order. Using the GLT (Generalized Locally Toeplitz) theory proposed by S. Serra-Capizzano, we have implemented and validated a multigrid smoother whose convergence is quasi-independent of the polynomial degree. This method is also efficient as a preconditioner for mass matrices. In the end, we obtain very robust solvers for these simple problems, which makes the time algorithm for fluid models feasible. The next step is to extend this method to more complex problems, such as vectorial elliptic problems.

Many systems of conservation laws can be written in a lattice-kinetic form. A lattice-kinetic model is made of a finite set of transport equations coupled through a relaxation source term. Such a representation is very useful:

easy stability analysis, possibility to add second order terms in a natural way;

can be solved by a splitting strategy;

easy-to-implement implicit schemes, avoiding the CFL constraint;

high parallelism.

We have started to work on such approaches for solving the MHD equations inside a tokamak (postdoc of David Coulette). We have programmed a generic parallel lattice-kinetic solver in KIRSCH, using the StarPU runtime. It achieves very good parallel efficiency. We have also started studying more theoretical aspects: stability of kinetic models, higher-order time integration, and the modeling of viscous terms.
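The transport-relaxation structure described above can be sketched for a scalar conservation law (a toy two-velocity model in Python; KIRSCH itself is written in C and handles MHD):

```python
import numpy as np

# Two-velocity lattice-kinetic (relaxation) sketch for u_t + F(u)_x = 0.
# The kinetic variables f+ and f- are transported exactly at speeds +lam and
# -lam (one cell per step, dt = dx / lam), then relaxed to their local
# equilibrium; u = f+ + f- is the conserved macroscopic variable.
lam = 1.0                        # kinetic speed; must satisfy |F'(u)| <= lam
F = lambda u: 0.5 * u            # linear flux: advection at speed 0.5

def equilibrium(u):
    return 0.5 * u + F(u) / (2 * lam), 0.5 * u - F(u) / (2 * lam)

def step(fp, fm):
    fp, fm = np.roll(fp, 1), np.roll(fm, -1)   # exact transport (periodic)
    return equilibrium(fp + fm)                # instantaneous relaxation

n = 200
u0 = np.where((np.arange(n) > 50) & (np.arange(n) < 100), 1.0, 0.0)
fp, fm = equilibrium(u0)
for _ in range(100):             # 100 steps: the bump travels 50 cells
    fp, fm = step(fp, fm)
u_end = fp + fm
```

With instantaneous relaxation and a linear flux this reduces to a Lax-Friedrichs-type scheme; the interest of the kinetic form is that the transport step is linear and trivially parallel, while all the nonlinearity sits in the local relaxation, which is what makes implicit treatment and high parallelism accessible.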

In order to harness hybrid computer architectures, we have developed software and algorithms that are well adapted to CPU/GPU computing. For instance, we have applied a task-graph approach for computing electromagnetic waves (https://

Using the discontinuous Galerkin solver of SCHNAPS, we have implemented a numerical scheme for the drift-kinetic model (in cylindrical geometry). The equation is written as a hyperbolic system after reduction in velocity (using spectral finite elements). The code is parallelized on multi-CPU or GPU architectures using OpenCL. To solve the quasineutrality equation (for the electric potential), the elliptic solver already present in SCHNAPS has been extended to be used slice by slice (along the cylinder). We started by validating the code on the 2D guiding-center model and the diocotron instability test case: we observe that the geometric approximation of the computational domain has a major impact on the precision of the numerical simulations.

In the quasi-neutrality equation in GYSELA, we are now able to treat the inner radius correctly, thanks to a simple trick by taking the inner radius

We have worked at CEMRACS 2016 on an algorithm that handles both the Particle-in-Cell method and the semi-Lagrangian method in the context of a

We have also worked at CEMRACS 2016 on a new variant of the interpolation method to handle both the mesh singularity at the origin and non-circular geometry. It is based on a non-uniform number of points for each closed flux line (the intersection of the flux surfaces with the poloidal plane), which are concentric circles in the case of circular geometry. This strategy, following previous works on curvilinear geometry and hexagonal meshes, should allow the approach to be generalized to non-circular tokamaks.

A theoretical justification of the field-aligned method is provided in the simplified context of constant advection on a 2D periodic domain: unconditional stability is proven, and error estimates are given which highlight the advantages of field-aligned interpolation. The same methodology is successfully applied to the solution of the gyrokinetic Vlasov equation, for which we present the ion temperature gradient (ITG) instability as a classical test case: first we solve it in cylindrical geometry (screw pinch), and then in toroidal geometry (circular tokamak). A paper has been submitted.

In the context of Lattice Boltzmann or relaxation methods, it is interesting to obtain a very high-order implicit splitting. For this, we have considered a time-splitting discretization of the BGK model with 3 velocities. First- and second-order schemes were studied before using Strang splitting coupled with a semi-Lagrangian or a Crank-Nicolson DG scheme. Using complex time steps and composition methods, we obtain a fourth-order scheme in time, unconditionally stable for the discrete BGK models. These results could be used with the Lattice Boltzmann method, the relaxation method, and also the kinetic model.
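The first- versus second-order behavior of Lie and Strang splitting can be checked on a toy linear system (a hedged sketch, unrelated to the actual BGK implementation): u' = (A+B)u with non-commuting nilpotent A and B, whose subflows are exact:

```python
import numpy as np

# Lie vs. Strang splitting order check for u' = (A + B) u.
# A and B are nilpotent and do not commute, so splitting errors are visible;
# their exact subflows are e^{tA} = I + tA and e^{tB} = I + tB.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
eA = lambda t: np.eye(2) + t * A
eB = lambda t: np.eye(2) + t * B
# (A+B)^2 = I, so the exact flow is e^{t(A+B)} = cosh(t) I + sinh(t) (A+B)
exact = lambda t: np.cosh(t) * np.eye(2) + np.sinh(t) * (A + B)

def error(one_step, dt, T=1.0):
    u = np.array([1.0, 0.0])
    for _ in range(int(round(T / dt))):
        u = one_step(dt) @ u
    return np.linalg.norm(u - exact(T) @ np.array([1.0, 0.0]))

lie = lambda dt: eB(dt) @ eA(dt)                       # first order
strang = lambda dt: eA(dt / 2) @ eB(dt) @ eA(dt / 2)   # second order
r_lie = error(lie, 0.02) / error(lie, 0.01)            # ~2: order 1
r_strang = error(strang, 0.02) / error(strang, 0.01)   # ~4: order 2
```

The composition methods with complex time steps mentioned above combine such second-order building blocks to reach fourth order in time.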

In the work [3], we implement in Selalib a Particle-In-Cell method that is efficient in terms of memory access and enables simulations with a large number of particles. Numerical results for the classical one-dimensional Landau damping and two-dimensional Kelvin-Helmholtz test cases are presented. The implementation also relies on a standard hybrid MPI/OpenMP parallelization. Code performance is assessed by the observed speedup and attained memory bandwidth. A convergence result is also illustrated by comparing the numerical solution of a four-dimensional Vlasov-Poisson system against that of the guiding-center model.

We then continued to optimize the code by analyzing different data structures for the particles (structure of arrays vs. array of structures) and for the grid fields (using space-filling curves such as Morton or Hilbert curves), with the aim of improving cache reuse. In addition, we enabled compiler vectorization and obtained significant gains by testing the different data structures. We thus managed to run PIC simulations processing 65 million particles per second on an Intel Haswell architecture, without hyper-threading. The hybrid OpenMP/MPI parallelization gave satisfactory strong and weak scaling up to 8192 cores on GENCI's supercomputer Curie.

We performed a full parallelization (over species, and using a 4D domain decomposition) of the 1D3V multi-species Vlasov-Poisson finite-volume code. The 4D code was then used to analyze, by means of parametric studies, the structure of the multi-scale boundary layer (the so-called Debye sheath and various pre-sheaths) for a magnetized plasma in contact with an absorbing wall. This study showed, notably, that when the strong confining magnetic field is close to grazing incidence with respect to the absorbing surface, the boundary layer extends further into the plasma and, as a result, the magnitude of the electric field is lessened.

A second study was devoted to the dynamics of the propagation of the so-called "ELMs" (Edge-Localized-Modes) at the edge of Tokamak devices. The

We are involved in a common project with the company AxesSim in Strasbourg. The objective is to help with the development of a commercial software package devoted to the numerical simulation of electromagnetic phenomena. The applications are directed towards antenna design and electromagnetic compatibility. This project was partly supported by DGA through "RAPID" (*régime d'appui à l'innovation duale*, dual-innovation support scheme) funds. The CIFRE PhD of Thomas Strub is part of this project. Another CIFRE PhD (Bruno Weber) started at AxesSim on the same kind of topic in March 2015. This new project is devoted to the use of a runtime system to optimize DG solvers applied to electromagnetism. The resulting software will be used for the numerical simulation of connected devices for clothing or medicine. The project is supported by the Banque Publique d'Investissement (BPI) and coordinated by the Thales company.

The thesis of Pierre Gerhard devoted to numerical simulation of room acoustics is supported by the Alsace
region. It is a joint project with CEREMA (*Centre d'études et d'expertise sur les risques, l'environnement, la
mobilité et l'aménagement*) in Strasbourg.

ANR project PEPPSI (models for edge plasma physics in tokamaks), *Programme Blanc* SIMI 9, started in 2013.
Participants: G. Manfredi (coordinator), S. Hirstoaga, D. Coulette.

The TONUS project belongs to the IPL FRATRES (models and numerical methods for tokamaks). The annual meeting was organized in Strasbourg by Emmanuel Franck and Philippe Helluy.

The TONUS and HIEPACS projects obtained financial support for the PhD thesis of Nicolas Bouzat through the IPL C2S@exa (computational sciences at exascale). Nicolas Bouzat works at CEA Cadarache and is supervised locally by Guillaume Latu; the PhD advisors are Michel Mehrenberger and Jean Roman.

GENCI project: "*Simulation numérique des plasmas par des méthodes
semi-lagrangiennes et PIC adaptées*". 450,000 scalar computing hours on CURIE_standard (January 2016-January 2017); coordinator: Michel Mehrenberger.

Eurofusion Enabling Research Project ER15-IPP01 (1/2015-12/2017) "Verification and development of new algorithms for gyrokinetic codes" (Principal Investigator: Eric Sonnendrücker, Max-Planck Institute for Plasma Physics, Garching).

Eurofusion Enabling Research Project ER15-IPP05 (1/2015-12/2017) "Global non-linear MHD modeling in toroidal geometry of disruptions, edge localized modes, and techniques for their mitigation and suppression" (Principal Investigator: Matthias Hoelzl, Max-Planck Institute for Plasma Physics, Garching).


Michel Mehrenberger collaborates with Bedros Afeyan (Pleasanton, USA) on KEEN wave simulations.

Emmanuel Franck collaborates with E. Sonnendrücker (IPP Garching) and S. Serra Capizzano (University of Como, Italy) on preconditioning for IGA methods.

ANR/SPPEXA "EXAMAG" is a joint French-German-Japanese project. Its goal is to develop efficient parallel MHD solvers for future exascale architectures. With our partners, we plan to apply highly parallelized hybrid solvers to plasma physics. One of our objectives is to develop Lattice-Boltzmann MHD solvers based on high-order implicit Discontinuous Galerkin methods, using SCHNAPS and runtime systems such as StarPU.
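
The relaxation idea behind such Lattice-Boltzmann solvers can be illustrated on a scalar 1D transport equation. The sketch below is a minimal explicit D1Q2 kinetic relaxation scheme; it is purely illustrative (all names are ours) and does not reflect the high-order implicit DG discretization targeted in SCHNAPS.

```python
import numpy as np

def lbm_transport(u0, c, lam, nsteps):
    """Minimal D1Q2 lattice-Boltzmann sketch for u_t + c u_x = 0 on a
    periodic grid with unit spacing and unit time step. The two kinetic
    velocities are +/- lam, with |c| <= lam required for stability; the
    lattice shift is exactly one cell per time step."""
    u = np.asarray(u0, dtype=float).copy()
    # Equilibrium distributions: f+ + f- = u and lam * (f+ - f-) = c * u.
    fp = 0.5 * (1.0 + c / lam) * u      # right-movers
    fm = 0.5 * (1.0 - c / lam) * u      # left-movers
    for _ in range(nsteps):
        fp = np.roll(fp, 1)             # free transport at +lam (one cell)
        fm = np.roll(fm, -1)            # free transport at -lam (one cell)
        u = fp + fm                     # macroscopic field
        fp = 0.5 * (1.0 + c / lam) * u  # instantaneous relaxation
        fm = 0.5 * (1.0 - c / lam) * u  # to equilibrium
    return u
```

With c = lam the right-movers carry everything and the profile is shifted by exactly one cell per step; for |c| < lam the relaxation introduces numerical diffusion, which is precisely what high-order (DG-based) transport steps are meant to control.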

Philippe Helluy is a member of the editorial board of IJFV (International Journal on Finite Volumes).

Emmanuel Franck has been a reviewer for:

Communications in Computational Physics

Methods and Algorithms for Scientific Computing

Methods for Partial Differential Equations

Philippe Helluy has been a reviewer for:

Mathematical Reviews

International Journal for Numerical Methods in Fluids

Computers and Fluids

M2AN

ESAIM Proceedings

PIER Journal

Sever Adrian Hirstoaga has been a reviewer for:

Journal of Fixed Point Theory and Applications

MathSciNet/Mathematical Reviews

Michel Mehrenberger has been a reviewer for:

SISC

Electronic Journal of Qualitative Theory of Differential Equations (EJQTDE)

Mathematical Methods in the Applied Sciences

Journal of Computational Physics

Computer Physics Communications

Computational and Applied Mathematics

Zeitschrift für Angewandte Mathematik und Physik

David Coulette has been a reviewer for:

Journal of Plasma Physics (Cambridge University Press)

Emmanuel Franck was an invited speaker at:

Minisymposium "Fusion", CANUM 2016, Obernai, France

JOREK workshop, Nice, France

Numerical analysis seminar, University of Rennes, France

Emmanuel Franck gave talks at:

ECCOMAS 2016, Greece

Workshop ABPDE 2016, Lille, France

IGA and free mesh methods, La Jolla, USA

Philippe Helluy was an invited speaker at:

Oberwolfach workshop "Hyperbolic Techniques for Phase Dynamics"

Sever Adrian Hirstoaga was an invited speaker at:

17th SIAM Conference on Parallel Processing for Scientific Computing, 12-15 April 2016, Paris.

PASC (Platform for Advanced Scientific Computing), 8-10 June 2016, Lausanne.

NumKin, October 2016, Strasbourg

Michel Mehrenberger was an invited speaker at:

ECCOMAS Congress 2016, 5-10 June 2016, Crete, Greece.

Conference "Stability of Non-conservative Systems", 4-7 July 2016, University of Valenciennes, France.

Collaborative Research Center (CRC) seminar, Karlsruhe Institute of Technology (KIT), 28 April 2016.

Seminar "Modellistica Differenziale Numerica", Dipartimento di Matematica, Sapienza Università di Roma, 19 April 2016.

Seminar "Applications des mathématiques", ENS Rennes, 2 March 2016.

Laurent Navoret was an invited speaker at:

Seminar, Laboratoire Jean Kuntzmann, Grenoble

Workshop "Kinetic Theory: From Equations to Models", Imperial College, London

Laurent Navoret gave a talk at:

Hyp2016 conference, Aachen, Germany

David Coulette was an invited speaker at:

14th congress of the French Physics Society and IAP Plasma Workshop, Nancy, France

43rd European Physical Society Conference on Plasma Physics, Leuven, Belgium

Philippe Helluy carried out expert reviews for:

ANR

Philippe Helluy is the head of the "Modélisation et Contrôle" research team at IRMA Strasbourg.

Michel Mehrenberger is a member of the "Modélisation" group of the IREM (*Institut de recherche sur l'enseignement des mathématiques*) for the year 2016-2017.

Philippe Helluy and Sever Hirstoaga have participated in a jury for an assistant professor position.

Michaël Gutnic is a member of the National Committee for Scientific Research (from September 2012 to August 2016).

Licence: Laurent Navoret, nonlinear optimization (108h eq. TD)

Licence: Laurent Navoret, integration (34h eq. TD)

Licence: Philippe Helluy, scientific computing (70h eq. TD)

Licence: Philippe Helluy, statistics (50h eq. TD)

Licence: Philippe Helluy, basic mathematics (20h eq. TD)

Licence: Michel Mehrenberger, scientific computing (65h eq. TD)

Licence: Michel Mehrenberger, nonlinear optimization (18h eq. TD)

Licence: Michel Mehrenberger, mathematics for chemistry (56h eq. TD)

Licence: Michaël Gutnic, mathematics for biology (84h eq. TD)

Licence: Michaël Gutnic, statistics for biology (65h eq. TD)

Master 1: Laurent Navoret, Python (19h eq. TD)

Master 1: Philippe Helluy, operational research (50h eq. TD)

Master 1: Michaël Gutnic, probability and statistics (30h eq. TD)

Master 2 (Agrégation): Laurent Navoret, scientific computing (50h eq. TD)

Master 2 (Cell physics): Laurent Navoret, basics in mathematics (24h eq. TD)

Master 2 (Agrégation): Michel Mehrenberger, scientific computing (28h eq. TD)

Master 2: Philippe Helluy, hyperbolic systems (30h eq. TD)

Master 2: Sever Hirstoaga, two-scale convergence (24h eq. TD)

PhD defended (December 2016): Thi Trang Nhung Pham, "Méthodes numériques pour Vlasov". Advisors: Philippe Helluy, Laurent Navoret.

PhD defended (December 2016): Michel Massaro, "Méthodes numériques pour les plasmas sur architectures multicœurs". Advisor: Philippe Helluy.

PhD in progress: Pierre Gerhard, "Résolution des modèles cinétiques. Application à l'acoustique du bâtiment.", October 2015, Advisors: Philippe Helluy, Laurent Navoret.

PhD in progress: Bruno Weber, "Optimisation de code Galerkin Discontinu sur ordinateur hybride. Application à la simulation numérique en électromagnétisme", March 2015, Advisor: Philippe Helluy.

PhD in progress: Nicolas Bouzat, "Fine grain algorithms and deployment methods for exascale codes", October 2015, Advisors: Michel Mehrenberger, Jean Roman, Guillaume Latu.

PhD in progress: Mustafa Gaja, "Compatible finite element methods and preconditioning", December 2015. Advisors: E. Sonnendrücker (IPP, Germany), A. Ratnani (IPP), E. Franck.

PhD in progress: Conrad Hillairet, "Implicit Boltzmann scheme and Task Parallelization for MHD simulations", November 2016, Advisors: Philippe Helluy, E. Franck

PhD in progress: Ksander Ejjaouani, "Design of a programming model, application to gyrokinetic simulations", October 2015. Advisors: Michel Mehrenberger, Julien Bigot, Olivier Aumage.

Philippe Helluy was a jury member for:

PhD defense of Rémi Chauvin (Toulouse)

PhD defense of Eric Madaule (Nancy)

PhD defense of Juan Manuel Martinez Caamano (Strasbourg)

PhD defense of Thibault Gasc (CEA Paris)

PhD defense of Nicolas Deymier (Toulouse)

Habilitation defense of Virginie Grandgirard (CEA Cadarache)

Michel Mehrenberger was a jury member for:

PhD defense of Florian Delage (Strasbourg)