Inria / Raweb 2004
Project-Team: DaRT


Section: Scientific Foundations


Keywords: Modeling, UML, MDA, MDA Transformation, Model, Metamodel, MOF.

Co-modeling for SoC design

Participants: Lossan Bonde, Pierre Boulet, Arnaud Cuccuru, Jean-Luc Dekeyser, Cédric Dumoulin, Philippe Marquet, Ouassila Labbani.

Our main research objective is to build a set of metamodels (application, hardware architecture, association, deployment and platform-specific metamodels) that support a design flow for SoC design. We use an MDA-based approach.

Principles

Because of the vast scope of the problems encountered and the quick evolution of the architectures, programming languages show a very great diversity. Ten years ago, each newly proposed model (for example within the framework of a PhD) led to the implementation of that model in a new language, or at least in an extension of a standard language. A variety of dialects were thus born, without relieving the programmer of the usual constraints of code development. Porting an application from one language to another (a new one, for example) increases the workload of the programmer. This drawback also holds for the development of embedded applications, and is even worse there, because the diversity of abstraction levels adds to the diversity of languages. It is essential to associate a target hardware architecture model with the application specification model, and to introduce a relationship between them; these two models are practically always different and are often expressed in two different languages.

From this experience, one can derive some principles for the design of the next generation of environments for embedded application development:

We believe that the Model Driven Architecture (MDA) [22][30] can enable us to propose a new method of system design respecting these principles. Indeed, it relies on the common UML modeling language to model all kinds of artifacts. The clear separation between the models and the platforms makes it easy to switch to a new technology while reusing old designs; given the right tools, this can even be done automatically. The MDA is the approach proposed by the OMG for system development. It primarily focuses on software development, but can be applied to any kind of system. The MDA is based on models describing the systems to be built: a system description is made of numerous models, each representing a different level of abstraction. The modeled system can be deployed on one or more platforms via model-to-model transformations.

Transformations and Mappings

A key point of the MDA is the transformation between models. Transformations make it possible to go from a model at one abstraction level to a model at another level, and to keep the different models synchronized. The related models are described by their metamodels, on which we can define mapping rules describing how the concepts of one metamodel are mapped onto the concepts of the other. From these mapping rules we deduce the transformations between any models conforming to these metamodels.
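
As a purely illustrative sketch (the concept names, properties and the Java formulation below are ours, not part of any standard or of our metamodels), a mapping rule can be stated once at the metamodel level and then applied to every model element that conforms to the source concept:

    import java.util.Map;
    import java.util.function.Function;

    // Illustrative only: a model element records the metamodel concept it
    // conforms to, plus its properties.
    final class MappingRuleSketch {
        record Element(String concept, Map<String, String> properties) {}

        // Rule stated at the metamodel level: every (hypothetical)
        // "ApplicationComponent" concept is mapped onto a "Task" concept,
        // carrying over its "name" property.
        static final Function<Element, Element> COMPONENT_TO_TASK =
            src -> new Element("Task", Map.of("name", src.properties().get("name")));

        public static void main(String[] args) {
            Element fir = new Element("ApplicationComponent", Map.of("name", "FIR"));
            System.out.println(COMPONENT_TO_TASK.apply(fir));
            // prints: Element[concept=Task, properties={name=FIR}]
        }
    }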

MDA model-to-model transformation is currently undergoing standardization at the OMG [58].

Use of Standards

The MDA is based on proven standards: UML for modeling and the MOF for metamodel expression. The upcoming UML 2.0 [24] standard is specifically designed to be used with the MDA. It removes some ambiguities found in its predecessors (UML 1.x), allows more precise descriptions and opens the road to automatic exploitation of models. The MOF (Meta Object Facility [59]) is dedicated to the specification of metamodels.

System-on-Chip Design

A SoC (System-on-Chip) can be considered as a particular case of an embedded system. SoC design covers many different viewpoints, ranging from the modeling of the application by aggregation of functional components, to the assembly of existing physical components, to the verification and simulation of the modeled system, and to the synthesis of a complete end-product integrated on a single chip. As a rule, a SoC includes programmable processors, memory units (data/instructions), interconnection mechanisms and hardware functional units (Digital Signal Processors, application-specific circuits). These components can be generated for a particular application; they can also be obtained from IP (Intellectual Property) providers. The ability to reuse software or hardware components is without any doubt a major asset for a codesign system.

The multiplicity of abstraction levels is well suited to the modeling approach. At each abstraction level the information is used from a different viewpoint, yet it is defined only once, in a single model. The links or transformation rules between the abstraction levels permit the reuse of the concepts for a different purpose.

Contributions

Our proposal is partially based upon the concepts of the ``Y-chart'' [43]. The MDA helps express the model transformations that correspond to successive refinements between the abstraction levels.

Metamodeling brings a set of tools that will enable us to specify our application and hardware architecture models using UML tools, to reuse functional and physical IPs, to ensure refinements between abstraction levels via mapping rules, to ensure interoperability between the different abstraction levels used in the same codesign, and to ensure openness to other tools, such as verification tools, through the use of standards.

Figure 1. Overview of the metamodels for the ``Y'' design

The application and the hardware architecture are described by different metamodels. Some concepts of these two metamodels are deliberately similar, in order to unify and thereby simplify their understanding and use. The application and hardware architecture models may be built separately (possibly by two different people). It then becomes possible to map the application model onto the hardware architecture model. For this purpose we introduce a third metamodel, the association metamodel, which expresses associations between the functional components and the hardware components. This metamodel imports the two metamodels presented above.

All the models defined so far (application, architecture and association) are platform independent: no component is yet associated with an execution, simulation or synthesis technology. Such an association targets a given technology (Java, SystemC RTL, SystemC TLM, VHDL, etc.). Once all the components are associated with some technology, the deployment is realized. This is done by refining the PIM association model first into the PIM TLM (Transaction Level Model) model, and then into the PIM RTL (Register Transfer Level) model.

The diversity of the technologies requires interoperability between abstraction levels and between simulation and execution languages. For this purpose we define an interoperability metamodel that allows the interfaces between technologies to be modeled.

Mapping rules between the deployment metamodel on the one hand, and the interoperability and technology metamodels on the other, can be defined to automatically specialize the deployment model to the chosen technologies. From each of the resulting models we can then automatically produce the execution/simulation code and the interoperability infrastructure.

The simulation results can lead to a refinement of the application, the hardware architecture, the association or the deployment models. We propose a methodology to work with these models. The stages of design could be:

  1. Separate application and hardware architecture modeling.

  2. Association with semi-automatic mapping and scheduling.

  3. Deployment (choice of simulation or execution level and platform for each component).

  4. Automatic generation of the various platform specific simulation or execution models.

  5. Automatic simulation or execution code generation.

  6. Refinement at the PIM level given the simulation results.

Models and Metamodels

The abstract syntaxes of the application and of the hardware architecture are described by different MOF metamodels. Some concepts of these two metamodels are similar, in order to simplify their understanding and use.

They share a common modeling paradigm, the component-oriented approach, to ease reusability. Reusability is one of the key points in facing the time-to-market challenge that the design of embedded systems implies.

In both the application and the architecture, components expose an interface materialized by their ports. These interfaces encapsulate the structure and behaviour of the components and make them independent of their environment.
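
As a minimal, purely illustrative sketch (the Java classes below are not part of our metamodels), the shared paradigm can be summarized as follows: a component is only visible through its ports, and connections relate ports, never component internals:

    import java.util.List;

    // Illustrative only: components expose ports; connections link ports.
    final class ComponentSketch {
        static class Port {
            final String name;
            Port(String name) { this.name = name; }
        }

        static class Component {
            final String name;
            final List<Port> ports;   // the component's only visible interface
            Component(String name, List<Port> ports) { this.name = name; this.ports = ports; }
        }

        static class Connection {     // relates two ports, never two component internals
            final Port from, to;
            Connection(Port from, Port to) { this.from = from; this.to = to; }
        }
    }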

The two metamodels also share common construction mechanisms to express repetitive constructs in a compact way. Such compact expressions are easier for a compiler or an optimization tool to exploit.

To express the mapping of an application model onto a hardware architecture model, a third metamodel, named association, is introduced. This metamodel imports the concepts of the two previously mentioned metamodels.

Figure 2. Metamodel Architecture

Application Metamodel

The application metamodel focuses on the description of the data dependences between components. Components and dependences completely describe an algorithm without adding any parasitic information. Indeed, any compiler optimization or parallelization technique must respect the data dependences. This gives many benefits:

Application components represent computations, and their ports represent data input and output capabilities. The data handled in the applications are mainly multidimensional arrays, possibly with one infinite dimension representing time.

The application metamodel introduces three kinds of components: Compound, DataParallel and Elementary components.

Figure 3. Application Metamodel

A compound component expresses task parallelism by way of a component graph. The edges of this graph are directed and represent data dependences.

A data parallel component expresses data parallelism by way of the parallel repetition of an inner component part on patterns of the input arrays, producing patterns of the output arrays. Some rules must be respected when describing this repetition; in particular, the output patterns must exactly tile the output arrays. Potential data parallelism is explicitly described via Tilers, which carry dependence vectors (paving and fitting) to express the dependences between the input/output arrays of the DataParallelComponent and the input/output patterns of the inner repeated component part.

Figure 4. Tiler Definition
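
The following sketch illustrates how a tiler designates array elements, assuming the usual Array-OL-style formulation with an origin vector and paving and fitting matrices (the Java code itself is only an illustration, not part of our tools):

    // Illustrative sketch: the pattern element of coordinates i, for the
    // repetition of index r, is found in the array at
    //   (origin + paving * r + fitting * i) mod arrayShape.
    final class TilerSketch {
        static int[] arrayCoords(int[] origin, int[][] paving, int[][] fitting,
                                 int[] arrayShape, int[] r, int[] i) {
            int dim = origin.length;
            int[] out = new int[dim];
            for (int d = 0; d < dim; d++) {
                long c = origin[d];
                for (int k = 0; k < r.length; k++) c += (long) paving[d][k] * r[k];
                for (int k = 0; k < i.length; k++) c += (long) fitting[d][k] * i[k];
                out[d] = Math.floorMod((int) c, arrayShape[d]);    // toroidal arrays
            }
            return out;
        }

        public static void main(String[] args) {
            // A 6x6 array tiled by 2x2 patterns: the paving steps by 2 in each
            // dimension, the fitting enumerates the 2x2 pattern contiguously.
            int[] origin = {0, 0};
            int[][] paving = {{2, 0}, {0, 2}};
            int[][] fitting = {{1, 0}, {0, 1}};
            int[] shape = {6, 6};
            int[] coords = arrayCoords(origin, paving, fitting, shape,
                                       new int[]{1, 2}, new int[]{1, 1});
            System.out.println(coords[0] + "," + coords[1]);   // prints 3,5
        }
    }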

Elementary components are the basic computation units of the application. They have to be defined for each target technology.

The expression of data parallelism is one of the key points of our approach. In domains such as intensive signal processing or telecommunications (typically targeted by embedded systems), applications generally present a lot of potential data parallelism.

In order to broaden the application domain of our metamodel, we have also studied a design methodology for synchronous reactive systems, based on a clear separation between the control and data flow parts. This methodology facilitates the specification of different kinds of systems and improves readability. It also allows the different parts to be studied separately, using the most appropriate existing tools for each of them.

Following this idea, we are particularly interested in the notion of running modes and in the Scade tool. Scade is a graphical development environment coupling data processing and state machines (modeled with the synchronous languages Lustre and Esterel). It can be used to specify, simulate, verify and generate C code. However, this tool does not follow any design methodology, which often makes existing applications difficult to understand and to reuse. We will show that it is also difficult to separate the control and data parts using Scade. Regulation systems are thus better specified using mode-automata, which add an automaton structure to data flow specifications written in Lustre. When we observe the mode structure of a mode-automaton, we clearly see where the modes differ and under which conditions the modes change. This makes it possible to better understand the behavior of the system.
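
As a purely illustrative sketch (a hypothetical two-mode regulation example, not taken from Scade or from our models), the separation promoted by mode-automata can be pictured as a control part that only switches modes and a data-flow part that is selected by the current mode:

    // Illustrative only: control part (mode and transitions) kept separate
    // from the data-flow part (one computation per mode).
    final class ModeAutomatonSketch {
        enum Mode { NORMAL, DEGRADED }

        private Mode mode = Mode.NORMAL;

        // Control part: decide the next mode from an observed condition.
        void step(boolean failureDetected) {
            switch (mode) {
                case NORMAL:   if (failureDetected)  mode = Mode.DEGRADED; break;
                case DEGRADED: if (!failureDetected) mode = Mode.NORMAL;   break;
            }
        }

        // Data-flow part: the same input stream is processed differently in each mode.
        double compute(double input) {
            return (mode == Mode.NORMAL) ? input * 1.0 : input * 0.5;
        }
    }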

Ouassila Labbani is investigating how to further integrate mode automata into our hierarchy of metamodels, as part of her PhD started in 2003.

Hardware Architecture Metamodel

The purpose of this metamodel is to satisfy the growing need of embedded system designers to specify the hardware architecture of the system at a high abstraction level. It makes it possible to dimension the hardware resources precisely enough to be pertinent, while abstracting away irrelevant details so that efficient decisions can be taken.

The hardware architecture metamodel introduces three kinds of components: Active, Passive and Interconnect components.

Figure 5. Hardware Architecture Metamodel

Active components symbolize resources that are able to read or write data in passive components, possibly modifying the data. They include elements such as CPUs, FPGAs, ASICs or DMAs, as well as more coarse-grained elements, such as SMP nodes inside a parallel machine.

Passive components symbolize resources whose function is to hold data. They include all kinds of memories.

Interconnect components connect active components to passive components, or active components to each other. They include elements as simple as a bus, or as complex as a multistage interconnection network.

Components communicate via a send/receive mechanism, and connections between components (via their ports) represent data paths offered by the architecture.

A mechanism similar to the one used in the application metamodel makes it possible to specify repetitive architectures in a compact way. We believe that regular parallel computation units will be more and more present in embedded systems in the future, especially in Systems-on-Chip. This belief is driven by two considerations:

  1. Time-to-market constraints are becoming so tight that massive reuse of computation units is one of the only ways to get the computation power needed for next generation embedded applications.

  2. Parallelism is a good way to reduce power consumption in SoCs. Indeed, at equal computing power, a chip able to run several computations simultaneously can be clocked at a lower frequency than a chip that runs fewer computations per cycle. Since a lower clock frequency also allows a lower supply voltage, which appears squared in the dynamic power equation (P ≈ C·V²·f), this leads to important gains.

The repetitive constructs we propose can be used to model parallel computation units, such as grids, but also complex static or dynamic interconnection networks, or memory banks.

Arnaud Cuccuru has been working towards his Ph. D. on this subject since September 2002.

Association Metamodel

The association metamodel expresses how the application is projected and scheduled on the architecture. It imports the application and architecture metamodels in order to associate their components. The association model associates application components with active architecture components to express which hardware component executes which functionality. If the hardware component is programmable, the application components it is associated with will be implemented in software; otherwise, they will be synthesized as hardware. The dependences between application components are associated with communication routes. These routes are built as sequences of data paths, passive components and active components, and represent the path of the data from one memory to another via processor- or DMA-initiated data exchanges. The inputs and outputs of the functional components are mapped onto memories.
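
A minimal sketch of the information such an association model carries (the Java structure and names are illustrative only, not our metamodel):

    import java.util.List;
    import java.util.Map;

    // Illustrative only: the two facets of an association described above.
    final class AssociationSketch {
        // which active hardware component executes which application component
        Map<String, String> placement;
        // onto which communication route (an ordered sequence of data paths,
        // passive and active components) each data dependence is mapped
        Map<String, List<String>> communicationRoutes;
    }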

Like the application and hardware architecture models, the association model takes advantage of a repetitive and hierarchical representation, which makes it possible to view the association at different granularities and to factorize its representation.

Characterization

In order to automate the construction of such an association model and to optimize it, one needs to add some characteristics to the application and hardware architecture models. Information such as real-time or power consumption constraints characterizes the application model. In the hardware architecture model, the hardware components are characterized by frequency, bus width, memory size, bus protocol, etc. The characteristics that depend on both the application and the hardware architecture, such as the running time or the power consumption of the application components on the different hardware components, are located in the association model.

Optimization

The association model is both the input and the output of the optimization algorithm: the optimization can be seen as a refactoring of the association model. We have developed code transformations that refactor the application so that it maps more easily onto the target hardware architecture. The idea of these code transformations is to label a hierarchical level of the application model with an execution strategy such as sequential, SPMD, cyclic(k) or block, in order to unambiguously specify the distribution and schedule of this level on a given hierarchical level of the hardware architecture model. To compute the optimization, we use a globally irregular, locally regular heuristic, combining a global list heuristic to handle the task parallelism with a local regular heuristic to handle the data parallelism.
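
As an illustration of two of the strategies named above (the HPF-style ownership formulas below are an assumption made for the example, not a definitive account of our transformations), block and cyclic(k) distributions can be summarized by the processor that owns a given repetition:

    // Illustrative sketch: which of p processors owns the i-th repetition out of n.
    final class DistributionSketch {
        // block: contiguous chunks of ceil(n/p) repetitions per processor
        static int blockOwner(int i, int n, int p) {
            int chunk = (n + p - 1) / p;
            return i / chunk;
        }

        // cyclic(k): blocks of k repetitions dealt out to processors in round-robin order
        static int cyclicOwner(int i, int k, int p) {
            return (i / k) % p;
        }

        public static void main(String[] args) {
            // 8 repetitions distributed over 4 processors
            System.out.println(blockOwner(5, 8, 4));    // prints 2
            System.out.println(cyclicOwner(5, 1, 4));   // prints 1
        }
    }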

PSM Metamodels

We focus here on two particular abstraction levels: Transaction Level Model (TLM) and Register Transfer Level (RTL). The metamodels appearing at the PIM level are not complete metamodels of the targeted languages, but rather metamodels providing the concepts needed to execute the mapped application at these abstraction levels. A transformation stage then generates a PSM, in SystemC for example, from the PIM TLM. By refinement, the PIM TLM is transformed into a PIM RTL. Finally, the PIM RTL can be transformed into a PSM, in VHDL for example. Code generation is performed from the PSM models using a transformation tool. For more details, see the section on simulation techniques 3.4.2.1.

Transformation Techniques

Model-to-model transformations are at the heart of the MDA approach. Anyone wishing to use the MDA in their projects sooner or later faces the question: how to perform the model transformations? There are few publicly and freely available tools, and the OMG QVT standardization process [58] is not completed today. To fulfill our needs in model transformations, we have developed ModTransf, a simple but powerful transformation engine. ModTransf was designed following the recommendations made after the review of the first QVT proposals, as well as the latest proposals. Based on these recommendations and on our needs, we have identified the following requirements for the transformation engine:

The proposed solution fulfills all these needs: ModTransf is a rule-based engine taking one or more models as inputs and producing one or more models as outputs. The rules can be expressed using an XML syntax and can be declarative as well as imperative. A transformation is done by submitting a concept to the engine. The engine then searches for the most appropriate transformation rule for this concept and applies it to produce the corresponding result concept. The rule describes how the properties of the input concept should be mapped, after transformation, to the properties of the output concept.
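
The rule-selection principle can be sketched as follows (this is only an illustration of the principle, not ModTransf's actual API): the engine keeps a collection of rules, selects a rule that accepts the submitted concept, and applies it.

    import java.util.List;
    import java.util.Optional;
    import java.util.function.Function;
    import java.util.function.Predicate;

    // Illustrative only: a rule is a guard plus a transformation.
    class Rule<S, T> {
        final Predicate<S> guard;        // does this rule handle the concept?
        final Function<S, T> transform;  // how to build the output concept
        Rule(Predicate<S> guard, Function<S, T> transform) {
            this.guard = guard; this.transform = transform;
        }
    }

    class RuleEngine<S, T> {
        private final List<Rule<S, T>> rules;
        RuleEngine(List<Rule<S, T>> rules) { this.rules = rules; }

        // Search for an appropriate rule (here, simply the first whose guard
        // accepts the concept) and apply it to produce the result concept.
        Optional<T> transform(S concept) {
            return rules.stream()
                        .filter(r -> r.guard.test(concept))
                        .findFirst()
                        .map(r -> r.transform.apply(concept));
        }
    }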

Code generation follows the same principle, but the creation of the output concept is replaced by code generation performed with a template mechanism. A rule specifies one or more templates to use, and each template contains holes that are replaced by values taken from the input concepts.
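
The template mechanism can be sketched in the same illustrative spirit (the template text, hole syntax and property names are invented for the example; they are not ModTransf's actual syntax):

    import java.util.Map;

    // Illustrative only: each hole in the template is replaced by a property
    // value taken from the input concept.
    final class TemplateSketch {
        static String fill(String template, Map<String, String> properties) {
            String out = template;
            for (Map.Entry<String, String> e : properties.entrySet()) {
                out = out.replace("${" + e.getKey() + "}", e.getValue());
            }
            return out;
        }

        public static void main(String[] args) {
            String template = "SC_MODULE(${name}) { /* ... */ };";
            System.out.println(fill(template, Map.of("name", "Filter")));
            // prints: SC_MODULE(Filter) { /* ... */ };
        }
    }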

The ModTransf engine is an Open Source project available on the Internet. Lossan Bondé will pursue this work in his Ph. D., started in September 2003.

