Team Pop Art


Section: New Results

Dependable distributed real-time embedded systems

Participants: P.-F. Dutot, A. Girault [contact person], H. Kalla, G. Vaisman.

Revisiting the bicriteria (length,reliability) multiprocessor static scheduling problem

Our starting point is a dependency task graph and a heterogeneous distributed-memory target architecture. We have revisited the well-studied problem of bicriteria (length, reliability) multiprocessor static scheduling of this task graph onto this architecture. Our first criterion remains the static schedule's length: this is crucial to assess the system's real-time property. For our second criterion, we have considered the global system failure rate (GSFR), seen as if the whole system were a single task scheduled onto a single processor, instead of the usual reliability. The reason for this choice is that the GSFR does not depend on the schedule length, as the reliability does, owing to the way the reliability is computed in the classical model of Shatz and Wang [74]. Under this widely accepted reliability model, the probability that a processor be operational during a duration d is $e^{-\lambda d}$, where $\lambda$ is the failure rate per time unit of this processor (in other words, this is a constant-parameter Poisson law). We have shown that, unfortunately, using the length and the reliability as the two criteria yields counter-intuitive results: for instance, choosing a processor such that the duration d of a given operation is smaller (which is good for the length criterion) induces a higher reliability (which is also good for the reliability criterion), because the function $d \mapsto e^{-\lambda d}$ is decreasing. This is counter-intuitive because it means that replication, which adds scheduled durations, is bad for reliability! It follows that it is difficult to design a satisfactory bicriteria scheduling heuristic. In particular, this has three drawbacks: first, the length criterion overpowers the reliability criterion; second, it is very tricky to control precisely the replication factor of the operations onto the processors, from the beginning to the end of the schedule (in particular, it can cause a “funnel” effect); and third, the reliability is not a monotonic function of the schedule.
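For illustration only (the notation is ours and deliberately simplifies the model: per-operation reliabilities are simply multiplied, replicas included), the schedule reliability under this exponential model can be written as

$R(S) \;=\; \prod_{(o_i,\, p_j) \in S} e^{-\lambda_j d_{ij}} \;=\; \exp\Bigl(-\sum_{(o_i,\, p_j) \in S} \lambda_j d_{ij}\Bigr)$,

where $\lambda_j$ is the failure rate of processor $p_j$ and $d_{ij}$ the duration of operation $o_i$ on $p_j$. Written this way, any additional scheduled duration, for instance an added replica, strictly decreases $R(S)$, which makes the counter-intuitive effect mentioned above explicit.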

Instead, by using the GSFR jointly with the schedule length, we have shown that we can better control the replication factor of each individual task of the dependency task graph given as a specification, with respect to the desired failure rate. Intuitively, this is because the GSFR is the reliability “per time unit”, hence independent of the length. In particular, our new scheduling algorithm does not suffer from the drawbacks mentioned above.
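As a minimal sketch of this “reliability per time unit” reading (the exact definition of the GSFR is given in our publications; here we simply assume it is $-\ln R$ divided by the total scheduled computation time, and all numerical values below are invented):

```python
import math

# Hypothetical schedule: one (failure_rate, duration) pair per scheduled
# operation, replicas included.  All values are invented for illustration.
schedule = [
    (1e-6, 10.0),   # operation A on processor P1
    (2e-6,  7.5),   # operation B on processor P2
    (1e-6,  7.5),   # replica of B on processor P1
]

def reliability(sched):
    """Schedule reliability under the exponential model: product of
    e^(-lambda * d) over all scheduled operations."""
    return math.exp(-sum(lam * d for lam, d in sched))

def gsfr(sched):
    """Global system failure rate, read here as the failure rate that a single
    'equivalent' task would have: -ln(R) divided by the total scheduled time.
    (Assumed formalization, not the authoritative definition.)"""
    total_time = sum(d for _, d in sched)
    return -math.log(reliability(sched)) / total_time

print(f"R    = {reliability(schedule):.8f}")
print(f"GSFR = {gsfr(schedule):.3e} failures per time unit")
```

With this reading, the GSFR reduces to a duration-weighted average of the processors' failure rates, which is indeed insensitive to how long the schedule is.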

To solve this (length, GSFR) bicriteria optimization problem, we have taken the failure rate as a constraint and minimized the schedule length. We are thus able to produce, for a given application task graph and multiprocessor architecture, a Pareto curve of non-dominated solutions, among which the user can choose the compromise that best fits their requirements.
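A hedged sketch of how such a Pareto curve can be assembled (the scheduling heuristic itself is not shown; the (length, GSFR) pairs below are invented and would in practice come from running the length-minimizing heuristic under successive GSFR thresholds):

```python
def pareto_front(points):
    """Keep the non-dominated (length, gsfr) pairs: a point is dominated if
    another point is no worse on both criteria and strictly better on one."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(set(front))

# Invented (length, GSFR) results, e.g. one per GSFR threshold given to the scheduler.
candidates = [(120.0, 5e-6), (95.0, 8e-6), (95.0, 9e-6), (150.0, 3e-6), (110.0, 8e-6)]
for length, rate in pareto_front(candidates):
    print(f"length = {length:6.1f}, GSFR = {rate:.1e}")
```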

Static multiprocessor scheduling of tasks with resource constraints

We address here the problem of scheduling tasks that have strong constraints. The constraints studied are physical resources (a specific processor, memory), waiting until all the predecessor tasks have terminated (time constraint), and real time. Moreover, the computation of a schedule must be accomplished within a short time (one second), and the solution found must be as close as possible to the optimum. We try to optimize several criteria at the same time, even though improving one criterion can have a negative effect on the others. We therefore search for a good compromise between these criteria. The criteria that we use are the completion date of the last task (the makespan), the minimization of the resources used, and the computation time of the scheduling itself. The algorithm chosen to compute the schedule is a branch and bound, because it can provide an exact solution as well as an approximate solution if it is stopped before its end, while guaranteeing the quality of the obtained solution with respect to the optimum. A prototype has already been implemented and tested.
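For illustration only, here is a schematic anytime branch and bound in this spirit (the instance, the durations and the lower bound are invented, and resource constraints other than precedences are omitted for brevity; this is not the team's prototype):

```python
import time

# Toy instance: task -> (duration, set of predecessor tasks).  All values invented.
TASKS = {
    "A": (3, set()),
    "B": (2, {"A"}),
    "C": (4, {"A"}),
    "D": (1, {"B", "C"}),
}
NUM_PROCS = 2

def branch_and_bound(time_budget=1.0):
    """Anytime branch and bound: assigns tasks one by one (in a topological
    order), prunes branches whose lower bound already exceeds the incumbent
    makespan, and returns the best schedule found when the time budget
    (e.g. one second) runs out."""
    order = list(TASKS)          # already a topological order in this toy instance
    best = {"makespan": float("inf"), "schedule": None}
    deadline = time.monotonic() + time_budget

    def explore(i, proc_free, finish, schedule):
        if time.monotonic() > deadline:
            return
        if max(proc_free) >= best["makespan"]:
            return               # prune: this branch cannot beat the incumbent
        if i == len(order):      # complete schedule, strictly better than incumbent
            best["makespan"], best["schedule"] = max(proc_free), dict(schedule)
            return
        task = order[i]
        dur, preds = TASKS[task]
        ready = max((finish[p] for p in preds), default=0)
        for proc in range(NUM_PROCS):
            start = max(proc_free[proc], ready)
            new_free = list(proc_free)
            new_free[proc] = start + dur
            finish[task] = start + dur
            schedule[task] = (proc, start)
            explore(i + 1, new_free, finish, schedule)
            del finish[task], schedule[task]

    explore(0, [0] * NUM_PROCS, {}, {})
    return best

result = branch_and_bound(time_budget=1.0)
print("best makespan:", result["makespan"])
print("schedule (task -> (processor, start)):", result["schedule"])
```

Stopping at the deadline simply returns the incumbent, which illustrates the anytime behaviour discussed above.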

We are currently performing experiments. On the one hand we are trying different initialization algorithms for the branch and bound, and on the other hand we are evaluating several multi-criteria evaluation functions. Since we have a multiprocessor machine to compute the schedules, several possibilities exist: we can either run one parallel branch and bound program, or run several sequential branch and bound programs in parallel. In the former case, we propose to use BOBPP (http://bobpp.prism.uvsq.fr), based on KAAPI (http://kaapi.gforge.inria.fr), to build a parallel branch and bound program. This will allow us to assess the benefits of parallelizing the branch and bound algorithm over several processors. In the latter case, the programs must share the best current solution, so that each program can prune the less promising branches faster.
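A minimal sketch of the second option, several sequential searches sharing the incumbent, written with standard Python multiprocessing rather than BOBPP/KAAPI; the search itself is replaced by a dummy loop, so only the sharing and pruning against the incumbent is illustrated:

```python
import multiprocessing as mp
import random

def worker(seed, best_makespan, lock):
    """One sequential branch-and-bound worker.  The search is faked by random
    candidate makespans; only the sharing of the incumbent is shown."""
    rng = random.Random(seed)
    for _ in range(10_000):
        candidate = rng.uniform(90, 200)      # stand-in for evaluating one leaf
        # Prune against the globally shared incumbent before any costly work.
        if candidate >= best_makespan.value:
            continue
        with lock:                            # publish an improved solution
            if candidate < best_makespan.value:
                best_makespan.value = candidate

if __name__ == "__main__":
    best = mp.Value("d", float("inf"))        # shared best makespan
    lock = mp.Lock()
    procs = [mp.Process(target=worker, args=(s, best, lock)) for s in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f"best makespan found: {best.value:.2f}")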

