Section: Application Domains
Image-guided intervention
Image-guided neurosurgical procedures rely on complex preoperative planning and a rich intraoperative environment. This includes various multimodal examinations (anatomical, vascular and functional explorations for brain surgery) and an increasing number of computer-assisted systems in the Operating Room (OR). To this end, an image-guided surgery system determines a rigid fusion between the patient's head and the preoperative data. With an optical tracking system and light-emitting diodes (LEDs), the patient's head, the microscope and the surgical instruments can be tracked in real time. The preoperative data can then be merged with the surgical field of view displayed in the microscope; this fusion is called “augmented reality”.
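The rigid fusion described above is typically computed from corresponding fiducial points identified both in the preoperative images and in tracker (OR) coordinates. As a minimal sketch (not the actual system implementation), the following solves this point-based rigid registration in closed form with the SVD-based least-squares solution (Kabsch/Horn); the fiducial coordinates and the true transform are invented for illustration.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Closed-form SVD solution (Kabsch/Horn).
    src, dst: (N, 3) arrays of corresponding fiducial positions.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical fiducials: preoperative image space vs. tracked patient space.
rng = np.random.default_rng(0)
pts_img = rng.uniform(-50, 50, size=(6, 3))            # mm, image coordinates
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
pts_or = pts_img @ R_true.T + t_true                   # OR (tracker) coordinates

R, t = rigid_register(pts_img, pts_or)
# Mean fiducial registration error (FRE), a standard accuracy figure.
fre = np.linalg.norm(pts_img @ R.T + t - pts_or, axis=1).mean()
```

On noiseless data the recovered transform matches the true one and the FRE is essentially zero; in practice, fiducial localization noise makes the FRE a useful sanity check before trusting the overlay.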
Unfortunately, the assumption of a rigid registration between the patient's head and the preoperative images only holds at the beginning of the procedure, because soft tissues deform during the intervention. This problem is common to many image-guided interventions, and neurosurgical procedures can be considered a representative case. Brain shift is one manifestation of it, but other tissue deformations also occur and must be taken into account for realistic prediction.
Within this application domain, we aim to develop systems combining surgical guidance tools and real-time imagery in the interventional theatre. This imagery can come from video (using augmented reality procedures), echography or interventional MRI, and, in the future, biological images or thermal imagery.
Per-operative imaging in neurosurgery: Our major objective within this application domain is to correct for the brain deformations that occur during surgery. Neuronavigation systems now make it possible to superimpose preoperative images onto the surgical field under the assumption of a rigid transformation. Nevertheless, non-rigid brain deformations, as well as brain resection, drastically limit the efficiency of such systems. The objective here is to estimate brain deformations using 3D ultrasound and video information.
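One common building block for estimating such deformations from intraoperative ultrasound is local block matching: the displacement of a small image patch between two acquisitions is found by maximizing a similarity score over a search window. The sketch below, a simplified 2D illustration with made-up parameters rather than the method actually used, does an exhaustive search with normalized cross-correlation.

```python
import numpy as np

def match_block(fixed, moving, center, half=4, search=6):
    """Estimate the local displacement of one block by exhaustive search.

    fixed, moving: 2D ultrasound slices (reference and deformed acquisition).
    center: (row, col) of the block in `fixed`.
    Returns the displacement (dy, dx) maximizing normalized cross-correlation.
    """
    r, c = center
    block = fixed[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    block = block - block.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[r + dy - half:r + dy + half + 1,
                          c + dx - half:c + dx + half + 1].astype(float)
            cand = cand - cand.mean()
            denom = np.linalg.norm(block) * np.linalg.norm(cand)
            score = (block * cand).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

Repeating this over a grid of blocks yields a sparse displacement field, which a deformation model can then interpolate and regularize over the whole volume.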
Modeling of surgical gesture expertise: Our objective is to show how the formalization of medical expertise could improve both the planning and the surgery itself. One way is to rely on a previously defined generic model describing surgical procedures. From a database of surgical cases described by this generic model and from a limited set of patient-related parameters (i.e. extrinsic parameters), the closest surgical case can be retrieved to assist surgical planning. Similarly, global surgical scenarios representing the main categories of surgical procedures could be classified according to extrinsic parameters (coming from the current case) and retrieved from the database. New experience gained with this procedure could then feed back into the surgical model. Another possibility is to use the knowledge extracted from the database to pre-fetch image processing procedures (to speed up processing or tune workflow parameters).
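The case-retrieval step can be pictured as a nearest-neighbour search over the extrinsic parameters. In this minimal sketch, the case identifiers, the choice of parameters (age, lesion depth, lesion volume) and their values are all hypothetical, used only to illustrate the retrieval principle.

```python
import numpy as np

# Hypothetical database of surgical cases, each described by a few extrinsic
# parameters: [patient age (years), lesion depth (mm), lesion volume (cm^3)].
cases = {
    "case-A": np.array([34.0, 22.0, 3.1]),
    "case-B": np.array([61.0, 40.0, 12.5]),
    "case-C": np.array([58.0, 15.0, 2.0]),
}

def closest_case(query, database):
    """Return the stored case whose normalized parameters are nearest to `query`."""
    params = np.stack(list(database.values()))
    scale = params.std(axis=0) + 1e-12          # put parameters on a common scale
    dists = np.linalg.norm((params - query) / scale, axis=1)
    return list(database.keys())[int(np.argmin(dists))]

# Extrinsic parameters of the current (hypothetical) patient.
new_patient = np.array([55.0, 18.0, 2.4])
```

Normalizing each parameter by its spread keeps one large-valued parameter (e.g. age) from dominating the distance; a real system would also weight parameters by their clinical relevance.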
Robotics for 3D echography: This project is conducted jointly with the Lagadic project-team. The goal is to use active vision concepts to control the trajectory of a robot based on the contents of echographic images and video frames (taken from the acquisition theatre). Possible applications are the acquisition of echographic data between two remote sites (the patient being away from the referring clinician) and the monitoring of interventional procedures such as biopsies or selective catheterizations.
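Active-vision control of this kind commonly reduces to the classic image-based visual servoing law v = -λ L⁺ (s - s*), which drives visual features extracted from the image toward their desired values. The sketch below illustrates that law in its generic form; the feature values and the identity interaction matrix in the toy loop are placeholders, not a model of the actual probe.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classic image-based visual servoing law: v = -lambda * L^+ (s - s*).

    s, s_star : current and desired visual features extracted from the
                echographic image or video frame (e.g. point coordinates).
    L         : interaction matrix relating feature motion to probe motion.
    Returns the velocity command (a 6-DOF screw in practice) for the robot.
    """
    error = s - s_star
    return -lam * np.linalg.pinv(L) @ error

# Toy closed-loop check with an identity interaction matrix: the feature
# error decays exponentially toward zero, step by (unit) step.
s = np.array([4.0, -2.0, 1.0, 0.0, 0.0, 0.0])
s_star, L = np.zeros(6), np.eye(6)
for _ in range(20):
    s = s + ibvs_velocity(s, s_star, L)
```

In a real setup the interaction matrix depends on the feature type and probe geometry and is only known approximately, which is why the pseudo-inverse and a modest gain λ are used.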
3D free-hand ultrasound: Our major objective within this application domain is to develop efficient, automatic procedures that allow the clinician to acquire 3D ultrasound with conventional echography, and to propose calibrated tools for quantitative analysis and fusion procedures. This will be used to extend the field of view of an examination.
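The geometric core of calibrated freehand 3D ultrasound is mapping each B-scan pixel into 3D world coordinates by chaining the tracked probe pose with a fixed image-to-probe calibration transform. The following sketch shows that chain under simplifying assumptions (identity calibration, invented pixel spacing); real spatial calibration is itself an estimation problem.

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def pixel_to_world(u, v, sx, sy, T_world_probe, T_probe_image):
    """Map a B-scan pixel (u, v) to 3D world coordinates.

    sx, sy        : pixel spacing (mm/pixel), from spatial calibration.
    T_world_probe : tracked probe pose (from the localizer), per frame.
    T_probe_image : fixed image-to-probe calibration transform.
    """
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])   # B-scan plane is z = 0
    return (T_world_probe @ T_probe_image @ p_image)[:3]
```

Applying this to every pixel of every tracked frame places all B-scans in a common world volume, which can then be resampled (compounded) into a regular 3D grid for quantification or fusion.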