Section: Application Domains
Context-Aware Personal Assistant
As embedded computing matures, it is increasingly possible to build low-cost mobile devices that integrate sensing, display, computing, communications, and interaction. As costs have decreased and technologies have matured, the number of such devices in ordinary human environments has doubled roughly every two years, in a progression driven by Moore's law. Wireless ad-hoc network technology allows such devices to be federated to create a new form of interconnected distributed computing environment. In such environments, services are not limited to the resources of a single machine, but may be dynamically composed as an assembly of distributed components. An important challenge is to create "intelligent" services that exploit such environments to provide access to information and communications in a manner that is appropriate and non-disruptive.
To be considered "intelligent", a service or system must be incarnated, autonomous, and situated. Services may be incarnated as an assembly of software and hardware components within a smart environment. The OMISCID middleware, developed by the PRIMA group, enables experiments in the dynamic construction of systems and services from software and hardware components. In order to maintain autonomy, a service or system must be able to monitor and reconfigure itself to continue robust operation in the presence of changes in operating conditions or support hardware. Autonomy is made possible by constructing components using autonomic programming techniques such as self-monitoring, auto-regulation, and self-repair.
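The autonomic techniques listed above can be illustrated with a minimal sketch. The class below is a hypothetical component, not part of the OMISCID API: it probes its own health (self-monitoring), adapts its polling period when stable (auto-regulation), and restarts itself on failure (self-repair).

```python
class AutonomicComponent:
    """Illustrative autonomic component: self-monitoring, auto-regulation,
    and self-repair.  Names and thresholds are invented for this sketch."""

    def __init__(self, restart):
        self.restart = restart   # self-repair action supplied by the caller
        self.failures = 0        # updated by an external fault detector
        self.period = 1.0        # polling period, adjusted by auto-regulation

    def healthy(self):
        # Self-monitoring probe; a real component would inspect queue
        # lengths, latencies, or the status of supporting hardware.
        return self.failures == 0

    def step(self):
        if not self.healthy():
            self.restart()       # self-repair: reinstantiate the component
            self.failures = 0
            self.period = 1.0    # reset regulation after a repair
        else:
            # Auto-regulation: back off the polling period while stable,
            # up to an arbitrary cap of 30 seconds.
            self.period = min(self.period * 2, 30.0)
```

A supervisor would call `step()` periodically; the same pattern scales to assemblies of components, each monitoring and repairing its own part of the service.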
Situated behaviour requires that the actions and reactions of the system be appropriate to the current context. In the PRIMA group we have developed situation models as a method to enable systems and services to model human activities and social contexts. Actions and reactions are made contingent on the current situation in order to provide services that are both appropriate and non-disruptive.
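The idea of making reactions contingent on the current situation can be sketched as two mappings: one from observed context to a named situation, and one from situation to an appropriate action. The situations, observations, and actions below are invented for illustration and do not reproduce PRIMA's actual situation models.

```python
# Illustrative situation model for handling an incoming call.
SITUATIONS = {
    # (user_present, in_meeting) -> situation name
    (True,  True):  "meeting",
    (True,  False): "working_alone",
    (False, True):  "away",
    (False, False): "away",
}

ACTIONS = {
    # situation -> non-disruptive reaction
    "meeting":       "forward_to_voicemail",
    "working_alone": "ring_quietly",
    "away":          "forward_to_mobile",
}

def react(user_present, in_meeting):
    """Make the system's reaction contingent on the current situation."""
    situation = SITUATIONS[(user_present, in_meeting)]
    return ACTIONS[situation]
```

Separating situation recognition from action selection is what lets the same service remain appropriate as the model of human activity is refined.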
Within project PRIMA we are investigating the use of reinforcement learning to automatically construct a context-aware personal agent. Rewards are given by the user to express their satisfaction with the system's actions. A default context model ensures a consistent initial behavior. This default model is provided by the agent programmer and is neutral enough to satisfy a majority of users. The model is then adapted to each particular user in a way that maximizes that user's satisfaction. The learned system must also be able to explain all of its actions in order to gain the user's acceptance.
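The learning scheme described above can be sketched with a simple tabular agent. This is a hedged illustration, not PRIMA's implementation: situations, actions, parameters, and the seeding scheme are assumptions. The value table is seeded so the programmer's default policy wins initially, then updated from user satisfaction rewards; a crude `explain` method reports the learned preference.

```python
import random

class PersonalAgent:
    """Sketch: value table seeded from a default policy, adapted online
    from user satisfaction rewards.  All names are illustrative."""

    def __init__(self, situations, actions, default_policy,
                 alpha=0.3, epsilon=0.1, bias=1.0):
        # Seed values so the default action is preferred at first,
        # giving a consistent initial behavior for every new user.
        self.q = {(s, a): (bias if default_policy[s] == a else 0.0)
                  for s in situations for a in actions}
        self.actions = actions
        self.alpha, self.epsilon = alpha, epsilon

    def act(self, situation):
        # Epsilon-greedy exploration around the current preference.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(situation, a)])

    def learn(self, situation, action, reward):
        # One-step update from the user's expressed satisfaction.
        q = self.q[(situation, action)]
        self.q[(situation, action)] = q + self.alpha * (reward - q)

    def explain(self, situation):
        # Minimal explanation: expose the learned preference and its value.
        best = max(self.actions, key=lambda a: self.q[(situation, a)])
        return "In '%s' I choose '%s' (value %.2f)" % (
            situation, best, self.q[(situation, best)])
```

With this seeding, the agent behaves like the neutral default model until repeated negative rewards in a situation shift its preference toward another action for that user.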