Section: New Results
Types and recursion
The work by Boudol and Zimmer on type inference in the intersection type discipline has been published.
For the year 2005, our efforts on Bigloo have mainly focused on preemptive multi-threading. We have re-implemented many components of the runtime system to support re-entrance. For this, we have had to design and implement a new mechanism for handling errors; the new system uses exceptions. The magnitude of this implementation effort has slowed down the production of Bigloo releases: we have not been able to produce more than one version this year (version 2.7a). However, the activity around Bigloo has continued at approximately the same pace as in previous years. The Bigloo community is still committed to its evolution, as demonstrated by the numerous messages sent to its mailing list: this year, approximately 1000 messages have been sent.
In addition to multi-threading, we have developed new APIs for Bigloo. Even if they are not yet part of the official distribution, we have nearly completed the implementation of libraries for:
Secure networking via SSL.
IMAP mail management.
Web programming. This involves facilities for parsing and producing XML documents, parsing HTTP requests, handling URLs, and decoding CGI arguments.
Multimedia programming, with facilities for handling MP3 and playlist files, JPEG Exif data, soundcards, etc.
All these libraries are meant to be integrated into the standard Bigloo distribution.
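To give a flavor of the intended programming style, a request handler built on the Web programming library might look like the following sketch. All names here (the `web` library, `cgi-arg`, `xml->string`) are assumptions made for illustration, not the actual Bigloo API:

```scheme
;; Hypothetical sketch only: the library and function names below
;; (web, cgi-arg, xml->string) are illustrative, not Bigloo's actual API.
(module hello-handler
   (library web))

(define (greet request)
   ;; decode a CGI argument from the parsed HTTP request
   (let ((name (or (cgi-arg request "name") "world")))
      ;; produce the answer as an XML document
      (xml->string
       `(html (body (h1 ,(string-append "Hello, " name "!")))))))
```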
Stack Virtualisation for Source Level Debugging
The compilation of high-level languages to general-purpose execution platforms raises concerns when it comes to debugging. Indeed, abstractions that are not natively supported by the execution platform are emulated with intermediate data structures and function calls. Unfortunately, the details of this emulation are visible in the execution stack, and this unwanted information greatly reduces the effectiveness of debuggers.
We have developed a novel and language-neutral technique for constructing a virtual view of the stack, in order to mask intermediate function calls generated to emulate high-level abstractions, or even to recover logical frame information lost during the compilation process. In particular, virtual views enable the visualization of two disjoint code representations (e.g., natively compiled code and dynamically interpreted bytecode) in a single unified stack.
We have designed a complete set of virtualization rules that hides all the details of the compilation of Bigloo programs into JVM bytecode. We have managed to mask every emulated language feature, such as higher-order functions, generic functions, exception handling, and runtime code interpretation. Other experiments have been conducted on Rhino and Jython, in order to show that this technique can be applied to a wide variety of languages.
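As a flavor of what such a rule might look like, here is a hypothetical sketch of a rule hiding the intermediate frames produced when closure applications are compiled to JVM method calls. The rule syntax and every name in it are invented for illustration; the actual Bugloo rules differ:

```scheme
;; Illustrative sketch: hide the JVM frames that emulate closure calls.
;; The rule language shown here is invented; Bugloo's actual syntax differs.
(define-stack-view-rule hide-closure-application
   ;; match frames whose method name marks an emulated closure invocation
   (match (lambda (frame)
             (string-prefix? "funcall" (frame-method-name frame))))
   ;; hide them so the debugger shows the logical Scheme call instead
   (action hide))
```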
The complete implementation of this work, along with examples of virtualization rules for various languages, has been integrated into the Bugloo distribution, which is available on-line.
During the year 2005, we have rewritten Skribe so that it can be embedded in a web server. This rewriting was necessary because, at its inception, the Skribe implementation was designed as a batch document processing tool. We also took advantage of this rewriting to integrate improvements based on our four years of experience developing and using Skribe.
The Skribe evaluator relies on three stages. In the first stage, the source document is parsed and a tree representing the document is built. The second stage is devoted to resolving inter-document references. Finally, the third stage is in charge of producing the final document. With this scheme, only the last stage should depend on the output format of the final document. However, this was not the case with the old implementation of Skribe: in order to produce documents with layouts highly adapted to the output media, it was possible to build, in the first stage, a tree that depended on the output format. In a batch approach, this is not a problem, since one execution of the Skribe evaluator is needed per output (i.e., to produce an HTML version and a PDF version of the same document, the Skribe evaluator must be run twice).
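The three stages can be summarized by the following minimal sketch, where `parse-document`, `resolve-references`, and `emit` are illustrative names rather than Skribe's actual API:

```scheme
;; A minimal sketch of the three-stage evaluator; only the last stage
;; should depend on the backend (function names are illustrative).
(define (skribe-eval source backend)
   (let* ((tree (parse-document source))      ; stage 1: build the tree
          (env  (resolve-references tree)))   ; stage 2: resolve references
      (emit tree env backend)))               ; stage 3: produce the output
```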
With our work around Web servers, we found it interesting to embed Skribe in the server. In this approach, when a Skribe document is requested, it is parsed, a tree is built, the references are resolved, and the final document is sent to the client. Since the web server is aware of the capabilities of the requesting client, it can produce different documents for different browsers (with/without images, using/avoiding CSS, ...). In order to preserve performance, the first two stages can be performed only once per document and cached by the server. Thus, when a previously served Skribe document is requested again, the server only needs to produce the client-dependent HTML. The model used in our previous implementation was a real hindrance to embedding Skribe in a Web server. With our new implementation, a Skribe document can be represented on the server as an output-independent tree plus one environment per client. This rewriting was necessary to evolve from our former batch approach to an embedded one.
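Under these assumptions, the server-side caching of the first two stages could be sketched as follows. Since the embedded version is not yet distributed, all function names besides the Bigloo hashtable primitives are hypothetical:

```scheme
;; Hypothetical sketch of per-document caching in the embedded server.
(define *tree-cache* (make-hashtable))

(define (serve-skribe path client)
   (let ((entry (or (hashtable-get *tree-cache* path)
                    ;; first request: run stages 1 and 2 and cache the result
                    (let* ((tree (parse-document (read-file path)))
                           (env  (resolve-references tree))
                           (e    (cons tree env)))
                       (hashtable-put! *tree-cache* path e)
                       e))))
      ;; on every request, only the client-dependent stage 3 runs
      (emit (car entry) (cdr entry) (client->backend client))))
```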
The new version of Skribe still needs to be polished before being officially distributed. It should be available by mid-2006. Once we have a stable version, we will be able to effectively start experimenting with embedded Skribe documents on the server.
Low-cost computers, ADSL, and wireless connections have made ubiquitous computing a reality. Because the Internet is now available nearly everywhere on the planet, most of us are almost permanently connected. Many of us use several computers (say, one at home, one at work, and a roaming laptop). Ideally, all these computers use the same synchronized data. Enforcing this synchronization is not always easy. Fortunately, dedicated tools such as Unison allow two replicas of a collection of files and directories to be stored on different hosts, modified separately, and then brought up to date by propagating the changes in each replica to the other. However, as convenient as these tools are for file and directory synchronization, they are of little help for email synchronization. In this work, we address the specific problem of synchronizing email.
We have designed and developed Bimap, a tool for synchronizing email. It enables emails to be manipulated from different computers and locations. A user can read, answer, and delete emails from various computers, some of which may be momentarily disconnected. Bimap automatically propagates the changes to all these computers. Synchronizing email is essentially a problem of synchronizing lists. Functional languages are therefore natural candidates for implementing such algorithms. Bimap is implemented in one of them, namely Scheme, our favorite programming language, and it benefits from the recent evolution of Bigloo.
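To make the connection with list synchronization concrete, here is a simplified three-way reconciliation in plain Scheme: given the message identifiers recorded at the last synchronization point and the two current replicas, it computes the merged mailbox. Real Bimap must of course also handle message flags and conflicts; this is only a sketch:

```scheme
;; Simplified three-way mailbox reconciliation over message identifiers.
;; base: ids at the last sync; a, b: the two current replicas.
(define (reconcile base a b)
   (define (keep? m)
      ;; an old message survives only if neither side deleted it
      (and (member m a) (member m b)))
   (let ((kept  (filter keep? base))
         (new-a (filter (lambda (m) (not (member m base))) a))
         (new-b (filter (lambda (m) (not (member m base))) b)))
      ;; merged mailbox: surviving old messages plus both sides' additions
      (append kept new-a new-b)))
```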
In addition to synchronizing mail, Bimap is also able to filter and classify email. As such, Bimap could be a potential replacement for procmail. This is highly convenient because it enables email filtering with small, simple Scheme scripts. Two such scripts have been presented: one for classifying emails that belong to mailing lists, and a second one implementing white-listing. Each of these scripts is no more than a few lines of Scheme code.
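As an example of the style of such scripts, a mailing-list classifier could be sketched as follows. The hook and accessor names (`classify`, `message-header`, `deliver`) are assumed for illustration; Bimap's actual interface may differ:

```scheme
;; Sketch of a mailing-list classification filter (names are assumed).
(define (classify msg)
   ;; messages carrying a List-Id header are filed in a per-list folder
   (let ((list-id (message-header msg "List-Id")))
      (if list-id
          (deliver msg (string-append "lists/" list-id))
          (deliver msg "INBOX"))))
```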