Combining SVG and models of interaction to build highly interactive user interfaces

Keywords: SVG, user interface design, visual design, GUI toolkit, software architecture, model-driven architecture, finite state machine

Stephane Chatty
CTO of IntuiLab, HCI and ATM engineer
IntuiLab
Prologue 1. La Pyreneenne
31672
Labege Cedex
France
chatty@intuilab.com

Biography

Stephane Chatty is the CTO and co-founder of IntuiLab. Before that he was a researcher in UI software engineering and human-computer interaction, and he directed a research department that designed tools for air traffic controllers.

Alexandre Lemort
HCI and ATM engineer
IntuiLab
Prologue 1. La Pyreneenne
31672
Labege Cedex
France
lemort@intuilab.com

Biography

Alexandre Lemort is a specialist in UI design and an IntuiKit developer. He was in charge of the design and development of multiple multimodal applications in various application domains.

Dr. Stephane Sire
HCI engineer
IntuiLab
Prologue 1. La Pyreneenne
31672
Labege Cedex
France
sire@intuilab.com

Biography

Stephane Sire is a researcher in user interaction, specialized in groupware systems, e-learning and software engineering for user interfaces.

Jean-Luc Vinot
Graphic designer
CENA
DSNA/SDER, 7 avenue Edouard Belin
31055
Toulouse
France
vinot@cena.fr

Biography


Abstract


The gap between Web design and pure User Interface design is narrowing from day to day, as shown by recent announcements around the concept of rich internet applications. SVG is at the heart of this evolution because it offers a high-end 2D graphics model, which is familiar to graphic designers and fits well within both Web browsers and standalone applications.

IntuiLab designs highly interactive user interfaces that often feature multimodal capabilities: gesture recognition, speech recognition, etc. As it is still difficult to prototype advanced graphical and multimodal applications that fit in a Web browser, we have started to use SVG outside of the browser environment. As regards purely graphical interfaces, our portfolio includes user interfaces with high-end graphics for air traffic control centers, in-car systems and mobile government services. Using SVG has led us to improve our usability-driven software engineering life cycle and to combine SVG with imperative code written in any programming language and other proprietary markup languages.

SVG fits very well in an iterative design workflow. At the early steps of a project, the graphical structure of the different components that make up a user interface can be expressed as a logical structure. This structure is seen as a set of groups and subgroups by the graphic designer, and as a set of components by programmers. It can serve as a contract between the engineers and designers, who can thus start work in parallel: the engineers start programming the components, while the graphic designers start designing the graphical aspects of the components. Each time a mockup is required for an intermediate user test, the graphic designer sends the design as a set of SVG files. The binding between the SVG structure and the user interface component is realized through the identifiers of the graphical elements, which match the names agreed upon during the definition of the logical structure. With this method, we have been able to integrate the final look and feel of several user interfaces very close to the end of the project, with very low integration costs. Another advantage of this method is that the graphical designs are reproduced with high fidelity in the final user interface, which is not always the case when programmers have to translate the drawings into programming language instructions. Of course, this requires a rendering engine that conforms to the SVG requirements. For that purpose, we use the TkZinc open source graphical toolkit.

We have enhanced this iterative design process, based on a separation of graphical elements from the user interface code, by introducing interactive components and finite state machines (FSMs) as first-order objects in our user interface toolkit, IntuiKit. FSMs can be built either declaratively in XML or with programming instructions. The FSMs are used to control which parts of the SVG trees must be displayed. The link between an FSM and some SVG subtrees is realized in components through a specialized switch-like node that has been introduced into our model. As with FSMs, the switch-like nodes can be built either declaratively in XML or with programming. Our switch-like node is an extension of the SVG switch node with a condition variable that tests the state of an FSM. SVG trees, FSMs and switches are integrated into the application scene graph that is at the heart of our toolkit. One advantage of this approach is that we do not need complex scripting language code to control the visibility of SVG nodes, as is often the case when programming interactive components with SVG and Javascript for Web browser applications. FSMs provide a simple interface between the code of the application and the SVG trees that represent the current states of the components.

Now that we have successfully developed several user interfaces with our toolkit based on this SVG-centric approach (see examples at http://www.intuilab.com/gallery), we are investigating how to specify and reuse animations and speech grammars for multimodal applications in a similar way. Our hope is that a standardization of finite state machines (or StateCharts) could in turn be applied to Web-based user interfaces in the future. We also hope to convince everyone that software engineering concerns would be better addressed if DTDs or schemas for describing user interaction were designed independently of the SVG standard.


Table of Contents


1. Introduction
2. UI development process
     2.1 Participatory design: Design with the user rather than design for the user
     2.2 Team work process
     2.3 Integration of graphical components and behaviors
3. Developing SVG applications with IntuiKit
     3.1 The IntuiKit application object model
     3.2 The switch module
     3.3 The FSM module
     3.4 Building control structures with switch and FSM
     3.5 A component example
     3.6 Making generic components
4. Related works
5. Example application
6. Conclusion
Acknowledgements
Bibliography

1. Introduction

The gap between Web design and pure User Interface (UI) design is narrowing from day to day. This is shown by recent announcements around the concept of rich internet applications or dashboard widgets in Mac OS X Tiger. Graphic designers in UI development teams are one of the keys to this evolution. With the growing understanding that their work can improve users' performance and acceptance of new products, it is now sought in the design of specialised user interfaces and not only on the Web, from aircraft cockpits to plant supervision systems. For instance, figure 1 illustrates a graphic designer's work for air traffic control.

digitrafic-control.png

Figure 1: Visual techniques when used by graphic designers can enrich the message conveyed, Jean-Luc Vinot

SVG has great potential in desktop applications because it combines high-quality visualisation and user interactivity. Its wide palette of visual techniques makes it overwhelmingly superior to raster-based GUIs. With the coming of SVG 1.2, it will be extended with UI controls and video. Until now, however, using the full power of SVG has been quite difficult. There is an increasing number of export facilities available in professional vector graphics editors such as Adobe Illustrator, but they are far from covering all the features of SVG. For example, DOM manipulation, animations, interactivity and scripting are not possible. This makes the export facility of such editors useful for small web applications, but nearly unusable for producing real applications at reasonable cost.

Wider acceptance of Human-Computer Interaction processes and graphic design requires solutions that preserve the ability to redesign graphics, and avoid duplication of effort. The direction we chose in the design of our UI software development suite, IntuiKit, is to support software engineering processes that give graphic designers a more central role, while preserving the ability of programmers to structure their code appropriately. The vision is that designers become software producers. Their artwork is a new type of software component that can be managed in its own way then merged with other components, just like software components obtained from different source files have to be merged by compilers and link editors in traditional software engineering.

This article describes a method that aims at making the design and development of applications with high-end 2D graphics more accessible to industrial companies. We then present IntuiKit and discuss implementation details. Finally, we illustrate the use of IntuiKit on a real application and present perspectives for our work.

2. UI development process

Initially, user interfaces (UIs) were developed with traditional software engineering cycles such as the V-cycle or the waterfall model. While this is quite suitable when the specifications are fully defined, these approaches showed limitations in the case of UIs, particularly for complex systems. It is difficult to define the requirements accurately and to formalize them before design begins. This introduces major risks concerning acceptance by users.

In order to overcome these issues, user-centered and iterative designs have been introduced: the software is developed through a series of development cycles that continually refine the prototype or application. In each cycle, the design is elaborated with users, refined and tested, and the results of testing in each cycle feed into the design focus of the next cycle. Although user-centered design and iterative design are advocated in the Human-Computer Interaction (HCI) literature for their benefits for better acceptance by users, they are not widely practised in industrial companies due to a number of weaknesses:

To overcome these difficulties, we have adopted a UI development process combining a spiral design cycle and a V development cycle. Figure 2 illustrates our approach, divided into three phases:

dphi_designcycle.gif

Figure 2: IntuiLab UI development process

In this section, we describe and illustrate our development process using the example of a small SVG-based application initially designed by Kriss Rockwell (US Airways) for pilot training: the Airbus A321 exterior lighting systems freeplay (figure 3). It illustrates the role of a participatory design session in defining a first model of the application, and the parallel work on early prototypes that is permitted by separating the application logic from its presentation.

usair_a321.gif

Figure 3: Airbus A321 exterior lighting systems freeplay, Kriss Rockwell

2.1 Participatory design: Design with the user rather than design for the user

Before developing software, project managers need to understand what its users need. If they are successful, it will improve users' performance and acceptance of the new software. This information is often difficult to obtain from just talking to people or observing their behavior, and these difficulties have led to the misconception that people do not know what they want or cannot tell you what they want.

Participatory design has emerged as a response to this difficulty. Unlike other approaches to understanding users, participatory design is an approach to design with the user rather than to design for the user (Muller and Khun 2003). It assumes that users should play an active role in the creative process: they are not simply the subjects of UI testing, they are actively involved in the design and decision-making process. It also encourages the participation of a wide variety of people, such as graphic designers, software engineers, sales persons, etc. The concept of participatory design is becoming standard practice in the computing industry. It involves different techniques such as role-playing activities or paper prototyping.

In the case of the A321 exterior lighting systems, we used paper versions of screen displays to define a high-level system structure by identifying the fundamental objects and their relationships. Figure 4 presents the analysis and decomposition of the lighting control panel.

panelDescription.png

Figure 4: Description of the lighting control panel

The other pillar of object-oriented design is the specification of dynamic behaviours. To specify the overall system behavior, we played through the interaction sequences needed to perform a series of typical tasks. For example, to have both the Taxi and Takeoff lights illuminated, the nose wheel light control must be set to the T.O. position.

Stick            States description

Strobe           OFF (strobe lights are off), AUTO (strobe lights are automatically switched on when the shock absorber is not compressed) and ON (strobe lights flash white)
Beacon           Operation of the two flashing red lights, one on top and one on the bottom of the fuselage. OFF (beacon lights are off) and ON (beacon lights flash red)
Wing             Operation of two single-beam lights on each side of the fuselage, to illuminate the wing leading edge and engine air intake to detect ice accretion
Nav and Logo     OFF (lights are off), 1 (logo lights are on when the main gear struts are compressed or the flaps are extended at 15° or more) and 2 (circuit for the second set of navigation lights is activated)
Runway turn off  Operation of the runway turnoff light installed on the nose gear strut. OFF and ON
Land             Operation of the landing lights
Nose wheel       OFF (all lights are off), TAXI (only the taxi lights are illuminated) and T.O. (both taxi and takeoff lights are on)

Table 1: Description of overall behavior

Object definition and scenario modelling led us to the definition of a kind of contract. This contract provides a name for each object, such as noseWheel, as well as a template name for object states, such as noseWheel_off, noseWheel_taxi and noseWheel_to.

2.2 Team work process

Programmers and graphic designers can work independently once the initial contract has been established. For each object, the contract defines an SVG structure that enables interoperability between look and feel, while leaving graphic designers enough flexibility to create visually stunning and impressive applications. For example, we need three layers to represent the three possible states of the nose wheel light control (OFF, TAXI and T.O.). Figure 5 shows the Adobe Illustrator layers palette that represents the structure used as a contract between the graphic designer and the programmer for the lighting control panel.

illustrator.jpg

Figure 5: Structure of the SVG file for the lighting control panel

One of the most important advantages of this approach is the possibility of creating several skins for the UI. There are two ways to accomplish this goal: through CSS files and through new SVG files. Although the easiest way to customize the representation of the UI is through CSS, SVG files may be replaced and swapped with no effect on the application's functionality, provided that they respect the contract between the graphic designer and the programmer.

Describing behavior is a different facet of a UI than graphics. The literature provides various models of discrete or continuous behaviour in user interfaces, such as UML statecharts, finite state machines, Petri nets, data flows, constraints, etc. To implement the Airbus demonstration, we have chosen finite state machines (FSMs). Although the FSM model has well-known limits, such as the state-explosion issue, it is rich enough to define simple discrete behaviours. The overall system behavior can be defined by two types of FSM:

Figure 6 shows the FSM of a cyclic three-step control. This FSM has three states (P0, P1 and P2) and three transitions labelled with abstract events (turnP0P1, turnP1P2 and turnP2P0).

behavior3Pstick.png

Figure 6: Finite State Machine of a cyclic three-step control
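The cyclic behaviour of Figure 6 can be sketched as a plain transition table. This is a minimal illustration in Python, not IntuiKit code:

```python
# A minimal sketch of the cyclic three-step FSM of Figure 6: states P0, P1,
# P2 and transitions turnP0P1, turnP1P2, turnP2P0. Illustrative only.

class FSM:
    def __init__(self, initial, transitions):
        # 'transitions' maps (state, event) -> next state
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        # Events that do not label a transition from the current state
        # are simply ignored
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

stick = FSM("P0", {
    ("P0", "turnP0P1"): "P1",
    ("P1", "turnP1P2"): "P2",
    ("P2", "turnP2P0"): "P0",
})
```

The transition table is data, not code, which is what makes a declarative XML encoding of the same machine straightforward.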

2.3 Integration of graphical components and behaviors

The integration of graphical components and behaviors is the last step in a cycle of our development process. It consists first in pairing graphical objects with states of finite state machines (FSMs). Figure 7 shows the association of the FSM states with the corresponding graphical representations of the nose wheel light control.

fsmGraphics.png

Figure 7: The FSM-graphics pairs for the nose wheel stick

Then, the programmer must have a means to associate events on graphical objects, such as mouse clicks, with the abstract events of the FSM. For example, to change the nose wheel light control state from TAXI (state P1) to T.O. (state P2), the user left-clicks on the graphical object named noseWheel_taxi. Thus, the abstract event turnP1P2 must be bound to "left-click on noseWheel_taxi".
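This event-to-event binding can be sketched as follows. The names bind and dispatch are illustrative, not IntuiKit API:

```python
# Hypothetical sketch of binding concrete input events to abstract FSM
# events; 'bind' and 'dispatch' are made-up names for illustration.

class RecordingFSM:
    """Stands in for an FSM element; records the abstract events it gets."""
    def __init__(self):
        self.received = []
    def handle(self, event):
        self.received.append(event)

bindings = {}

def bind(source, spec, abstract_event, fsm):
    # e.g. a left click on noseWheel_taxi produces the abstract turnP1P2
    bindings[(source, spec)] = (abstract_event, fsm)

def dispatch(source, spec):
    # Called by the input layer when a concrete event occurs on an object
    if (source, spec) in bindings:
        abstract_event, fsm = bindings[(source, spec)]
        fsm.handle(abstract_event)

fsm = RecordingFSM()
bind("noseWheel_taxi", "left-click", "turnP1P2", fsm)
dispatch("noseWheel_taxi", "left-click")
```

The FSM thus never sees mouse coordinates or widget names, only abstract events, which keeps the behaviour model independent of the skin.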

3. Developing SVG applications with IntuiKit

SVG allows a separation of the UI presentation from its control logic. This fits well in an iterative design process involving team work, because graphic designers and developers can start working in parallel as soon as they have agreed on a logical structure for retrieving the graphical components. However, this can only be achieved if one does not clutter the SVG files with tens of lines of scripting language code.

We support the iterative UI design process with a custom software suite, IntuiKit, that makes use of SVG for the UI presentation. However, IntuiKit is not dependent on the graphical modality. It has been designed as a rendering engine for running model-based software components. This makes it closer in many respects to a language interpreter, or even to a browser, than to a GUI toolkit.

A UI is the result of the instantiation and combination of several models. SVG is one of these models. The models are bound together into software components. The definition of a component is itself driven by a model that we call the IntuiKit core model. Every other model is designed as an extension of the core model.

The set of models for a given UI depends on the interaction style and on the preferred modelling approach: for instance, models of graphical objects and behaviours for direct manipulation; speech and grammar rules for speech interfaces. The software components making up a given UI are declared and managed independently; their different constitutive models can be authored with specialized editing tools (such as vector drawing applications for SVG), written by hand with a basic text editor, or generated automatically.

The IntuiKit runtime engine loads the components directly from XML files. It is also possible to declare and to instantiate components directly with native code written in an imperative language, presently Perl or C++. In that case, the component code can be mixed together with other application code or with non-modelled UI code. It is also possible to mix native code and XML components in a same application. The IntuiKit application object model binds components together.

3.1 The IntuiKit application object model

As with a Web application based on XHTML or pure SVG, an IntuiKit application is based on a hierarchical structure, a tree of elements. Elements are the basic building blocks of all models: graphical objects, windows, control structures, etc. For instance, SVG drawing elements, such as rectangles and paths, are also IntuiKit elements. Elements define their own properties, which are key-value pairs.

The interpretation of an application consists in a series of traversals of the application tree, in response to end-user actions or to internal changes in the system state. Each element in the tree has an internal state that determines the side effects of the traversal. This state is either unrendered, rendered or suspended. An unrendered element has never been traversed; it needs to be rendered when first traversed. A rendered element has already been rendered, as its name suggests; any change to it since its last traversal must be converted to adequate side effects during the next traversal. Finally, the rendering of a suspended element is suspended, which implies that it does not provoke side effects when traversed, even though it must be able to be quickly rendered again if its state changes to rendered.

The rendering process and the side effects depend on the nature of the element. The rendered and suspended states of the IntuiKit versions of SVG elements are mapped to their visibility attribute: the rendered state maps to the visible value, while the suspended state maps to the hidden value. Changes to the values of the element's properties result in changes to the corresponding attributes of the graphical presentation of the element when it is visible.
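The three-state life cycle and its mapping onto SVG visibility can be sketched as follows (assumed semantics, not the actual IntuiKit engine):

```python
# Sketch of the three-state element life cycle and its mapping onto the SVG
# 'visibility' attribute. Assumed semantics, not the IntuiKit implementation.

UNRENDERED, RENDERED, SUSPENDED = "unrendered", "rendered", "suspended"

class Element:
    def __init__(self, name):
        self.name = name
        self.state = UNRENDERED
        self.visibility = None       # side effect of the traversal
        self.children = []

    def traverse(self):
        # Convert the element state into side effects on the presentation
        if self.state == RENDERED:
            self.visibility = "visible"
        elif self.state == SUSPENDED:
            # kept ready to be rendered again quickly, but not displayed
            self.visibility = "hidden"
        for child in self.children:
            child.traverse()

root, leaf = Element("root"), Element("leaf")
root.children.append(leaf)
root.state, leaf.state = RENDERED, SUSPENDED
root.traverse()
```

A suspended subtree keeps its full state, so switching it back to rendered is a matter of flipping one attribute on the next traversal.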

Some elements of an application tree can be excluded from the three-state model that regulates their life cycle. These elements are treated as pure models, which means they will never be rendered directly, much like SVG elements between <defs> tags. They can only be copied into different parts of the tree, which is similar to creating a new instance in a prototype-based language. So in a way, the IntuiKit application model is a generalization of the SVG document object model, with custom elements and an extension mechanism for creating new elements.

Some builtin control structures, described below, control the current state of elements. As is common with UI code based on a main loop processing events, the control logic in IntuiKit relies on an event processing model in which any element can send and receive events. A specialized Binding element is used to bind event specifications with callbacks written in native code.

The application tree is loaded from one or several XML files, such as SVG files. It can also be created through an API in native code, or with a mixture of XML files and imperative programming language files. The values of the properties defined by each element can be set from Cascading Style Sheets (CSS) files, or directly from within the XML files or the native code files. A referencing mechanism based on XPath expressions can be used to store references to elements inside properties in XML files. It is equivalent to the pointer based referencing system in imperative programming languages.

The separation of presentation from application logic comes from the provision of presentation as graphical components in SVG files, and from the provision of control structures as dedicated elements belonging to the switch or the FSM modules.

3.2 The switch module

The switch module of IntuiKit shares common points with the switch module in the XForms recommendation. Like it, a switch element contains one or more case elements, any one of which can be rendered at a given time while all the others are suspended. Put another way, it can also be seen as an extension of the SVG switch element, which renders only the first of its direct child elements whose conditional attributes match given environment variables.

In IntuiKit, the switch element has a builtin 'current_branch' property whose value gives the identifier of the case child that must be rendered to the exclusion of the others. Changing the value of this property and traversing the switch sub-tree results in changing the case that is rendered.

The following example is a switch-based component that displays the plane's strobe lights, whose image is pointed to with the XPath expression "plane.svg#strobe", when the value of its 'current_branch' property is "on". It displays nothing when the value is "off".

<switch id="switch" xmlns="http://www.intuilab.com/2005/intuikit">
  <case id="on">
    <use xlink:href="file://plane.svg#strobe"/>
  </case>
  <case id="off"/>
</switch>
	

Example 1: Switch XML example

The switch module takes advantage of a method often used by graphical designers who use Adobe Illustrator to build behaviours in Web pages: they store the states of their objects into different layers and manually simulate the transitions by turning the visibility flag of the layers on and off. To achieve that with IntuiKit, each layer must be given a unique id and be loaded inside a case branch of a switch element.
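The behaviour of the switch element can be sketched as follows (an illustrative model, not IntuiKit code):

```python
# Sketch of the switch element: only the case whose identifier matches
# 'current_branch' is rendered; the other cases are suspended, not
# destroyed. Illustrative model only.

class Switch:
    def __init__(self, cases, current_branch):
        self.cases = cases            # maps case id -> content
        self.current_branch = current_branch

    def rendered(self):
        # Content of the active branch; None for an empty case
        return self.cases.get(self.current_branch)

strobe = Switch({"on": "plane.svg#strobe", "off": None}, "off")
strobe.current_branch = "on"    # changing the property changes the display
```

Because the inactive cases stay loaded, flipping 'current_branch' back and forth is cheap, which matches the designers' layer-visibility technique.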

3.3 The FSM module

The FSM module wraps native IntuiKit code for managing finite state machines. An FSM element defines one property for each input event that can trigger a transition. The value of this property must be set to a valid event specification at runtime. An FSM can also trigger events during transitions. Its builtin 'current_state' property stores the name of its current state.

As an example, the following "automaton3p.xml" file defines the FSM, represented in Figure 6, that the developer produced for the nose stick component:

<intuikit xmlns="http://www.intuilab.com/2005/intuikit">
 <fsm id="fsm">
  <property name="turnp0p1"/>
  <property name="turnp1p2"/>
  <property name="turnp2p0"/>
  <transitions>
   <transition from="p0" to="p1" on="turnp0p1"/>
   <transition from="p1" to="p2" on="turnp1p2"/>
   <transition from="p2" to="p0" on="turnp2p0"/>
  </transitions>
 </fsm>
</intuikit>

Example 2: Automaton3p.xml file

When an FSM is rendered, it starts listening for input events and changes its state as they occur, triggering the corresponding transitions. When an FSM is suspended, it stops listening to events. FSMs, like any other elements, can be nested inside switch elements. This is a powerful way to create hierarchical finite state machines.
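The nesting of FSMs inside switch cases can be sketched as follows (illustrative, not IntuiKit code): a nested FSM only reacts to events while its enclosing case is rendered.

```python
# Sketch of hierarchical state machines obtained by nesting FSMs inside
# switch cases. Class names are illustrative, not IntuiKit API.

class NestedFSM:
    def __init__(self):
        self.active = False
        self.events = []
    def handle(self, event):
        if self.active:               # a suspended FSM ignores events
            self.events.append(event)

class StateSwitch:
    def __init__(self, cases, current_branch):
        self.cases = cases            # maps case id -> nested FSM
        self.current_branch = current_branch
        self.traverse()
    def traverse(self):
        # Rendering the active case activates its FSM, suspends the others
        for branch, fsm in self.cases.items():
            fsm.active = (branch == self.current_branch)

inner, other = NestedFSM(), NestedFSM()
sw = StateSwitch({"p0": inner, "p1": other}, "p0")
inner.handle("ev")   # delivered: the p0 case is rendered
other.handle("ev")   # ignored: the p1 case is suspended
```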

3.4 Building control structures with switch and FSM

One of the main purposes of the FSM element is to be paired with a switch element, the FSM controlling the branch that the switch renders. This is conceptually equivalent to considering that they share a common state.

There are some examples of a similar pattern in Web applications, where a hard-coded Javascript control structure is used to move the focus between different elements. This is the case in the pizza-ordering application of IBM's Multimodal team, which accepts a size and various toppings via check boxes or speech recognition (Richard 2003). However, in this type of example the control structure has not been captured by a declarative model and requires a mix of XML and scripting language code.

IntuiKit introduces a means for declaring a property in a parent component that is shared with some properties of its direct children. In particular, a parent component can define a 'state' property that merges the 'current_state' and 'current_branch' properties of its child FSM and switch elements. The resulting object behaves as if the three properties were the same. As a side effect, each change of state of the FSM element will also change the state of the switch element. The pseudo-code below illustrates the corresponding XML syntax:

<component xmlns="http://www.intuilab.com/2005/intuikit">
 <property name="state" extends="f.current_state; s.current_branch"/>
 <fsm id="f">...</fsm>
 <switch id="s">...</switch>
</component>

Example 3: Merge of properties
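The merging mechanism can be sketched as follows (assumed mechanics; the Shared class and its names are purely illustrative):

```python
# Sketch of property merging: setting the parent 'state' property updates
# both the FSM state and the switch branch. Illustrative only.

class Shared:
    def __init__(self):
        self._aliases = []
    def merge(self, obj, attr):
        # Declare that 'obj.attr' is an alias of this shared property
        self._aliases.append((obj, attr))
    def set(self, value):
        for obj, attr in self._aliases:
            setattr(obj, attr, value)

class Fsm:
    current_state = None

class Sw:
    current_branch = None

fsm, sw = Fsm(), Sw()
state = Shared()
state.merge(fsm, "current_state")
state.merge(sw, "current_branch")
state.set("p1")    # one assignment drives both elements
```

Since the switch tracks the FSM through the shared value, no scripting code is needed to keep the displayed branch in sync with the machine's state.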

3.5 A component example

The following code is the "nosestick3p.xml" file that declares one of the two three-position sticks used in the Airbus demo application. It clearly shows the effective separation of graphical models from control models. The code defines one component containing a three-state switch and a three-state automaton (loaded from an external file) that control the rendering of the stick.

The skin for the stick is defined in an external SVG file that contains the corresponding graphical components. The event specifications that trigger the transitions of the automaton are declared with an IntuiKit syntax; they use XPath expressions to point to the graphical objects that trigger the events.

<intuikit xmlns="http://www.intuilab.com/2005/intuikit">
 <component id="nosestick">
  <property name="state" value="p1" extends="fsm.current_state; switch.current_branch"/>

  <switch id="switch">
   <case id="p0" >
      <use xlink:href="lightPanel.svg#nose_bg"/>
      <use id="stick_p0" xlink:href="lightPanel.svg#nose_off"/>
   </case>
   <case id="p1" >
      <use xlink:href="lightPanel.svg#nose_bg"/>
      <use id="stick_p1" xlink:href="lightPanel.svg#nose_taxi"/>
   </case>
   <case id="p2" >
      <use xlink:href="lightPanel.svg#nose_bg"/>
      <use id="stick_p2" xlink:href="lightPanel.svg#nose_to"/>
   </case>
 </switch>

 <use id="fsm" xlink:href="automaton3p.xml">
   <property name="turnp0p1" source="#stick_p0" spec="ButtonPress-1"/>
   <property name="turnp1p2" source="#stick_p1" spec="ButtonPress-1"/>
   <property name="turnp2p0" source="#stick_p2" spec="ButtonPress-1" />
  </use>
 </component>
</intuikit>

Example 4: XML description of a three-steps stick

3.6 Making generic components

The models and the techniques described above support the creation of full applications. When a new model is developed as a custom native code extension and is not available in XML, it can be instantiated in native code. However, it is also possible to create new XML parser modules. These parsers and the corresponding markup are isolated in their own namespaces. This boils down to creating new "code-behind" elements that fit within the overall IntuiKit architecture.

In any case, it would be tedious to declare all the interactive components of an application in individual XML files when they differ only by the names of the SVG files that contain their skins, or by the identifiers of the skin components. This is the case, for instance, for all the two-position sticks, or for all the three-position sticks in the example used throughout this article. Their definitions differ only by the identifiers of their graphical components.

For that purpose we have defined an 'eval(name)' function that can be used as the string value of any property, or as the value of the "xlink:href" attribute of the use element. That function returns the value of the property called 'name' of the component from within which it is called. The following example declares a generic component that displays an SVG file whose name is contained in its 'stick_off' property. The component is then instantiated with a "file://panel.svg#noseWheel_off" target:

<component id="display">
    <property name="stick_off"/>
    <use xlink:href="eval(stick_off)"/>
</component>

<use xlink:href="#display">
    <property name="stick_off" value="file://panel.svg#noseWheel_off"/>
</use>

Example 5: Example of customization
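The resolution of such parameterized references can be sketched as follows (a hypothetical resolver, not the IntuiKit implementation):

```python
# Sketch of resolving 'eval(name)' references when a component is
# instantiated: the reference is replaced by the value of the named
# property of the instantiating component. Hypothetical resolver.

import re

def resolve(attribute_value, properties):
    # Replace a whole-string eval(name) reference by the property value
    match = re.fullmatch(r"eval\((\w+)\)", attribute_value)
    if match:
        return properties[match.group(1)]
    return attribute_value

props = {"stick_off": "file://panel.svg#noseWheel_off"}
skin = resolve("eval(stick_off)", props)
```

Each instantiation supplies its own property values, so one generic component file can produce all the sticks that share a structure but differ in skin.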

The full Airbus demo application can be programmed with only 6 short XML files (automaton2p.xml, automaton3p.xml, stick2p.xml, stick3.xml, light.xml, airbus.xml) and 2 SVG files (plane.svg, panel.svg) by using the parameterization mechanism. In that particular application there is no need for native code. Most of the files, such as the automaton description files, can be reused in other projects. They can also be generated with specialized graphical editing tools.

4. Related works

The IntuiKit approach to developing SVG applications has evolved from our practical experience of working with graphic designers. It is also the result of our work on modelling languages for declarative UI programming. Some ideas presented in this article can be related to several technologies that have appeared during the last years of the Web's evolution.

As we have already mentioned, the IntuiKit switch element is a generalization of the switch module in XForms (Dubinko 2003) and of the switch element in SVG. The IntuiKit core module is used to structure a UI into a set of reusable software components; it shares some ideas with the HTML Components W3C Member Submission (Wilson 1998) and with the XUL/XBL (XUL) way of creating components through a binding. This work has recently been endorsed by the W3C with the ongoing sXBL recommendation (Ferraiolo 2005). In the near future, the frontier should blur between the notion of document, which underlies the Web browser, and the notion of software component, which is explicit in IntuiKit.

There have been other attempts to introduce declarative control structures. The repeat module in XForms maps data to presentation; it is a kind of iterator, or a constraint declaration, depending on the point of view. Laszlo has an explicit 'state' element, but it can be manipulated only programmatically through JavaScript methods (Laszlo 2005). In this regard, IntuiKit is closer to UML 2.0 with its hierarchical state machines.

Finally, IntuiKit is a prototype-based modelling language in which each component sub-tree can serve as a model for cloning new instances (Dony 2002). The mechanism for merging properties, such as the 'state' property of FSM and switch elements, allows the dynamic creation of split objects (Bardou 1996). In such a split object, the parent component and its FSM elements represent the control logic view of the object, while the parent component and its switch elements represent its graphical view (assuming the switch elements contain only graphical elements).
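This split-object reading can be sketched as follows: an FSM carries the control logic view and a switch carries the graphical view, both bound to the same 'state' property of the parent component. The syntax below is a simplified illustration in the style of the earlier examples, not the exact IntuiKit element vocabulary.

```xml
<!-- Illustrative sketch of a split object: the fsm and the switch
     are merged on the parent component's 'state' property. -->
<component id="button">
    <property name="state" value="off"/>
    <!-- control logic view: transitions update the shared state -->
    <fsm property="state">
        <state name="off"><transition event="press" to="on"/></state>
        <state name="on"><transition event="release" to="off"/></state>
    </fsm>
    <!-- graphical view: the displayed branch follows the same state -->
    <switch property="state">
        <use xlink:href="file://button.svg#off"/>
        <use xlink:href="file://button.svg#on"/>
    </switch>
</component>
```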

5. Example application

We have developed prototypes and pre-operational products in the automotive, aerospace, manufacturing, telecommunications and defence domains with IntuiKit. This has given us an opportunity to test the concepts described in this article. Four graphic designers were involved in these projects, one at a time. Some had no previous experience of working with programmers. In all cases the collaboration was remote, meetings being reserved for participatory design sessions. These experiences gave us evidence of the gains brought by our model-based UI design process in terms of schedule and effort.

The development of a departure manager for airports is a typical use case. It was designed and developed over a period of a few weeks in late 2003. A company had developed algorithms to optimize the sequence of departures and to coordinate the controllers who guide taxiing aircraft. Air traffic controllers are known to be very demanding professional users, and the company wanted to embed its algorithms in touchscreen workstations that would both provide excellent usability and appeal to users and decision makers. The application was meant for pre-operational tests, and consequently had to offer full functionality. The company also had a very tight schedule because it wanted to exhibit the product at a professional convention so as to gain customers. This strict deadline obviously played a key role in the organisation of the project.

The project team was composed of a graphic designer and two programmers (one of them the lead interface designer, the other a domain expert). The project started with a discount participatory design session (Muller 1993) that produced a paper prototype for every interface to be built in the project. Figure 8 shows one of the prototypes.

esquisse.png

Figure 8: The paper prototype that served as a reference for group work

This prototype served as the reference for all further developments on the interface, in several ways. First, the layout of the different parts of the interface was the result of collective work and served as the basis for future composition work by the designer. Second, all parts of the prototype were given a name: static parts (printer, column, timeline, etc.) as well as a template name for dynamic parts (strips, plan). The names served as a basis for informal communication between participants, who never met again until the end of the project. Third, immediately after the session the lead designer decomposed the prototype into a tree of components, starting with the top-level parts. This tree represented the architecture of the application, both in terms of software components and of graphical components. She used the tree as a contract between the actors of the project, especially between the programmers and the graphic designer.
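Such a reference tree could be expressed in the component markup used throughout this article. The sketch below is a plausible illustration only: the names come from the paper prototype, but the actual project tree was different and more detailed.

```xml
<!-- Plausible illustration of the reference component tree;
     only the part names are taken from the paper prototype. -->
<component id="departureManager">
    <component id="timeline"/>
    <component id="column">
        <component id="strip"/>  <!-- template, one instance per flight -->
    </component>
    <component id="printer"/>
    <component id="plan"/>       <!-- template for flight plans -->
</component>
```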

The programmers and the designer worked independently after this meeting. Contact was limited to clarification questions and to visual design proposals and feedback. The designer started working on the general impression he wanted to convey, and tested ambiances, colours, harmonies, textures, etc. Figure 9 gives a sample of his work.

ambiance2.jpg

Figure 9: The designer's work on the global picture

Meanwhile the programmers used the reference component tree to code the application. For each component they defined control structures, calculations, and connections with a functional core residing on a network server. They also drafted basic graphics for testing purposes. They used a professional drawing tool to produce some of the graphics; others were coded with the native Perl API. The result of their work is shown in Figure 10. It is of very limited visual quality, but sufficient to test usability as well as the network connection.

ground0.png

Figure 10: The application with programmer-made graphics

The designer had finished maturing the design and preparing his elements, such as fonts and patterns, three to four weeks before the deadline. He then started to create the structured graphical components. During that process he was able to test them in context himself with the version of IntuiKit installed on his computer: he just had to drop the SVG files into the appropriate folder to see them executed by the latest version of the code received from the developers. He then sent the updated SVG files back to the programmers by email. Figure 11 shows two SVG designs that are equivalent from the application's point of view, though not exactly equivalent for the user.

twostrips.png

Figure 11: Two skins for the strips

The final graphics were produced a few days before the deadline, after several cycles of testing and redesign. The simplicity of graphics integration into the application, as well as the ability to work in parallel, allowed very late iterations while the programmers were still busy testing and debugging the application. The final result is shown in Figure 12. As a final note, the product was a great success with potential users and customers at the exhibition, and the UI was perceived as a competitive advantage.

dman.jpg

Figure 12: The final application after full integration

We have compared the departure manager project with an equivalent project conducted with a traditional approach, in which visual elements produced by a graphic designer are reproduced with a programming language by the programmers. We observed three major improvements of our approach over the traditional one:

We have also identified several limitations and possible improvements:

6. Conclusion

Practical experience with IntuiKit shows that interactive UI design with SVG is worthwhile. The most obvious benefit of high-end 2D vector graphics is the visual quality of the result when designed by professionals. But the efficiency gained by separating the graphic model from the interaction logic is also important: it leaves more time to improve the usability of the design.

Now that we have successfully developed several design-intensive applications with SVG files as application resources, our plan is to reuse or invent more declarative models to increase the coverage of UI code that can be implemented with declarative languages. Our ongoing work includes constraint-based layout control, non-linear geometric transformations (fisheye, perspective wall, etc.), animations, and speech grammars for multimodal applications.
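To give an idea of the kind of non-linear transformation involved, a classical fisheye magnification (in the spirit of Sarkar and Brown's graphical fisheye views; the formula is ours to illustrate, not part of IntuiKit) remaps a normalized distance $x \in [0, 1]$ from the focus point as:

$$f(x) = \frac{(d+1)\,x}{d\,x + 1}$$

where $d \ge 0$ is the distortion factor: $d = 0$ leaves the layout unchanged, while larger values magnify the neighbourhood of the focus at the expense of the periphery. Expressing such transformations declaratively would let them apply uniformly to any SVG sub-tree.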

In particular, we are looking forward to the standardization of models such as finite state machines (or statecharts) by the W3C, and to the upcoming SVG 1.2. We also hope to convince people that, by working with a model-based approach to UI programming, the gap between UI design and system design processes can be reduced without losing creativity.

Acknowledgements

A special thanks goes to Patrick Lecoanet (DSNA/SDER), the main author of TkZinc (http://www.tkzinc.org), the advanced open-source graphical rendering engine used in IntuiKit. Pierre Dragicevic, Celine Schlienger and Stephane Vales contributed to the implementation of IntuiKit. Yves Rinato (Intactile Design) designed the departure manager, which is shown with the kind permission of Sofreavia. Thanks to those who made significant contributions but are not listed here.

Bibliography

[BARDOU96]
Bardou Daniel and Dony Christophe (1996). Split objects: a disciplined use of delegation within objects. ACM SIGPLAN Notices, Proceedings of the 11th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications, Volume 31 (10)
[DONY02]
Dony Christophe, Malenfant Jacques and Cointe Pierre (2002). Prototype-based languages: from a new taxonomy to constructive proposals and their validation. ACM SIGPLAN Notices, conference proceedings on Object-oriented programming systems, languages, and applications, Volume 27 (10).
[DUBINKO03]
Dubinko Micah, Klotz Jr. Leigh L., Merrick Roland and Raman T. V. (editors) (2003). XForms 1.0, recommendation, W3C, 14 October 2003, http://www.w3.org/TR/xforms/
[FERRAIOLO05]
Ferraiolo Jon, Hickson Ian, Hyatt David (editors) (2005). SVG's XML Binding Language (sXBL), W3C Working Draft, 5 April 2005, http://www.w3.org/TR/sXBL/
[LASZLO05]
Laszlo Systems, Inc. (2005). Software Engineer's Guide to Developing Laszlo Applications. Visited in June 2005 at http://www.laszlosystems.com/lps-3.0/docs/guide/
[MIRANTI03]
Miranti Richard, Jaramillo David, Ativanichayaphong Soonthorn and White Marc (2003). Developing Multimodal Applications using XHTML+Voice, VoiceXML Review, Volume 3 (5), September/October 2003. Visited in June 2005 at http://www.voicexmlreview.org/Sep2003/features/Sep2003_dev_mm_apps.html
[MULLER93]
Muller Michael J., Kuhn Sarah (1993). Participatory design, Communications of the ACM, Volume 36 Issue 6, pp. 24-28, ACM Press, New York, NY, USA.
[WILSON98]
Wilson Chris (editor) (1998). HTML Components, Componentizing Web Applications. W3C member submission, NOTE-HTMLComponents-19981023, http://www.w3.org/TR/1998/NOTE-HTMLComponents-19981023
[XBL]
XUL Planet. Introduction to XBL. Visited in June 2005 at http://www.xulplanet.com/tutorials/xultu/introxbl.html
