SVG, layered user interfaces and end to end models


Table of Contents

Introduction
Model-based User Interface Design
Application task models, data and meta-data
Abstract User Interface
Concrete User Interface
Final User Interface
Conclusion
References

Despite having been very closely involved with the development of HTML, I believe the time has now come to move authoring of Web applications to a new level, where the emphasis is on what you want the application to do rather than on battling with the details of how that is achieved on any given browser or delivery platform.

This talk will describe the goals of a proposed new W3C Incubator Group on applying research on model-based user interfaces and end-to-end models to a declarative treatment of Web applications, one that features diagrams and business rules rather than hand-coding HTML, CSS and JavaScript.

At first sight, model-based design sounds like yet another top-down design process, with all of the problems that entails. The aim, however, is to support agile development processes, where trying to define all the requirements up front is considered a risky proposition. To paraphrase Scott Ambler:1

  • Agile modelers travel light and create models which are just barely good enough.

  • Agile developers solve today’s problem today and trust that they can solve tomorrow’s problem tomorrow.

  • Agile modeling is both evolutionary and collaborative.

  • Requirements change over time, so embrace this concept and adopt techniques which allow you to react effectively.

SVG has an important role to play in this framework, e.g. as a basis for round tripping a variety of diagram notations to XML and back, for presenting application data, for theming controls and for purely decorative purposes.

Research work on model-based design of user interfaces has sought to address the challenge of reducing the costs for developing and maintaining user interfaces through a layered architecture that separates out different concerns:

  1. Application task models, data and meta-data

  2. Abstract Interface (device independent, e.g. select 1 from N)

  3. Concrete Interface (device dependent, e.g. use of radio buttons)

  4. Implementation on specific devices (e.g. HTML, SVG or Java)

Each layer embodies a model of behavior (e.g. dialog models and rule-based event handlers) at a progressively finer level of detail. The relationships between the layers can be given in terms of transformations, for example, between objects and events in adjoining layers. XML is well suited as a basis for representing each layer, with the possible exception of the final user interface, which may be generated automatically, guided by author-supplied policies.
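
The layering can be made concrete with a small sketch. The abstract vocabulary (`select1`, `item`) and concrete vocabulary (`radiogroup`, `listbox`) below are invented for illustration; they are not taken from any standard, but show the kind of abstract-to-concrete transformation the text describes.

```python
import xml.etree.ElementTree as ET

# Hypothetical abstract-UI vocabulary: <select1> means "choose one of N",
# with no commitment to any particular widget.
ABSTRACT = """
<select1 label="Colour">
  <item>Red</item>
  <item>Green</item>
  <item>Blue</item>
</select1>
"""

def to_concrete(abstract_xml, max_radio=4):
    """Map an abstract select1 to a concrete control: radio buttons
    for small cardinality, a list box otherwise."""
    src = ET.fromstring(abstract_xml)
    items = [i.text for i in src.findall("item")]
    widget = "radiogroup" if len(items) <= max_radio else "listbox"
    out = ET.Element(widget, label=src.get("label"))
    for text in items:
        ET.SubElement(out, "option").text = text
    return out

concrete = to_concrete(ABSTRACT)
print(concrete.tag)  # radiogroup, since there are only three items
```

In a real suite this mapping would be one of many author-overridable transformations, and the `max_radio` threshold would come from a delivery-context policy rather than a hard-coded default.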

High level development suites can be provided to shield authors from the underlying XML representations. For example, a data model could be manipulated as a diagram, while the user interface could be defined via drag and drop operations together with editing values in property sheets. The development suite is responsible for maintaining the mappings between layers and verifying their consistency. Authors can choose to provide alternative mappings as needed to address different delivery contexts.

This is the topmost layer and abstracts away from details of the user interface. Task models name the tasks the user is able to perform, along with how these decompose into sub-tasks, temporal ordering of tasks, and pre and post conditions. ConcurTaskTrees2 is a graphical notation for task models that has been developed by Fabio Paternò at the Italian Institute of Information Science and Technologies:
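
To illustrate what "temporal ordering of tasks" means in practice, here is a toy rendering of the enabling operator (t1 >> t2: t2 only becomes available once t1 has completed) found in ConcurTaskTrees-style task models. The task names and the function are invented for the example; this is not ConcurTaskTrees' actual notation.

```python
# Toy illustration of the "enabling" temporal operator in a task model:
# a sub-task becomes available only once its predecessor has completed.

def enabled_tasks(sequence, completed):
    """Given sub-tasks joined by >> and a set of completed task names,
    return the tasks the user may perform next."""
    for task in sequence:
        if task not in completed:
            return [task]   # only the first unfinished task is enabled
    return []               # all sub-tasks done

book_flight = ["select route", "choose seat", "pay"]
print(enabled_tasks(book_flight, set()))             # ['select route']
print(enabled_tasks(book_flight, {"select route"}))  # ['choose seat']
```

A full task model would also cover the other temporal operators (choice, interleaving, disabling) and pre/post conditions on each task.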


UML is a widely supported suite of diagrammatic notations for modeling requirements, with the means to export these to Java stubs as a starting point for development work. UML includes a variety of different types of diagrams as appropriate to different kinds of requirements capture. Wikipedia divides UML diagrams into three categories:3

  • Structure diagrams which emphasize what things must be included. These include class diagrams, component diagrams and object diagrams.

  • Behaviour diagrams that describe what must happen. These include activity diagrams, state machine diagrams and use case diagrams.

  • Interaction diagrams, a subset of behaviour diagrams that emphasize the flow of control and data. These include communication diagrams, interaction overview diagrams, sequence diagrams and timing diagrams.

UML's class diagrams are sometimes used for ontologies defined with OWL, which so far lacks its own standard diagrammatic notation. Without diagrams, complex ontologies can become very hard to understand, and yet they are critical to modeling relationships within application data. Another family of notations has been developed for modeling business processes, for example the Business Process Modeling Notation (BPMN).4


BPMN is not executable as such; a possible solution would be to combine diagrams of processes with high-level rule languages. Both would be transformed into an internal XML representation that is in turn compiled into a virtual machine language for efficient execution. What kinds of XML representation are appropriate, and how do they relate to SVG? This takes us back to the very early work on SVG, where Bob Hopgood argued for the need for two levels of markup: one for the semantics of a diagram, and the other for its presentation. Unfortunately, at the time commercial vendors such as Adobe were much more interested in the presentation markup, and work on semantics was put on hold. XSLT and XBL solve some of the problems of mapping semantics to presentation, but are not a complete solution. For a user of an application development suite, it is important to be able to preserve both the semantics and the presentation when saving work and later resuming it. The ability to combine semantic and presentation markup in the same document makes this easier to support. SVG allows for such a combination, but without standards such documents cannot be edited interoperably across different editing tools. Further work on standards is therefore necessary.
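
One way to carry both levels in a single file is to place the semantic markup in its own namespace inside the SVG document, for instance within the standard `metadata` element. The sketch below does exactly that; the `sem` namespace URI and the `process-step` element are invented for the example.

```python
import xml.etree.ElementTree as ET

# One document, two levels of markup: the <rect> is the presentation,
# while an element in a made-up "sem" namespace records what it means.
SVG_NS = "http://www.w3.org/2000/svg"
SEM_NS = "http://example.org/diagram-semantics"  # hypothetical namespace

ET.register_namespace("", SVG_NS)
ET.register_namespace("sem", SEM_NS)

svg = ET.Element(f"{{{SVG_NS}}}svg", width="200", height="100")
ET.SubElement(svg, f"{{{SVG_NS}}}rect",
              x="10", y="10", width="120", height="40", fill="#cde")
meta = ET.SubElement(svg, f"{{{SVG_NS}}}metadata")
ET.SubElement(meta, f"{{{SEM_NS}}}process-step", name="approve order")

doc = ET.tostring(svg, encoding="unicode")
print(doc)
```

An editor that understood the `sem` namespace could round-trip the diagram's meaning, while any SVG viewer would still render the presentation untouched; the interoperability problem is that no standard yet fixes what that semantic vocabulary should be.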

This layer assumes a commitment to a class of devices and a choice of modalities. This can include the provision of alternative user interfaces as appropriate to the delivery context (user preferences, device capabilities and environmental conditions). On a desktop device with a large high resolution screen there is an opportunity to place more information and to support a wider range of tasks than on a small screen such as a mobile phone. The following table suggests some of the considerations for realising a selection task on desktop and mobile devices:

Table 1. Effect of cardinality on UI

  Cardinality   Desktop                Mobile
  -----------   --------------------   --------------
  Low           Radio buttons          Radio buttons
  Medium        List box               Drop-down list
  High          List with scrollbars   Drop-down list
  Huge          Search box             Search box
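
Table 1 can be read as a small decision function. The sketch below transcribes it directly; the numeric thresholds separating "low", "medium", "high" and "huge" are invented for the example, since the table leaves them to the designer.

```python
# Direct transcription of Table 1: pick a concrete selection control
# from the item count and the device class. Thresholds are assumptions.

def selection_control(n_items, device):
    assert device in ("desktop", "mobile")
    if n_items <= 4:                  # low cardinality
        return "radio buttons"
    if n_items > 500:                 # huge: search instead of browse
        return "search box"
    if device == "mobile":            # medium or high on a small screen
        return "drop-down list"
    return "list box" if n_items <= 20 else "scrolling list"

print(selection_control(3, "mobile"))    # radio buttons
print(selection_control(50, "desktop"))  # scrolling list
```

In a model-based suite this choice would live in the abstract-to-concrete mapping, so that the same abstract selection task yields different concrete controls per delivery context.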

On mobile devices, a temporal ordering of tasks may be imposed by display and memory limitations. Trying to adapt a page designed for the desktop to run on a mobile device is fraught with problems. High-end devices like the Apple iPhone provide the means to view a scaled-down version of a large virtual display, and to zoom into areas of interest, but this is just a workaround to allow access to unadapted websites, and it provides an impoverished user experience compared with pages designed with the mobile user in mind.

To facilitate the deployment of accessible websites, the concrete user interface should conform to the WAI-ARIA ontology of roles, states and events for user interfaces. This was developed to enable assistive technology such as screen readers to gain access to the user interface controls generated within a web page through a combination of markup and scripting. The following figure illustrates the scale of the ontology, although it won't be legible at normal window sizes.
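
To make the idea tangible, here is the kind of markup WAI-ARIA enables: a scripted custom control (plain `div` elements here) exposing its role and state to assistive technology. The `role` and `aria-*` attribute names are real ARIA vocabulary; the generator function and the surrounding markup are invented for the sketch.

```python
# Generate a custom radio-group control annotated with ARIA role and
# state attributes, so a screen reader can treat the divs as radios.

def aria_radiogroup(label, options, checked):
    lines = [f'<div role="radiogroup" aria-label="{label}">']
    for opt in options:
        state = "true" if opt == checked else "false"
        lines.append(f'  <div role="radio" aria-checked="{state}"'
                     f' tabindex="0">{opt}</div>')
    lines.append("</div>")
    return "\n".join(lines)

print(aria_radiogroup("Size", ["S", "M", "L"], checked="M"))
```

A concrete-to-final transformation that targets HTML could emit these attributes automatically, so accessibility falls out of the tooling rather than depending on each author remembering to add it.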


The concrete user interface should define some, but not all, aspects of the final presentation. This ensures that there is sufficient flexibility in being able to realize the presentation on a variety of devices and platforms. Application developers should be able to define themes and other policies for guiding the transformation for a particular device/platform. For visual interfaces, the concrete user interface defines layout (including pagination) and the choice of user interface controls, e.g. radio buttons versus list boxes. The precise visual appearance and behaviour are left unspecified.

Proprietary markup languages like Microsoft's XAML and Adobe's MXML have built-in assumptions about the class libraries that will be used to realize them, and as a result these languages are ill-suited to a cross-platform standard. There is an opportunity for new work on defining the requirements for the concrete UI, building upon experience in industry and academia, and this is one of the goals for the proposed Incubator Group. It is important to make clear that the concrete UI is intended for use in authoring environments; there is no requirement for its implementation on delivery platforms like web browsers. The goal is instead to enable cross-vendor authoring, so that application developers are neither locked into a single vendor nor forced to author at the level of HTML and JavaScript.

The concrete user interface can be transformed automatically into the final user interface for delivery to a specific device. For an HTML browser, this would involve the generation of the HTML markup and associated JavaScript. It might also involve the generation of server-side scripts, for example, when it is necessary to provide fall backs in the case that client-side scripting has been disabled.
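
A minimal sketch of that final transformation for an HTML target follows. The concrete `listbox` vocabulary is the same invented one used for illustration throughout; the output wraps the control in an ordinary form so that submission still works when client-side scripting is disabled.

```python
import xml.etree.ElementTree as ET

# Hypothetical concrete-UI fragment to be lowered to final HTML.
CONCRETE = """<listbox name="country" label="Country">
  <option>France</option>
  <option>Japan</option>
</listbox>"""

def to_html(concrete_xml, action="/submit"):
    """Lower a concrete <listbox> to an HTML <select> inside a plain
    form, with a visible submit button as the no-script fallback."""
    src = ET.fromstring(concrete_xml)
    opts = "".join(f"<option>{o.text}</option>"
                   for o in src.findall("option"))
    return (f'<form action="{action}" method="post">'
            f'<label>{src.get("label")} '
            f'<select name="{src.get("name")}">{opts}</select></label>'
            f'<noscript><input type="submit" value="Go"/></noscript>'
            f'</form>')

print(to_html(CONCRETE))
```

The same concrete fragment could be lowered to SVG-themed controls or to another toolkit instead; only this last transformation needs to know about the delivery platform's quirks.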

SVG itself is an up-and-coming delivery target now that most browsers provide native support for it. The variations in the scripting interfaces of current implementations remain a major challenge, e.g. for keystroke input, but it is to be hoped that support for DOM3 Events and the SVG microDOM will soon be widely deployed.

Another possibility is to target the ubiquitous Adobe Flash Player, given that it is now on virtually all desktop computers. The open source compiler and programming language haxe19 is proving to be a very effective solution for developing Flash applications. In a separate paper at this conference, I report on work on implementing support for viewing and editing SVG using Flash. Additional experiments have demonstrated the feasibility of using Flash to render XML for a wide range of user interface controls, raising the possibility of directly interpreting markup for the concrete user interface layer. Flash would also allow authors to work with the application authoring suite from within their Web browser. It even seems feasible to deploy XHTML2+XForms+SVG as a Flash application, although support for client-side scripting would be challenging.

One unresolved issue is how to enable Web search engines to adequately index rich Web applications. A search robot is effectively blind and unable to make sense of images. As web pages increasingly use client-side scripting to construct the user experience, less and less information can be gleaned from the page's markup alone: the robot has to emulate a browser, run the page's scripts and apply any associated style sheets before it can make sense of the page. This is where WAI-ARIA could help the robot make better sense of the user interface, which in turn could motivate website developers to make their pages both indexable and accessible.

Model-based design has the potential to usher in a new generation of Web authoring tools that support agile processes and reduce the costs of development, testing and maintenance, especially for delivery to a wide range of devices. The approach uses a layered architecture to separate out different concerns. The underlying markup will inevitably be cumbersome for direct editing, but this can be avoided through the use of development tools that hide the markup and manage the mappings between different levels of abstraction. A W3C Incubator Group is proposed for evaluating research and advising W3C on opportunities for standardization. If successful, this will allow developers to switch vendors. The approach would also free developers from battling with HTML and JavaScript, and with the complications arising from variations across Web browsers.

Finally, I would like to acknowledge the support of JustSystems20, and my manager Hideki Hiura for enabling me to do this work.

  1. Scott Ambler on agile processes, see http://www.agiledata.org/essays/agileDataModeling.html

  2. ConcurTaskTrees, see http://giove.cnuce.cnr.it/concurtasktrees.html

  3. Wikipedia on UML, see http://en.wikipedia.org/wiki/Unified_Modeling_Language

  4. Wikipedia on BPMN, see http://en.wikipedia.org/wiki/BPMN

  5. Puerta, A.R. A Model-Based Interface Development Environment. IEEE Software, 14(4), July/August 1997, pp. 41-47.

  6. E. Schlungbaum. Model-Based User Interface Software Tools - Current State of Declarative Models. Technical Report 96-30, Graphics, Visualization and Usability Center, Georgia Institute of Technology, 1996.

  7. Paulo Pinheiro da Silva. User Interface Declarative Models and Development Environments: A Survey. In /Interactive Systems: Design, Specification, and Verification/ (7th International Workshop DSV-IS, Limerick, Ireland, June, 2000), Ph. Palanque and F. Paternò (Eds.). LNCS Vol. 1946, pages 207-226, Springer, 2000.

  8. Souchon, N., Vanderdonckt, J., A Review of XML-Compliant User Interface Description Languages, Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

  9. Workshop on Developing User Interfaces with XML: Advances on User Interface Description Languages organized at Advanced Visual Interfaces 2004.

  10. Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L., López Jaquero, V. UsiXML: a Language Supporting Multi-Path Development of User Interfaces, Proc. of 9th IFIP Working Conference on Engineering for Human-Computer Interaction jointly with 11th Int. Workshop on Design, Specification, and Verification of Interactive Systems EHCI-DSVIS’2004 (Hamburg, July 11-13, 2004). LNCS, Vol. 3425, Springer-Verlag

  11. Limbourg, Q. Multi-Path Development of User Interfaces. Ph.D Thesis, Université catholique de Louvain, 2004.

  12. Montero, F., López Jaquero, V., Vanderdonckt, J., González, P., Lozano, M. Solving the Mapping Problem in User Interface Design by Seamless Integration in IdealXML. Proc. of 12th Int. Workshop on Design, Specification and Verification of Interactive Systems, Newcastle (UK), July 2005.

  13. UsiXML Website an XML markup language that describes the UI for multiple contexts of use such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces, and Multimodal User Interfaces.

  14. Multimodal TERESA website, a transformation-based environment, supporting multimodal interfaces designed and developed at the HCI Group of ISTI-C.N.R.

  15. ConcurTaskTrees website, a notation for task model specifications developed by Fabio Paternò to overcome limitations of notations previously used to design interactive applications.

  16. Model-driven architecture covers a suite of standards produced by the OMG. These include QVT (queries, views and transformations) and XMI (XML Metadata Interchange). More details can be found on the OMG MDA home page

  17. ATL (ATLAS Transformation Language) is a model transformation language developed at INRIA to answer the QVT Request For Proposal from OMG. More details are available on the ATL project page

  18. Accessible Rich Internet Applications (WAI-ARIA). A W3C specification that provides an ontology of roles, states, and properties that set out an abstract model for accessible interfaces.

  19. Haxe, see http://haxe.org/

  20. JustSystems, see http://www.justsystems.com