Requirements for an On-line Knowledge-based Anatomy Information System

James F. Brinkley and Cornelius Rosse

Structural Informatics Group, Department of Biological Structure

University of Washington

Seattle, Washington USA

 

Abstract

User feedback from the Digital Anatomist Web-based anatomy atlases, together with over 20 years of anatomy teaching experience, was used to formulate the requirements and system design for a next-generation anatomy information system. The main characteristic distinguishing this system from current image-based approaches is that it is knowledge-based. A foundational model of anatomy is accessed by an intelligent agent that uses its knowledge about the available anatomy resources and the types of users to generate customized interfaces. Current usage statistics suggest that even partial implementation of this design will be of great practical value for both clinical and educational needs.

 

Introduction

Anatomical information is fundamental for biomedicine, not only because there is a large amount of it, but also because anatomy serves as a framework for organizing other kinds of data.

Increased multimedia capabilities, together with new data sources such as the Visible Human [1], have led to many CD-ROM based anatomy atlases and tutorials [2,3]. The World Wide Web provides the potential for these and other anatomical resources to be delivered over the Internet. However, most Web-accessible resources have to date been little more than advertisements for CD-ROM products. The availability of the Web, together with these new anatomical information resources, presents opportunities and challenges, both technical and economic, for delivering on-demand anatomical information that is customized to a wide variety of users.

Over the past several years we have been developing a distributed system for organizing and delivering anatomical information, both over the Internet and via CD-ROM [4]. Based on this system we have created a series of interactive image-based atlases that are available both on the Web and on CD-ROM. These atlases consist of annotated images through various body regions, as well as stored animations of 3-D graphics reconstructions. Given an annotated image, a user is able to click on regions to see structures, to take an online quiz, to retrieve an associated animation, or to generate a "pin diagram" showing the names of all the structures in the image.

The atlases are widely used, both within our institution and throughout the world, receiving over 7000 hits per day. However, on-line surveys and student feedback show a need for additional features. These features include 1) content from more parts of the body, 2) links to additional symbolic information besides just the name, 3) more control over the navigation, 4) varying levels of detail depending on the user, and 5) direct manipulation of 3-D models.

The purpose of this paper is to define the requirements for a next-generation user interface that meets these needs, and to specify the architecture of the system that can implement this interface. In the remaining sections we define the interface requirements for such a system, specify the architectural framework, and describe the current status and plans for implementation.

User-interface requirements

The requirements for the next-generation user interface are based on over 20 years of experience teaching anatomy by one of the authors (CR). In an attempt to discourage rote memorization, standard templates were developed for the major classes of anatomical structures. The slots for these templates include reference images that allow the student to build mental visualizations of anatomical entities, as well as symbolic information that the student needs to associate with the entities. Prior to each lecture the student is required to 1) generate a mental model of individual or clustered anatomical entities, and 2) using the templates, acquire and invent new information that is relevant to these structures. The values for these slots are then refined during class and in later exercises. By learning the kinds of information that need to be acquired about classes of structures, the student develops a knowledge framework that provides a higher level conceptualization of anatomy than can be acquired by rote memorization.

The generic template that applies to all structures includes the following slots: name, synonyms, class, and definition.

Additional slots are included depending on the anatomical class (e.g., muscles have origin and insertion).

A next-generation anatomy information system should be organized around these templates, which, when appropriately represented in the computer, define a foundational model that can be used to provide knowledge-based access to anatomy. As described elsewhere in these proceedings [5], a foundational model of anatomy organizes anatomical concepts into several component models, including an ontological model capturing classes and subclasses of anatomical structures, each characterized by defining attributes, such that all anatomical structures can be represented as instances of one or more of these classes. The foundational model also organizes the concepts into a structural model capturing spatial relationships, and a transformational model capturing developmental changes.
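As an illustration only, such a template could be represented computationally as a frame with slots. The following Python sketch is hypothetical: the slots name, synonyms, class, and definition come from the generic template above, while the class-specific slot mechanism and all example values are our own assumptions, not the actual representation used in the foundational model.

```python
# Hypothetical sketch of a frame-style structure template.
# Only name, synonyms, class, and definition come from the generic
# template described above; the extra_slots mechanism and all
# example values are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class StructureTemplate:
    name: str                           # preferred anatomical name
    synonyms: List[str] = field(default_factory=list)
    anatomical_class: str = ""          # e.g., "Muscle", "Organ"
    definition: str = ""
    # Class-specific slots, added depending on the anatomical class
    # (e.g., origin and insertion for muscles).
    extra_slots: Dict[str, str] = field(default_factory=dict)


biceps = StructureTemplate(
    name="Biceps brachii",
    synonyms=["biceps"],
    anatomical_class="Muscle",
    definition="A two-headed muscle of the anterior arm.",
    extra_slots={"origin": "scapula", "insertion": "radial tuberosity"},
)
```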

 

Classes of user interface

Although the information should be organized around the foundational model, the end-user will not necessarily see this model unless he or she requests it. Instead the information should be presented in different ways depending on the type of user, the manner of use, and the category of information. These three axes define a 3-D interface "space" within which each style of interface can be defined. Within each style the specific contents of the interface will depend on the anatomical entities under study.

The type of user determines the level of detail that is appropriate to present. Users include K-12 students; undergraduates; professional students, such as medical students; postdoctoral fellows and residents; professionals, such as cardiac surgeons; and the lay public.

The manner of use includes reference, tutorial and consultant modes. In reference mode the user is free to navigate throughout the information system, at the level of detail that is appropriate for his or her user type, taking advantage of any indices or tables of contents that are available. Such a mode is analogous to browsing in a library, and is the primary mode of access for our current on-line atlases. It is also the mode on which the other modes should be built, since it provides access to all the information.

In tutorial mode the system acts like a teacher, guiding the student through the information resources. In this case the student might not be presented directly with the values in the structural templates, but would be required to fill in the values before going on to the next lesson.

In consultant mode the system would answer specific queries posed by the user. This is likely to be the mode of most use to a clinician. For example, a radiation oncologist might like to see an annotated CT section through the head, in a non-standard orientation that matches the CT image taken of a cancer patient. Such an annotated section would allow the oncologist to utilize the expected location of critical structures, such as the facial nerve, when planning the radiation treatment for a tumor of the head and neck. The oncologist is not interested in being taught anatomy or in wandering through large amounts of irrelevant information. Instead he or she needs very specific information in a timely manner.

The category of information is either spatial or symbolic. Although the two are linked, the manner of presentation is different. In general both types of information should be presented at the same time, in either overlapping or side-by-side windows, each kept in sync with the other.
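To make the three axes concrete, an interface style could be modeled as a point in this 3-D interface space. The following is a minimal sketch; the type and member names are simply the categories named above, not part of any actual implementation.

```python
# Hypothetical sketch: an interface style as a point in the 3-D
# interface space defined by user type, manner of use, and category
# of information. All names are illustrative.

from dataclasses import dataclass
from enum import Enum


class UserType(Enum):
    K12 = "K-12 student"
    UNDERGRADUATE = "undergraduate"
    PROFESSIONAL_STUDENT = "professional student"
    PROFESSIONAL = "professional"
    LAY_PUBLIC = "lay public"


class MannerOfUse(Enum):
    REFERENCE = "reference"
    TUTORIAL = "tutorial"
    CONSULTANT = "consultant"


class InfoCategory(Enum):
    SPATIAL = "spatial"
    SYMBOLIC = "symbolic"


@dataclass(frozen=True)
class InterfaceStyle:
    user: UserType
    manner: MannerOfUse
    category: InfoCategory


# A medical student browsing spatial information in reference mode:
style = InterfaceStyle(UserType.PROFESSIONAL_STUDENT,
                       MannerOfUse.REFERENCE,
                       InfoCategory.SPATIAL)
```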

Spatial information includes 2-D annotated images (e.g., cadaver sections, x-rays, MR slices, illustrations, and photographs). These are the traditional materials present in hardcopy atlases. Annotations, arranged in levels of detail, allow the images to become interactive. Spatial information also includes 3-D segmented image volumes, such as those from the Visible Human, and 3-D models that can be combined into 3-D scenes.

The presentation of spatial information depends on its type and on changes in technology. Annotated images can be presented as clickable image maps that either return the name of a structure or link to other images. 3-D models can be combined into 3-D scenes. The user should then be able to rotate the scenes, to "dissect" structures by making them invisible, to explore in different directions (e.g., follow a vein to the heart), and to zoom in or out to greater or lesser detail.

In the simplest case each change in the scene can be rendered as a static snapshot on a fast server, then presented as an annotated 2-D image to the remote user. This method has the advantage that the images can be seen on any browser running on inexpensive hardware, and can be transferred over low-speed connections. A more interactive presentation could be created in VRML or 3-D Java when these methods become more mature. Eventually, widely available virtual reality displays will allow the user to roam through the 3-D body.

Symbolic information to be presented in the user interface includes components of the foundational model that are relevant to the specific structures of interest, and which complement the visible spatial information.

One form of presentation should be a "template inspector" that shows the attributes and values as defined for the structure template (name, synonyms, class, and definition), either with values filled in or in the form of questions for tutorial mode. Clicking on the value for a specific attribute (a part of the lung, for example) would display the corresponding template for that part.

The second form of presentation should show the relationships of a set of structures, in the form of a collapsible outline (as in Microsoft Word), a 2-D graph, or a 3-D graph. Depending on switches in the interface, clicking on a structure would either open up more detailed levels in the hierarchy, or would switch to the template inspector.
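Such a collapsible outline could be backed by a simple part-of tree, as in the sketch below. The class, the expand/collapse behavior, and the particular parts listed are illustrative assumptions, not drawn from the foundational model itself.

```python
# Hypothetical sketch of a collapsible part-of outline.
# The class design and the part lists are illustrative only.

class OutlineNode:
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []        # part-of children
        self.expanded = False

    def toggle(self):
        """Expand or collapse this node, as a mouse click would."""
        self.expanded = not self.expanded

    def render(self, depth=0):
        """Return the visible outline as indented lines of text."""
        lines = ["  " * depth + self.name]
        if self.expanded:
            for part in self.parts:
                lines.extend(part.render(depth + 1))
        return lines


aorta = OutlineNode("Aorta", [
    OutlineNode("Ascending aorta"),
    OutlineNode("Arch of aorta"),
    OutlineNode("Descending aorta", [
        OutlineNode("Descending thoracic aorta"),
        OutlineNode("Abdominal aorta"),
    ]),
])

aorta.toggle()                    # user clicks "Aorta"
print("\n".join(aorta.render()))  # one more level is now visible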

In most cases the spatial and symbolic presentations should be simultaneously visible as two separate windows. The user should be able to navigate in either window while the other window automatically updates to match the focus window.
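One way to keep the two windows in sync is a simple observer scheme: whichever window has focus publishes the structure being navigated to, and its peer updates to match. The following sketch is an assumed design, not the mechanism of any existing implementation.

```python
# Hypothetical sketch: keeping the symbolic and spatial windows in
# sync. Whichever window the user navigates in notifies its peer.

class SyncedWindow:
    def __init__(self, name):
        self.name = name
        self.peers = []

    def link(self, other):
        """Register two windows as mutual peers."""
        self.peers.append(other)
        other.peers.append(self)

    def navigate_to(self, structure):
        """Called when the user navigates in this (focus) window."""
        self.show(structure)
        for peer in self.peers:
            peer.show(structure)        # peer follows the focus window

    def show(self, structure):
        print(f"{self.name} window now displays: {structure}")


symbolic = SyncedWindow("Symbolic")
spatial = SyncedWindow("Spatial")
symbolic.link(spatial)
symbolic.navigate_to("Arch of aorta")   # spatial window follows
```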

Examples

Figure 1 is a mockup of a Web interface that might be designed for a professional student (medical student) accessing the information system in reference mode. The symbolic information is presented in the left-hand browser frame, whereas the spatial information is presented on the right. In this case the symbolic information shows the part-of relationships for the aorta, in the form of a collapsible outline. Corresponding to the nodes that are visible in the symbolic window, the spatial window shows a 3-D scene made up of primitive 3-D models of parts of the aorta, each labeled with the corresponding name in the foundational model. When a user clicks on the image the name of the structure is returned. Clicking on that name would bring up the template inspector for that structure. The user can also rotate and zoom the scene.

If the user were a K-12 student the same scene might be shown, but clicking on a part of the aorta might only return "Aorta" instead of "Descending Thoracic Aorta". In tutorial mode the symbolic hierarchy might contain blank entries that would need to be filled in by the student. In consultant mode the user might be presented with a labeled cross-section showing the aorta on a CT image, in response to the query, "Display the aorta on a CT section through the apex of the heart".

System design

Our long-term goal is to build a single information system that can dynamically generate the many types of interfaces discussed in the last section, and can apply these interface styles to all structures in the body. We believe that the best architecture for achieving this goal is our evolving distributed framework, in which authoring and end-user programs access a set of structural information resources by means of one or more structural information servers [4]. By separating the resources from the methods of delivery we can deliver the resources in multiple forms and can develop multiple means for accessing them.

Figure 2 shows the design that we are using to develop the interfaces described in the previous section. The upper right portion of this figure shows one axis of the user interface space, the manner of use.

The spatial information resources include 2-D annotated images, 3-D labeled image volumes, 3-D models, and stored animations. The symbolic information resources include the knowledge base, which implements the foundational model of anatomy.

The symbolic resources also include metadata, which will be used to locate, track, and identify the spatial information, since spatial information comes in many forms and may reside at different locations on the Internet.

The authoring programs shown in the upper left are used to create these resources, and include a Knowledge Builder for creating the foundational model, a Model Builder for creating the 3-D models from 3-D image volumes, and an Annotator for associating names from the knowledge base with regions on the images.

The Structural Information Servers provide network access to these resources. The Annotated Image Server delivers interactive images that respond to mouse clicks, as in our current Web atlas programs. The Knowledge Server provides access to the foundational model, and provides answers to queries such as "Find all the synonyms of aorta", or "List the branches of the aorta".
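As an illustration, such queries might look like the sketch below, run against a toy in-memory knowledge base. The function names, the dictionaries, and their contents are hypothetical stand-ins for the foundational model, not the Knowledge Server's actual interface.

```python
# Hypothetical sketch of knowledge-server queries against a toy
# in-memory foundational model; all contents are illustrative.

SYNONYMS = {
    "descending thoracic aorta": ["thoracic aorta"],
}

BRANCHES = {
    "aorta": ["coronary arteries", "brachiocephalic trunk",
              "left common carotid artery", "left subclavian artery"],
}


def find_synonyms(term):
    """e.g., 'Find all the synonyms of aorta'."""
    return SYNONYMS.get(term.lower(), [])


def list_branches(term):
    """e.g., 'List the branches of the aorta'."""
    return BRANCHES.get(term.lower(), [])


print(find_synonyms("Descending Thoracic Aorta"))
print(list_branches("Aorta"))
```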

The Data Server consults the metadata to provide the filenames of spatial objects that are associated with a set of anatomical concepts indexed by terms in the knowledge base, and the Graphics Server combines these spatial objects into scenes that are delivered to the client. These scenes can take the form of dynamically generated annotated images, animations, or VRML descriptions.
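The division of labor between the two servers might be sketched as follows; the metadata table, filenames, and scene format are hypothetical and serve only to illustrate the flow from concepts, to files, to a deliverable scene.

```python
# Hypothetical sketch of the Data Server / Graphics Server division
# of labor; the metadata table and filenames are illustrative.

METADATA = {
    # anatomical concept -> files holding its spatial objects
    "ascending aorta": ["models/asc_aorta.obj"],
    "arch of aorta":   ["models/aortic_arch.obj"],
}


def data_server_lookup(concepts):
    """Return filenames of spatial objects for a set of concepts."""
    files = []
    for concept in concepts:
        files.extend(METADATA.get(concept, []))
    return files


def graphics_server_build_scene(files):
    """Combine the spatial objects into a scene for the client."""
    return {"objects": files, "delivery": "annotated-image"}


scene = graphics_server_build_scene(
    data_server_lookup(["ascending aorta", "arch of aorta"]))
print(scene)
```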

The Digital Anatomist module is an intelligent agent [6] that acts as an intermediary between the user and the set of resources. Eventually the Digital Anatomist should act like a real anatomist in presenting anatomical information to the end user.

Current status

Most components of Figure 2 are either working or in development in the Digital Anatomist Project [4]. The data server is being developed as part of our Human Brain Project [7], and will be modified to manage the spatial resources for this project.

The graphics server, a screen shot from which is shown in the right frame of Figure 1, has not been described previously. The graphics server accepts high-level Lisp commands to load primitive 3-D models and to combine them into scenes. It then renders the scene as a GIF image and sends the image to a Web client. The user's mouse clicks are translated by a CGI script into commands to the server to rotate, highlight structures, or otherwise change the scene. The resulting Web-based scene navigator will be demonstrated during these proceedings [8].
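The click-to-command translation performed by the CGI layer might be sketched as below. The command names and parameters are hypothetical; the source states only that the actual server accepts high-level Lisp commands.

```python
# Hypothetical sketch of the CGI layer that turns a user's mouse
# click into a command for the graphics server. Command names and
# parameters are illustrative; the real server takes Lisp commands.

def click_to_command(action, x=None, y=None, structure=None):
    """Translate a browser request into a graphics-server command."""
    if action == "rotate":
        return f"(rotate-scene {x} {y})"
    if action == "highlight":
        return f'(highlight-structure "{structure}")'
    if action == "identify":
        return f"(name-at {x} {y})"
    raise ValueError(f"unknown action: {action}")


# The server executes the command, re-renders the scene as a GIF,
# and the CGI script returns the new image to the Web client.
print(click_to_command("rotate", x=15, y=-30))
print(click_to_command("highlight", structure="Arch of aorta"))
```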

The one module of Figure 2 that is not yet in development is the Digital Anatomist itself. Development of such an intelligent agent is a problem in artificial intelligence research that would not be possible to address without the prior creation of the anatomical information resources and servers.

Discussion

In this paper we have presented the requirements for a next-generation anatomy information system, based on feedback from the current Digital Anatomist on-line atlases, and from our own extensive experience teaching anatomy. Based on these requirements we have designed the architecture as a modification of our existing distributed system, and have partially implemented the modules in this architecture.

The main characteristic distinguishing this approach from other image-based anatomy information systems is that it is knowledge-based, relying on an underlying foundational model of anatomy that is accessed by an intelligent agent. We argue that as spatial information resources, such as the Visible Human and clinical image volumes, become widespread, attention will shift from current image-based approaches for delivering anatomical information to knowledge-based approaches.

As evidenced by the widespread use of our current, rather limited image-based atlases, the implementation of even part of the system we have designed should be of great practical use. In the longer term such a system, particularly the foundational model, should be of great use in organizing other information as attributes of anatomical structures.

 

Acknowledgements

This work was funded by National Library of Medicine grant R01 LM06316 and contract N01 LM43546. We thank Jay Locke for help formulating the requirements for radiation treatment planning.

References

  1. Ackerman, M.J. The Visible Human Project. J Biocommun 1991;18(2):14.
  2. ADAM Software. "ADAM Scholar Series," CD-ROM. 1995.
  3. Höhne, K.H., Pflesser, B., Riemer, M., Schiemann, Th., Schubert, R., Tiede, U. A new representation of knowledge concerning human anatomy and function. Nature Med 1995;1(6):506.
  4. Brinkley, J.F., Bradley, S.W., Sundsten, J.W., Rosse, C. The Digital Anatomist information system and its use in the generation and delivery of Web-based anatomy atlases. Comp Biomed Res 1997;30:472-503.
  5. Rosse, C., Shapiro, L.G., Brinkley, J.F. The Digital Anatomist foundational model: principles for defining and structuring its concept domain. 1998 Fall AMIA Symposium. Submitted.
  6. Graham, I. The architecture of agents. Object Magazine 1997;7(7):26-28.
  7. Jakobovits, R.M., Brinkley, J.F. Managing medical research data with a Web-Interfacing Repository Manager. Proceedings, AMIA Fall Symposium, 1997:454-458.
  8. Wong, B.A., Brinkley, J.F. Dynamic 3-D scene navigation in Web-based anatomy atlases. 1998 Fall AMIA Symposium. Submitted.

 

Figure 1. Mockup of the reference-mode end-user interface, incorporating pages generated by the knowledge server and the graphics server. The user should be able to move between the two windows, while the scenes in the windows should remain in sync. Although these windows are not yet in sync, they access working programs that may be manipulated individually. Nodes in the symbolic window may be collapsed and expanded, and structures in the spatial window may be highlighted, while the entire scene may be rotated and zoomed.

Figure 2. System Architecture