The Virtual Anatomy Lab: A Hands-on Anatomy Learning Environment

Bruce Campbell
Cornelius Rosse
J.F. Brinkley
University of Washington

This paper introduces the Virtual Anatomy Lab software platform
for coordinating on-line gross anatomy learning sessions over time.

1. Introduction

Over 24 million health professionals rely on their knowledge of anatomy to perform their work effectively. Yet successful anatomy knowledge acquisition techniques vary by individual. Many anatomy learning tools exist on the Web. For example, the University of Washington’s Structural Informatics Group creates symbolic (text-based), semantic (text-based), and spatial (2-D and 3-D image-based) anatomy learning tools, all of which are Web-accessible. In this paper we describe the Virtual Anatomy Lab (VAL), a collaborative environment that helps students coordinate their learning across available on-line tools by providing a dynamic, persistent 3-D on-line lab space that students modify to represent their understanding of, or current focus on, a particular area of study. From these spaces, students can continue to investigate gross anatomy from anywhere they have Web access.

2. The Virtual Playground

The VAL is based on the Virtual Playground (VP) architecture, developed in 1997 by the Human Interface Technology Laboratory (HIT Lab) at the University of Washington (UW) [1]. The VP in turn built on the success of the earlier GreenSpace software developed between 1993 and 1996 [2]. Both platforms investigated on-line, 3-D cyberspace as a medium for overcoming geographical distance in visual and aural communication. While GreenSpace made its reputation as an early proof-of-concept platform demonstrating Trans-Pacific feasibility using expensive SGI computers and multiple ISDN lines, the Virtual Playground demonstrated the same feasibility using $1000 Pentium-based computers, $100 graphics accelerator cards, and inexpensive Internet communications. Before its use as the underlying architecture for the Virtual Anatomy Lab, the VP provided infrastructure for the Netgate Mall, Adjective World, and Virtual Big Beef Creek projects at the UW HIT Lab.

3. The Virtual Anatomy Lab

The Virtual Anatomy Lab (VAL), a Java-based software application using the Java 2 SDK [3] and the Java 3D API [4], focuses students’ personal learning through an interactive interface that lets them build their own 3-D anatomy study space on-line. They modify their spaces by moving their viewpoint in six degrees of freedom, clicking on available tools, and dragging objects to change position and orientation. Text and images available via Web URLs (such as UW’s Digital Anatomist Interactive Atlas [5]) can be imported into a space and moved to appropriate locations. 3-D models can be imported onto a virtual cadaver table; they are drawn from cadaver mesh data maintained by the Digital Anatomist Project, but could just as well come from VRML files made available elsewhere on the Web. An in-world blackboard, connected to the UW’s Foundational Model Server (FMS) [6], allows students to click up and down the multiple hierarchies that define relationships among body parts (part of, is a, adjacent to, tributary of, branch of, etc.). Students can also collect URLs to sites with study aids and interactive quizzes, take their own screenshots, and leave personalized push-pin messages anywhere within the room. Students can leave their spaces and return at a later date to find them exactly as they left them, facilitating repetitive and iterative use at work, at home, or elsewhere. Over time, a persistent lab space helps students document their knowledge acquisition progress while serving as a highly visual memory aid.
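
The paper does not include the VAL’s source code, but a minimal Java 3D sketch suggests how one body-part mesh might be imported onto the cadaver table. The Wavefront OBJ loader from the Java 3D utilities stands in for whatever loaders the VAL actually uses, and the cadaverTable group, mesh URL, and placement offset are illustrative assumptions.

import java.net.URL;

import javax.media.j3d.BranchGroup;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Vector3f;

import com.sun.j3d.loaders.Scene;
import com.sun.j3d.loaders.objectfile.ObjectFile;

/** Sketch: import one body-part mesh and hang it under the cadaver-table group. */
public class MeshImporter {

    /**
     * Loads a Wavefront OBJ mesh from a URL and wraps it in a writable
     * TransformGroup so the student can later drag it around the table.
     */
    public static TransformGroup importBodyPart(URL meshUrl, TransformGroup cadaverTable)
            throws Exception {
        ObjectFile loader = new ObjectFile(ObjectFile.RESIZE);   // normalize model size
        Scene scene = loader.load(meshUrl);                      // parse the mesh

        // Allow the transform to be changed at run time (drag to reposition).
        TransformGroup partGroup = new TransformGroup();
        partGroup.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        partGroup.addChild(scene.getSceneGroup());

        // Place the new part just above the table surface (illustrative offset).
        Transform3D placement = new Transform3D();
        placement.setTranslation(new Vector3f(0.0f, 0.1f, 0.0f));
        partGroup.setTransform(placement);

        // Only BranchGroups may be added to a live scene graph, so wrap the part;
        // cadaverTable is assumed to have Group.ALLOW_CHILDREN_EXTEND set.
        BranchGroup attachPoint = new BranchGroup();
        attachPoint.setCapability(BranchGroup.ALLOW_DETACH);
        attachPoint.addChild(partGroup);
        cadaverTable.addChild(attachPoint);
        return partGroup;
    }
}

Wrapping each imported part in its own detachable BranchGroup keeps the scene graph editable while the lab session is live.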

By coordinating their use of 3-D body part meshes, students can dissect and rebuild the body as if in a physical cadaver lab. Although the VAL currently lacks realism in both the fidelity of model appearance and the methods for dissection, it demonstrates a blueprint for a viable future in cadaver lab simulation, important for a world where access to cadavers is becoming increasingly cost prohibitive.
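
The dissection mechanism itself is not specified in the paper. One natural reading, given a Java 3D scene graph, is that dissection detaches a body part’s branch and rebuilding reattaches it; the sketch below assumes that interpretation and that the capability bits noted in the comments were set before the graph went live.

import javax.media.j3d.BranchGroup;
import javax.media.j3d.TransformGroup;

/** Sketch: "dissect" by detaching a body part, "rebuild" by reattaching it. */
public class Dissection {

    /** Removes a body part from the live cadaver model and returns it for later reuse. */
    public static BranchGroup dissect(BranchGroup bodyPart) {
        bodyPart.detach();   // requires BranchGroup.ALLOW_DETACH on bodyPart
        return bodyPart;
    }

    /** Puts a previously removed part back onto the cadaver table. */
    public static void rebuild(TransformGroup cadaverTable, BranchGroup bodyPart) {
        // The table group must allow children to be added while the graph is live
        // (Group.ALLOW_CHILDREN_EXTEND set before it was attached).
        cadaverTable.addChild(bodyPart);
    }
}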

4. Illustrative Use

Figure 1 shows a VAL session where a beginning anatomy student has loaded the vertebrae as a yardstick for studying the main veins and arteries of the thorax. The student has moved the heart to the side to better see the tributaries of blood flow. After loading the desired models, the student can move around the cadaver table to view it from any angle, or click on a specific mesh to confirm the body part’s name and its relationship to others on the FMS blackboard.
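
How the VAL resolves a mouse click to a body-part name is not described in the paper. The following sketch uses the Java 3D picking utilities and assumes, purely for illustration, that each mesh carries its anatomical term as scene-graph user data.

import java.awt.event.MouseEvent;

import javax.media.j3d.BranchGroup;
import javax.media.j3d.Canvas3D;
import javax.media.j3d.Shape3D;

import com.sun.j3d.utils.picking.PickCanvas;
import com.sun.j3d.utils.picking.PickResult;

/** Sketch: resolve a mouse click on the 3-D canvas to a body-part name. */
public class PartPicker {

    private final PickCanvas pickCanvas;

    public PartPicker(Canvas3D canvas, BranchGroup sceneRoot) {
        pickCanvas = new PickCanvas(canvas, sceneRoot);
        pickCanvas.setMode(PickCanvas.GEOMETRY);   // pick against actual geometry
        pickCanvas.setTolerance(2.0f);             // small pixel tolerance around the click
    }

    /** Returns the anatomical term stored on the clicked mesh, or null if nothing was hit. */
    public String pick(MouseEvent click) {
        pickCanvas.setShapeLocation(click);
        PickResult result = pickCanvas.pickClosest();
        if (result == null) {
            return null;
        }
        Shape3D mesh = (Shape3D) result.getNode(PickResult.SHAPE3D);
        // Assumes the loader tagged each mesh with its term via setUserData(...).
        return mesh == null ? null : (String) mesh.getUserData();
    }
}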

Figure 2 shows the same VAL user focusing on the semantic tools. The student has clicked down the is-a hierarchy to find all body parts classified as parenchymatous organs. The student can study the list and then select each item to load its mesh onto the cadaver table. Inspecting the meshes provides a reaffirming reminder of tubular characteristics.
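
The FMS interface itself is not documented here, so the sketch below models one is-a hierarchy locally as a small tree and enumerates every term classified under a chosen concept, which is the traversal the blackboard effectively performs as the student clicks down a level. The example terms are illustrative only.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch: a tiny, local stand-in for one FMS "is-a" hierarchy.
 * Each term maps to its direct subtypes; listDescendants walks the tree.
 */
public class IsAHierarchy {

    private final Map<String, List<String>> subtypes = new LinkedHashMap<>();

    public void addSubtype(String parent, String child) {
        subtypes.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
    }

    /** All terms classified (directly or transitively) under the given term. */
    public List<String> listDescendants(String term) {
        List<String> found = new ArrayList<>();
        for (String child : subtypes.getOrDefault(term, new ArrayList<>())) {
            found.add(child);
            found.addAll(listDescendants(child));
        }
        return found;
    }

    public static void main(String[] args) {
        IsAHierarchy isA = new IsAHierarchy();
        isA.addSubtype("Organ", "Parenchymatous organ");
        isA.addSubtype("Parenchymatous organ", "Liver");   // illustrative entries only
        isA.addSubtype("Parenchymatous organ", "Lung");
        System.out.println(isA.listDescendants("Parenchymatous organ"));
    }
}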

Figure 3 shows a health care professional brushing up on visualizations of the left recurrent laryngeal nerve. Since the user’s learning goal is highly specific, illustrations from Web sites have been imported to provide a wider range of renderings. A link to the best on-line reference found so far has been saved on the blackboard.
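
The paper does not say how imported illustrations are realized inside the room. A plausible sketch, assuming each downloaded image is mapped onto a thin textured box via the Java 3D TextureLoader utility, follows; the panel dimensions are arbitrary.

import java.awt.Component;
import java.net.URL;

import javax.media.j3d.Appearance;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Texture;

import com.sun.j3d.utils.geometry.Box;
import com.sun.j3d.utils.geometry.Primitive;
import com.sun.j3d.utils.image.TextureLoader;

/** Sketch: turn an image fetched from a Web URL into a picture panel for the room. */
public class ImagePanel {

    /**
     * Downloads the image and maps it onto a thin box that can be hung on a wall.
     * TextureLoader needs an AWT component as its image observer.
     */
    public static BranchGroup createPanel(URL imageUrl, Component observer) {
        Texture texture = new TextureLoader(imageUrl, observer).getTexture();

        Appearance look = new Appearance();
        look.setTexture(texture);

        // 0.5 x 0.4 x 0.01 "picture" with generated texture coordinates (sizes are illustrative).
        Box panel = new Box(0.5f, 0.4f, 0.01f, Primitive.GENERATE_TEXTURE_COORDS, look);

        BranchGroup group = new BranchGroup();
        group.setCapability(BranchGroup.ALLOW_DETACH);  // so the picture can be taken down later
        group.addChild(panel);
        return group;
    }
}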

Figure 4 shows the level of complexity possible when all of the available VAL tools are used at once. The student is focusing on the lower thorax and took a snapshot of the cadaver table earlier in the process. The cadaver table is below, the FMS blackboard is at lower left, a screenshot of the cadaver table hangs on the door, other images hang on the wall, and a bookshelf at right holds links to on-line references.

Figure 1. The Cadaver Table
Figure 2. The FMS Blackboard
Figure 3. Importing Images
Figure 4. The VAL in action

5. Conclusions

The VAL prototype effectively integrates the components of a personal anatomy-learning journey into a 3-D space navigable by students in six degrees of freedom. 3-D virtual environments are interactive places for organizing the acquisition of complex subject matter. With the VAL, the student can take control of the camera instead of relying solely upon those who create pre-rendered illustrations. Or, the student can hang an expert's rendered illustration on the wall and annotate it with his or her own words. Although no formal user studies of the VAL have been conducted, four UW anatomists suggest that the approach is very promising and in line with their views on appropriate Web-based instructional methods. Negative comments typically refer to the lack of fidelity in the rendered 3-D space, a limitation that more powerful computers and better graphics subsystems should overcome in the near future.

Users can build rooms that attempt to externalize their own unique cognitive maps. Since the VAL is built on top of the VP architecture, which emphasizes meeting spaces across geographical distance, rooms can be shared on-line, allowing for student-tutor or student-student VAL sessions. Informal discussions with anatomy students have found that most prefer to have their own personalized space, though that preference may simply reflect the history of traditional study methods. Like the many successful Web pages that index other sites and organize cyberspace around a topic for learning, VALs could be built over time and continually updated with the best available on-line resources for common anatomical learning objectives. Knowledgeable instructors could organize curricula that provide VAL spaces with pre-defined learning exercises, and could then assist students in doing those exercises within the VAL.

Acknowledgements

This work was funded by NIH grants LM06316 and LM06822. The rapid prototyping of the VAL would not have been possible without the availability of the Java 3D API developed at Sun Microsystems and the enthusiastic vision of Dr. Tom Furness of the UW HIT Lab.

References

[1] Schwartz, Paul, et al., "Virtual Playground: Architectures for a Shared Virtual World," http://www.hitl.washington.edu/publications/r-98-12/
[2] Mandeville, Jon, et al., "GreenSpace: Creating a Distributed Virtual Environment for Global Applications," http://www.hitl.washington.edu/publications/p-95-17/
[3] Sun Microsystems, Java 2 SDK, http://www.javasoft.com/products/jdk/1.2/
[4] Sun Microsystems, Java 3D 1.2 API, http://www.javasoft.com/products/java-media/3D/
[5] The Digital Anatomist Project, http://sig.biostr.washington.edu/projects/da/
[6] Rosse, C., et al., "The Digital Anatomist foundational model: principles for defining and structuring its concept domain," Proceedings, American Medical Informatics Association Fall Symposium, 1998, pp. 820-824.