Biomedical Sciences and Visualization

Haptic Interface for Virtual Exploration of Computational Data

Overview
Endoscopic Sinus Surgery (ESS) has gained popularity among otolaryngologists as the treatment of choice for recurrent acute and chronic sinusitis. These techniques demand a high level of skill to be performed adequately and without morbidity: ESS requires a thorough understanding of the three-dimensional anatomy of the paranasal sinuses, since the surgeon works within millimeters of the brain, orbital contents, and associated vascular structures. The techniques are learned primarily in a "hands-on" fashion in the operating room or through cadaver dissection, after the anatomy has been mastered from textbooks. The risks of novice surgeons performing ESS on real patients are self-evident. Cadaver specimens offer a nonthreatening environment in which to practice, but the supply of such material is often limited, and tissue realism is lacking in both appearance and texture. These factors make the development of a surgical simulator for ESS using high-performance computing a useful and necessary goal.

Patient Model

In collaboration with Immersion Corporation, the Ohio Supercomputer Center is developing an Endoscopic Sinus Surgery Simulator. The patient model will be reconstructed from merged high-resolution computed tomography and magnetic resonance images and will run on a high-performance graphics workstation. The simulator will include a physical model of the patient's head, with an endoscope positioned in the nares. The user will insert forceps and a suction device through the nares; the instruments will engage haptic feedback hardware housed inside the head model. Initial versions will provide 3 degrees of freedom for the instrumentation. At first, the display will appear on a separate monitor, as is common practice in ESS. Later versions will employ micromonitors combined with the eyepiece of the endoscope, and the generated graphics will be driven directly to the scope.
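To illustrate the kind of computation the haptic feedback hardware performs, the following is a minimal sketch of a penalty-based force model for a 3-degree-of-freedom instrument tip contacting a virtual surface. All names, the stiffness value, and the planar-wall geometry are illustrative assumptions, not part of the actual simulator:

```python
# Hypothetical sketch of a 3-DOF haptic contact model (not the actual
# simulator code): each servo cycle, the instrument tip position is read,
# penetration into a virtual wall is computed, and a restoring spring
# force is commanded back to the device.

STIFFNESS = 800.0  # N/m, an assumed stiffness for a stiff virtual surface


def contact_force(tip_pos, wall_y=0.0, k=STIFFNESS):
    """Return (fx, fy, fz) for a tip penetrating a horizontal plane at wall_y.

    Positions are in metres. This is a simple penalty (spring) model:
    the force is proportional to penetration depth and pushes the tip
    back out of the surface.
    """
    x, y, z = tip_pos
    depth = wall_y - y            # how far the tip sits below the wall plane
    if depth <= 0.0:
        return (0.0, 0.0, 0.0)    # no contact, no force
    return (0.0, k * depth, 0.0)  # restoring force along the wall normal


def servo_step(tip_pos, send_force):
    """One iteration of the high-rate (typically ~1 kHz) haptic loop."""
    send_force(contact_force(tip_pos))
```

In a real device driver, `servo_step` would be registered as a high-priority callback so the force is refreshed fast enough to feel like a solid surface rather than a soft spring.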


Acknowledgments
This research is supported by Grant DE-FG03-94ER from the Department of Energy. We would like to acknowledge the support of Karl Storz Endoscopy - America Inc. for the donation of the endoscopic surgery equipment, and the encouragement and support of our colleagues in the Department of Otolaryngology, Immersion Corporation, and the Ohio Supercomputer Center. Special thanks go to Dr. Petra Schmalbrock, in the Department of Radiology at The Ohio State University Hospitals, for the imaging protocol and support in data acquisition, and to Dr. Roni Yagel, in the Department of Computer and Information Science, for support in developing the real-time volume renderer.