John M. Galeotti
I am primarily interested in image analysis and visualization, especially as applied to medical/bio-robotics. Topics of interest include:
My dissertation work is in the area of in-situ holographic visualization. Thus far, I have built and demonstrated a system capable of accurately projecting tomographic data in situ, in real time, by means of holography. I would like to adapt my system for in-situ real-time visualization of clinical ultrasound to guide invasive procedures. As with my current setup, the patient would be viewed through a transparent holographic optical element, with the real-time holographic image of the ultrasound slice accurately superimposed within the patient. Because the holographic virtual image truly occupies its correct location, it is perceived correctly from any viewpoint, independent of viewer location. Toward this goal, I would need to design, construct, and calibrate the new device. I would then evaluate human performance using this device on clinical phantoms appropriate for amniocentesis and liver biopsy. Finally, I would like to integrate this project with some of my other planned research into cross-modality registration (discussed below), for the purpose of holographic visualization of CT or MRI data based on real-time 3D ultrasound scans.
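At its core, calibrating a device like this means estimating the rigid transform relating two coordinate frames (e.g., tracked fiducials to their observed positions in the holographic display's frame). The following is a minimal sketch of that idea in the 2D case, in pure Python; the function name and the point data are illustrative assumptions, not the actual calibration procedure:

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares fit of a rigid transform (rotation + translation)
    mapping 2D points src onto corresponding points dst.
    Returns (theta, tx, ty)."""
    n = len(src)
    # Centroids of each point set.
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-products of the centered points;
    # the optimal rotation angle is atan2 of their sums.
    s_cross = s_dot = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        s_cross += xs * yd - ys * xd
        s_dot += xs * xd + ys * yd
    theta = math.atan2(s_cross, s_dot)
    # Translation aligns the rotated source centroid with the destination centroid.
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty
```

In practice the 3D version of this fit (solved via SVD) would be applied to measured fiducial correspondences, and the residual error would serve as a calibration-accuracy metric.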
While working on my dissertation, I have also been collaborating with others in my lab on an independent project we call the Shells and Spheres (S&S) framework for image analysis. S&S is a novel multidimensional, multi-scale, statistically based framework that, among other things, facilitates the development of segmentation and registration algorithms integrating bottom-up and top-down approaches. S&S-based algorithms inherently generate many potentially useful feature vectors, including a distance map of the segmentation that greatly simplifies extraction of the medial manifold. In addition to helping develop the original S&S framework and its first segmentation algorithms, I have also led research and development in a particular ongoing direction of the project, one that promises to produce more effective and efficient algorithms based on novel extensions of the original framework. I plan to carry this work forward, focusing on effective propagation of information across the image and between scales, allowing the emerging segmentation to refine the selection of boundary points, and vice versa. I would then like to apply machine learning techniques to the rich set of feature vectors produced by S&S, in an effort to expand the realm of clinically useful computer-aided disease detection and diagnosis systems.
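To illustrate why a distance map simplifies medial extraction: ridge points of the distance transform approximate the medial axis. The toy sketch below (pure Python, 4-connected BFS distance on a binary mask) demonstrates only this general principle, not the S&S algorithms themselves; all names are illustrative:

```python
from collections import deque

def distance_map(mask):
    """4-connected BFS distance from the background, for a binary mask
    given as a list of lists (1 = object, 0 = background)."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def medial_points(mask):
    """Crude medial extraction: object pixels whose distance value is a
    local maximum over the 4-neighborhood (ridge of the distance map)."""
    dist = distance_map(mask)
    h, w = len(mask), len(mask[0])
    out = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and all(
                    dist[y][x] >= dist[y + dy][x + dx]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < h and 0 <= x + dx < w):
                out.append((y, x))
    return out
```

The point is simply that once a segmentation carries its distance map as a feature, the medial manifold falls out of a local ridge test rather than requiring a separate skeletonization pass.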
S&S feature vectors lend themselves well to cross-modality registration, and, as mentioned above, real-time holographic visualization of CT or MRI data would be facilitated by cross-registration with 3D ultrasound. 3D ultrasound is the ideal imaging modality for guiding invasive procedures: in addition to being relatively inexpensive and portable, it is rapid and free of ionizing radiation, both of which are important for updating scans in real time. Unfortunately, not all pathologies are visible in ultrasound, so it would be useful to let a clinician mark a target (e.g., for biopsy) in a pre-acquired CT scan and then register the annotated CT with real-time 3D ultrasound to determine the target's real-time position. I believe this could be done without potentially cumbersome external tracking equipment, using the computer-vision technique known as simultaneous localization and mapping (SLAM). If successful, such a technique has tremendous potential to improve percutaneous injection, biopsy, and excision. I want to pursue this challenging but important registration problem, including the application of SLAM.
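One standard way to score cross-modality alignment, given that CT and ultrasound intensities are not directly comparable, is mutual information over the joint intensity histogram: the correct alignment tends to maximize it even when the two modalities' intensity scales are completely unrelated. A toy 1D sketch in pure Python (illustrative only; real CT/ultrasound registration is 3D, deformable, and far harder):

```python
import math
from collections import Counter

def mutual_info(a, b):
    """Mutual information (in nats) between two equal-length discrete
    signals, estimated from their joint histogram."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    return sum(c / n * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())
```

Even if one "modality" is an arbitrary relabeling of the other's intensities, mutual information peaks at the true alignment, whereas simple intensity differencing would fail; this is why it is the workhorse similarity measure for multi-modality registration.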
There are other applications of computer vision and machine learning that I would also like to pursue. I would like to use these technologies to provide automatic guidance for emergency clinical procedures performed by EMS personnel, soldiers, and the public. For example, to administer life-saving intravenous drugs, it is sufficient for a doctor to have an ultrasound image displayed in situ, but the clinically untrained would be much better served by a bright dot projected inside the patient where the tip of the needle should be placed. Such a system would require very robust and highly specialized computer-vision software. A completely separate interest of mine is building upon my advisor's new Finger-Sight technology to improve quality of life for the disabled. Using computer-vision techniques to interpret data from miniature finger-mounted cameras, I hope to facilitate perception of the environment for the blind and to allow the physically immobile to control physically remote devices (e.g., turning off the lights by pointing at the light switch). In general, I am interested in developing novel and useful approaches to medical, biological, and quality-of-life problems through computer vision, machine learning, and in-situ visualization.