Yang, S., Atmosukarto, I., Franklin, J., Brinkley, J. F., Suciu, D., and Shapiro, L. G. (2009). A model of multimodal fusion for medical applications. In: Proc. SPIE 7255: Multimedia Content Access: Algorithms and Systems III, SPIE 72550H.
Abstract
Content-based image retrieval has been applied to many different biomedical applications. In almost all cases, retrievals involve a single query image of a particular modality, and the retrieved images are from this same modality. For example, one system may retrieve color images from eye exams, while another retrieves fMRI images of the brain. Yet real patients often have test results from multiple modalities, and retrievals based on more than one modality could provide information that single-modality searches fail to reveal. In this paper, we show medical image retrieval for two different single modalities and propose a model for multimodal fusion that will lead to improved capabilities for physicians and biomedical researchers. We describe a graphical user interface for multimodal retrieval that is being tested by real biomedical researchers in several different fields.
Item Type: | Book Section |
---|---|
Subjects: | All Projects > Content-based Retrieval |
Divisions: | University of Washington > Department of Biological Structure; University of Washington > Department of Computer Science and Engineering |
Depositing User: | Jim Brinkley |
Date Deposited: | 17 Jul 2018 22:41 |
Last Modified: | 17 Jul 2018 22:41 |
URI: | http://sigpubs.si.washington.edu/id/eprint/306 |