<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="de">
		<id>http://de.evo-art.org/index.php?action=history&amp;feed=atom&amp;title=Multi-modal_Medical_Image_Retrieval</id>
		<title>Multi-modal Medical Image Retrieval - Revision history</title>
		<link rel="self" type="application/atom+xml" href="http://de.evo-art.org/index.php?action=history&amp;feed=atom&amp;title=Multi-modal_Medical_Image_Retrieval"/>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Multi-modal_Medical_Image_Retrieval&amp;action=history"/>
		<updated>2026-05-03T11:17:23Z</updated>
		<subtitle>Revision history of this page on de_evolutionary_art_org</subtitle>
		<generator>MediaWiki 1.27.4</generator>

	<entry>
		<id>http://de.evo-art.org/index.php?title=Multi-modal_Medical_Image_Retrieval&amp;diff=32742&amp;oldid=prev</id>
		<title>Gubachelier: The page was created: „  == Reference ==  Yu Cao, Henning Müller, Charles E. Kahn, Jr., Ethan Munson: Multi-modal Medical Image Retrieval. Department of Computer Science &amp; En…“</title>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Multi-modal_Medical_Image_Retrieval&amp;diff=32742&amp;oldid=prev"/>
				<updated>2016-06-17T15:47:53Z</updated>
		
		<summary type="html">&lt;p&gt;The page was created: „  == Reference ==  Yu Cao, Henning Müller, Charles E. Kahn, Jr., Ethan Munson: &lt;a href=&quot;/index.php?title=Multi-modal_Medical_Image_Retrieval&quot; title=&quot;Multi-modal Medical Image Retrieval&quot;&gt;Multi-modal Medical Image Retrieval&lt;/a&gt;. Department of Computer Science &amp;amp; En…“&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Reference == &lt;br /&gt;
Yu Cao, Henning Müller, Charles E. Kahn, Jr., Ethan Munson: [[Multi-modal Medical Image Retrieval]]. Department of Computer Science &amp;amp; Engineering, University of Tennessee at Chattanooga &lt;br /&gt;
&lt;br /&gt;
== DOI ==&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
Images are ubiquitous in biomedicine, and image viewers play a central role in many aspects of modern health care.&lt;br /&gt;
Tremendous amounts of medical image data are captured and recorded in digital format during the daily clinical&lt;br /&gt;
practice, medical research, and education (in 2009, over 117,000 images per day in the Geneva radiology department&lt;br /&gt;
alone). Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to&lt;br /&gt;
develop an effective and efficient medical image retrieval system for clinical practice and research. Traditionally,&lt;br /&gt;
medical image retrieval systems rely on text-based retrieval techniques that use the captions associated with the images,&lt;br /&gt;
and most often, access is by patient ID only. Since the 1990s, we have seen increasing interest in content-based&lt;br /&gt;
image retrieval for medical applications. One of the promising directions in content-based medical image retrieval is to&lt;br /&gt;
correlate multi-modal information (e.g., text and image information) to provide better insights.&lt;br /&gt;
In this paper, we concentrate our efforts on how to retrieve the most relevant medical images using multi-modal&lt;br /&gt;
information. Specifically, we use two modalities: the visual content of the images (represented by visual features) and&lt;br /&gt;
the textual information associated with the images. The core idea for multi-modal retrieval is rooted in information&lt;br /&gt;
fusion. Existing literature on multi-modal retrieval can roughly be classified into two categories: feature fusion and&lt;br /&gt;
retrieval fusion. The feature fusion strategy generates an integrated feature representation from multiple modalities. The&lt;br /&gt;
retrieval fusion strategy refers to the techniques that merge the retrieval results from multiple retrieval algorithms. Our&lt;br /&gt;
proposed approach belongs to the first category (feature fusion) and is largely inspired by Pham et al. [1] and Lienhart et&lt;br /&gt;
al. [2]. In [1], the features from different modalities are normalized and concatenated to generate the feature vectors.&lt;br /&gt;
Then, Latent Semantic Analysis (LSA) is applied to these features for image retrieval. In [2], Lienhart et al. propose&lt;br /&gt;
a multilayer probabilistic Latent Semantic Analysis (pLSA) model to solve the multi-modal image retrieval problem. Our&lt;br /&gt;
proposed approach is different from Pham et al. [1] in that we do not simply concatenate the features from different&lt;br /&gt;
modalities. Instead, we represent the features from different modalities as a multi-dimensional matrix and incorporate&lt;br /&gt;
these feature vectors using an extended pLSA model. Our method is also different from Lienhart et al. [2] since we use a&lt;br /&gt;
single pLSA model instead of multiple pLSA models. The major contribution of our work is the new representation of&lt;br /&gt;
an image using visual-textual “words”. These “words” are generated from the visual descriptors and textual information&lt;br /&gt;
using the extended pLSA model.&lt;br /&gt;
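&lt;br /&gt;
The feature-fusion idea of [1] can be sketched in a few lines: per-modality features are normalized, concatenated, and projected with Latent Semantic Analysis (a truncated SVD), and images are then ranked by cosine similarity in the latent space. The following is a minimal illustration only, not the authors' code; the feature dimensions, topic count, and helper names are assumptions made for the example.&lt;br /&gt;
&lt;pre&gt;
# Minimal sketch (not the authors' implementation) of LSA-based feature
# fusion in the spirit of Pham et al. [1]: normalize, concatenate, project,
# then rank by cosine similarity. Dimensions and topic count are assumptions.
import numpy as np

def l2_normalize(m):
    """Scale each row to unit length so the modalities contribute comparably."""
    norms = np.linalg.norm(m, axis=1, keepdims=True)
    return m / np.maximum(norms, 1e-12)

def lsa_fusion(visual, textual, n_topics=32):
    """Concatenate normalized visual and textual features, then apply LSA."""
    fused = np.hstack([l2_normalize(visual), l2_normalize(textual)])
    u, s, vt = np.linalg.svd(fused, full_matrices=False)
    latent = u[:, :n_topics] * s[:n_topics]   # images in the latent space
    projector = vt[:n_topics].T               # maps fused features to topics
    return l2_normalize(latent), projector

def retrieve(query_fused, latent, projector, top_k=5):
    """Project a fused query vector and rank images by cosine similarity."""
    q = l2_normalize(query_fused @ projector)
    scores = (latent @ q.T).ravel()
    return np.argsort(-scores)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    visual = rng.random((200, 128))    # e.g. bag-of-visual-words histograms
    textual = rng.random((200, 300))   # e.g. tf-idf vectors of image captions
    latent, projector = lsa_fusion(visual, textual)
    query = np.hstack([l2_normalize(visual[:1]), l2_normalize(textual[:1])])
    print(retrieve(query, latent, projector))
&lt;/pre&gt;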
&lt;br /&gt;
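In the same spirit, a plain pLSA fit with EM shows the topic-model machinery that the extended model builds on. This sketch follows standard pLSA as in [2] and is not the extended multi-modal pLSA of this paper; here the count matrix simply pools visual and textual “word” occurrences per image, and the vocabulary size and topic count are assumptions.&lt;br /&gt;
&lt;pre&gt;
# Minimal sketch of standard pLSA fitted with EM (not the extended model of
# this paper): learns P(word|topic) and P(topic|image) from an image-by-word
# count matrix; images can then be compared through their topic mixtures.
import numpy as np

def plsa(counts, n_topics=16, n_iter=100, seed=0):
    """Fit P(word|topic) and P(topic|image) on an (n_images, n_words) count matrix."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibility of each topic for every (image, word) pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]            # (docs, topics, words)
        joint /= np.maximum(joint.sum(axis=1, keepdims=True), 1e-12)
        weighted = counts[:, None, :] * joint                     # n(d, w) * P(z|d, w)
        # M-step: re-estimate both conditional distributions.
        p_w_z = weighted.sum(axis=0)
        p_w_z /= np.maximum(p_w_z.sum(axis=1, keepdims=True), 1e-12)
        p_z_d = weighted.sum(axis=2)
        p_z_d /= np.maximum(p_z_d.sum(axis=1, keepdims=True), 1e-12)
    return p_w_z, p_z_d

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy counts over a pooled visual-plus-textual vocabulary of 500 words.
    counts = rng.integers(0, 5, size=(100, 500)).astype(float)
    p_w_z, p_z_d = plsa(counts)
    print(p_z_d.shape)   # (100, 16): one topic mixture per image
&lt;/pre&gt;
&lt;br /&gt;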
== Extended Abstract ==&lt;br /&gt;
&lt;br /&gt;
== Bibtex == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Used References == &lt;br /&gt;
[1] T.-T. Pham, N. E. Maillot, J.-H. Lim, and J.-P. Chevallet, &amp;quot;Latent semantic fusion model for image retrieval and&lt;br /&gt;
annotation,&amp;quot; in Proc. of the 16th ACM Conference on Information and Knowledge Management (CIKM), Lisbon,&lt;br /&gt;
Portugal, 2007, pp. 439-444.&lt;br /&gt;
&lt;br /&gt;
[2] R. Lienhart, S. Romberg, and E. Hörster, &amp;quot;Multilayer pLSA for multimodal image retrieval,&amp;quot; in Proc. of the ACM&lt;br /&gt;
International Conference on Image and Video Retrieval (CIVR), Island of Santorini, Greece, 2009, pp. 1-8.&lt;br /&gt;
&lt;br /&gt;
[3] J. Sivic, B. Russell, A. Efros, A. Zisserman, and W. Freeman, &amp;quot;Discovering object categories in image collections,&amp;quot; in Proc.&lt;br /&gt;
of the IEEE International Conference on Computer Vision (ICCV), Beijing, P.R. China, 2005, pp. 370-377.&lt;br /&gt;
&lt;br /&gt;
[4] S. Lazebnik, C. Schmid, and J. Ponce, &amp;quot;Beyond bags of features: Spatial pyramid matching for recognizing natural scene&lt;br /&gt;
categories,&amp;quot; in Proc. of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), New&lt;br /&gt;
York, NY, USA, 2006, pp. 2169-2178.&lt;br /&gt;
&lt;br /&gt;
[5] H. Müller, J. Kalpathy-Cramer, I. Eggel, S. Bedrick, S. Radhouani, B. Bakke, C. E. Kahn Jr., and W. Hersh, &amp;quot;Overview of the&lt;br /&gt;
CLEF 2009 medical image retrieval track,&amp;quot; in 10th Workshop of the Cross-Language Evaluation Forum, 2009, pp. 1-11.&lt;br /&gt;
&lt;br /&gt;
[6] &amp;quot;trec_eval: A standard tool used by the TREC community for evaluating an ad hoc retrieval run,&amp;quot; in&lt;br /&gt;
http://trec.nist.gov/trec_eval/. Washington DC, 2010.&lt;br /&gt;
&lt;br /&gt;
== Links == &lt;br /&gt;
&lt;br /&gt;
=== Full Text === &lt;br /&gt;
http://publications.hevs.ch/index.php/attachments/single/273&lt;br /&gt;
&lt;br /&gt;
[[internal file]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Other links ===&lt;/div&gt;</summary>
		<author><name>Gubachelier</name></author>	</entry>

	</feed>