<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="de">
		<id>http://de.evo-art.org/index.php?action=history&amp;feed=atom&amp;title=Evaluating_Search_Engine_Relevance_with_Click-Based_Metrics</id>
		<title>Evaluating Search Engine Relevance with Click-Based Metrics - Revision history</title>
		<link rel="self" type="application/atom+xml" href="http://de.evo-art.org/index.php?action=history&amp;feed=atom&amp;title=Evaluating_Search_Engine_Relevance_with_Click-Based_Metrics"/>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Evaluating_Search_Engine_Relevance_with_Click-Based_Metrics&amp;action=history"/>
		<updated>2026-05-17T19:12:26Z</updated>
		<subtitle>Revision history for this page in de_evolutionary_art_org</subtitle>
		<generator>MediaWiki 1.27.4</generator>

	<entry>
		<id>http://de.evo-art.org/index.php?title=Evaluating_Search_Engine_Relevance_with_Click-Based_Metrics&amp;diff=32423&amp;oldid=prev</id>
		<title>Gubachelier on 30 November 2015 at 11:08</title>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Evaluating_Search_Engine_Relevance_with_Click-Based_Metrics&amp;diff=32423&amp;oldid=prev"/>
				<updated>2015-11-30T11:08:08Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&#039;diff-marker&#039; /&gt;
				&lt;col class=&#039;diff-content&#039; /&gt;
				&lt;col class=&#039;diff-marker&#039; /&gt;
				&lt;col class=&#039;diff-content&#039; /&gt;
				&lt;tr style=&#039;vertical-align: top;&#039; lang=&#039;de&#039;&gt;
				&lt;td colspan=&#039;2&#039; style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&#039;2&#039; style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 11:08, 30 November 2015&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l2&quot; &gt;Line 2:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 2:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Reference ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Reference ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Filip Radlinski, Madhu Kurup, Thorsten Joachims : Evaluating Search Engine Relevance with Click-Based Metrics. In: Fürnkranz, J. and Hüllermeier, E.: Preference Learning, 2011, 337-361. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Filip Radlinski, Madhu Kurup, Thorsten Joachims : Evaluating Search Engine Relevance with Click-Based Metrics. In: Fürnkranz, J. and Hüllermeier, E.: &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;[[&lt;/ins&gt;Preference Learning&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;]]&lt;/ins&gt;, 2011, 337-361. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== DOI ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== DOI ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Gubachelier</name></author>	</entry>

	<entry>
		<id>http://de.evo-art.org/index.php?title=Evaluating_Search_Engine_Relevance_with_Click-Based_Metrics&amp;diff=32409&amp;oldid=prev</id>
		<title>Gubachelier: Created page with: „  == Reference == Filip Radlinski, Madhu Kurup, Thorsten Joachims : Evaluating Search Engine Relevance with Click-Based Metrics. In: Fürnkranz, J. and Hüller…“</title>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Evaluating_Search_Engine_Relevance_with_Click-Based_Metrics&amp;diff=32409&amp;oldid=prev"/>
				<updated>2015-11-29T20:37:57Z</updated>
		
		<summary type="html">&lt;p&gt;Die Seite wurde neu angelegt: „  == Reference == Filip Radlinski, Madhu Kurup, Thorsten Joachims : Evaluating Search Engine Relevance with Click-Based Metrics. In: Fürnkranz, J. and Hüller…“&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Neue Seite&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
Filip Radlinski, Madhu Kurup, Thorsten Joachims : Evaluating Search Engine Relevance with Click-Based Metrics. In: Fürnkranz, J. and Hüllermeier, E.: Preference Learning, 2011, 337-361. &lt;br /&gt;
&lt;br /&gt;
== DOI ==&lt;br /&gt;
http://dx.doi.org/10.1007/978-3-642-14125-6_16 &lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
Automatically judging the quality of retrieval functions based on observable user behavior holds promise for making retrieval evaluation faster, cheaper, and more user centered. However, the relationship between observable user behavior and retrieval quality is not yet fully understood. In this chapter, we expand upon Radlinski et al. (How does clickthrough data reflect retrieval quality, In Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), 43–52, 2008), presenting a sequence of studies investigating this relationship for an operational search engine on the arXiv.org e-print archive. We find that none of the eight absolute usage metrics we explore (including the number of clicks observed, the frequency with which users reformulate their queries, and how often result sets are abandoned) reliably reflect retrieval quality for the sample sizes we consider. However, we find that paired experiment designs adapted from sensory analysis produce accurate and reliable statements about the relative quality of two retrieval functions. In particular, we investigate two paired comparison tests that analyze clickthrough data from an interleaved presentation of ranking pairs, and find that both give accurate and consistent results. We conclude that both paired comparison tests give substantially more accurate and sensitive evaluation results than the absolute usage metrics in our domain.&lt;br /&gt;
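The interleaved paired-comparison evaluation the abstract describes can be illustrated with a team-draft-style interleaving sketch: two rankings contribute results to one combined list, clicks are credited to the ranking that contributed the clicked result, and the ranking with more credited clicks wins the comparison for that query. This is a minimal illustrative sketch under my own assumptions, not the chapter's implementation; all names (team_draft_interleave, paired_preference, depth) are hypothetical.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, depth=10):
    # Each round, the two "teams" (one per input ranking), in a randomly
    # chosen order, pick their highest-ranked result not yet in the combined
    # list, and remember which team contributed it.
    interleaved, team_a, team_b = [], [], []
    pool = set(ranking_a) | set(ranking_b)
    while len(interleaved) != depth and len(interleaved) != len(pool):
        order = [(ranking_a, team_a), (ranking_b, team_b)]
        if random.random() >= 0.5:
            order.reverse()
        for ranking, team in order:
            if len(interleaved) == depth:
                break
            pick = next((d for d in ranking if d not in interleaved), None)
            if pick is not None:
                interleaved.append(pick)
                team.append(pick)
    return interleaved, team_a, team_b

def paired_preference(clicked, team_a, team_b):
    # Credit each click to the team that contributed the clicked result; the
    # ranking whose team collects more clicks wins this query's comparison.
    wins_a = sum(1 for d in clicked if d in team_a)
    wins_b = sum(1 for d in clicked if d in team_b)
    if wins_a == wins_b:
        return "tie"
    return "A" if wins_a > wins_b else "B"
```

In an actual experiment these per-query outcomes would be aggregated over many queries, and a significance test applied to decide whether one retrieval function is reliably preferred.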
&lt;br /&gt;
== Extended Abstract ==&lt;br /&gt;
&lt;br /&gt;
== Bibtex == &lt;br /&gt;
 @incollection{&lt;br /&gt;
 year={2011},&lt;br /&gt;
 isbn={978-3-642-14124-9},&lt;br /&gt;
 booktitle={Preference Learning},&lt;br /&gt;
 editor={Fürnkranz, Johannes and Hüllermeier, Eyke},&lt;br /&gt;
 doi={10.1007/978-3-642-14125-6_16},&lt;br /&gt;
 title={Evaluating Search Engine Relevance with Click-Based Metrics},&lt;br /&gt;
 url={http://dx.doi.org/10.1007/978-3-642-14125-6_16},&lt;br /&gt;
 publisher={Springer Berlin Heidelberg},&lt;br /&gt;
 author={Radlinski, Filip and Kurup, Madhu and Joachims, Thorsten},&lt;br /&gt;
 pages={337-361},&lt;br /&gt;
 language={English}&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
== Used References ==&lt;br /&gt;
1. E. Agichtein, E. Brill, S. Dumais, R. Ragno, Learning user interaction models for predicting web search result preferences, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2006), pp. 3–10&lt;br /&gt;
    &lt;br /&gt;
2. R. Agrawal, A. Halverson, K. Kenthapadi, N. Mishra, P. Tsaparas, Generating labels from clicks, in Proceedings of ACM International Conference on Web Search and Data Mining (WSDM) (2009), pp. 172–181&lt;br /&gt;
    &lt;br /&gt;
3. K. Ali, C. Chang, On the relationship between click-rate and relevance for search engines, in Proceedings of Data Mining and Information Engineering (2006)&lt;br /&gt;
    &lt;br /&gt;
4. J.A. Aslam, V. Pavlu, E. Yilmaz, A sampling technique for efficiently estimating measures of query retrieval performance using incomplete judgments, in ICML Workshop on Learning with Partially Classified Training Data (2005)&lt;br /&gt;
    &lt;br /&gt;
5. J. Boyan, D. Freitag, T. Joachims, A machine learning architecture for optimizing web search engines, in AAAI Workshop on Internet Based Information Systems (1996)&lt;br /&gt;
    &lt;br /&gt;
6. C. Buckley, E.M. Voorhees, Retrieval evaluation with incomplete information, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2004), pp. 25–32&lt;br /&gt;
    &lt;br /&gt;
7. B. Carterette, J. Allan, R. Sitaraman, Minimal test collections for retrieval evaluation, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2006), pp. 268–275&lt;br /&gt;
    &lt;br /&gt;
8. B. Carterette, P.N. Bennett, D.M. Chickering, S.T. Dumais, Here or there: Preference judgements for relevance, in Proceedings of the European Conference on Information Retrieval (ECIR) (2008), pp. 16–27&lt;br /&gt;
    &lt;br /&gt;
9. B. Carterette, R. Jones, Evaluating search engines by modeling the relationship between relevance and clicks, in Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS) (2007), pp. 217–224&lt;br /&gt;
    &lt;br /&gt;
10. K. Crammer, Y. Singer, Pranking with ranking, in Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS) (2001), pp. 641–647&lt;br /&gt;
    &lt;br /&gt;
11. G. Dupret, V. Murdock, B. Piwowarski, Web search engine evaluation using clickthrough data and a user model, in WWW Workshop on Query Log Analysis (2007)&lt;br /&gt;
    &lt;br /&gt;
12. S. Fox, K. Karnawat, M. Mydland, S. Dumais, T. White, Evaluating implicit measures to improve web search, ACM Trans. Inf. Syst. (TOIS) 23(2), 147–168 (2005)&lt;br /&gt;
    &lt;br /&gt;
13. S.B. Huffman, M. Hochster, How well does result relevance predict session satisfaction? in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2007), pp. 567–573&lt;br /&gt;
    &lt;br /&gt;
14. T. Joachims, Evaluating retrieval performance using clickthrough data, in Text Mining, ed. by J. Franke, G. Nakhaeizadeh, I. Renz (Physica Verlag, 2003)&lt;br /&gt;
    &lt;br /&gt;
15. T. Joachims, Optimizing search engines using clickthrough data, in Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (KDD) (2002), pp. 132–142&lt;br /&gt;
    &lt;br /&gt;
16. T. Joachims, L. Granka, B. Pan, H. Hembrooke, F. Radlinski, G. Gay, Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search. ACM Trans. Inf. Syst. (TOIS) 25(2) (2007), Article 7&lt;br /&gt;
    &lt;br /&gt;
17. D. Kelly, J. Teevan, Implicit feedback for inferring user preference: A bibliography. ACM SIGIR Forum 37(2), 18–28 (2003) http://dx.doi.org/10.1145/959258.959260&lt;br /&gt;
    &lt;br /&gt;
18. J. Kozielecki, Psychological Decision Theory (Kluwer, 1981)&lt;br /&gt;
    &lt;br /&gt;
19. D. Laming, Sensory Analysis (Academic, 1986)&lt;br /&gt;
    &lt;br /&gt;
20. Y. Liu, Y. Fu, M. Zhang, S. Ma, L. Ru, Automatic search engine performance evaluation with click-through data analysis, in Proceedings of the International World Wide Web Conference (WWW) (2007)&lt;br /&gt;
    &lt;br /&gt;
21. C.D. Manning, P. Raghavan, H. Schuetze, Introduction to Information Retrieval (Cambridge University Press, 2008)&lt;br /&gt;
    &lt;br /&gt;
22. F. Radlinski, T. Joachims, Query chains: Learning to rank from implicit feedback, in Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (KDD) (2005)&lt;br /&gt;
    &lt;br /&gt;
23. F. Radlinski, M. Kurup, T. Joachims, How does clickthrough data reflect retrieval quality, in Proceedings of the ACM Conference on Information and Knowledge Management (CIKM) (2008), pp. 43–52&lt;br /&gt;
    &lt;br /&gt;
24. S. Rajaram, A. Garg, Z.S. Zhou, T.S. Huang, Classification approach towards ranking and sorting problems, in Lecture Notes in Artificial Intelligence (2003), pp. 301–312&lt;br /&gt;
    &lt;br /&gt;
25. J. Reid, A task-oriented non-interactive evaluation methodology for information retrieval systems. Inf. Retr. 2, 115–129 (2000) http://dx.doi.org/10.1023/A%3A1009906420620&lt;br /&gt;
    &lt;br /&gt;
26. I. Soboroff, C. Nicholas, P. Cahan, Ranking retrieval systems without relevance judgments, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2001), pp. 66–73&lt;br /&gt;
    &lt;br /&gt;
27. A. Spink, D. Wolfram, M. Bernard, J. Jansen, T. Saracevic, Searching the Web: The public and their queries. J. Am. Soc. Inf. Sci. Technol. 52(3), 226–234 (2001) http://dx.doi.org/10.1002/1097-4571(2000)9999%3A9999%3C%3A%3AAID-ASI1591%3E3.0.CO%3B2-R&lt;br /&gt;
    &lt;br /&gt;
28. A. Turpin, F. Scholer, User performance versus precision measures for simple search tasks, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2006), pp. 11–18&lt;br /&gt;
    &lt;br /&gt;
29. E.M. Voorhees, D.K. Harman (eds.), TREC: Experiment and Evaluation in Information Retrieval (MIT, 2005)&lt;br /&gt;
    &lt;br /&gt;
30. Y. Yue, T. Joachims, Interactively optimizing information retrieval systems as a dueling bandits problem, in NIPS 2008 Workshop on Beyond Search: Computational Intelligence for the Web (2008)&lt;br /&gt;
    &lt;br /&gt;
31. Y. Yue, T. Joachims, Interactively optimizing information retrieval systems as a dueling bandits problem, in Proceedings of the International Conference on Machine Learning (ICML) (2009)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
=== Full Text === &lt;br /&gt;
https://github.com/tolleiv/thesis/blob/master/Research/Papers/__Radlinski11%20-%20Evaluating%20Search%20Engine%20Relevance%20with%20Click-Based%20Metrics.pdf&lt;br /&gt;
&lt;br /&gt;
[[intern file]]&lt;br /&gt;
&lt;br /&gt;
=== Other Links ===&lt;/div&gt;</summary>
		<author><name>Gubachelier</name></author>	</entry>

	</feed>