Ranking Cases with Classification Rules
Reference
Jianping Zhang, Jerzy W. Bala, Ali Hadjarian, Brent Han: Ranking Cases with Classification Rules. In: Fürnkranz, J. and Hüllermeier, E.: Preference Learning, 2011, 155-177.
DOI
http://dx.doi.org/10.1007/978-3-642-14125-6_8
Abstract
Many real-world machine learning applications require a ranking of cases, in addition to their classification. While classification rules are not a good representation for ranking, the human comprehensibility aspect of rules makes them an attractive option for many ranking problems where such model transparency is desired. There have been numerous studies on ranking with decision trees, but not many on ranking with decision rules. Although rules are similar to decision trees in many respects, there are important differences between them when used for ranking. In this chapter, we propose a framework for ranking with rules. The framework extends and substantially improves on the reported methods for ranking with decision trees. It introduces three types of rule-based ranking methods: post analysis of rules, hybrid methods, and multiple rule set analysis. We also study the impact of rule learning bias on the ranking performance. While traditional measures used for ranking performance evaluation tend to focus on the entire rank ordered list, the aim of many ranking applications is to optimize the performance on only a small portion of the top ranked cases. Accordingly, we propose a simple method for measuring the performance of a classification or ranking algorithm that focuses on these top ranked cases. Empirical studies have been conducted to evaluate some of the proposed methods.
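To illustrate the kind of top-focused evaluation the abstract describes, the following minimal sketch ranks cases by the training precision of the first rule that covers them and then measures precision among only the top-k ranked cases. The scoring scheme, function names, and toy data are illustrative assumptions for exposition, not the chapter's exact method.

# Illustrative sketch (assumed scoring scheme, not the chapter's exact method):
# rank cases by the precision of the first covering rule, then evaluate only
# the top-k portion of the ranked list.

def rule_score(case, rules):
    """Score a case with the precision of the first rule that covers it,
    or 0.0 if no rule fires (hypothetical tie-handling)."""
    for condition, precision in rules:
        if condition(case):
            return precision
    return 0.0

def precision_at_k(cases, labels, rules, k):
    """Fraction of positives among the k highest-scored cases."""
    ranked = sorted(zip(cases, labels),
                    key=lambda cl: rule_score(cl[0], rules),
                    reverse=True)
    top = ranked[:k]
    return sum(1 for _, label in top if label == 1) / max(len(top), 1)

# Toy usage: two rules with assumed training precisions.
rules = [
    (lambda c: c["income"] > 50000 and c["age"] < 40, 0.90),
    (lambda c: c["income"] > 30000, 0.65),
]
cases = [{"income": 60000, "age": 35},
         {"income": 35000, "age": 50},
         {"income": 20000, "age": 25}]
labels = [1, 0, 0]
print(precision_at_k(cases, labels, rules, k=2))  # precision among the top 2 ranked cases

Focusing the metric on the top of the list reflects the applications mentioned in the abstract, where only a small number of highly ranked cases are ever acted upon.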
Extended Abstract
Bibtex
@incollection{
  year={2011},
  isbn={978-3-642-14124-9},
  booktitle={Preference Learning},
  editor={Fürnkranz, Johannes and Hüllermeier, Eyke},
  doi={10.1007/978-3-642-14125-6_8},
  title={Ranking Cases with Classification Rules},
  url={http://dx.doi.org/10.1007/978-3-642-14125-6_8, http://de.evo-art.org/index.php?title=Ranking_Cases_with_Classification_Rules},
  publisher={Springer Berlin Heidelberg},
  author={Zhang, Jianping and Bala, Jerzy W. and Hadjarian, Ali and Han, Brent},
  pages={155-177},
  language={English}
}
Used References
1. I. Alvarez, S. Bernard, Ranking cases with decision trees: A geometric method that preserves intelligibility, in Proceedings of the 19th International Joint Conference on AI (IJCAI-05) (2005), pp. 635–640
2. L.W. Barsalou, Ideals, central tendency, and frequency of instantiation as determinants of graded structure in categories. J. Exp. Psychol. Learn. Mem. Cogn. 11, 629–654 (1985) http://dx.doi.org/10.1037/0278-7393.11.1-4.629
3. E. Bauer, R. Kohavi, An empirical comparison of voting classification algorithms: bagging, boosting and variants. Mach. Learn. 36, 105–142 (1999) http://dx.doi.org/10.1023/A%3A1007515423169
4. B.G. Buchanan, E.H. Shortliffe (eds.), Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Addison Wesley, Reading, MA, 1984)
5. W. Cohen, Text categorization and relational learning, in Proceedings of the 12th International Conference on Machine Learning (ICML-95) (1995), pp. 124–132
6. C. Cortes, M. Mohri, AUC Optimization vs. Error Rate Minimization, in Advances in Neural Information Processing Systems (NIPS-03) (MIT, 2003)
7. C. Ferri, P.A. Flach, J. Hernandez-Orallo, Improving the AUC of probabilistic estimation trees, in Proceedings of the 14th European Conference on Machine Learning (ECML-03) (Springer, 2003)
8. R. Kohavi, Scaling up the accuracy of Naive-Bayes classifiers: a decision-tree hybrid, in Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-96) (1996)
9. C.X. Ling, R.J. Yan, Decision tree with better ranking, in Proceedings of the 20th International Conference on Machine Learning (ICML-03) (Morgan Kaufmann, 2003)
10. F.J. Provost, P. Domingos, Tree induction for probability-based ranking. Mach. Learn. 52(3), 199–215 (2003) http://dx.doi.org/10.1023/A%3A1024099825458
11. P. Utgoff, Perceptron trees – a case study in hybrid concept representation, in Proceedings of The Seventh National Conference on Artificial Intelligence (AAAI-88) (1988), pp. 601–606
12. D.A. Waterman, A Guide to Expert Systems (Addison-Wesley Publishing, Reading, Mass, USA, 1985)
13. L.X. Wang, J.M. Mendel, Generating fuzzy rules by learning from examples. IEEE Trans. Syst. Man Cybern. 22(6), 1414–1427 (1992) http://dx.doi.org/10.1109/21.199466
14. J. Zhang, R.S. Michalski, An integration of rule induction and exemplar-based learning for graded concepts. Mach. Learn. 21(3), 235–267 (1995)