No free lunch theorems for search

== Reference ==
Wolpert, D.H., Macready, W.G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1): 67–82.

Wolpert, D.H., Macready, W.G. (1995). No free lunch theorems for search. Technical Report SFI-TR-95-02-010, Santa Fe Institute.
  
 
== DOI ==
http://dx.doi.org/10.1109/4235.585893
 
  
 
== Abstract ==
Abstract of the 1997 IEEE version:

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
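In the notation of the 1997 paper, where d^y_m denotes the sequence of m cost values that an algorithm a has sampled from a cost function f, the first NFL theorem states that the summed probability of any such sequence is identical for all algorithms:

\[
\sum_f P(d^y_m \mid f, m, a_1) \;=\; \sum_f P(d^y_m \mid f, m, a_2) \qquad \text{for any pair of algorithms } a_1, a_2.
\]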
Abstract of the 1995 Santa Fe Institute report:

We show that all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions. In particular, if algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other functions where B outperforms A. Starting from this we analyze a number of the other a priori characteristics of the search problem, like its geometry and its information-theoretic aspects. This analysis allows us to derive mathematical benchmarks for assessing a particular search algorithm's performance. We also investigate minimax aspects of the search problem, the validity of using characteristics of a partial search over a cost function to predict future behavior of the search algorithm on that cost function, and time-varying cost functions. We conclude with some discussion of the justifiability of biologically-inspired search methods.
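The averaging claim can be checked exhaustively for a tiny search space. The sketch below is not from the paper: the space X = {0, 1, 2}, the value set Y = {0, 1}, and the two toy algorithms (fixed_order, adaptive) are assumptions chosen for illustration. It enumerates all |Y|^|X| = 8 cost functions and confirms that both algorithms achieve the same average best-found cost after any number of distinct evaluations:

```python
# A minimal exhaustive check of the NFL result, assuming:
#   - search space X = {0, 1, 2}, cost values Y = {0, 1}
#   - deterministic algorithms that never revisit a point
#   - performance = best (lowest) cost seen after m evaluations
# The algorithm names and rules below are illustrative, not from the paper.
import itertools

X = range(3)
Y = (0, 1)

def fixed_order(history):
    """Scan the points in index order, skipping visited ones."""
    visited = {x for x, _ in history}
    return next(x for x in X if x not in visited)

def adaptive(history):
    """Toy adaptive rule: the next point depends on the last value seen."""
    visited = {x for x, _ in history}
    unvisited = [x for x in X if x not in visited]
    if history and history[-1][1] == 0:
        return max(unvisited)   # jump to the far end after a good value
    return min(unvisited)

def best_after(algorithm, f, m):
    """Run `algorithm` on cost function f (a tuple indexed by X) for m steps."""
    history = []
    for _ in range(m):
        x = algorithm(history)
        history.append((x, f[x]))
    return min(y for _, y in history)

# Average each algorithm's best-found cost over all |Y|**|X| = 8 functions.
for m in (1, 2, 3):
    for algorithm in (fixed_order, adaptive):
        scores = [best_after(algorithm, f, m)
                  for f in itertools.product(Y, repeat=len(X))]
        print(f"m={m}  {algorithm.__name__:>11}: {sum(scores) / len(scores)}")
```

Both algorithms print identical averages (0.5, 0.25, and 0.125 for m = 1, 2, 3). The intuition: as long as an algorithm never revisits a point, the values it observes are, across all cost functions, uniform samples without replacement, so no selection rule can help on average.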
  
 
== Extended Abstract ==

== Bibtex ==
  
 
== Used References ==
L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence Through Simulated Evolution, Wiley, 1966.

M. R. Garey and D. S. Johnson, Computers and Intractability, Freeman, 1979.

J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975; MIT Press, 1993.

H.-P. Schwefel, Evolution and Optimum Seeking, Wiley, 1995.

E. L. Lawler and D. E. Wood, "Branch and bound methods: A survey", Operations Research, vol. 14, no. 4, pp. 699–719, 1966. http://dx.doi.org/10.1287/opre.14.4.699

J. Pearl, Heuristics: Intelligent Search Strategies for Computer Problem Solving, Addison-Wesley, 1984.

S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing", Science, vol. 220, pp. 671–680, 1983. http://dx.doi.org/10.1126/science.220.4598.671

W. G. Macready and D. H. Wolpert, "What makes an optimization problem hard?", Complexity, vol. 5, pp. 40–46, 1996.

D. H. Wolpert and W. G. Macready, No Free Lunch Theorems for Search, Technical Report SFI-TR-95-02-010, Santa Fe Institute, 1995.

L. Ingber, Adaptive Simulated Annealing, software package documentation, ftp.caltech.edu:/pub/ingber/asa.Z.

D. Yuret and M. de la Maza, "Dynamic hill-climbing: Overcoming the limitations of optimization techniques", in The Second Turkish Symposium on Artificial Intelligence and Neural Networks, pp. 208–212, 1993.

F. Glover, "Tabu search—Part I", ORSA Journal on Computing, vol. 1, no. 3, pp. 190–206, 1989. http://dx.doi.org/10.1287/ijoc.1.3.190

F. Glover, "Tabu search—Part II", ORSA Journal on Computing, vol. 2, pp. 4–32, 1990.

R. Kinderman and J. L. Snell, Markov Random Fields and Their Applications, American Mathematical Society, 1980.

C. E. M. Strauss, D. H. Wolpert, and D. R. Wolf, "Alpha, evidence, and the entropic prior", in Maximum Entropy and Bayesian Methods, ed. Ali Mohammed-Djafari, pp. 113–120, 1992.

T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, 1991. http://dx.doi.org/10.1002/0471200611

D. H. Wolpert, "The lack of a priori distinctions between learning algorithms", Neural Computation, vol. 8, pp. 1341–1390, 1996. http://dx.doi.org/10.1162/neco.1996.8.7.1341

D. H. Wolpert, Off-training set error and a priori distinctions between learning algorithms, Technical Report SFI-TR-95-01-003, Santa Fe Institute, 1995.

D. H. Wolpert, "On bias plus variance", Neural Computation, vol. 9, pp. 1211–1243, 1997.

D. H. Wolpert, On Overfitting Avoidance as Bias, Technical Report SFI-TR-92-03-5001, Santa Fe Institute, 1992.

D. Griffeath, "Introduction to random fields", in J. G. Kemeny, J. L. Snell, and A. W. Knapp, Denumerable Markov Chains, Springer-Verlag, 1976.

G. Reinelt, The Traveling Salesman: Computational Solutions for TSP Applications, Springer-Verlag, Berlin Heidelberg, 1994.

P. F. Stadler, Europhysics Letters, vol. 20, pp. 479–482, 1992.
 
  
  
 
== Links ==

=== Full Text ===
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.5447&rep=rep1&type=pdf

[[intern file]]

=== Other Links ===
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.6926

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.5447&rank=1

http://www.cs.ubc.ca/~hutter/earg/papers07/00585893.pdf No Free Lunch Theorems for Optimization
 
