No Free Lunch Theorems for Optimization



Wolpert, D.H., Macready, W.G. (1997). No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1(1): 67–82.



A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
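The central NFL claim can be checked exhaustively on a toy search space. The sketch below (not from the paper; the problem sizes and the two sweep orders are illustrative choices) enumerates every function from a three-point domain to {0, 1} and shows that two different non-repeating search algorithms achieve identical performance when averaged over all functions:

```python
from itertools import product

X = [0, 1, 2]                                      # tiny search space
functions = list(product([0, 1], repeat=len(X)))   # all 8 maps X -> {0, 1}

def best_after(order, f, m):
    """Best objective value found after m evaluations, visiting points in `order`."""
    return max(f[x] for x in order[:m])

# Two fixed non-repeating algorithms: ascending vs. descending sweep.
alg_a = [0, 1, 2]
alg_b = [2, 1, 0]

for m in (1, 2, 3):
    avg_a = sum(best_after(alg_a, f, m) for f in functions) / len(functions)
    avg_b = sum(best_after(alg_b, f, m) for f in functions) / len(functions)
    assert avg_a == avg_b  # equal average performance over all problems (NFL)
```

Any gain alg_a shows on some functions (e.g. those whose only maximum sits at x = 0) is exactly offset by its loss on others, so the uniform average over all problems is the same for both algorithms.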

Extended Abstract


Used References

L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence Through Simulated Evolution. New York: Wiley, 1966.

J. H. Holland, Adaptation in Natural and Artificial Systems. Cambridge, MA: MIT Press, 1992.

H.-P. Schwefel, Evolution and Optimum Seeking. New York: Wiley, 1995.

S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671–680, 1983.

W. G. Macready and D. H. Wolpert, "What makes an optimization problem hard?," Complexity, vol. 1, no. 5, pp. 40–46, 1996.

D. H. Wolpert and W. G. Macready, "No free lunch theorems for search," Santa Fe Institute, Santa Fe, NM, Tech. Rep., 1995.

F. Glover, "Tabu search I," ORSA J. Comput., vol. 1, pp. 190–206, 1989.

F. Glover, "Tabu search II," ORSA J. Comput., vol. 2, pp. 4–32, 1990.

E. L. Lawler and D. E. Wood, "Branch and bound methods: A survey," Oper. Res., vol. 14, pp. 699–719, 1966.

R. Kindermann and J. L. Snell, Markov Random Fields and Their Applications. Providence, RI: Amer. Math. Soc., 1980.

D. H. Wolpert, "The lack of a priori distinctions between learning algorithms," Neural Computation, vol. 8, pp. 1341–1390, 1996.

D. H. Wolpert, "On bias plus variance," Neural Computation, vol. 9, pp. 1211–1243, 1997.

D. Griffeath, "Introduction to random fields," in J. G. Kemeny, J. L. Snell, and A. W. Knapp, Denumerable Markov Chains. New York: Springer-Verlag, 1976.

C. E. M. Strauss, D. H. Wolpert, and D. R. Wolf, "Alpha, evidence, and the entropic prior," in Maximum Entropy and Bayesian Methods. Reading, MA: Addison-Wesley, 1992, pp. 113–120.

T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.

