Simple algorithmic principles of discovery, subjective beauty, selective attention, curiosity and creativity



Reference

Schmidhuber, J.: Simple algorithmic principles of discovery, subjective beauty, selective attention, curiosity and creativity. In: Corruble, V., Takeda, M., Suzuki, E. (eds.) DS 2007. LNCS (LNAI), vol. 4755, pp. 26–38. Springer, Heidelberg (2007)

DOI

http://link.springer.com/chapter/10.1007%2F978-3-540-75488-6_3

Abstract

I postulate that human or other intelligent agents function or should function as follows. They store all sensory observations as they come—the data is ‘holy.’ At any time, given some agent’s current coding capabilities, part of the data is compressible by a short and hopefully fast program / description / explanation / world model. In the agent’s subjective eyes, such data is more regular and more beautiful than other data. It is well known that knowledge of regularity and repeatability may improve the agent’s ability to plan actions leading to external rewards. In the absence of such rewards, however, known beauty is boring. Then interestingness becomes the first derivative of subjective beauty: as the learning agent improves its compression algorithm, formerly apparently random data parts become subjectively more regular and beautiful. Such progress in data compression is measured and maximized by the curiosity drive: create action sequences that extend the observation history and yield previously unknown / unpredictable but quickly learnable algorithmic regularity. I discuss how all of the above can be naturally implemented on computers, through an extension of passive unsupervised learning to the case of active data selection: we reward a general reinforcement learner (with access to the adaptive compressor) for actions that improve the subjective compressibility of the growing data. An unusually large compression breakthrough deserves the name discovery. The creativity of artists, dancers, musicians, and pure mathematicians can be viewed as a by-product of this principle. Several qualitative examples support this hypothesis.
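
The reward-for-compression-progress idea in the abstract can be illustrated with a small sketch. The following Python code is not from the paper: a toy order-0 character model stands in for the adaptive compressor, and all names (CompressionProgressReward, code_length, observe) are illustrative. It pays the agent in proportion to how many bits its model's learning step saves on the stored history, so regular data soon stops yielding reward while novel but learnable structure produces a fresh burst of intrinsic reward.

import math
from collections import Counter

class CompressionProgressReward:
    def __init__(self):
        self.history = ""          # all observations are stored ("the data is holy")
        self.counts = Counter()    # the adaptive model: symbol frequencies seen so far

    def code_length(self, data: str) -> float:
        # Shannon code length of `data` under the current model, in bits
        # (Laplace smoothing keeps unseen symbols finite).
        total = sum(self.counts.values())
        alphabet = max(len(self.counts), 1)
        return sum(-math.log2((self.counts[c] + 1) / (total + alphabet)) for c in data)

    def observe(self, observation: str) -> float:
        self.history += observation
        cost_before = self.code_length(self.history)   # cost under the old model
        self.counts.update(observation)                # the compressor "learns"
        cost_after = self.code_length(self.history)    # cost under the improved model
        # Intrinsic reward = compression progress: bits saved on the whole
        # history by the learning step. Already-regular data yields little.
        return max(0.0, cost_before - cost_after)

# Illustrative use: repeated regular input quickly becomes boring (near-zero reward),
# while a new, learnable pattern yields positive curiosity reward.
agent = CompressionProgressReward()
for obs in ["abab", "abab", "abab", "xyxyxy", "xyxyxy"]:
    print(round(agent.observe(obs), 2))

In the paper's setting this intrinsic reward would be fed, together with any external reward, to a general reinforcement learner that selects actions extending the observation history.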

Extended Abstract

Bibtex

Used References

Balter, M.: Seeking the key to music. Science 306, 1120–1122 (2004) http://dx.doi.org/10.1126/science.306.5699.1120

Barlow, H.B., Kaushal, T.P., Mitchison, G.J.: Finding minimum entropy codes. Neural Computation 1(3), 412–423 (1989)

Huffman, D.A.: A method for the construction of minimum-redundancy codes. In: Proceedings of the IRE, vol. 40, pp. 1098–1101 (1952)

Hutter, M.: Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Heidelberg (2004) (On J. Schmidhuber’s SNF grant 20-61847)

Hutter, M.: On universal prediction and Bayesian confirmation. Theoretical Computer Science (2007)

Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: a survey. Journal of AI Research 4, 237–285 (1996)

Kolmogorov, A.N.: Three approaches to the quantitative definition of information. Problems of Information Transmission 1, 1–11 (1965)

Levin, L.A.: Universal sequential search problems. Problems of Information Transmission 9(3), 265–266 (1973)

Li, M., Vitányi, P.M.B.: An Introduction to Kolmogorov Complexity and its Applications, 2nd edn. Springer, Heidelberg (1997)

Pinker, S.: How the Mind Works. Norton, New York (1997)

Schmidhuber, J.: Adaptive curiosity and adaptive confidence. Technical Report FKI-149-91, Institut für Informatik, Technische Universität München (April 1991) See also [12]

Schmidhuber, J.: Curious model-building control systems. In: Proceedings of the International Joint Conference on Neural Networks, vol. 2, pp. 1458–1463. IEEE, Los Alamitos (1991) http://dx.doi.org/10.1109/IJCNN.1991.170605

Schmidhuber, J.: Learning complex, extended sequences using the principle of history compression. Neural Computation 4(2), 234–242 (1992) http://dx.doi.org/10.1162/neco.1992.4.2.234

Schmidhuber, J.: Learning factorial codes by predictability minimization. Neural Computation 4(6), 863–879 (1992)

Schmidhuber, J.: Low-complexity art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology 30(2), 97–103 (1997)

Schmidhuber, J.: What’s interesting? Technical Report IDSIA-35-97, IDSIA (1997), ftp://ftp.idsia.ch/pub/juergen/interest.ps.gz (extended abstract in Proc. Snowbird 1998, Utah (1998), see also [16])

Schmidhuber, J.: Facial beauty and fractal geometry. Technical Report TR IDSIA-28-98, IDSIA (1998) Published in the Cogprint Archive, http://cogprints.soton.ac.uk

Schmidhuber, J.: Exploring the predictable. In: Ghosh, A., Tsutsui, S. (eds.) Advances in Evolutionary Computing, pp. 579–612. Springer, Heidelberg (2002)

Schmidhuber, J.: Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science 13(4), 587–612 (2002) http://dx.doi.org/10.1142/S0129054102001291

Schmidhuber, J.: The Speed Prior: a new simplicity measure yielding near-optimal computable predictions. In: Kivinen, J., Sloan, R.H. (eds.) COLT 2002. LNCS (LNAI), vol. 2375, pp. 216–228. Springer, Heidelberg (2002)

Schmidhuber, J.: Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. Technical Report IDSIA-19-03, arXiv:cs.LO/0309048, IDSIA, Manno-Lugano, Switzerland (2003)

Schmidhuber, J.: Optimal ordered problem solver. Machine Learning 54, 211–254 (2004) http://dx.doi.org/10.1023/B:MACH.0000015880.99707.b2

Schmidhuber, J.: Overview of artificial curiosity and active exploration, with links to publications since 1990 (2004), http://www.idsia.ch/~juergen/interest.html

Schmidhuber, J.: Completely self-referential optimal reinforcement learners. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) ICANN 2005. LNCS, vol. 3697, pp. 223–233. Springer, Heidelberg (2005)

Schmidhuber, J.: Gödel machines: Towards a technical justification of consciousness. In: Kudenko, D., Kazakov, D., Alonso, E. (eds.) Adaptive Agents and Multi-Agent Systems III. LNCS (LNAI), vol. 3394, pp. 1–23. Springer, Heidelberg (2005)

Schmidhuber, J.: Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science 18(2), 173–187 (2006) http://dx.doi.org/10.1080/09540090600768658

Schmidhuber, J.: Gödel machines: fully self-referential optimal universal problem solvers. In: Goertzel, B., Pennachin, C. (eds.) Artificial General Intelligence, pp. 199–226. Springer, Heidelberg (2006)

Schmidhuber, J., Heil, S.: Sequential neural text compression. IEEE Transactions on Neural Networks 7(1), 142–146 (1996) http://dx.doi.org/10.1109/72.478398

Schmidhuber, J., Huber, R.: Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems 2(1 & 2), 135–141 (1991) http://dx.doi.org/10.1142/S012906579100011X

Shannon, C.E.: A mathematical theory of communication (parts I and II). Bell System Technical Journal XXVII, 379–423 (1948)

Solomonoff, R.J.: A formal theory of inductive inference. Part I. Information and Control 7, 1–22 (1964) http://dx.doi.org/10.1016/S0019-9958(64)90223-2

Solomonoff, R.J.: Complexity-based induction systems. IEEE Transactions on Information Theory IT-24(5), 422–432 (1978) http://dx.doi.org/10.1109/TIT.1978.1055913

Storck, J., Hochreiter, S., Schmidhuber, J.: Reinforcement driven information acquisition in non-deterministic environments. In: Proceedings of the International Conference on Artificial Neural Networks, Paris, vol. 2, pp. 159–164. EC2 & Cie (1995)


Links

Full Text

http://arxiv.org/pdf/0709.0674v1

Internal file

Other Links

http://www.idsia.ch/~juergen