<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="de">
		<id>http://de.evo-art.org/index.php?action=history&amp;feed=atom&amp;title=Imagenet_classification_with_deep_convolutional_neural_networks</id>
		<title>Imagenet classification with deep convolutional neural networks - Revision history</title>
		<link rel="self" type="application/atom+xml" href="http://de.evo-art.org/index.php?action=history&amp;feed=atom&amp;title=Imagenet_classification_with_deep_convolutional_neural_networks"/>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Imagenet_classification_with_deep_convolutional_neural_networks&amp;action=history"/>
		<updated>2026-04-13T14:58:24Z</updated>
		<subtitle>Revision history for this page in de_evolutionary_art_org</subtitle>
		<generator>MediaWiki 1.27.4</generator>

	<entry>
		<id>http://de.evo-art.org/index.php?title=Imagenet_classification_with_deep_convolutional_neural_networks&amp;diff=33021&amp;oldid=prev</id>
		<title>Gubachelier: /* Links */</title>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Imagenet_classification_with_deep_convolutional_neural_networks&amp;diff=33021&amp;oldid=prev"/>
				<updated>2016-06-28T11:30:45Z</updated>
		
		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Links&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&#039;diff-marker&#039; /&gt;
				&lt;col class=&#039;diff-content&#039; /&gt;
				&lt;col class=&#039;diff-marker&#039; /&gt;
				&lt;col class=&#039;diff-content&#039; /&gt;
				&lt;tr style=&#039;vertical-align: top;&#039; lang=&#039;de&#039;&gt;
				&lt;td colspan=&#039;2&#039; style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&#039;2&#039; style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 11:30, 28 June 2016&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l120&quot; &gt;Line 120:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 120:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Other links ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Other links ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.299.205&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.299.205&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Gubachelier</name></author>	</entry>

	<entry>
		<id>http://de.evo-art.org/index.php?title=Imagenet_classification_with_deep_convolutional_neural_networks&amp;diff=33019&amp;oldid=prev</id>
		<title>Gubachelier: Created the page: „  == Reference ==  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural info…“</title>
		<link rel="alternate" type="text/html" href="http://de.evo-art.org/index.php?title=Imagenet_classification_with_deep_convolutional_neural_networks&amp;diff=33019&amp;oldid=prev"/>
				<updated>2016-06-28T11:29:43Z</updated>
		
		<summary type="html">&lt;p&gt;Die Seite wurde neu angelegt: „  == Referenz ==  Krizhevsky, A., Sutskever, I., Hinton, G.E.: &lt;a href=&quot;/index.php?title=Imagenet_classification_with_deep_convolutional_neural_networks&quot; title=&quot;Imagenet classification with deep convolutional neural networks&quot;&gt;Imagenet classification with deep convolutional neural networks&lt;/a&gt;. In: Advances in neural info…“&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Reference == &lt;br /&gt;
Krizhevsky, A., Sutskever, I., Hinton, G.E.: [[Imagenet classification with deep convolutional neural networks]]. In: Advances in neural information processing systems. (2012) 1097-1105&lt;br /&gt;
&lt;br /&gt;
== DOI ==&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
We trained a large, deep convolutional neural network to classify the 1.3 million high-resolution images in the LSVRC-2010 ImageNet training set into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 39.7% and 18.9% which is considerably better than the previous state-of-the-art results. The neural network, which has 60 million parameters and 500,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and two globally connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of convolutional nets. To reduce overfitting in the globally connected layers we employed a new regularization method that proved to be very effective.&lt;br /&gt;
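The two ingredients the abstract names, non-saturating neurons (rectified linear units) and the final 1000-way softmax, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's GPU implementation; the layer sizes and scores below are made up for the example.

```python
import numpy as np

def relu(x):
    # Non-saturating nonlinearity: f(x) = max(0, x), elementwise
    return np.maximum(0.0, x)

def softmax(z):
    # Shift by the max score for numerical stability, then normalize
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical hidden-layer pre-activations passed through the ReLU
hidden = relu(np.array([0.3, -1.2, 2.0]))

# Hypothetical class scores from the last globally connected layer;
# in the paper this would be a 1000-way softmax, here a 3-way one
probs = softmax(np.array([2.0, 0.5, -1.0]))
```

The softmax output is a probability distribution over the classes (it sums to one), and the ReLU zeroes out negative pre-activations without saturating for large positive inputs, which is what speeds up training relative to sigmoid or tanh units.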
&lt;br /&gt;
== Extended Abstract ==&lt;br /&gt;
&lt;br /&gt;
== Bibtex == &lt;br /&gt;
 @incollection{NIPS2012_4824,&lt;br /&gt;
 title = {ImageNet Classification with Deep Convolutional Neural Networks},&lt;br /&gt;
 author = {Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E.},&lt;br /&gt;
 booktitle = {Advances in Neural Information Processing Systems 25},&lt;br /&gt;
 editor = {F. Pereira and C. J. C. Burges and L. Bottou and K. Q. Weinberger},&lt;br /&gt;
 pages = {1097--1105},&lt;br /&gt;
 year = {2012},&lt;br /&gt;
 publisher = {Curran Associates, Inc.},&lt;br /&gt;
 url = {http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf}&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Used References == &lt;br /&gt;
[1] R.M. Bell and Y. Koren. Lessons from the netflix prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75–79, 2007.&lt;br /&gt;
&lt;br /&gt;
[2] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. www.imagenet.org/challenges. 2010.&lt;br /&gt;
&lt;br /&gt;
[3] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001.&lt;br /&gt;
&lt;br /&gt;
[4] D. Cireşan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. Arxiv preprint arXiv:1202.2745, 2012.&lt;br /&gt;
&lt;br /&gt;
[5] D.C. Cireşan, U. Meier, J. Masci, L.M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. Arxiv preprint arXiv:1102.0183, 2011.&lt;br /&gt;
&lt;br /&gt;
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.&lt;br /&gt;
&lt;br /&gt;
[7] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei. ILSVRC-2012, 2012. URL http://www.image-net.org/challenges/LSVRC/2012/.&lt;br /&gt;
&lt;br /&gt;
[8] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59–70, 2007.&lt;br /&gt;
&lt;br /&gt;
[9] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. URL http://authors.library.caltech.edu/7694.&lt;br /&gt;
&lt;br /&gt;
[10] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.&lt;br /&gt;
&lt;br /&gt;
[11] K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.&lt;br /&gt;
&lt;br /&gt;
[12] A. Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009.&lt;br /&gt;
&lt;br /&gt;
[13] A. Krizhevsky. Convolutional deep belief networks on cifar-10. Unpublished manuscript, 2010.&lt;br /&gt;
&lt;br /&gt;
[14] A. Krizhevsky and G.E. Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.&lt;br /&gt;
&lt;br /&gt;
[15] Y. Le Cun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, L.D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, 1990.&lt;br /&gt;
&lt;br /&gt;
[16] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–97. IEEE, 2004.&lt;br /&gt;
&lt;br /&gt;
[17] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 253–256. IEEE, 2010.&lt;br /&gt;
&lt;br /&gt;
[18] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.&lt;br /&gt;
&lt;br /&gt;
[19] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric Learning for Large Scale Image Classification: Generalizing to New Classes at Near-Zero Cost. In ECCV - European Conference on Computer Vision, Florence, Italy, October 2012.&lt;br /&gt;
&lt;br /&gt;
[20] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proc. 27th International Conference on Machine Learning, 2010.&lt;br /&gt;
&lt;br /&gt;
[21] N. Pinto, D.D. Cox, and J.J. DiCarlo. Why is real-world visual object recognition hard? PLoS computational biology, 4(1):e27, 2008.&lt;br /&gt;
&lt;br /&gt;
[22] N. Pinto, D. Doukhan, J.J. DiCarlo, and D.D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS computational biology, 5(11):e1000579, 2009.&lt;br /&gt;
&lt;br /&gt;
[23] B.C. Russell, A. Torralba, K.P. Murphy, and W.T. Freeman. Labelme: a database and web-based tool for image annotation. International journal of computer vision, 77(1):157–173, 2008.&lt;br /&gt;
&lt;br /&gt;
[24] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1665–1672. IEEE, 2011.&lt;br /&gt;
&lt;br /&gt;
[25] P.Y. Simard, D. Steinkraus, and J.C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, volume 2, pages 958–962, 2003.&lt;br /&gt;
&lt;br /&gt;
[26] S.C. Turaga, J.F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H.S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010.&lt;br /&gt;
&lt;br /&gt;
== Links == &lt;br /&gt;
&lt;br /&gt;
=== Full Text === &lt;br /&gt;
https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&lt;br /&gt;
&lt;br /&gt;
[[internal file]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Other links ===&lt;br /&gt;
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.299.205&lt;/div&gt;</summary>
		<author><name>Gubachelier</name></author>	</entry>

	</feed>