<!DOCTYPE html>
<html lang="en">
<head>
<!-- Produced from a LaTeX source file. Note that the production is done -->
<!-- by a very rough-and-ready (and buggy) script, so the HTML and other -->
<!-- code is quite ugly! Later versions should be better. -->
<meta charset="utf-8">
<meta name="citation_title" content="ニューラルネットワークと深層学習">
<meta name="citation_author" content="Nielsen, Michael A.">
<meta name="citation_publication_date" content="2015">
<meta name="citation_fulltext_html_url" content="http://neuralnetworksanddeeplearning.com">
<meta name="citation_publisher" content="Determination Press">
<link rel="icon" href="nnadl_favicon.ICO" />
<title>Neural Networks and Deep Learning</title>
<script src="assets/jquery.min.js"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$']]},
"HTML-CSS":
{scale: 92},
TeX: { equationNumbers: { autoNumber: "AMS" }}});
</script>
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<link href="assets/style.css" rel="stylesheet">
<link href="assets/pygments.css" rel="stylesheet">
<link rel="stylesheet" href="http://code.jquery.com/ui/1.11.2/themes/smoothness/jquery-ui.css">
<style>
/* Adapted from */
/* https://groups.google.com/d/msg/mathjax-users/jqQxrmeG48o/oAaivLgLN90J, */
/* by David Cervone */
@font-face {
font-family: 'MJX_Math';
src: url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); /* IE9 Compat Modes */
src: url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot?iefix') format('eot'),
url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'),
url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype'),
url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/svg/MathJax_Math-Italic.svg#MathJax_Math-Italic') format('svg');
}
@font-face {
font-family: 'MJX_Main';
src: url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); /* IE9 Compat Modes */
src: url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot?iefix') format('eot'),
url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'),
url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype'),
url('http://cdn.mathjax.org/mathjax/latest/fonts/HTML-CSS/TeX/svg/MathJax_Main-Regular.svg#MathJax_Main-Regular') format('svg');
}
</style>
</head>
<body><div class="header"><h1 class="chapter_number">
<a href="">CHAPTER 5</a></h1>
<h1 class="chapter_title"><a href="">ニューラルネットワークを訓練するのはなぜ難しいのか</a></h1></div><div class="section"><div id="toc">
<p class="toc_title"><a href="index.html">ニューラルネットワークと深層学習</a></p><p class="toc_not_mainchapter"><a href="about.html">What this book is about</a></p><p class="toc_not_mainchapter"><a href="exercises_and_problems.html">On the exercises and problems</a></p><p class='toc_mainchapter'><a id="toc_using_neural_nets_to_recognize_handwritten_digits_reveal" class="toc_reveal" onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img id="toc_img_using_neural_nets_to_recognize_handwritten_digits" src="images/arrow.png" width="15px"></a><a href="chap1.html">ニューラルネットワークを用いた手書き文字認識</a><div id="toc_using_neural_nets_to_recognize_handwritten_digits" style="display: none;"><p class="toc_section"><ul><a href="chap1.html#perceptrons"><li>Perceptrons</li></a><a href="chap1.html#sigmoid_neurons"><li>Sigmoid neurons</li></a><a href="chap1.html#the_architecture_of_neural_networks"><li>The architecture of neural networks</li></a><a href="chap1.html#a_simple_network_to_classify_handwritten_digits"><li>A simple network to classify handwritten digits</li></a><a href="chap1.html#learning_with_gradient_descent"><li>Learning with gradient descent</li></a><a href="chap1.html#implementing_our_network_to_classify_digits"><li>Implementing our network to classify digits</li></a><a href="chap1.html#toward_deep_learning"><li>Toward deep learning</li></a></ul></p></div>
<script>
$('#toc_using_neural_nets_to_recognize_handwritten_digits_reveal').click(function() {
var src = $('#toc_img_using_neural_nets_to_recognize_handwritten_digits').attr('src');
if(src == 'images/arrow.png') {
$("#toc_img_using_neural_nets_to_recognize_handwritten_digits").attr('src', 'images/arrow_down.png');
} else {
$("#toc_img_using_neural_nets_to_recognize_handwritten_digits").attr('src', 'images/arrow.png');
};
$('#toc_using_neural_nets_to_recognize_handwritten_digits').toggle('fast', function() {});
});</script><p class='toc_mainchapter'><a id="toc_how_the_backpropagation_algorithm_works_reveal" class="toc_reveal" onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img id="toc_img_how_the_backpropagation_algorithm_works" src="images/arrow.png" width="15px"></a><a href="chap2.html">逆伝播の仕組み</a><div id="toc_how_the_backpropagation_algorithm_works" style="display: none;"><p class="toc_section"><ul><a href="chap2.html#warm_up_a_fast_matrix-based_approach_to_computing_the_output_from_a_neural_network"><li>Warm up: a fast matrix-based approach to computing the output from a neural network</li></a><a href="chap2.html#the_two_assumptions_we_need_about_the_cost_function"><li>The two assumptions we need about the cost function</li></a><a href="chap2.html#the_hadamard_product_$s_\odot_t$"><li>The Hadamard product, $s \odot t$</li></a><a href="chap2.html#the_four_fundamental_equations_behind_backpropagation"><li>The four fundamental equations behind backpropagation</li></a><a href="chap2.html#proof_of_the_four_fundamental_equations_(optional)"><li>Proof of the four fundamental equations (optional)</li></a><a href="chap2.html#the_backpropagation_algorithm"><li>The backpropagation algorithm</li></a><a href="chap2.html#the_code_for_backpropagation"><li>The code for backpropagation</li></a><a href="chap2.html#in_what_sense_is_backpropagation_a_fast_algorithm"><li>In what sense is backpropagation a fast algorithm?</li></a><a href="chap2.html#backpropagation_the_big_picture"><li>Backpropagation: the big picture</li></a></ul></p></div>
<script>
$('#toc_how_the_backpropagation_algorithm_works_reveal').click(function() {
var src = $('#toc_img_how_the_backpropagation_algorithm_works').attr('src');
if(src == 'images/arrow.png') {
$("#toc_img_how_the_backpropagation_algorithm_works").attr('src', 'images/arrow_down.png');
} else {
$("#toc_img_how_the_backpropagation_algorithm_works").attr('src', 'images/arrow.png');
};
$('#toc_how_the_backpropagation_algorithm_works').toggle('fast', function() {});
});</script><p class='toc_mainchapter'><a id="toc_improving_the_way_neural_networks_learn_reveal" class="toc_reveal" onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img id="toc_img_improving_the_way_neural_networks_learn" src="images/arrow.png" width="15px"></a><a href="chap3.html">ニューラルネットワークの学習の改善</a><div id="toc_improving_the_way_neural_networks_learn" style="display: none;"><p class="toc_section"><ul><a href="chap3.html#the_cross-entropy_cost_function"><li>The cross-entropy cost function</li></a><a href="chap3.html#overfitting_and_regularization"><li>Overfitting and regularization</li></a><a href="chap3.html#weight_initialization"><li>Weight initialization</li></a><a href="chap3.html#handwriting_recognition_revisited_the_code"><li>Handwriting recognition revisited: the code</li></a><a href="chap3.html#how_to_choose_a_neural_network's_hyper-parameters"><li>How to choose a neural network's hyper-parameters?</li></a><a href="chap3.html#other_techniques"><li>Other techniques</li></a></ul></p></div>
<script>
$('#toc_improving_the_way_neural_networks_learn_reveal').click(function() {
var src = $('#toc_img_improving_the_way_neural_networks_learn').attr('src');
if(src == 'images/arrow.png') {
$("#toc_img_improving_the_way_neural_networks_learn").attr('src', 'images/arrow_down.png');
} else {
$("#toc_img_improving_the_way_neural_networks_learn").attr('src', 'images/arrow.png');
};
$('#toc_improving_the_way_neural_networks_learn').toggle('fast', function() {});
});</script><p class='toc_mainchapter'><a id="toc_a_visual_proof_that_neural_nets_can_compute_any_function_reveal" class="toc_reveal" onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img id="toc_img_a_visual_proof_that_neural_nets_can_compute_any_function" src="images/arrow.png" width="15px"></a><a href="chap4.html">ニューラルネットワークが任意の関数を表現できることの視覚的証明</a><div id="toc_a_visual_proof_that_neural_nets_can_compute_any_function" style="display: none;"><p class="toc_section"><ul><a href="chap4.html#two_caveats"><li>Two caveats</li></a><a href="chap4.html#universality_with_one_input_and_one_output"><li>Universality with one input and one output</li></a><a href="chap4.html#many_input_variables"><li>Many input variables</li></a><a href="chap4.html#extension_beyond_sigmoid_neurons"><li>Extension beyond sigmoid neurons</li></a><a href="chap4.html#fixing_up_the_step_functions"><li>Fixing up the step functions</li></a><a href="chap4.html#conclusion"><li>Conclusion</li></a></ul></p></div>
<script>
$('#toc_a_visual_proof_that_neural_nets_can_compute_any_function_reveal').click(function() {
var src = $('#toc_img_a_visual_proof_that_neural_nets_can_compute_any_function').attr('src');
if(src == 'images/arrow.png') {
$("#toc_img_a_visual_proof_that_neural_nets_can_compute_any_function").attr('src', 'images/arrow_down.png');
} else {
$("#toc_img_a_visual_proof_that_neural_nets_can_compute_any_function").attr('src', 'images/arrow.png');
};
$('#toc_a_visual_proof_that_neural_nets_can_compute_any_function').toggle('fast', function() {});
});</script><p class='toc_mainchapter'><a id="toc_why_are_deep_neural_networks_hard_to_train_reveal" class="toc_reveal" onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img id="toc_img_why_are_deep_neural_networks_hard_to_train" src="images/arrow.png" width="15px"></a><a href="chap5.html">ニューラルネットワークを訓練するのはなぜ難しいのか</a><div id="toc_why_are_deep_neural_networks_hard_to_train" style="display: none;"><p class="toc_section"><ul><a href="chap5.html#the_vanishing_gradient_problem"><li>The vanishing gradient problem</li></a><a href="chap5.html#what's_causing_the_vanishing_gradient_problem_unstable_gradients_in_deep_neural_nets"><li>What's causing the vanishing gradient problem? Unstable gradients in deep neural nets</li></a><a href="chap5.html#unstable_gradients_in_more_complex_networks"><li>Unstable gradients in more complex networks</li></a><a href="chap5.html#other_obstacles_to_deep_learning"><li>Other obstacles to deep learning</li></a></ul></p></div>
<script>
$('#toc_why_are_deep_neural_networks_hard_to_train_reveal').click(function() {
var src = $('#toc_img_why_are_deep_neural_networks_hard_to_train').attr('src');
if(src == 'images/arrow.png') {
$("#toc_img_why_are_deep_neural_networks_hard_to_train").attr('src', 'images/arrow_down.png');
} else {
$("#toc_img_why_are_deep_neural_networks_hard_to_train").attr('src', 'images/arrow.png');
};
$('#toc_why_are_deep_neural_networks_hard_to_train').toggle('fast', function() {});
});</script><p class='toc_mainchapter'><a id="toc_deep_learning_reveal" class="toc_reveal" onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img id="toc_img_deep_learning" src="images/arrow.png" width="15px"></a>Deep learning<div id="toc_deep_learning" style="display: none;"><p class="toc_section"><ul><li>Convolutional neural networks</li><li>Pretraining</li><li>Recurrent neural networks, Boltzmann machines, and other models</li><li>Is there a universal thinking algorithm?</li><li>On the future of neural networks</li></ul></p></div>
<script>
$('#toc_deep_learning_reveal').click(function() {
var src = $('#toc_img_deep_learning').attr('src');
if(src == 'images/arrow.png') {
$("#toc_img_deep_learning").attr('src', 'images/arrow_down.png');
} else {
$("#toc_img_deep_learning").attr('src', 'images/arrow.png');
};
$('#toc_deep_learning').toggle('fast', function() {});
});</script><p class="toc_not_mainchapter"><a href="acknowledgements.html">Acknowledgements</a></p><p class="toc_not_mainchapter"><a href="faq.html">Frequently Asked Questions</a></p>
<hr>
<span class="sidebar_title">Sponsors</span>
<br/>
<a href='http://www.ersatz1.com/'><img src='assets/ersatz.png' width='140px' style="padding: 0px 0px 10px 8px; border-style: none;"></a>
<a href='http://gsquaredcapital.com/'><img src='assets/gsquared.png' width='150px' style="padding: 0px 0px 10px 10px; border-style: none;"></a>
<a href='http://www.tineye.com'><img src='assets/tineye.png' width='150px'
style="padding: 0px 0px 10px 8px; border-style: none;"></a>
<a href='http://www.visionsmarts.com'><img
src='assets/visionsmarts.png' width='160px' style="padding: 0px 0px
0px 0px; border-style: none;"></a> <br/>
<!--
<p class="sidebar">Thanks to all the <a
href="supporters.html">supporters</a> who made the book possible.
Thanks also to all the contributors to the <a
href="bugfinder.html">Bugfinder Hall of Fame</a>. </p>
<p class="sidebar">The book is currently a beta release, and is still
under active development. Please send error reports to
[email protected]. For other enquiries, please see the <a
href="faq.html">FAQ</a> first.</p>
-->
<p class="sidebar">著者と共にこの本を作り出してくださった<a
href="supporters.html">サポーター</a>の皆様に感謝いたします。
また、<a
href="bugfinder.html">バグ発見者の殿堂</a>に名を連ねる皆様にも感謝いたします。
また、日本語版の出版にあたっては、<a
href="translators.html">翻訳者</a>の皆様に深く感謝いたします。
</p>
<p class="sidebar">この本は目下のところベータ版で、開発続行中です。
エラーレポートは [email protected] まで、日本語版に関する質問は [email protected] までお送りください。
その他の質問については、まずは<a
href="faq.html">FAQ</a>をごらんください。</p>
<hr>
<span class="sidebar_title">Resources</span>
<p class="sidebar">
<a href="https://github.com/mnielsen/neural-networks-and-deep-learning">Code repository</a></p>
<p class="sidebar">
<a href="http://eepurl.com/BYr9L">Mailing list for book announcements</a>
</p>
<p class="sidebar">
<a href="http://eepurl.com/0Xxjb">Michael Nielsen's project announcement mailing list</a>
</p>
<hr>
<a href="http://michaelnielsen.org"><img src="assets/Michael_Nielsen_Web_Small.jpg" width="160px" style="border-style: none;"/></a>
<p class="sidebar">
Written by <a href="http://michaelnielsen.org">Michael Nielsen</a> / September-December 2014 <br> Translated by the <a href="https://github.com/nnadl-ja/nnadl_site_ja">"Neural Networks and Deep Learning" translation project</a>
</p>
</div>
</p><p>Imagine you're an engineer who has been asked to design a computer from scratch. One day you're working away in your office, designing logical circuits, setting out <CODE>AND</CODE> gates, <CODE>OR</CODE> gates, and so on, when your boss walks in with bad news. The customer has just added a surprising design requirement: the circuit for the entire computer must be just two layers deep:</p><p><center><img src="images/shallow_circuit.png" width="500px"></center></p><p>You're dumbfounded, and tell your boss: "The customer is crazy!"</p><p>Your boss replies: "I think they're crazy, too. But what the customer wants, they get."</p><p>In fact, there's a limited sense in which the customer isn't crazy. Suppose you're allowed to use a special logical gate which lets you <CODE>AND</CODE> together as many inputs as you want. And you're also allowed a many-input <CODE>NAND</CODE> gate, that is, a gate which can <CODE>AND</CODE> multiple inputs and then negate the output. With these special gates it turns out to be possible to compute any function at all using a circuit that's just two layers deep.</p><p>But just because something is possible doesn't make it a good idea. In practice, when solving circuit design problems (or most any kind of algorithmic problem), we usually start by figuring out how to solve sub-problems, and then gradually integrate the solutions. In other words, we build up to a solution through multiple layers of abstraction. </p><p>For instance, suppose we're designing a logical circuit to multiply two numbers. Chances are we want to build it up out of sub-circuits doing operations like adding two numbers. The sub-circuits for adding two numbers will, in turn, be built up out of sub-sub-circuits for adding two bits. Very roughly speaking our circuit will look like:</p><p><center><img src="images/circuit_multiplication.png" width="500px"></center></p><p>That is, our final circuit contains at least three layers of circuit elements. In fact, it'll probably contain more than three layers, as we break the sub-tasks down into smaller units than I've described. But you get the general idea.</p><p>So deep circuits make the process of design easier. But they're not just helpful for design. There are, in fact, mathematical proofs showing that for some functions very shallow circuits require exponentially more circuit elements to compute than do deep circuits. For instance, a famous 1984 paper by Furst, Saxe and Sipser*<span class="marginnote">*See <a href="http://link.springer.com/article/10.1007/BF01744431">Parity, Circuits, and the Polynomial-Time Hierarchy</a>, by Merrick Furst, James B. Saxe, and Michael Sipser (1984).</span> showed that computing the parity of a set of bits requires exponentially many gates, if done with a shallow circuit. On the other hand, if you use deeper circuits it's easy to compute the parity using a small circuit: you just compute the parity of pairs of bits, then use those results to compute the parity of pairs of pairs of bits, and so on, building up quickly to the overall parity. Deep circuits thus can be intrinsically much more powerful than shallow circuits.</p><p>Up to now, this book has approached neural networks like the crazy customer. Almost all the networks we've worked with have just a single hidden layer of neurons (plus the input and output layers):</p><p><center><img src="images/tikz35.png"/></center></p><p>These simple networks have been remarkably useful: in earlier chapters we used networks like this to classify handwritten digits with better than 98 percent accuracy!
Nonetheless, intuitively we'd expect networks with many more hidden layers to be more powerful:</p><p><center><img src="images/tikz36.png"/></center></p><p>Such networks could use the intermediate layers to build up multiple layers of abstraction, just as we do in Boolean circuits. For instance, if we're doing visual pattern recognition, then the neurons in the first layer might learn to recognize edges, the neurons in the second layer could learn to recognize more complex shapes, say triangles or rectangles, built up from edges. The third layer would then recognize still more complex shapes. And so on. These multiple layers of abstraction seem likely to give deep networks a compelling advantage in learning to solve complex pattern recognition problems. Moreover, just as in the case of circuits, there are theoretical results suggesting that deep networks are intrinsically more powerful than shallow networks*<span class="marginnote">*For certain problems and network architectures this is proved in <a href="http://arxiv.org/pdf/1312.6098.pdf">On the number of response regions of deep feed forward networks with piece-wise linear activations</a>, by Razvan Pascanu, Guido Montúfar, and Yoshua Bengio (2014). See also the more informal discussion in section 2 of <a href="http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf">Learning deep architectures for AI</a>, by Yoshua Bengio (2009).</span>.</p><p>How can we train such deep networks? In this chapter, we'll try training deep networks using our workhorse learning algorithm - <a href="chap1.html#learning_with_gradient_descent">stochastic gradient descent</a> by <a href="chap2.html">backpropagation</a>. But we'll run into trouble, with our deep networks not performing much (if at all) better than shallow networks.</p><p>That failure seems surprising in the light of the discussion above. Rather than give up on deep networks, we'll dig down and try to understand what's making our deep networks hard to train. When we look closely, we'll discover that the different layers in our deep network are learning at vastly different speeds. In particular, when later layers in the network are learning well, early layers often get stuck during training, learning almost nothing at all. This stuckness isn't simply due to bad luck. Rather, we'll discover there are fundamental reasons the learning slowdown occurs, connected to our use of gradient-based learning techniques.</p><p>As we delve into the problem more deeply, we'll learn that the opposite phenomenon can also occur: the early layers may be learning well, but later layers can become stuck. In fact, we'll find that there's an intrinsic instability associated with learning by gradient descent in deep, many-layer neural networks. This instability tends to result in either the early or the later layers getting stuck during training.</p><p>This all sounds like bad news. But by delving into these difficulties, we can begin to gain insight into what's required to train deep networks effectively. And so these investigations are good preparation for the next chapter, where we'll use deep learning to attack image recognition problems.</p><p><h3><a name="the_vanishing_gradient_problem"></a><a href="#the_vanishing_gradient_problem">The vanishing gradient problem</a></h3></p><p>So, what goes wrong when we try to train a deep network?</p><p>To answer that question, let's first revisit the case of a network with just a single hidden layer.
As per usual, we'll use the MNIST digit classification problem as our playground for learning and experimentation*<span class="marginnote">*I introduced the MNIST problem and data <a href="chap1.html#learning_with_gradient_descent">here</a> and <a href="chap1.html#implementing_our_network_to_classify_digits">here</a>.</span>.</p><p>If you wish, you can follow along by training networks on your computer. It is also, of course, fine to just read along. If you do wish to follow live, then you'll need Python 2.7, Numpy, and a copy of the code, which you can get by cloning the relevant repository from the command line:<div class="highlight"><pre>git clone https://github.com/mnielsen/neural-networks-and-deep-learning.git
</pre></div>
If you don't use <tt>git</tt> then you can download the data and code <a href="https://github.com/mnielsen/neural-networks-and-deep-learning/archive/master.zip">here</a>. You'll need to change into the <tt>src</tt> subdirectory.</p><p>Then, from a Python shell we load the MNIST data:</p><p><div class="highlight"><pre><span class="o">>>></span> <span class="kn">import</span> <span class="nn">mnist_loader</span>
<span class="o">>>></span> <span class="n">training_data</span><span class="p">,</span> <span class="n">validation_data</span><span class="p">,</span> <span class="n">test_data</span> <span class="o">=</span> \
<span class="o">...</span> <span class="n">mnist_loader</span><span class="o">.</span><span class="n">load_data_wrapper</span><span class="p">()</span>
</pre></div>
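</p><p>If you'd like to check that the data loaded as expected, each training example produced by the wrapper is a pair of numpy column vectors: a 784-dimensional input image and a 10-dimensional vectorized digit label. The counts and shapes below are what I'd expect from the book's <tt>mnist_loader</tt>, as described back in Chapter 1:</p><p><div class="highlight"><pre>
>>> len(training_data), len(validation_data), len(test_data)
(50000, 10000, 10000)
>>> x, y = training_data[0]
>>> x.shape, y.shape
((784, 1), (10, 1))
</pre></div>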
</p><p>We set up our network:</p><p><div class="highlight"><pre><span class="o">>>></span> <span class="kn">import</span> <span class="nn">network2</span>
<span class="o">>>></span> <span class="n">net</span> <span class="o">=</span> <span class="n">network2</span><span class="o">.</span><span class="n">Network</span><span class="p">([</span><span class="mi">784</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">])</span>
</pre></div>
</p><p>This network has 784 neurons in the input layer, corresponding to the $28 \times 28 = 784$ pixels in the input image. We use 30 hidden neurons, as well as 10 output neurons, corresponding to the 10 possible classifications for the MNIST digits ('0', '1', '2', $\ldots$, '9').</p><p>Let's try training our network for 30 complete epochs, using mini-batches of 10 training examples at a time, a learning rate $\eta = 0.1$, and regularization parameter $\lambda = 5.0$. As we train we'll monitor the classification accuracy on the <tt>validation_data</tt>*<span class="marginnote">*Note that the networks take quite some time to train - up to a few minutes per training epoch, depending on the speed of your machine. So if you're running the code it's best to continue reading and return later, not to wait for the code to finish executing.</span>:</p><p><div class="highlight"><pre><span class="o">>>></span> <span class="n">net</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">training_data</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="n">lmbda</span><span class="o">=</span><span class="mf">5.0</span><span class="p">,</span>
<span class="o">...</span> <span class="n">evaluation_data</span><span class="o">=</span><span class="n">validation_data</span><span class="p">,</span> <span class="n">monitor_evaluation_accuracy</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
</pre></div>
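</p><p>Incidentally, if you'd rather keep the per-epoch numbers around than just watch them scroll by, you can capture the return value of <tt>SGD</tt>. I'm assuming here, as in the version of <tt>network2.py</tt> in the book's repository, that <tt>SGD</tt> returns four per-epoch lists (evaluation cost, evaluation accuracy, training cost, training accuracy), and that the accuracies are counts of correctly classified images, not percentages:</p><p><div class="highlight"><pre>
>>> results = net.SGD(training_data, 30, 10, 0.1, lmbda=5.0,
...     evaluation_data=validation_data, monitor_evaluation_accuracy=True)
>>> evaluation_cost, evaluation_accuracy, training_cost, training_accuracy = results
>>> max(evaluation_accuracy)   # best epoch, as a count out of the 10,000 validation images
</pre></div>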
</p><p>We get a classification accuracy of 96.48 percent (or thereabouts - it'll vary a bit from run to run), comparable to our earlier results with a similar configuration.</p><p>Now, let's add another hidden layer, also with 30 neurons in it, and try training with the same hyper-parameters:</p><p><div class="highlight"><pre><span class="o">>>></span> <span class="n">net</span> <span class="o">=</span> <span class="n">network2</span><span class="o">.</span><span class="n">Network</span><span class="p">([</span><span class="mi">784</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">])</span>
<span class="o">>>></span> <span class="n">net</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">training_data</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="n">lmbda</span><span class="o">=</span><span class="mf">5.0</span><span class="p">,</span>
<span class="o">...</span> <span class="n">evaluation_data</span><span class="o">=</span><span class="n">validation_data</span><span class="p">,</span> <span class="n">monitor_evaluation_accuracy</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
</pre></div>
</p><p>This gives an improved classification accuracy, 96.90 percent. That's encouraging: a little more depth is helping. Let's add another 30-neuron hidden layer:</p><p><div class="highlight"><pre><span class="o">>>></span> <span class="n">net</span> <span class="o">=</span> <span class="n">network2</span><span class="o">.</span><span class="n">Network</span><span class="p">([</span><span class="mi">784</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">])</span>
<span class="o">>>></span> <span class="n">net</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">training_data</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="n">lmbda</span><span class="o">=</span><span class="mf">5.0</span><span class="p">,</span>
<span class="o">...</span> <span class="n">evaluation_data</span><span class="o">=</span><span class="n">validation_data</span><span class="p">,</span> <span class="n">monitor_evaluation_accuracy</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
</pre></div>
</p><p>That doesn't help at all. In fact, the result drops back down to 96.57 percent, close to our original shallow network. And suppose we insert one further hidden layer:</p><p><div class="highlight"><pre><span class="o">>>></span> <span class="n">net</span> <span class="o">=</span> <span class="n">network2</span><span class="o">.</span><span class="n">Network</span><span class="p">([</span><span class="mi">784</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">])</span>
<span class="o">>>></span> <span class="n">net</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">training_data</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="n">lmbda</span><span class="o">=</span><span class="mf">5.0</span><span class="p">,</span>
<span class="o">...</span> <span class="n">evaluation_data</span><span class="o">=</span><span class="n">validation_data</span><span class="p">,</span> <span class="n">monitor_evaluation_accuracy</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
</pre></div>
</p><p>The classification accuracy drops again, to 96.53 percent. That's probably not a statistically significant drop, but it's not encouraging, either.</p><p>This behaviour seems strange. Intuitively, extra hidden layers ought to make the network able to learn more complex classification functions, and thus do a better job classifying. Certainly, things shouldn't get worse, since the extra layers can, in the worst case, simply do nothing*<span class="marginnote">*See <a href="#identity_neuron">this later problem</a> to understand how to build a hidden layer that does nothing.</span>. But that's not what's going on.</p><p>So what is going on? Let's assume that the extra hidden layers really could help in principle, and the problem is that our learning algorithm isn't finding the right weights and biases. We'd like to figure out what's going wrong in our learning algorithm, and how to do better.</p><p>To get some insight into what's going wrong, let's visualize how the network learns. Below, I've plotted part of a $[784, 30, 30, 10]$ network, i.e., a network with two hidden layers, each containing $30$ hidden neurons. Each neuron in the diagram has a little bar on it, representing how quickly that neuron is changing as the network learns. A big bar means the neuron's weights and bias are changing rapidly, while a small bar means the weights and bias are changing slowly. More precisely, the bars denote the gradient $\partial C / \partial b$ for each neuron, i.e., the rate of change of the cost with respect to the neuron's bias. Back in <a href="chap2.html#the_four_fundamental_equations_behind_backpropagation">Chapter 2</a> we saw that this gradient quantity controlled not just how rapidly the bias changes during learning, but also how rapidly the weights input to the neuron change, too. Don't worry if you don't recall the details: the thing to keep in mind is simply that these bars show how quickly each neuron's weights and bias are changing as the network learns.</p><p>To keep the diagram simple, I've shown just the top six neurons in the two hidden layers. I've omitted the input neurons, since they've got no weights or biases to learn. I've also omitted the output neurons, since we're doing layer-wise comparisons, and it makes most sense to compare layers with the same number of neurons. The results are plotted at the very beginning of training, i.e., immediately after the network is initialized. Here they are*<span class="marginnote">*The data plotted is generated using the program <a href="https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/fig/generate_gradient.py">generate_gradient.py</a>. The same program is also used to generate the results quoted later in this section.</span>:</p><p><canvas id="initial_gradient" width=470 height=620></canvas></p><p>The network was initialized randomly, and so it's not surprising that there's a lot of variation in how rapidly the neurons learn. Still, one thing that jumps out is that the bars in the second hidden layer are mostly much larger than the bars in the first hidden layer. As a result, the neurons in the second hidden layer will learn quite a bit faster than the neurons in the first hidden layer. Is this merely a coincidence, or are the neurons in the second hidden layer likely to learn faster than neurons in the first hidden layer in general?</p><p>To determine whether this is the case, it helps to have a global way of comparing the speed of learning in the first and second hidden layers.
To do this, let's denote the gradient as $\delta^l_j = \partial C / \partial b^l_j$, i.e., the gradient for the $j$th neuron in the $l$th layer*<span class="marginnote">*Back in <a href="chap2.html#the_four_fundamental_equations_behind_backpropagation">Chapter 2</a> we referred to this as the error, but here we'll adopt the informal term "gradient". I say "informal" because of course this doesn't explicitly include the partial derivatives of the cost with respect to the weights, $\partial C / \partial w$.</span>. We can think of the gradient $\delta^1$ as a vector whose entries determine how quickly the first hidden layer learns, and $\delta^2$ as a vector whose entries determine how quickly the second hidden layer learns. We'll then use the lengths of these vectors as (rough!) global measures of the speed at which the layers are learning. So, for instance, the length $\| \delta^1 \|$ measures the speed at which the first hidden layer is learning, while the length $\| \delta^2 \|$ measures the speed at which the second hidden layer is learning.</p><p>With these definitions, and in the same configuration as was plotted above, we find $\| \delta^1 \| = 0.07\ldots$ and $\| \delta^2 \| = 0.31\ldots$. So this confirms our earlier suspicion: the neurons in the second hidden layer really are learning much faster than the neurons in the first hidden layer.</p><p>What happens if we add more hidden layers? If we have three hidden layers, in a $[784, 30, 30, 30, 10]$ network, then the respective speeds of learning turn out to be 0.012, 0.060, and 0.283. Again, earlier hidden layers are learning much slower than later hidden layers. Suppose we add yet another layer with $30$ hidden neurons. In that case, the respective speeds of learning are 0.003, 0.017, 0.070, and 0.285. The pattern holds: early layers learn slower than later layers.</p><p>We've been looking at the speed of learning at the start of training, that is, just after the networks are initialized. How does the speed of learning change as we train our networks? Let's return to look at the network with just two hidden layers. The speed of learning changes as follows:</p><p><center><img src="images/training_speed_2_layers.png" width="500px"></center></p><p>To generate these results, I used batch gradient descent with just 1,000 training images, trained over 500 epochs. This is a bit different than the way we usually train - I've used no mini-batches, and just 1,000 training images, rather than the full 50,000 image training set. I'm not trying to do anything sneaky, or pull the wool over your eyes, but it turns out that using mini-batch stochastic gradient descent gives much noisier (albeit very similar, when you average away the noise) results. Using the parameters I've chosen is an easy way of smoothing the results out, so we can see what's going on.</p><p>In any case, as you can see the two layers start out learning at very different speeds (as we already know). The speed in both layers then drops very quickly, before rebounding. But through it all, the first hidden layer learns much more slowly than the second hidden layer.</p><p>What about more complex networks? Here are the results of a similar experiment, but this time with three hidden layers (a $[784, 30, 30, 30, 10]$ network):</p><p><center><img src="images/training_speed_3_layers.png" width="500px"></center></p><p>Again, early hidden layers learn much more slowly than later hidden layers.
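</p><p>If you'd like to reproduce this kind of measurement yourself, the program <a href="https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/fig/generate_gradient.py">generate_gradient.py</a> mentioned above is the authoritative version. Here's a rough sketch of the idea, assuming (as in the book's repository) that <tt>network2</tt>'s <tt>backprop</tt> method returns per-layer lists of bias and weight gradients. Don't expect to match the numbers quoted above exactly, but the pattern - early layers slower than later layers - should be the same:</p><p><div class="highlight"><pre>
# Sketch: estimate the per-layer "speed of learning" at initialization by
# averaging the bias gradients over 1,000 training examples and taking norms.
import numpy as np
import mnist_loader, network2

training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
net = network2.Network([784, 30, 30, 10])

nabla_b = [np.zeros(b.shape) for b in net.biases]
for x, y in training_data[:1000]:
    delta_nabla_b, delta_nabla_w = net.backprop(x, y)
    nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_b = [nb / 1000.0 for nb in nabla_b]

# One norm per layer; the last entry is the output layer, and the earlier
# entries play the role of ||delta^1|| and ||delta^2|| in the text.
print([np.linalg.norm(nb) for nb in nabla_b])
</pre></div></p><p>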
Finally, let's add a fourth hidden layer (a $[784, 30, 30, 30, 30, 10]$ network), and see what happens when we train:</p><p><center><img src="images/training_speed_4_layers.png" width="500px"></center></p><p>Again, early hidden layers learn much more slowly than later hidden layers. In this case, the first hidden layer is learning roughly 100 times slower than the final hidden layer. No wonder we were having trouble training these networks earlier!</p><p>We have here an important observation: in at least some deep neural networks, the gradient tends to get smaller as we move backward through the hidden layers. This means that neurons in the earlier layers learn much more slowly than neurons in later layers. And while we've seen this in just a single network, there are fundamental reasons why this happens in many neural networks. The phenomenon is known as the <em>vanishing gradient problem</em>*<span class="marginnote">*See <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.7321">Gradient flow in recurrent nets: the difficulty of learning long-term dependencies</a>, by Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber (2001). This paper studied recurrent neural nets, but the essential phenomenon is the same as in the feedforward networks we are studying. See also Sepp Hochreiter's earlier Diploma Thesis, <a href="http://www.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf">Untersuchungen zu dynamischen neuronalen Netzen</a> (1991, in German).</span>.</p><p>Why does the vanishing gradient problem occur? Are there ways we can avoid it? And how should we deal with it in training deep neural networks? In fact, we'll learn shortly that it's not inevitable, although the alternative is not very attractive, either: sometimes the gradient gets much larger in earlier layers! This is the <em>exploding gradient problem</em>, and it's not much better news than the vanishing gradient problem. More generally, it turns out that the gradient in deep neural networks is <em>unstable</em>, tending to either explode or vanish in earlier layers. This instability is a fundamental problem for gradient-based learning in deep neural networks. It's something we need to understand, and, if possible, take steps to address.</p><p>One response to vanishing (or unstable) gradients is to wonder if they're really such a problem. Momentarily stepping away from neural nets, imagine we were trying to numerically minimize a function $f(x)$ of a single variable. Wouldn't it be good news if the derivative $f'(x)$ was small? Wouldn't that mean we were already near an extremum? In a similar way, might the small gradient in early layers of a deep network mean that we don't need to do much adjustment of the weights and biases?</p><p>Of course, this isn't the case. Recall that we randomly initialized the weights and biases in the network. It is extremely unlikely our initial weights and biases will do a good job at whatever it is we want our network to do. To be concrete, consider the first layer of weights in a $[784, 30, 30, 30, 10]$ network for the MNIST problem. The random initialization means the first layer throws away most information about the input image. Even if later layers have been extensively trained, they will still find it extremely difficult to identify the input image, simply because they don't have enough information. And so it can't possibly be the case that not much learning needs to be done in the first layer.
If we're going to train deep networks, we need to figure out how to address the vanishing gradient problem.</p><p><h3><a name="what's_causing_the_vanishing_gradient_problem_unstable_gradients_in_deep_neural_nets"></a><a href="#what's_causing_the_vanishing_gradient_problem_unstable_gradients_in_deep_neural_nets">What's causing the vanishing gradient problem? Unstable gradients in deep neural nets</a></h3></p><p>To get insight into why the vanishing gradient problem occurs, let's consider the simplest deep neural network: one with just a single neuron in each layer. Here's a network with three hidden layers:<center> <img src="images/tikz37.png"/></center></p><p>Here, $w_1, w_2, \ldots$ are the weights, $b_1, b_2, \ldots$ are the biases, and $C$ is some cost function. Just to remind you how this works, the output $a_j$ from the $j$th neuron is $\sigma(z_j)$, where $\sigma$ is the usual <a href="chap1.html#sigmoid_neurons">sigmoid activation function</a>, and $z_j = w_{j} a_{j-1}+b_j$ is the weighted input to the neuron. I've drawn the cost $C$ at the end to emphasize that the cost is a function of the network's output, $a_4$: if the actual output from the network is close to the desired output, then the cost will be low, while if it's far away, the cost will be high.</p><p>We're going to study the gradient $\partial C / \partial b_1$ associated with the first hidden neuron. We'll figure out an expression for $\partial C / \partial b_1$, and by studying that expression we'll understand why the vanishing gradient problem occurs.</p><p>I'll start by simply showing you the expression for $\partial C / \partial b_1$. It looks forbidding, but it's actually got a simple structure, which I'll describe in a moment. Here's the expression (ignore the network, for now, and note that $\sigma'$ is just the derivative of the $\sigma$ function):</p><p><center> <img src="images/tikz38.png"/></center></p><p>The structure in the expression is as follows: there is a $\sigma'(z_j)$ term in the product for each neuron in the network; a weight $w_j$ term for each weight in the network; and a final $\partial C / \partial a_4$ term, corresponding to the cost function at the end. Notice that I've placed each term in the expression above the corresponding part of the network. So the network itself is a mnemonic for the expression.</p><p>You're welcome to take this expression for granted, and skip to the <a href="#discussion_why">discussion of how it relates to the vanishing gradient problem</a>. There's no harm in doing this, since the expression is a special case of our <a href="chap2.html#the_four_fundamental_equations_behind_backpropagation">earlier discussion of backpropagation</a>. But there's also a simple explanation of why the expression is true, and so it's fun (and perhaps enlightening) to take a look at that explanation.</p><p>Imagine we make a small change $\Delta b_1$ in the bias $b_1$. That will set off a cascading series of changes in the rest of the network. First, it causes a change $\Delta a_1$ in the output from the first hidden neuron. That, in turn, will cause a change $\Delta z_2$ in the weighted input to the second hidden neuron. Then a change $\Delta a_2$ in the output from the second hidden neuron. And so on, all the way through to a change $\Delta C$ in the cost at the output.
We have <a class="displaced_anchor" name="eqtn114"></a>\begin{eqnarray} \frac{\partial C}{\partial b_1} \approx \frac{\Delta C}{\Delta b_1}.\tag{114}\end{eqnarray}This suggests that we can figure out an expression for the gradient $\partial C / \partial b_1$ by carefully tracking the effect of each step in this cascade.</p><p>To do this, let's think about how $\Delta b_1$ causes the output $a_1$ from the first hidden neuron to change. We have $a_1 = \sigma(z_1) = \sigma(w_1 a_0 + b_1)$, so<a class="displaced_anchor" name="eqtn115"></a><a class="displaced_anchor" name="eqtn116"></a>\begin{eqnarray} \Delta a_1 & \approx & \frac{\partial \sigma(w_1 a_0+b_1)}{\partial b_1} \Delta b_1 \tag{115}\\ & = & \sigma'(z_1) \Delta b_1.\tag{116}\end{eqnarray}That $\sigma'(z_1)$ term should look familiar: it's the first term in our claimed expression for the gradient $\partial C / \partial b_1$. Intuitively, this term converts a change $\Delta b_1$ in the bias into a change $\Delta a_1$ in the output activation. That change $\Delta a_1$ in turn causes a change in the weighted input $z_2 = w_2 a_1 + b_2$ to the second hidden neuron:<a class="displaced_anchor" name="eqtn117"></a><a class="displaced_anchor" name="eqtn118"></a>\begin{eqnarray} \Delta z_2 & \approx & \frac{\partial z_2}{\partial a_1} \Delta a_1 \tag{117}\\ & = & w_2 \Delta a_1.\tag{118}\end{eqnarray}Combining our expressions for $\Delta z_2$ and $\Delta a_1$, we see how the change in the bias $b_1$ propagates along the network to affect $z_2$:<a class="displaced_anchor" name="eqtn119"></a>\begin{eqnarray}\Delta z_2 & \approx & \sigma'(z_1) w_2 \Delta b_1.\tag{119}\end{eqnarray}Again, that should look familiar: we've now got the first two terms in our claimed expression for the gradient $\partial C / \partial b_1$.</p><p>We can keep going in this fashion, tracking the way changes propagate through the rest of the network. At each neuron we pick up a $\sigma'(z_j)$ term, and through each weight we pick up a $w_j$ term. The end result is an expression relating the final change $\Delta C$ in cost to the initial change $\Delta b_1$ in the bias:<a class="displaced_anchor" name="eqtn120"></a>\begin{eqnarray}\Delta C & \approx & \sigma'(z_1) w_2 \sigma'(z_2) \ldots \sigma'(z_4) \frac{\partial C}{\partial a_4} \Delta b_1.\tag{120}\end{eqnarray}Dividing by $\Delta b_1$ we do indeed get the desired expression for the gradient:<a class="displaced_anchor" name="eqtn121"></a>\begin{eqnarray}\frac{\partial C}{\partial b_1} = \sigma'(z_1) w_2 \sigma'(z_2) \ldots \sigma'(z_4) \frac{\partial C}{\partial a_4}.\tag{121}\end{eqnarray}</p>
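<p>If you'd like a concrete check that nothing has gone astray in this argument, here's a little sketch (not from the book's code) comparing expression (121) against a finite-difference estimate of $\partial C / \partial b_1$, with made-up weights, biases and input, and a quadratic cost $C = (a_4 - y)^2/2$ chosen purely for concreteness:</p><p><div class="highlight"><pre>
# Sketch: verify the product expression for dC/db_1 on the four-neuron chain.
import numpy as np

def sigma(z): return 1.0 / (1.0 + np.exp(-z))
def sigma_prime(z): return sigma(z) * (1 - sigma(z))

np.random.seed(1)
w = np.random.randn(5)   # w[1], ..., w[4] used; w[0] ignored, to match the text's indexing
b = np.random.randn(5)   # b[1], ..., b[4] used; b[0] ignored
a0, y = 0.3, 0.8         # made-up input activation and target output

def forward(b1):
    """Return (a_4, [z_1, ..., z_4]) when the first bias is set to b1."""
    biases = [None, b1, b[2], b[3], b[4]]
    a, zs = a0, []
    for j in range(1, 5):
        z = w[j] * a + biases[j]
        zs.append(z)
        a = sigma(z)
    return a, zs

a4, zs = forward(b[1])
analytic = (sigma_prime(zs[0]) * w[2] * sigma_prime(zs[1]) * w[3]
            * sigma_prime(zs[2]) * w[4] * sigma_prime(zs[3]) * (a4 - y))
eps = 1e-6
numeric = (0.5 * (forward(b[1] + eps)[0] - y)**2
           - 0.5 * (forward(b[1] - eps)[0] - y)**2) / (2 * eps)
print("%.8f  %.8f" % (analytic, numeric))   # the two estimates agree closely
</pre></div></p>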
<p><a id="discussion_why"></a></p><p><strong>Why the vanishing gradient problem occurs:</strong> To understand why the vanishing gradient problem occurs, let's explicitly write out the entire expression for the gradient:<a class="displaced_anchor" name="eqtn122"></a>\begin{eqnarray}\frac{\partial C}{\partial b_1} = \sigma'(z_1) \, w_2 \sigma'(z_2) \, w_3 \sigma'(z_3) \, w_4 \sigma'(z_4) \, \frac{\partial C}{\partial a_4}.\tag{122}\end{eqnarray}Excepting the very last term, this expression is a product of terms of the form $w_j \sigma'(z_j)$. To understand how each of those terms behaves, let's look at a plot of the function $\sigma'$: <div id="sigmoid_prime_graph"><a name="sigmoid_prime_graph"></a></div><script type="text/javascript" src="js/d3.v3.min.js"></script></p><p>The derivative reaches a maximum at $\sigma'(0) = 1/4$. Now, if we use our <a href="chap3.html#weight_initialization">standard approach</a> to initializing the weights in the network, then we'll choose the weights using a Gaussian with mean $0$ and standard deviation $1$. So the weights will usually satisfy $|w_j| < 1$. Putting these observations together, we see that the terms $w_j \sigma'(z_j)$ will usually satisfy $|w_j \sigma'(z_j)| < 1/4$. And when we take a product of many such terms, the product will tend to exponentially decrease: the more terms, the smaller the product will be. This is starting to smell like a possible explanation for the vanishing gradient problem.</p><p>To make this all a bit more explicit, let's compare the expression for $\partial C / \partial b_1$ to an expression for the gradient with respect to a later bias, say $\partial C / \partial b_3$. Of course, we haven't explicitly worked out an expression for $\partial C / \partial b_3$, but it follows the same pattern described above for $\partial C / \partial b_1$. Here's the comparison of the two expressions:<center> <img src="images/tikz39.png"/></center>The two expressions share many terms. But the gradient $\partial C / \partial b_1$ includes two extra terms each of the form $w_j \sigma'(z_j)$. As we've seen, such terms are typically less than $1/4$ in magnitude. And so the gradient $\partial C / \partial b_1$ will usually be a factor of $16$ (or more) smaller than $\partial C / \partial b_3$. This is the essential origin of the vanishing gradient problem.</p><p>Of course, this is an informal argument, not a rigorous proof that the vanishing gradient problem will occur. There are several possible escape clauses. In particular, we might wonder whether the weights $w_j$ could grow during training. If they do, it's possible the terms $w_j \sigma'(z_j)$ in the product will no longer satisfy $|w_j \sigma'(z_j)| < 1/4$. Indeed, if the terms get large enough - greater than $1$ - then we will no longer have a vanishing gradient problem. Instead, the gradient will actually grow exponentially as we move backward through the layers. Instead of a vanishing gradient problem, we'll have an exploding gradient problem.</p><p><strong>The exploding gradient problem:</strong> Let's look at an explicit example where exploding gradients occur. The example is somewhat contrived: I'm going to fix parameters in the network in just the right way to ensure we get an exploding gradient. But even though the example is contrived, it has the virtue of firmly establishing that exploding gradients aren't merely a hypothetical possibility, they really can happen.</p><p>There are two steps to getting an exploding gradient. First, we choose all the weights in the network to be large, say $w_1 = w_2 = w_3 = w_4 = 100$. Second, we'll choose the biases so that the $\sigma'(z_j)$ terms are not too small. That's actually pretty easy to do: all we need do is choose the biases to ensure that the weighted input to each neuron is $z_j = 0$ (and so $\sigma'(z_j) = 1/4$). So, for instance, we want $z_1 = w_1 a_0 + b_1 = 0$. We can achieve this by setting $b_1 = -100 * a_0$. We can use the same idea to select the other biases. When we do this, we see that all the terms $w_j \sigma'(z_j)$ are equal to $100 * \frac{1}{4} = 25$. With these choices we get an exploding gradient.</p>
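<p>To get a feel for the numbers, here's a small sketch (not from the book's code) comparing products of $w_j \sigma'(z_j)$ terms in the two regimes just discussed: weights and weighted inputs of order $1$, versus the contrived choice $w_j = 100$, $z_j = 0$, where each factor is exactly $25$:</p><p><div class="highlight"><pre>
# Sketch: the magnitude of a product of w_j * sigma'(z_j) terms shrinks
# rapidly with depth for order-1 weights and inputs, and grows rapidly in
# the contrived exploding-gradient setup.
import numpy as np

def sigma_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)

np.random.seed(0)
for depth in [2, 5, 10, 20]:
    w = np.random.randn(depth)       # weights drawn from N(0, 1)
    z = np.random.randn(depth)       # weighted inputs, also of order 1
    vanishing = np.prod(np.abs(w) * sigma_prime(z))
    exploding = 25.0 ** depth        # each |w_j * sigma'(0)| = 100 * 1/4
    print("depth %2d:  vanishing %.2e   exploding %.2e" % (depth, vanishing, exploding))
</pre></div></p>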
<p><strong>The unstable gradient problem:</strong> The fundamental problem here isn't so much the vanishing gradient problem or the exploding gradient problem. It's that the gradient in early layers is the product of terms from all the later layers. When there are many layers, that's an intrinsically unstable situation. The only way all layers can learn at close to the same speed is if all those products of terms come close to balancing out. Without some mechanism or underlying reason for that balancing to occur, it's highly unlikely to happen simply by chance. In short, the real problem here is that neural networks suffer from an <em>unstable gradient problem</em>. As a result, if we use standard gradient-based learning techniques, different layers in the network will tend to learn at wildly different speeds.</p><p><h4><a name="exercise_255808"></a><a href="#exercise_255808">Exercise</a></h4><ul><li> In our discussion of the vanishing gradient problem, we made use of the fact that $|\sigma'(z)| < 1/4$. Suppose we used a different activation function, one whose derivative could be much larger. Would that help us avoid the unstable gradient problem?</ul></p><p><strong>The prevalence of the vanishing gradient problem:</strong> We've seen that the gradient can either vanish or explode in the early layers of a deep network. In fact, when using sigmoid neurons the gradient will usually vanish. To see why, consider again the expression $|w \sigma'(z)|$. To avoid the vanishing gradient problem we need $|w \sigma'(z)| \geq 1$. You might think this could happen easily if $w$ is very large. However, it's more difficult than it looks. The reason is that the $\sigma'(z)$ term also depends on $w$: $\sigma'(z) = \sigma'(wa+b)$, where $a$ is the input activation. So when we make $w$ large, we need to be careful that we're not simultaneously making $\sigma'(wa+b)$ small. That turns out to be a considerable constraint. The reason is that when we make $w$ large we tend to make $wa+b$ very large. Looking at the graph of $\sigma'$ you can see that this puts us off in the "wings" of the $\sigma'$ function, where it takes very small values. The only way to avoid this is if the input activation falls within a fairly narrow range of values (this qualitative explanation is made quantitative in the first problem below). Sometimes that will happen by chance. More often, though, it does not happen. And so in the generic case we have vanishing gradients.</p><p><h4><a name="problems_778071"></a><a href="#problems_778071">Problems</a></h4><ul><li> Consider the product $|w \sigma'(wa+b)|$. Suppose $|w \sigma'(wa+b)| \geq 1$. (1) Argue that this can only ever occur if $|w| \geq 4$. (2) Supposing that $|w| \geq 4$, consider the set of input activations $a$ for which $|w \sigma'(wa+b)| \geq 1$. Show that the set of $a$ satisfying that constraint can range over an interval no greater in width than <a class="displaced_anchor" name="eqtn123"></a>\begin{eqnarray} \frac{2}{|w|} \ln\left( \frac{|w|(1+\sqrt{1-4/|w|})}{2}-1\right). \tag{123}\end{eqnarray} (3) Show numerically that the above expression bounding the width of the range is greatest at $|w| \approx 6.9$, where it takes a value $\approx 0.45$. And so even given that everything lines up just perfectly, we still have a fairly narrow range of input activations which can avoid the vanishing gradient problem.</p><p><li><strong>Identity neuron:</strong> <a id="identity_neuron"></a> Consider a neuron with a single input, $x$, a corresponding weight, $w_1$, a bias $b$, and a weight $w_2$ on the output. Show that by choosing the weights and bias appropriately, we can ensure $w_2 \sigma(w_1 x+b) \approx x$ for $x \in [0, 1]$.
Such a neuron can thus be used as a kind of identity neuron, that is, a neuron whose output is the same (up to rescaling by a weight factor) as its input. <em>Hint: It helps to rewrite $x = 1/2+\Delta$, to assume $w_1$ is small, and to use a Taylor series expansion in $w_1 \Delta$.</em></ul></p><p><h3><a name="unstable_gradients_in_more_complex_networks"></a><a href="#unstable_gradients_in_more_complex_networks">Unstable gradients in more complex networks</a></h3></p><p>We've been studying toy networks, with just one neuron in each hidden layer. What about more complex deep networks, with many neurons in each hidden layer?</p><p><center><img src="images/tikz40.png"/></center></p><p>In fact, much the same behaviour occurs in such networks. In the earlier chapter on backpropagation we saw that the gradient in the $l$th layer of an $L$ layer network <a href="chap2.html#alternative_backprop">is given by</a>:</p><p><a class="displaced_anchor" name="eqtn124"></a>\begin{eqnarray} \delta^l = \Sigma'(z^l) (w^{l+1})^T \Sigma'(z^{l+1}) (w^{l+2})^T \ldots \Sigma'(z^L) \nabla_a C\tag{124}\end{eqnarray}</p><p>Here, $\Sigma'(z^l)$ is a diagonal matrix whose entries are the $\sigma'(z)$ values for the weighted inputs to the $l$th layer. The $w^l$ are the weight matrices for the different layers. And $\nabla_a C$ is the vector of partial derivatives of $C$ with respect to the output activations.</p><p>This is a much more complicated expression than in the single-neuron case. Still, if you look closely, the essential form is very similar, with lots of pairs of the form $(w^j)^T \Sigma'(z^j)$. What's more, the matrices $\Sigma'(z^j)$ have small entries on the diagonal, none larger than $\frac{1}{4}$. Provided the weight matrices $w^j$ aren't too large, each additional term $(w^j)^T \Sigma'(z^j)$ tends to make the gradient vector smaller, leading to a vanishing gradient. More generally, the large number of terms in the product tends to lead to an unstable gradient, just as in our earlier example. In practice, it is typically found empirically in sigmoid networks that gradients vanish exponentially quickly in earlier layers. As a result, learning slows down in those layers. This slowdown isn't merely an accident or an inconvenience: it's a fundamental consequence of the approach we're taking to learning.</p>
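<p>Here's a rough numerical sketch of equation (124) in action (again, not from the book's code): with $30$-neuron layers, weights initialized as recommended in <a href="chap3.html#weight_initialization">Chapter 3</a> (Gaussians scaled by $1/\sqrt{n_{\rm in}}$), and order-$1$ weighted inputs, each extra factor $(w^j)^T \Sigma'(z^j)$ tends to shrink the gradient vector, so the earliest layers end up with by far the smallest norms:</p><p><div class="highlight"><pre>
# Sketch: iterate delta^l = Sigma'(z^l) (w^{l+1})^T delta^{l+1} backwards
# through stand-in layers, and watch the gradient norms shrink.
import numpy as np

def sigma_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)

np.random.seed(0)
n, L = 30, 6                          # 30 neurons per layer, 6 layers
weights = [np.random.randn(n, n) / np.sqrt(n) for _ in range(L + 1)]
zs = [np.random.randn(n, 1) for _ in range(L)]
nabla_a_C = np.random.randn(n, 1)     # stand-in for the final cost-gradient term

delta = sigma_prime(zs[-1]) * nabla_a_C            # delta^L
norms = [np.linalg.norm(delta)]
for l in range(L - 2, -1, -1):                     # work backwards through the layers
    delta = sigma_prime(zs[l]) * np.dot(weights[l + 1].T, delta)
    norms.append(np.linalg.norm(delta))
print(["%.1e" % nrm for nrm in norms[::-1]])       # earliest layers have the smallest norms
</pre></div></p>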
<p><h3><a name="other_obstacles_to_deep_learning"></a><a href="#other_obstacles_to_deep_learning">Other obstacles to deep learning</a></h3></p><p>In this chapter we've focused on vanishing gradients - and, more generally, unstable gradients - as an obstacle to deep learning. In fact, unstable gradients are just one obstacle to deep learning, albeit an important fundamental obstacle. Much ongoing research aims to better understand the challenges that can occur when training deep networks. I won't comprehensively summarize that work here, but just want to briefly mention a couple of papers, to give you the flavour of some of the questions people are asking.</p><p>As a first example, in 2010 Glorot and Bengio*<span class="marginnote">*<a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf">Understanding the difficulty of training deep feedforward neural networks</a>, by Xavier Glorot and Yoshua Bengio (2010). See also the earlier discussion of the use of sigmoids in <a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf">Efficient BackProp</a>, by Yann LeCun, Léon Bottou, Genevieve Orr and Klaus-Robert Müller (1998).</span> found evidence suggesting that the use of sigmoid activation functions can cause problems training deep networks. In particular, they found evidence that the use of sigmoids will cause the activations in the final hidden layer to saturate near $0$ early in training, substantially slowing down learning. They suggested some alternative activation functions, which appear not to suffer as much from this saturation problem.</p><p>As a second example, in 2013 Sutskever, Martens, Dahl and Hinton*<span class="marginnote">*<a href="http://www.cs.toronto.edu/~hinton/absps/momentum.pdf">On the importance of initialization and momentum in deep learning</a>, by Ilya Sutskever, James Martens, George Dahl and Geoffrey Hinton (2013).</span> studied the impact on deep learning of both the random weight initialization and the momentum schedule in momentum-based stochastic gradient descent. In both cases, making good choices made a substantial difference in the ability to train deep networks.</p><p>These examples suggest that "What makes deep networks hard to train?" is a complex question. In this chapter, we've focused on the instabilities associated with gradient-based learning in deep networks. The results in the last two paragraphs suggest that there is also a role played by the choice of activation function, the way weights are initialized, and even details of how learning by gradient descent is implemented. And, of course, the choice of network architecture and other hyper-parameters is also important. Thus, many factors can play a role in making deep networks hard to train, and understanding all those factors is still a subject of ongoing research. This all seems rather downbeat and pessimism-inducing. But the good news is that in the next chapter we'll turn that around, and develop several approaches to deep learning that to some extent manage to overcome or route around all these challenges.</p><p><script src="js/misc.js" type="text/javascript"></script><script src="js/canvas.js" type="text/javascript"></script><script src="js/neuron.js" type="text/javascript"></script><script src="js/chap5.js" type="text/javascript"></script></p></div><div class="footer"> <span class="left_footer"> In academic work,
please cite this book as: Michael A. Nielsen, "Neural Networks and
Deep Learning", Determination Press, 2015
<br/>
<br/>
This work is licensed under a <a rel="license"
href="http://creativecommons.org/licenses/by-nc/3.0/deed.en_GB"
style="color: #eee;">Creative Commons Attribution-NonCommercial 3.0
Unported License</a>. This means you're free to copy, share, and
build on this book, but not to sell it. If you're interested in
commercial use, please <a
href="mailto:[email protected]">contact me</a>.
</span>
<span class="right_footer">
Last update: Sun Dec 21 14:49:05 2014
<br/>
<br/>
<br/>
<a rel="license" href="http://creativecommons.org/licenses/by-nc/3.0/deed.en_GB"><img alt="Creative Commons Licence" style="border-width:0" src="http://i.creativecommons.org/l/by-nc/3.0/88x31.png" /></a>
</span>
</div>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-44208967-1', 'neuralnetworksanddeeplearning.com');
ga('send', 'pageview');
</script>
</body>
</html>