Commit
Markdown translation
rickiepark committed Apr 24, 2021
1 parent b26fc99 commit 73c80fa
Showing 1 changed file with 1 addition and 1 deletion.
11_training_deep_neural_networks.ipynb (1 addition, 1 deletion)

@@ -3221,7 +3221,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning**: In the `on_batch_end()` method, `logs[\"loss\"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the \"noisy\" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):\n",
"**경고**: `on_batch_end()` 메서드에서 `logs[\"loss\"]`로 배치 손실을 모으지만 텐서플로 2.2.0에서 (에포크의) 평균 손실로 바뀌었습니다. (텐서플로 2.2 이상을 사용한다면) 이런 이유로 아래 그래프가 이전보다 훨씬 부드럽습니다. 이는 그래프에서 배치 손실이 폭주하기 시작하는 지점과 그렇지 않은 지점 사이에 지연이 있다는 뜻입니다. 따라서 변동이 심한 그래프에서는 조금 더 작은 학습률을 선택해야 합니다. 또한 `ExponentialLearningRate` 콜백을 조금 바꾸어 (현재 평균 손실과 이전 평균 손실을 기반으로) 배치 손실을 계산할 수 있습니다:\n",
"\n",
"```python\n",
"class ExponentialLearningRate(keras.callbacks.Callback):\n",
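The hunk is truncated at the class declaration. For reference, here is a minimal sketch of the tweak the warning describes, reconstructing each batch loss from the current and previous running means; the `factor` argument, the `rates`/`losses` bookkeeping lists, and the per-batch learning-rate update are illustrative assumptions, not part of the visible hunk:

```python
from tensorflow import keras
K = keras.backend

class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor   # multiplied into the learning rate after every batch
        self.rates = []        # learning rate in effect for each batch
        self.losses = []       # reconstructed per-batch losses

    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0     # mean loss observed after the previous batch

    def on_batch_end(self, batch, logs=None):
        # In TF >= 2.2, logs["loss"] is the mean loss since the start of the
        # epoch. With `batch` 0-indexed, mean * (batch + 1) is the sum of all
        # batch losses so far, so the current batch's own loss is the
        # difference between consecutive sums.
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.losses.append(batch_loss)
        self.rates.append(K.get_value(self.model.optimizer.lr))
        K.set_value(self.model.optimizer.lr,
                    K.get_value(self.model.optimizer.lr) * self.factor)
```

Because each batch loss is recovered as a difference of running sums, the plotted curve is as noisy as the pre-2.2 graph the warning refers to, which is the point of the tweak.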
