- In the previous lesson, we checked how well our neural networks were performing by looking at the accuracy on the test sets.
- To avoid overfitting, we often introduce a validation set. After every training epoch, we check how the model is doing by looking at the training loss and the loss on the validation set.

- It's important to note that the model does not use any part of the validation set to tune the weights and biases. The validation step only tells us whether the model is doing well on the validation set.
- The idea is that since the neural network doesn't use the validation set to decide its weights and biases, it can tell us whether we're overfitting the training set.
- Very high accuracy on the training set, and a much lower accuracy on the validation set -> clear sign of overfitting
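- As a rough sketch of how this looks in Keras (the small dense model and Fashion-MNIST data below are placeholders, not the lesson's own code), passing validation_split (or validation_data) to model.fit() makes Keras report a validation loss and accuracy after every epoch without ever training on that data:

```python
import tensorflow as tf

# Placeholder data and model; any (images, labels) arrays work the same way.
(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# validation_split=0.2 holds out 20% of the training images as a validation
# set; Keras prints val_loss / val_accuracy after each epoch but never uses
# those images to update the weights and biases.
history = model.fit(x_train, y_train, epochs=10, validation_split=0.2)
```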

<Early Stopping : avoid overfitting>
- One way to avoid overfitting is by taking a look at the plot of the training and validation loss as a function of epochs.
- We can see that after some epochs, the validation loss starts to increase, while the training loss keeps decreasing.

- In other words, this plot is telling us that after just a few epochs, our neural network is just memorizing the training data, and it's not generalizing well to the data in the validation set.
- The validation set can help us determine the number of epochs we should train our CNN for.
- In general, whichever model (or number of training epochs) gives the lowest validation loss is the one to keep.
- Couldn't we just use the test set for validation? --> The problem is that even though we don't use the validation set to tune the weights and biases during training, we ultimately end up tuning our model so that it performs well on both the training set and the validation set (for example, by choosing the number of epochs or the architecture based on the validation results).
- Therefore, our neural network ends up being biased in favor of the validation set, and we still need a separate test set for an unbiased final evaluation.
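- A minimal sketch of early stopping with Keras (reusing the model and data from the sketch above; the patience of 3 epochs is just an example choice): the EarlyStopping callback watches the validation loss and stops training once it has stopped improving.

```python
import tensorflow as tf

# Stop training when val_loss has not improved for 3 consecutive epochs,
# and roll the weights back to the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=3,
    restore_best_weights=True,
)

history = model.fit(x_train, y_train,
                    epochs=100,                 # upper bound; may stop earlier
                    validation_split=0.2,
                    callbacks=[early_stop])
```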

<Image augmentation : avoid overfitting>
- Overfitting can also be avoided by using a technique called image augmentation.
- Image augmentation works by creating new training images, applying a number of random transformations to the images in the original training set.
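- A rough sketch using Keras' ImageDataGenerator (the transformation ranges and the train_dir path are placeholders, not taken from the Colab notebook): each batch is transformed on the fly, so the network rarely sees the exact same image twice.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,           # normalize pixel values
    rotation_range=40,        # random rotations up to 40 degrees
    width_shift_range=0.2,    # random horizontal shifts
    height_shift_range=0.2,   # random vertical shifts
    zoom_range=0.2,           # random zooms
    horizontal_flip=True,     # random left-right flips
)

# train_dir is a hypothetical path to a folder with one subfolder per class.
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(150, 150), batch_size=32, class_mode='binary')
```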

<Dropout : avoid overfitting>

- Dropout works by randomly turning off (dropping) neurons during training, each with a probability that we specify.
- Notice that since we specify a probability, some neurons may get turned off more than others, and some may never get turned off at all.
- This is not a problem because we repeat the process many, many times. Therefore, on average, each neuron gets the same chance of being turned off.
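- As an illustrative sketch (this exact architecture is not from the lesson), the Keras Dropout layer takes the drop probability as its argument and is only active during training; at inference time all neurons are used.

```python
import tensorflow as tf

# Dropout(0.5): each neuron feeding into the dense layer has a 50% chance
# of being turned off on every training pass.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
```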
<Colab Notebook>
- To access the Colab notebook, log in to your Google account and click on the link below:
Cats and Dogs with Image Augmentation (colab.research.google.com)
- In this lesson we saw three different techniques to prevent overfitting:
- Early Stopping: In this method, we track the loss on the validation set during the training phase and use it to determine when to stop training such that the model is accurate but not overfitting.
- Image Augmentation: Artificially boosting the number of images in our training set by applying random image transformations to the existing images in the training set.
- Dropout: Randomly turning off (removing) a selection of neurons in the network during training, each with a specified probability.
- You can read more about other techniques in the link below:
Memorizing is not learning! — 6 tricks to prevent overfitting in machine learning (hackernoon.com)
- In this lesson, we learned how Convolutional Neural Networks work with color images and saw various techniques that we can use to avoid overfitting.
<Source Code>
HoYoungChun/TensorFlow_study (github.com): notes and code for Udacity's Intro to TensorFlow for Deep Learning course, toward the TensorFlow Certificate.