- There are many types of neural network architectures. However, no matter what architecture you choose, the math it contains (what calculations are being performed, and in what order) is not modified during training.
- Instead, it is the internal variables (“weights” and “biases”) which are updated during training.
ex) F = C * ? + ?  (the ?s are the internal variables that training fills in)

- The machine learning approach consists of using a neural network to learn the relation between inputs and outputs.
- You can think of a neural network as a stack of layers, where each layer consists of some predefined math and internal variables.
- In order for the neural network to learn the correct relationship between inputs and outputs, we have to train it.
- We train our neural network by repeatedly letting the network try to map the input to the output.
- While training, the internal variables in the layers are tuned until the network learns to produce the correct output for the given inputs.
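- How this looks in a nutshell, as a rough sketch of my own (not course code): the math stays fixed at F = w * C + b, and only w and b are nudged by gradient descent to reduce the mean squared error on the Celsius/Fahrenheit samples used later in these notes.
ex) celsius    = [0, 8, 15, 22, 38]
fahrenheit = [32, 46, 59, 72, 100]
w, b = 0.0, 0.0                # internal variables ("weight" and "bias")
lr = 0.001                     # learning rate (an arbitrary small value)
for step in range(20000):      # repeatedly try to map inputs to outputs
    # gradient of the mean squared error with respect to w and b
    grad_w = 2 * sum((w * c + b - f) * c for c, f in zip(celsius, fahrenheit)) / len(celsius)
    grad_b = 2 * sum((w * c + b - f) for c, f in zip(celsius, fahrenheit)) / len(celsius)
    w -= lr * grad_w           # only the internal variables are updated
    b -= lr * grad_b
print(w, b)                    # ends up close to 1.8 and 32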
<Colab Notebook>
- To access the Colab Notebook, log in to your Google account and click on the link below:
Converting Celsius to Fahrenheit (Google Colaboratory, colab.research.google.com)
- Train my first Machine Learning model! I will give TensorFlow some sample Celsius values (0, 8, 15, 22, 38) and their corresponding Fahrenheit values (32, 46, 59, 72, 100), and train a model that figures out the formula f = c × 1.8 + 32 on its own.
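- For example, the sample data could be set up as NumPy arrays (a minimal sketch; the names celsius_q and fahrenheit_a are the ones used later in these notes):
ex) import numpy as np
celsius_q    = np.array([0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([32, 46, 59, 72, 100], dtype=float)
for c, f in zip(celsius_q, fahrenheit_a):
    print("{} degrees Celsius = {} degrees Fahrenheit".format(c, f))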
- Some Machine Learning terminology
- Feature — The input(s) to our model. In this case, a single value — the degrees in Celsius.
- Labels — The output our model predicts. In this case, a single value — the degrees in Fahrenheit.
- Example — A pair of inputs/outputs used during training. In our case a pair of values from celsius_q and fahrenheit_a at a specific index, such as (22,72).
- Since the problem is straightforward, this network will require only a single layer, with a single neuron.
- I can create the layer by instantiating tf.keras.layers.Dense with the following configuration:
- input_shape=[1] — This specifies that the input to this layer is a single value.
- units=1 — This specifies the number of neurons in the layer.
- ex) l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
- Once layers are defined, they need to be assembled into a model. The Sequential model definition takes a list of layers as an argument, specifying the calculation order from the input to the output.
ex) model = tf.keras.Sequential([l0])
- Before training, the model has to be compiled. When compiled for training, the model is given:
- Loss function — A way of measuring how far off predictions are from the desired outcome. (The measured difference is called the "loss".)
- Optimizer function — A way of adjusting internal values in order to reduce the loss.
- ex) model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
- Train the model by calling the fit method. The difference between the actual output and the desired output is calculated using the loss function, and the optimizer function directs how the weights should be adjusted.
ex) history = model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print("Finished training the model")
- The fit method returns a history object. We can use this object to plot how the loss of our model goes down after each training epoch.
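- For example, the recorded loss could be plotted with Matplotlib (a sketch that assumes the history variable returned by fit above):
ex) import matplotlib.pyplot as plt
plt.xlabel('Epoch Number')
plt.ylabel('Loss Magnitude')
plt.plot(history.history['loss'])   # loss value recorded at each epoch
plt.show()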
- Now I have a model that has been trained to learn the relationship between celsius_q and fahrenheit_a. You can use the predict method to have it calculate the Fahrenheit degrees for previously unseen Celsius values.
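- A quick sketch of predict (100.0 is just an arbitrary test value; the true answer is 100 × 1.8 + 32 = 212), plus a look at the trained layer's internal variables via get_weights:
ex) import numpy as np
print(model.predict(np.array([[100.0]])))    # one sample with one feature
print("These are the layer variables: {}".format(l0.get_weights()))
# the learned weight and bias should be close to 1.8 and 32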
<Source Code>
HoYoungChun/TensorFlow_study (github.com)
Udacity's Intro to TensorFlow for Deep Learning course, for earning the TF_Certificate