Now we have a rough understanding of how a fully-connected, multi-layer neural network can be trained using backpropagation. Instead of focusing on further theory, let's try creating and training a neural network in practice. There are multiple neural network frameworks; at the time of writing, the dominant ones are TensorFlow (by Google) and PyTorch (by Facebook).
We won't dive deep into this subject, but you will get to train a neural network using the TensorFlow framework.
By convention, TensorFlow is usually imported as tf.
import tensorflow as tf
TensorFlow runs computations involving tensors. Tensors are multi-dimensional arrays with a uniform type (dtype), similar but not the same as NumPy arrays.
Creating a tensor is very similar to creating a NumPy array:
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
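You can inspect the properties of the returned object directly; a minimal sketch:

```python
import tensorflow as tf

my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Inspect the tensor's shape, dtype, and number of dimensions
print(my_tensor.shape)  # (2, 2)
print(my_tensor.dtype)  # <dtype: 'float32'>
print(my_tensor.ndim)   # 2
```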
In this case, the returned my_tensor object is a 2-dimensional Tensor (an EagerTensor) with a shape of (2, 2) and a dtype of tf.float32. For more information, read the documentation about Tensors. Using the tf.data module, you can create input pipelines from various sources. For example, this tutorial explains how to import data from a Pandas DataFrame.
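As a sketch of such a pipeline, the example below builds a tf.data.Dataset from the columns of a small made-up DataFrame (the column names and values are illustrative, not from the tutorial):

```python
import pandas as pd
import tensorflow as tf

# A tiny hypothetical DataFrame standing in for real training data
df = pd.DataFrame({"feature": [0.1, 0.2, 0.3], "label": [0, 1, 1]})

# Slice the columns into (feature, label) pairs
dataset = tf.data.Dataset.from_tensor_slices(
    (df["feature"].values, df["label"].values)
)

# Iterate over the first two elements of the pipeline
for feature, label in dataset.take(2):
    print(feature.numpy(), label.numpy())
```

The same Dataset object can then be batched and shuffled before being passed to a model.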
TensorFlow 2.x will automatically use the GPU if a CUDA-capable graphics card and the CUDA Toolkit have been installed. If you are interested in trying out TensorFlow on your home computer and you own a suitable Nvidia graphics card, follow the installation guide in the official documentation.
# Returns True if this TensorFlow build was compiled with CUDA (GPU) support
tf.test.is_built_with_cuda()
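Note that this only checks how the TensorFlow binary was built. To check whether a GPU device is actually visible at runtime, you can list the physical devices:

```python
import tensorflow as tf

# List GPU devices visible to TensorFlow; an empty list means CPU only
gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", len(gpus))
```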
You will most likely not see any performance increase with shallow neural network models, but when you start adding depth and width to the models, the difference is easily noticeable. On a home computer, you might realistically train the same model in 20 hours (CPU) or in 4 hours (GPU).
When learning TensorFlow, you might get confused about the "Keras" that is mentioned all around the documentation. Keras is a high-level API for deep learning networks. In TensorFlow, an alternative API is the Estimator API, although the documentation itself recommends Keras since it is easier to use.
Note: Keras is not specific to TensorFlow, so don't be confused if you find a tutorial that uses Keras as an API for the Theano backend.
Since TensorFlow version 2.x, Keras has been included in the TensorFlow installation by default. This built-in implementation is in the tensorflow.keras module. When reading various tutorials, pay attention to which TensorFlow version the tutorial was written for. Migrating code from TensorFlow 1.x to 2.x is possible, but not something you might want to do as a beginner.
# TensorFlow 1.x (standalone Keras)
from keras.models import Sequential

# TensorFlow 2.x (built-in Keras)
from tensorflow.keras.models import Sequential
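To show the tensorflow.keras import in use, here is a minimal sketch of building and training a tiny Sequential model on made-up data (the layer sizes and the regression task are illustrative assumptions, not part of the exercise):

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Made-up regression data: targets are the sum of four random features
X = np.random.rand(100, 4)
y = X.sum(axis=1)

# A small fully-connected network: one hidden layer, one output
model = Sequential([
    Input(shape=(4,)),
    Dense(8, activation="relu"),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train briefly just to confirm the pipeline runs
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X[:1], verbose=0).shape)  # (1, 1)
```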
The Jupyter Notebook exercise has been written for TensorFlow 2.x.