Neural Network

Today I started looking into Long Short-Term Memory (LSTM). LSTM is a kind of Recurrent Neural Network (RNN). My first impression was that combining a Neural Network (NN) with State Space Models, specifically Hidden Markov Models (HMM), would form an RNN, and that LSTM is an RNN with a filter. To understand them fully, I decided to code them, starting with the Neural Network.

The following is my short note. There are no fancy visualizations for now, and I try to include as many related terms as possible, since they used to confuse me a lot.

Sample Neural Network coded from scratch

Background (skip this section if one already has some background on Neural Networks)

Let’s begin with the basics, the Neural Network. A Neural Network without a hidden layer can be thought of as Linear Regression, a weighted sum. Adding the Sigmoid function, it then becomes Logistic Regression.

Weighted sum,

\begin{aligned} & z_i = \sum_{j} w_{j}x_{ij} \\ \end{aligned}

where i is the index of the data point, j is the index of the feature, and z_i is the predicted output.

With Sigmoid function,

\begin{aligned} & y_i & & = \frac{1}{1 + e^{-z_i}} \\ &&& = \frac{1}{1 + e^{-\sum_{j} w_{j}x_{ij}}}\\ \end{aligned}

Now y_i has become the predicted output instead of z_i. “How do we find the solution, the weights w?” you ask. We can use Least Squares or Gradient Descent; Gradient Descent is what is used in the training part below.
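To make this concrete, here is a minimal sketch in Python/NumPy of the weighted sum followed by the Sigmoid (the data points and weight values are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    # 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

# Toy data (made up): 4 data points, 3 features each.
X = np.array([[ 0.5,  1.2, -0.3],
              [ 1.0,  0.1,  0.4],
              [-0.7,  0.8,  0.9],
              [ 0.2, -1.1,  0.6]])
w = np.array([0.1, -0.2, 0.3])  # one weight per feature

z = X @ w           # weighted sum, z_i = sum_j w_j * x_ij
y_hat = sigmoid(z)  # predicted output y_i after the Sigmoid
print(y_hat)
```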

Generalizing this,

\begin{aligned} & y_i & & = f_1(z_i) \\ &&& = f_1(f_2(x_i))\\ \end{aligned}

where f_2 is the weighted sum of the features and f_1 is the identity (multiply by 1) for the plain weighted sum, or the Sigmoid function in the case of “Logistic Regression”. f_1 is called the activation function (it acts like a logic gate), z_i is the “input” of Neuron i and y_i is its “output”. This “Regression” unit is also called a Perceptron or a Neuron. For simplicity, when using it as a classifier, a threshold can be set such that an output above the threshold is assigned to one of the 2 possible classes. 1 Neuron can classify 2 classes, whereas 2 Neurons can classify 3 classes. This is the One-vs-All structure.
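As a sketch of that thresholding idea, here is a toy 2-Neuron classifier for 3 classes in the One-vs-All spirit (the weight values and the 0.5 threshold are assumptions, not learned values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 3 features x 2 Neurons; one column of (made-up) weights per Neuron.
W = np.array([[ 1.5, -0.5],
              [-1.0,  2.0],
              [ 0.3,  0.7]])

def predict_class(x, threshold=0.5):
    y = sigmoid(x @ W)   # one output per Neuron
    if y[0] > threshold:
        return 1         # Neuron 1 fires -> class 1
    if y[1] > threshold:
        return 2         # Neuron 2 fires -> class 2
    return 3             # neither fires  -> class 3

print(predict_class(np.array([0.5, 1.2, -0.3])))
```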

Hidden layer(s) are introduced to allow the “system” to perform more complicated work, y_{i, new} = f_{hidden}(y_{i}). For example, when we were young, we first learned the alphabet, then we were taught the weights W for building words out of letters. After that, a “hidden layer” is added so that we know how to make sentences, and so on. One might ask, “Why don’t we just learn a W that allows us to write essays directly?”. Compare the combinations of letters against the combinations of words in an essay: there are fewer combinations with words, so they are faster and easier to learn, just as learning the alphabet is easier than learning words directly. Putting that in this context, the number of Neurons per hidden layer need not be the same. The layer is “hidden” because the learned weights might not make any sense to us; it acts as a knowledge-storing capacity.
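A minimal sketch of stacking a hidden layer, i.e. y_{i, new} = f_{hidden}(y_{i}): the output of one layer becomes the input of the next (the layer sizes and random weights below are just placeholders):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.5, 1.2, -0.3])      # 3 raw features
W_hidden = rng.normal(size=(3, 4))  # input layer  -> 4 hidden Neurons
W_out = rng.normal(size=(4, 2))     # hidden layer -> 2 output Neurons

h = sigmoid(x @ W_hidden)           # hidden layer output
y = sigmoid(h @ W_out)              # final output
print(h, y)
```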

When there are more layers, the learning capacity increases, hence the buzzword “Deep Learning”. Don’t be intimidated by it! One could think of it as connecting several batteries in series to increase the voltage. And guess what, Deep Learning is not limited to just Neural Networks; it could be a serial combination of any other Machine Learning (ML) algorithms, even a different one at every level. However, for the sake of simplicity, let’s limit it to just the Neural Network.

What is left to know now is how to train a Neural Network.

For a classification problem, assuming that there are only 3 possible classes and no hidden layer, 2 Neurons are needed:

\begin{aligned} E_{total} = E_1 + E_2\\ E_1 = \frac{1}{2}(y_1 - \hat{y}_1)^2 \\ \hat{y}_1 = \frac{1}{1 + e^{-\sum_{j} w_{x_jz_1}x_{j}}} \end{aligned}

where y_1 \in \{0, 1\} is the ground truth, i.e. y_1 = 1, y_2 = 0 if a data point belongs to class 1.
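For instance, for a data point of class 1 (so y_1 = 1, y_2 = 0) with hypothetical predictions \hat{y}_1 = 0.8 and \hat{y}_2 = 0.3, the total error would be

\begin{aligned} E_{total} & = \frac{1}{2}(1 - 0.8)^2 + \frac{1}{2}(0 - 0.3)^2 \\ & = 0.02 + 0.045 \\ & = 0.065 \end{aligned}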

So the update of the weight between feature 1, x_1, and the input of Neuron 1, z_1, would be

\begin{aligned} w_{x_1z_1} \gets w_{x_1z_1} - \eta\frac{\partial E_{total}}{\partial w_{x_1z_1}} \\ \frac{\partial E_{total}}{\partial w_{x_1z_1}} = \frac{\partial E_{total}}{\partial y_1}\frac{\partial y_1}{\partial w_{x_1z_1}} \\ \frac{\partial y_1}{\partial w_{x_1z_1}} = \frac{\partial y_1}{\partial z_1}\frac{\partial z_1}{\partial w_{x_1z_1}} \end{aligned}
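Putting the update rule together for the no-hidden-layer case, here is a minimal training-loop sketch for a single Neuron (the toy data, the learning rate \eta = 0.1 and the iteration count are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 2 data points, 3 features; ground truth for a single Neuron.
X = np.array([[0.5, 1.2, -0.3],
              [1.0, 0.1,  0.4]])
y = np.array([1.0, 0.0])
w = np.zeros(3)
eta = 0.1

for _ in range(1000):
    y_hat = sigmoid(X @ w)
    # dE_total/dw_j = sum_i -(y_i - y_hat_i) * y_hat_i * (1 - y_hat_i) * x_ij
    grad = (-(y - y_hat) * y_hat * (1 - y_hat)) @ X
    w -= eta * grad  # w <- w - eta * dE_total/dw

print(w, sigmoid(X @ w))
```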

A Neural Network with a single hidden layer is also known as a Multi Layer Perceptron (MLP). Assuming that there are 3 classes and 1 hidden layer with n_{h_1} Neurons, there are 2 layers of weights to learn: 1. between the hidden layer and the output layer, 2. between the raw input and the hidden layer.

1. Between hidden layer and output layer

\begin{aligned} w_{h_{11}z_1} \gets w_{h_{11}z_1} - \eta\frac{\partial E_{total}}{\partial w_{h_{11}z_1}} \\ \frac{\partial E_{total}}{\partial w_{h_{11}z_1}} = \frac{\partial E_{total}}{\partial y_1}\frac{\partial y_1}{\partial w_{h_{11}z_1}} \\ \frac{\partial y_1}{\partial w_{h_{11}z_1}} = \frac{\partial y_1}{\partial z_1}\frac{\partial z_1}{\partial w_{h_{11}z_1}} \end{aligned}

2. Between input layer and hidden layer

\begin{aligned} w_{x_1h_{11}} \gets w_{x_1h_{11}} - \eta\frac{\partial E_{total}}{\partial w_{x_1h_{11}}} \\ \end{aligned}
\begin{aligned} \frac{\partial E_{total}}{\partial w_{x_1h_{11}}} = \sum_{i}\bigg(\frac{\partial E_i}{\partial y_i}\frac{\partial y_i}{\partial z_i}\frac{\partial z_i}{\partial y_{h_{11}}}\bigg)\frac{\partial y_{h_{11}}}{\partial z_{h_{11}}}\frac{\partial z_{h_{11}}}{\partial w_{x_1h_{11}}} \\ \end{aligned}

where, with the Sigmoid used as the hidden layer’s activation as well,

\begin{aligned} & \frac{\partial E_i}{\partial y_i} & & = -(y_i - \hat{y}_i) \\ & \frac{\partial y_i}{\partial z_i} & & = \hat{y}_i(1 - \hat{y}_i) \\ & \frac{\partial z_i}{\partial y_{h_{11}}} & & = w_{h_{11}z_i} \\ & \frac{\partial y_{h_{11}}}{\partial z_{h_{11}}} & & = \hat{y}_{h_{11}}(1 - \hat{y}_{h_{11}}) \\ & \frac{\partial z_{h_{11}}}{\partial w_{x_1h_{11}}} & & = x_1 \\ \end{aligned}
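Tying the chain rule together, here is a rough sketch of one backpropagation step for a single hidden layer, with the Sigmoid in both layers (the data point, layer sizes and learning rate are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.5, 1.2, -0.3])  # one data point, 3 features
y = np.array([1.0, 0.0])        # ground truth for 2 output Neurons
W1 = rng.normal(size=(3, 4))    # input  -> hidden weights (w_{x h})
W2 = rng.normal(size=(4, 2))    # hidden -> output weights (w_{h z})
eta = 0.1

# Forward pass
h = sigmoid(x @ W1)             # hidden outputs y_h
y_hat = sigmoid(h @ W2)         # predicted outputs y_hat_i

# Backward pass
# delta_out_i = dE_i/dy_i * dy_i/dz_i = -(y_i - y_hat_i) * y_hat_i * (1 - y_hat_i)
delta_out = -(y - y_hat) * y_hat * (1 - y_hat)
grad_W2 = np.outer(h, delta_out)               # dE_total/dw_{h z}
# delta_hidden_k = (sum_i delta_out_i * w_{h_k z_i}) * y_h_k * (1 - y_h_k)
delta_hidden = (W2 @ delta_out) * h * (1 - h)
grad_W1 = np.outer(x, delta_hidden)            # dE_total/dw_{x h}

# Gradient Descent update of both weight layers
W2 -= eta * grad_W2
W1 -= eta * grad_W1
```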

to be continued… (RNN and LSTM)

