So far we have only looked at single neurons. As we have seen, they can perform linear classification tasks. However, it can be shown that a single neuron cannot solve XOR (exclusive-or) problems (we will see an example later). Moreover, single neurons are binary classifiers; multi-class classification is only possible with techniques such as One-vs-Rest (OvR), where one neuron is trained for each class.

In the introduction to artificial neurons, we said that they should mimic brain cells. Since the brain contains billions of interconnected cells, we can try to do something similar and build a network of neurons. This can be done in many different ways. We will discuss here only one type, the so-called multilayer perceptron, which has the advantage that we already have all the ingredients and only need to extend them to more neurons.


Multilayer perceptron (MLP)

A multilayer perceptron is a fully connected feedforward net with at least three layers (input, hidden and output), each consisting of several neurons, also called nodes. Feedforward means that there are no cycles within the connections of the neurons, while fully connected means that each neuron is connected to all neurons in the next layer. The next figure shows an example.


An artificial neural network (ANN)

Because the input layer does not contain real neurons, the layers are sometimes counted differently (see the 1st and 2nd layer in the figure). This is why a single neuron is sometimes referred to as a single-layer ANN.


Notation

If we want to describe the calculations behind an MLP, we need to extend our previous notation. Some of this is already included in the figure above.

Note that the threshold function is only applied at the output layer; all hidden layers only calculate the activation function and pass the result on to the next layer.

In the following, we will assume that our training data has \(n\) features and that the neural network has \(k\) hidden layer nodes and \(\ell\) output nodes. For the figure above, this means \(n=2\), \(k=4\), \(\ell=3\).

To perform the actual computation, we extend the net input \(z=w^Tx+b\) to a matrix multiplication for the hidden layer \[ \mathbf{z^{(h)}}=\mathbf{W^{(h)}}^T\mathbf{x^{(in)}}+\mathbf{b^{(h)}}, \] where \(\mathbf{z^{(h)}}\) is a \(k\)-component vector containing the net input of each node and \(\mathbf{b^{(h)}}\) is a vector with the bias \(b^{(h)}\) in each of its \(k\) components. \(\mathbf{W^{(h)}}^T\) is a bit more complicated because we now have \(n\) weights at each of the \(k\) nodes. This means that we have a matrix \[ \mathbf{W^{(h)}}^T= \begin{pmatrix} w_{1,1} & w_{1,2} & \dots & w_{1,n}\\ w_{2,1} & w_{2,2} & \dots & w_{2,n}\\ \vdots & \vdots & & \vdots\\ w_{k,1} & w_{k,2} & \dots & w_{k,n} \end{pmatrix} \]

Note: Above we calculated the net input for one training record. However, it is also possible to process \(m\) records at the same time (as is done in minibatch gradient descent). Then \(\mathbf{x^{(in)}}\) is expanded to an \((n\times m)\)-matrix and consequently \(\mathbf{B^{(h)}}\) and \(\mathbf{Z^{(h)}}\) become \((k\times m)\)-matrices. The same extension to matrices applies to the output layer (or any other hidden layer that might be included in the network).
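As a small illustration, the following sketch computes the hidden-layer net input for the dimensions of the figure (\(n=2\), \(k=4\)). The weight, bias and input values are made up and only serve to show the shapes of the objects involved.

# Net input of the hidden layer: z^(h) = W^(h)T x^(in) + b^(h)
# for n = 2 features and k = 4 hidden nodes (values are arbitrary)
set.seed(1)

n <- 2  # number of features
k <- 4  # number of hidden nodes

W_h_T <- matrix(rnorm(k * n), nrow = k, ncol = n)  # (k x n) weight matrix
b_h <- rep(0.1, k)                                 # bias vector with k components
x_in <- c(0.5, -1.2)                               # one training record with n features

z_h <- W_h_T %*% x_in + b_h  # (k x 1) net input of the hidden layer
z_h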


Forward propagation

Now we are ready to calculate the results of the ANN step by step:

  1. The net input of the hidden layer is \(\mathbf{z^{(h)}}=\mathbf{W^{(h)}}^T\mathbf{x^{(in)}}+\mathbf{b^{(h)}}\).
  2. The activation of the hidden layer is \(\mathbf{a^{(h)}}=\sigma(\mathbf{z^{(h)}})\).
  3. The net input of the output layer is \(\mathbf{z^{(out)}}=\mathbf{W^{(out)}}^T\mathbf{a^{(h)}}+\mathbf{b^{(out)}}\).
  4. The activation of the output layer is \(\mathbf{a^{(out)}}=\sigma(\mathbf{z^{(out)}})\).
  5. The output of the whole neural net is \(\mathbf{\hat{y}^{(out)}}=\text{step}(\mathbf{a^{(out)}})\).

Of course, \(\sigma\) and \(\text{step}\) act on each vector component individually.
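To make the five steps concrete, here is a minimal sketch of the forward pass in R for the dimensions of the figure (\(n=2\), \(k=4\), \(\ell=3\)). The weights, biases and the input vector are arbitrary illustration values; \(\sigma\) is the logistic function.

sigmoid <- function(z) 1 / (1 + exp(-z))
step_fun <- function(a) ifelse(a >= 0.5, 1, 0)

set.seed(1)
W_h_T <- matrix(rnorm(4 * 2), nrow = 4)    # hidden layer weights, (k x n)
b_h <- rep(0.1, 4)
W_out_T <- matrix(rnorm(3 * 4), nrow = 3)  # output layer weights, (l x k)
b_out <- rep(0.1, 3)
x_in <- c(0.5, -1.2)

z_h <- W_h_T %*% x_in + b_h       # step 1: net input of the hidden layer
a_h <- sigmoid(z_h)               # step 2: activation of the hidden layer
z_out <- W_out_T %*% a_h + b_out  # step 3: net input of the output layer
a_out <- sigmoid(z_out)           # step 4: activation of the output layer
y_hat <- step_fun(a_out)          # step 5: output of the whole net
y_hat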

Backpropagation

As for the single neuron, the actual learning is done by updating the weights and bias units. To do this, we use stochastic gradient descent and define a loss function as \[ L=L(\mathbf{W^{(h)}}, \mathbf{W^{(out)}}, \mathbf{b^{(h)}}, \mathbf{b^{(out)}}) \] and update the weights and bias units as \[ w_{i,j, new}=w_{i,j, old}-\eta\frac{\partial L}{\partial w_{i,j}}\qquad b_{new}=b_{old}-\eta\frac{\partial L}{\partial b}. \] Let us compute the update for the two weights marked red in the figure above. For \(w_{3,4}^{(out)}\) the update is given by \[ \frac{\partial L}{\partial w_{3,4}^{(out)}}=\frac{\partial L}{\partial a_3^{(out)}}\frac{\partial a_3^{(out)}}{\partial w_{3,4}^{(out)}}, \] where we used the chain rule.

For \(w_{4,2}^{(h)}\) it gets a little more complicated, because the weight appears in the activation of all output nodes. The update is then \[ \frac{\partial L}{\partial w_{4,2}^{(h)}}=\frac{\partial L}{\partial a_1^{(out)}}\frac{\partial a_1^{(out)}}{\partial a_4^{(h)}}\frac{\partial a_4^{(h)}}{\partial w_{4,2}^{(h)}}+\frac{\partial L}{\partial a_2^{(out)}}\frac{\partial a_2^{(out)}}{\partial a_4^{(h)}}\frac{\partial a_4^{(h)}}{\partial w_{4,2}^{(h)}}+\frac{\partial L}{\partial a_3^{(out)}}\frac{\partial a_3^{(out)}}{\partial a_4^{(h)}}\frac{\partial a_4^{(h)}}{\partial w_{4,2}^{(h)}}. \] This seems to be computationally quite expensive. But if we start with the weights of the output layer and store some of the partial derivatives, e.g. \(\frac{\partial L}{\partial a_3^{(out)}}\), we can reuse them in the calculation for the hidden layer. This is the reason why this method is usually called backpropagation: the input is propagated forward through the network, while the weight correction is propagated backwards through the network. In the end, it is just a clever use of the chain rule.
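The reuse of stored derivatives becomes clearer in code. The following sketch computes the two gradients derived above, assuming (for illustration only) a squared error loss \(L=\frac{1}{2}\sum_i\left(a_i^{(out)}-y_i\right)^2\) and the logistic function as activation; all weights, the input and the target vector are made up.

sigmoid <- function(z) 1 / (1 + exp(-z))

set.seed(1)
W_h_T <- matrix(rnorm(4 * 2), nrow = 4)    # (k x n)
b_h <- rep(0.1, 4)
W_out_T <- matrix(rnorm(3 * 4), nrow = 3)  # (l x k)
b_out <- rep(0.1, 3)
x_in <- c(0.5, -1.2)
y <- c(0, 1, 0)  # assumed target vector (one entry per output node)
eta <- 0.1       # learning rate

# forward pass (as above)
a_h <- sigmoid(W_h_T %*% x_in + b_h)
a_out <- sigmoid(W_out_T %*% a_h + b_out)

# dL/da^(out), stored once and reused for the hidden layer ("backpropagation")
dL_da_out <- a_out - y
# da^(out)/dz^(out) for the logistic function
da_dz_out <- a_out * (1 - a_out)

# gradient for w_{3,4}^(out): chain rule along a single path
grad_w34_out <- dL_da_out[3] * da_dz_out[3] * a_h[4]

# gradient for w_{4,2}^(h): sum over all three output nodes
grad_w42_h <- sum(dL_da_out * da_dz_out * W_out_T[, 4]) * a_h[4] * (1 - a_h[4]) * x_in[2]

# gradient descent updates
W_out_T[3, 4] <- W_out_T[3, 4] - eta * grad_w34_out
W_h_T[4, 2] <- W_h_T[4, 2] - eta * grad_w42_h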


An Example

Finally, let us apply the above MLP to our toy data.

### Plot Function ###

library(ggplot2)

plot_decision_boundary <- function(X, y, model) {
  # Predicted probabilities from the model
  prob_grid <- expand.grid(
    X1 = seq(min(X[, 1]) - 1, max(X[, 1]) + 1, by = 0.02),
    X2 = seq(min(X[, 2]) - 1, max(X[, 2]) + 1, by = 0.02)
  )
  prob_grid$pred <- predict(model, newdata = prob_grid, type = "response")

  # Plot
  ggplot() +
    geom_tile(data = prob_grid, aes(x = X1, y = X2, fill = pred), alpha = 0.3) +
    geom_contour(data = prob_grid, aes(x = X1, y = X2, z = pred), breaks = 0.5, color = "black") +
    geom_point(data = data.frame(X, y), aes(x = X1, y = X2, color = factor(y), shape = factor(y)), size = 3) +
    scale_fill_gradient(low = "dodgerblue", high = "firebrick1") +
    scale_color_manual(values = c("dodgerblue", "firebrick1")) +
    scale_shape_manual(values = c("0" = 15, "1" = 19)) +
    labs(title = "Multi-Layer Perceptron", x = "Feature 1", y = "Feature 2", fill = "Prediction", color = "Class", shape = "Class") +
    theme_minimal() +
    theme(plot.title.position = "plot", plot.title = element_text(hjust = 0.5, size = 18))
}
### Toy Data ###

# Set seed for reproducibility
set.seed(0)

# Create some toy data
X <- matrix(rnorm(60), ncol = 2)
colnames(X) <- c("X1", "X2")

# Divide the data into two classes along the line x2 = -x1
y <- ifelse(X[, 1] + X[, 2] >= 0, 1, 0)
library(neuralnet)

mlp <- neuralnet(y ~ X1 + X2,
  data = data.frame(X, y = y), hidden = 4,
  algorithm = "rprop+", linear.output = FALSE,
  stepmax = 500, rep = 1
)

plot_decision_boundary(X, y, mlp)



An XOR problem

To see that the MLP is able to classify XOR problems, we divide the data according to the four quadrants, where quadrants 1 and 3 correspond to class 1 and quadrants 2 and 4 correspond to class 0.

set.seed(0)

# Create some toy data
X <- matrix(rnorm(30 * 2), ncol = 2)
colnames(X) <- c("X1", "X2")

# Divide the data
y <- ifelse(X[, 1] * X[, 2] >= 0, 1, 0)

First, we use a logistic regression that tries to find a linear decision boundary and obviously fails.

logreg <- glm(y ~ X1 + X2, family = binomial(), data = data.frame(X, y))

plot_decision_boundary(X, y, logreg) + labs(title = "Logistic Regression")


Now, we use the MLP, which is able to find a nonlinear decision boundary.

mlp <- neuralnet(y ~ X1 + X2,
  data = data.frame(X, y = y), hidden = 4,
  algorithm = "rprop+", linear.output = FALSE,
  stepmax = 500, rep = 1
)

plot_decision_boundary(X, y, mlp)



Deep learning

The neural network described above contains only one hidden layer. It is now possible to extend the network layout by adding more hidden layers with varying numbers of nodes. This is the subject of deep learning.
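As a sketch of what this looks like with the neuralnet package used above: the hidden argument accepts a vector with one entry per hidden layer. The layer sizes chosen here (8 and 4) are arbitrary, and the XOR toy data is regenerated so that the example is self-contained.

library(neuralnet)

set.seed(0)
X <- matrix(rnorm(30 * 2), ncol = 2)
colnames(X) <- c("X1", "X2")
y <- ifelse(X[, 1] * X[, 2] >= 0, 1, 0)

# two hidden layers with 8 and 4 nodes
deep_mlp <- neuralnet(y ~ X1 + X2,
  data = data.frame(X, y = y), hidden = c(8, 4),
  algorithm = "rprop+", linear.output = FALSE,
  rep = 1
)

plot_decision_boundary(X, y, deep_mlp)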

As we saw above, adding more layers and more nodes simply requires differently sized matrices. GPUs are specialized to perform linear algebra calculations on large data sets very efficiently. This is the reason why today’s machine learning algorithms for big data are preferably run on high-performance GPU clusters rather than on traditional CPUs.


Citation

The E-Learning project SOGA-R was developed at the Department of Earth Sciences by Kai Hartmann, Joachim Krois and Annette Rudolph. You can reach us via e-mail at soga[at]zedat.fu-berlin.de.

Creative Commons License
You may use this project freely under the Creative Commons Attribution-ShareAlike 4.0 International License.

Please cite as follows: Hartmann, K., Krois, J., Rudolph, A. (2023): Statistics and Geodata Analysis using R (SOGA-R). Department of Earth Sciences, Freie Universitaet Berlin.