πŸ“– Lesson ⏱️ 90 minutes

Neural Network Fundamentals

Understanding perceptrons, backpropagation, and optimization

Introduction

Neural networks form the foundation of modern deep learning, allowing us to build models capable of handling images, text, time series, and structured data at scale.

In this tutorial, you will:

βœ… Understand what neural networks are and why they work.
βœ… Learn about perceptrons, activation functions, and layers.
βœ… Implement forward and backward propagation intuitively.
βœ… Build a simple neural network in Python.


What is a Neural Network?

A neural network is inspired by the human brain, using interconnected units (neurons) to process information.

Key concepts (illustrated in the sketch after this list):

  • Input Layer: Receives features.
  • Hidden Layers: Learn intermediate representations.
  • Output Layer: Produces predictions.
  • Weights and Biases: Parameters adjusted during learning.
  • Activation Functions: Introduce non-linearity.
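
To make these pieces concrete, here is a minimal sketch of how data flows through the layers as NumPy arrays. The layer sizes (3 features, 5 hidden units, 1 output) are arbitrary illustrative choices, not values from a particular model:

import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(4, 3))    # input layer: 4 samples, 3 features each
W1 = rng.normal(size=(3, 5))   # weights into a hidden layer of 5 units
b1 = np.zeros(5)               # hidden-layer biases
W2 = rng.normal(size=(5, 1))   # weights into a single output unit
b2 = np.zeros(1)               # output bias

hidden = np.tanh(X @ W1 + b1)  # hidden layer learns an intermediate representation
output = hidden @ W2 + b2      # output layer produces one prediction per sample
print(output.shape)            # (4, 1)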

Perceptron and Activation Functions

A perceptron is a single neuron: it computes a weighted sum of its inputs, adds a bias, and applies an activation function to produce its output.

Common Activation Functions (sketched in code after this list):

  • Sigmoid: Squashes any real input into (0, 1); handy for probabilities, but it saturates for large inputs.
  • ReLU (Rectified Linear Unit): max(0, x); cheap to compute and typically converges faster than saturating activations.
  • Tanh: Squashes any real input into (-1, 1) and is zero-centered.
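
As a quick illustration (the function names here are mine, not from any particular library), all three activations can be written directly in NumPy:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))   # maps any real input into (0, 1)

def relu(x):
    return np.maximum(0, x)       # keeps positives, zeroes out negatives

def tanh(x):
    return np.tanh(x)             # maps any real input into (-1, 1)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))  # approximately [0.119 0.5 0.881]
print(relu(x))     # [0. 0. 2.]
print(tanh(x))     # approximately [-0.964 0. 0.964]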

Forward Propagation

Forward propagation involves the following steps (a code sketch follows the list):

βœ… Taking inputs.
βœ… Calculating weighted sums.
βœ… Applying activation functions layer-by-layer.
βœ… Generating the output prediction.
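
Putting those steps together, here is a minimal sketch of a two-layer forward pass. The layer sizes are illustrative assumptions, and the random weights stand in for learned parameters:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))                    # step 1: take inputs (4 samples, 2 features)

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)  # hidden-layer parameters
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)  # output-layer parameters

h = sigmoid(X @ W1 + b1)      # steps 2-3: weighted sum, then activation (hidden layer)
y_hat = sigmoid(h @ W2 + b2)  # repeat layer-by-layer for the output layer
print(y_hat)                  # step 4: the output predictions, one per sample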


Backward Propagation

Backpropagation updates the weights and biases to minimize the loss function, using gradients computed layer-by-layer via the chain rule.

Key steps (a worked numerical example follows this list):

βœ… Compute the loss (e.g., MSE, cross-entropy).
βœ… Calculate gradients of the loss with respect to weights.
βœ… Update weights using optimization methods (e.g., SGD).
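
For a single sigmoid neuron with MSE loss, the chain rule gives each gradient explicitly. A small numerical sketch (the input, weights, and learning rate are arbitrary values chosen for illustration):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.array([1.0, 2.0])        # one sample with 2 features
w = np.array([0.1, -0.2])
b, y, lr = 0.0, 1.0, 0.1

z = x @ w + b                   # weighted sum
a = sigmoid(z)                  # prediction
loss = 0.5 * (a - y) ** 2       # MSE for a single sample

# chain rule: dL/dw = dL/da * da/dz * dz/dw
dL_da = a - y
da_dz = a * (1 - a)             # sigmoid derivative, written in terms of its output
dz_dw = x
grad_w = dL_da * da_dz * dz_dw
grad_b = dL_da * da_dz

w -= lr * grad_w                # SGD update step
b -= lr * grad_b
print(loss, grad_w, grad_b)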


Implementing a Simple Neural Network

1️⃣ Import Libraries

import numpy as np

2️⃣ Define Activation Functions

def sigmoid(x):
    # squash any real input into (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # derivative of sigmoid, expressed via sigmoid itself: s(x) * (1 - s(x))
    return sigmoid(x) * (1 - sigmoid(x))

3️⃣ Initialize Parameters

np.random.seed(1)                # fix the seed so runs are reproducible
weights = np.random.randn(2, 1)  # one weight per input feature (2 features -> 1 output)
bias = np.zeros((1,))            # single bias term, initialized to zero

4️⃣ Forward Pass

def forward(X, weights, bias):
    z = np.dot(X, weights) + bias  # weighted sum of inputs plus bias
    a = sigmoid(z)                 # activation turns the sum into a prediction in (0, 1)
    return a

5️⃣ Backward Pass and Update

def backward(X, y, a, weights, bias, learning_rate=0.1):
    m = X.shape[0]                 # number of training samples
    dz = a - y                     # error signal at the output (see the note below)
    dw = np.dot(X.T, dz) / m       # average gradient of the loss w.r.t. the weights
    db = np.sum(dz) / m            # average gradient of the loss w.r.t. the bias
    weights -= learning_rate * dw  # gradient-descent updates
    bias -= learning_rate * db
    return weights, bias
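
A note on the line dz = a - y: this compact gradient is what you get when a sigmoid output is paired with binary cross-entropy loss; the sigmoid derivative cancels algebraically in that pairing, which is why sigmoid_derivative is defined above but never called here. With MSE loss you would multiply the error by sigmoid_derivative(z) explicitly.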

6️⃣ Training Loop

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])  # XOR targets (note: a single-layer network won't learn XOR perfectly)

for _ in range(10000):
    a = forward(X, weights, bias)                     # forward pass
    weights, bias = backward(X, y, a, weights, bias)  # backward pass and parameter update

print("Final output after training:")
print(forward(X, weights, bias))  # recompute with the final weights
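
Because XOR is not linearly separable, the single neuron above will plateau with outputs near 0.5. As a preview of the multi-layer networks covered next, here is a minimal sketch that adds one hidden layer; the hidden size (4), learning rate (0.5), and iteration count are illustrative choices, not tuned values:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

W1, b1 = np.random.randn(2, 4), np.zeros((1, 4))  # hidden layer: 2 inputs -> 4 units
W2, b2 = np.random.randn(4, 1), np.zeros((1, 1))  # output layer: 4 units -> 1 output
lr, m = 0.5, X.shape[0]

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)           # forward: hidden layer
    a = sigmoid(h @ W2 + b2)           # forward: output layer

    dz2 = a - y                        # output error (sigmoid + cross-entropy)
    dW2 = h.T @ dz2 / m
    db2 = dz2.sum(axis=0, keepdims=True) / m
    dz1 = (dz2 @ W2.T) * h * (1 - h)   # chain rule back through the hidden layer
    dW1 = X.T @ dz1 / m
    db1 = dz1.sum(axis=0, keepdims=True) / m

    W2 -= lr * dW2; b2 -= lr * db2     # gradient-descent updates
    W1 -= lr * dW1; b1 -= lr * db1

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))  # outputs should approach [0, 1, 1, 0]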

Conclusion

βœ… You now understand the fundamental concepts behind neural networks.
βœ… You have learned how forward and backward propagation work.
βœ… You can implement a simple neural network in Python to solidify your learning.


What’s Next?

βœ… Explore deep networks with multiple layers.
βœ… Learn about advanced optimizers (Adam, RMSProp).
βœ… Dive into convolutional and recurrent neural networks for advanced applications.


Join the SuperML Community to share your progress and continue mastering deep learning!


Happy Learning! πŸ€–