Neural Network Basics

Learn the fundamental concepts behind neural networks, including perceptrons, activation functions, forward and backward propagation, and how they power deep learning systems.

⚡ intermediate
⏱️ 45 minutes
👤 SuperML Team


📋 Prerequisites

  • Understanding of machine learning workflows
  • Basic linear algebra and calculus
  • Python and NumPy familiarity

🎯 What You'll Learn

  • Understand the structure and function of neural networks
  • Grasp forward and backward propagation intuitively
  • Learn about activation functions and their role
  • Build a simple neural network from scratch in Python

Introduction

Neural networks form the foundation of modern deep learning, allowing us to build models capable of handling images, text, time series, and structured data at scale.

In this tutorial, you will:

✅ Understand what neural networks are and why they work.
✅ Learn about perceptrons, activation functions, and layers.
✅ Walk through forward and backward propagation step by step.
✅ Build a simple neural network in Python.


What is a Neural Network?

A neural network is loosely inspired by the human brain: it processes information through layers of interconnected units called neurons.

Key concepts (a small code sketch follows this list):

  • Input Layer: Receives features.
  • Hidden Layers: Learn intermediate representations.
  • Output Layer: Produces predictions.
  • Weights and Biases: Parameters adjusted during learning.
  • Activation Functions: Introduce non-linearity.
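To make these pieces concrete, here is a minimal sketch of the parameter shapes for a tiny network with 3 input features, a hidden layer of 4 units, and 1 output. The sizes are arbitrary, chosen purely for illustration:

import numpy as np

n_input, n_hidden, n_output = 3, 4, 1

W1 = np.random.randn(n_input, n_hidden)   # weights: input -> hidden
b1 = np.zeros((n_hidden,))                # biases for the hidden layer
W2 = np.random.randn(n_hidden, n_output)  # weights: hidden -> output
b2 = np.zeros((n_output,))                # bias for the output layer

x = np.random.randn(1, n_input)           # one sample with 3 features
hidden = np.tanh(x @ W1 + b1)             # hidden representation, shape (1, 4)
output = hidden @ W2 + b2                 # prediction, shape (1, 1)
print(hidden.shape, output.shape)         # (1, 4) (1, 1)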

Perceptron and Activation Functions

A perceptron is a single neuron: it computes a weighted sum of its inputs plus a bias, z = w·x + b, and applies an activation function to that sum.

Common Activation Functions (sketched in code below):

  • Sigmoid: Squashes any input into the range (0, 1); handy for probabilities.
  • ReLU (Rectified Linear Unit): max(0, x); cheap to compute and typically speeds up convergence in deep networks.
  • Tanh: Squashes any input into the range (-1, 1); zero-centered, which can help optimization.
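As a quick sketch, all three can be written in a few lines of NumPy (the helper names here are ours, not from any library):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))   # squashes any input into (0, 1)

def relu(x):
    return np.maximum(0, x)       # zeroes out negative inputs

def tanh(x):
    return np.tanh(x)             # squashes any input into (-1, 1)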

Forward Propagation

Forward propagation involves the following steps (a short sketch follows the list):

✅ Taking inputs.
✅ Calculating weighted sums.
✅ Applying activation functions layer by layer.
✅ Generating the output prediction.
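Here is a minimal sketch of a forward pass through a two-layer network (the layer arrangement and function name are illustrative, not a fixed recipe):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def forward_pass(X, W1, b1, W2, b2):
    z1 = X @ W1 + b1    # weighted sums, hidden layer
    a1 = np.tanh(z1)    # hidden activations
    z2 = a1 @ W2 + b2   # weighted sums, output layer
    return sigmoid(z2)  # output prediction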


Backward Propagation

Backpropagation computes the gradient of the loss with respect to every weight and bias via the chain rule; those gradients are then used to update the parameters and reduce the loss.

Key steps (a minimal sketch follows the list):

✅ Compute the loss (e.g., MSE or cross-entropy).
✅ Calculate gradients of the loss with respect to the weights and biases.
✅ Update the parameters with an optimization method (e.g., SGD).
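As a minimal sketch, assuming MSE loss and a single linear neuron (the function name is illustrative), one SGD step looks like:

import numpy as np

def sgd_step(X, y, w, b, lr=0.1):
    y_hat = X @ w + b                # forward: predictions
    m = X.shape[0]
    # MSE loss L = mean((y_hat - y)**2); by the chain rule:
    dw = 2 * X.T @ (y_hat - y) / m   # gradient w.r.t. weights
    db = 2 * np.sum(y_hat - y) / m   # gradient w.r.t. bias
    return w - lr * dw, b - lr * db  # step against the gradient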


Implementing a Simple Neural Network

1️⃣ Import Libraries

import numpy as np

2️⃣ Define Activation Functions

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Shown for reference; the backward pass below uses the
    # simplified gradient a - y instead of calling this directly.
    return sigmoid(x) * (1 - sigmoid(x))

3️⃣ Initialize Parameters

np.random.seed(1)                # reproducible initialization
weights = np.random.randn(2, 1)  # one weight per input feature
bias = np.zeros((1,))            # single bias term

4️⃣ Forward Pass

def forward(X, weights, bias):
    z = np.dot(X, weights) + bias  # weighted sum plus bias
    a = sigmoid(z)                 # activation
    return a

5️⃣ Backward Pass and Update

def backward(X, y, a, weights, bias, learning_rate=0.1):
    m = X.shape[0]
    # With a sigmoid output and cross-entropy loss, the chain rule
    # collapses to dL/dz = a - y.
    dz = a - y
    dw = np.dot(X.T, dz) / m  # gradient w.r.t. weights
    db = np.sum(dz) / m       # gradient w.r.t. bias
    weights -= learning_rate * dw
    bias -= learning_rate * db
    return weights, bias

6️⃣ Training Loop

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])  # XOR targets (a single neuron cannot fit XOR)

for i in range(10000):
    a = forward(X, weights, bias)
    weights, bias = backward(X, y, a, weights, bias)

print("Final output after training:")
print(forward(X, weights, bias))
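The printed outputs will hover near 0.5: XOR is not linearly separable, so a single neuron cannot fit it no matter how long it trains. As a preview of the fix, here is a minimal sketch of a two-layer network that can learn XOR, reusing X, y, and sigmoid from above (the hidden size, learning rate, and iteration count are arbitrary choices; if training stalls, try a different seed or learning rate):

np.random.seed(1)
W1 = np.random.randn(2, 4)              # input -> hidden (4 hidden units)
b1 = np.zeros((1, 4))
W2 = np.random.randn(4, 1)              # hidden -> output
b2 = np.zeros((1, 1))

for i in range(10000):
    # Forward pass through both layers.
    a1 = np.tanh(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)
    # Backward pass: cross-entropy with sigmoid gives dz2 = a2 - y.
    dz2 = (a2 - y) / 4                  # averaged over the 4 samples
    dz1 = (dz2 @ W2.T) * (1 - a1 ** 2)  # chain rule through tanh
    # Gradient descent updates.
    W2 -= 0.5 * (a1.T @ dz2)
    b2 -= 0.5 * dz2.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ dz1)
    b1 -= 0.5 * dz1.sum(axis=0, keepdims=True)

print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2))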

Conclusion

✅ You now understand the fundamental concepts behind neural networks.
✅ You have learned how forward and backward propagation work.
✅ You can implement a simple neural network in Python to solidify your learning.


What’s Next?

✅ Explore deep networks with multiple layers.
✅ Learn about advanced optimizers (Adam, RMSProp).
✅ Dive into convolutional and recurrent neural networks for advanced applications.


Join the SuperML Community to share your progress and continue mastering deep learning!


Happy Learning! 🤖
