An Analog Approach to Nonlinear Classification Using Trainable Perceptron Circuits

by arun

[Cover image: Neural-Networks-Cover-Photo.png]

Imagine teaching a circuit to think, not with Python or TensorFlow, but with resistors, op-amps, and comparators. Yes, you read that right. This project brings that vision to life by building a completely analog neural network that solves the XOR function, the classic non-linearly separable problem that early single-layer perceptrons famously could not handle. The same network can also be retrained, within its limits, to tackle other nonlinear classification problems.

Here’s the twist:

No microcontrollers
No digital processors
No software or code

Just a network of cleverly arranged electronic components mimicking how real neurons sum, activate, and output decisions—live, in real time.

In an era dominated by GPUs and cloud-based AI, this project takes a bold step back to the roots—where intelligence is wired, not programmed. It's a functional, hands-on demonstration of hardware AI in its rawest, most elegant form. Whether you’re an electronics hobbyist, an AI enthusiast, or a curious student, this analog neural network will blow your mind—and light up an LED—as it learns to think like a brain, one voltage at a time.

Supplies

  1. LM324N Op-Amp IC: 2 units – For summing and threshold operations.
  2. Trimmer Potentiometers (10kΩ): 6 units – Used as adjustable weights.
  3. 1N4148 Diodes: 3 to 4 units – For threshold-based activation.
  4. Resistors (1kΩ, 10kΩ, 100kΩ): Several – For biasing and voltage scaling.
  5. Breadboard or PCB: 1 unit – For building the circuit.
  6. Power Supply (±5V to ±9V): 1 unit – To power the analog network.
  7. Dual-color LED: 1 unit – To display the output state.
  8. Switches: 2 units – For input logic A and B.
  9. Connecting Wires: As needed – For circuit connections.

Understand the Concept

[Image: HNN_NN.png]

A fundamental artificial neuron operates through two primary functions:

  1. Computing a weighted sum of its inputs and then
  2. Applying a non-linear activation function to that sum.

The first step involves multiplying each input by its corresponding weight, which determines the input's influence on the neuron's output. Weights can be positive (excitatory) or negative (inhibitory), and the summation of these weighted inputs forms the basis of the neuron's computation.

Mathematically, this process is expressed as:

a = squash(∑(iᵢ × wᵢ)),

where iᵢ represents the input, wᵢ the weight, and a the final output after activation.

The activation function—sometimes referred to as a squashing function—introduces non-linearity by bounding the output to a specific range. Common examples include the sigmoid function (output range: 0 to 1) and tanh (output range: –1 to 1). In simple circuits like ours, this can be approximated using a threshold function: the output jumps to a high value if the weighted sum exceeds a set threshold, and remains low otherwise. This is where the comparators come in handy. This thresholding behavior is crucial for enabling neural networks to perform complex decision-making and pattern recognition tasks, such as solving the XOR problem.
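
To make this concrete, here is a minimal Python sketch of one such neuron: a weighted sum followed by a comparator-style hard threshold. The weight values and the 2.5 V threshold are illustrative assumptions, not the exact settings used in the hardware.

```python
# A minimal sketch of the neuron described above: weighted sum plus a
# comparator-style hard threshold. Weights and the 2.5 V threshold are
# illustrative assumptions, not the exact settings used in the circuit.

def neuron(inputs, weights, threshold=2.5, v_high=5.0, v_low=0.0):
    """Return a logic-high voltage if the weighted sum exceeds the threshold."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return v_high if weighted_sum > threshold else v_low

# With 0 V / 5 V logic levels and equal weights of 0.6:
print(neuron([5.0, 0.0], [0.6, 0.6]))  # 3.0 V > 2.5 V threshold -> 5.0 (logic 1)
print(neuron([0.0, 0.0], [0.6, 0.6]))  # 0.0 V < 2.5 V threshold -> 0.0 (logic 0)
```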

The XOR Gate:

The XOR function outputs TRUE only when the inputs differ. A single-layer perceptron cannot solve this because XOR is not linearly separable. This calls for a two-layer neural network:

  1. Input Layer: Passes the inputs to the hidden neurons.
  2. Hidden Layer: Two neurons each form a weighted combination of the inputs.
  3. Output Layer: One neuron combines the hidden outputs to produce the XOR result (one workable set of weights is sketched below).
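
One workable set of weights, chosen here purely for illustration (in the circuit the weights come from the trimmer settings), makes the first hidden neuron behave like OR, the second like NAND, and the output neuron AND the two together:

```python
# Hand-picked weights (an assumption for illustration) that let a 2-2-1
# network of threshold neurons reproduce XOR. Inputs and outputs use 0/1 logic.

def step(x):
    return 1 if x > 0 else 0

def xor_network(a, b):
    h1 = step(1.0 * a + 1.0 * b - 0.5)      # hidden neuron 1: OR
    h2 = step(-1.0 * a - 1.0 * b + 1.5)     # hidden neuron 2: NAND
    return step(1.0 * h1 + 1.0 * h2 - 1.5)  # output neuron: AND of h1 and h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
# Output is 1 only for inputs 01 and 10, matching the XOR truth table.
```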

Design the Circuit

[Image: Screenshot 2025-07-10 221834.png]

The XOR function is a classic example of a non-linearly separable problem, and solving it using a neural network demonstrates the need for multi-layered architectures. In this project, we emulate a 2-2-1 network (2 input neurons, 2 hidden neurons, and 1 output neuron) entirely in analog form.

Each perceptron is implemented on an individual PCB, enhancing modularity and clarity. Trimmer potentiometers are used as adjustable resistive elements, allowing manual tuning (training) of the weights.

Design Highlights:

  1. Each neuron is designed using an op-amp in a non-inverting configuration. The activation function is achieved through a comparator with an appropriate reference voltage.
  2. The weights are set (trained) by adjusting the trimmer potentiometers; a rough software model of one such neuron is sketched below.
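
The sketch below models one such neuron under the assumption of a non-inverting summing stage: the two input voltages are mixed by resistors at the op-amp's non-inverting pin, a feedback pair Rf/Rg sets the gain, and a comparator against a reference voltage provides the activation. All component values are placeholder assumptions, not the values used on the actual PCBs.

```python
# Rough model of one analog neuron, assuming a non-inverting summing stage
# followed by a comparator. Resistor values and the 2.5 V reference are
# placeholder assumptions, not the values used on the actual PCBs.

def analog_neuron(v1, v2, r1=10e3, r2=10e3, rf=10e3, rg=10e3, vref=2.5, vcc=5.0):
    # Passive mixing of the two inputs at the op-amp's + pin (ideal op-amp, no loading)
    v_plus = (v1 / r1 + v2 / r2) / (1 / r1 + 1 / r2)
    # Non-inverting gain stage: gain = 1 + Rf/Rg
    v_sum = v_plus * (1 + rf / rg)
    # Comparator-style activation against the reference voltage
    return vcc if v_sum > vref else 0.0

# 5 V on one input, 0 V on the other: v_plus = 2.5 V, gain 2 -> 5 V > 2.5 V -> logic 1
print(analog_neuron(5.0, 0.0))
```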

Simulate in LTSpice

[Images: LTSpice simulation plots (1.png, 2.png, 3.png, 4.png)]

Setup:

  1. Each neuron circuit was simulated separately to verify summation and threshold response.
  2. Voltage sources simulated binary input states (0V for logic 0, 5V for logic 1).
  3. Trimmer potentiometers were modelled using adjustable resistor components.

Execution:

  1. Four test cases corresponding to the XOR truth table (00, 01, 10, 11) were applied.
  2. Output voltages were measured after the hidden and output layers.
  3. Thresholds were fine-tuned to differentiate logic 0 and logic 1 (set near 2.5V).

Key Results:

  1. The network correctly outputs high (~5V) only for inputs 01 and 10.
  2. Output remains low (~0V) for inputs 00 and 11, replicating the XOR behaviour.
  3. Voltage transitions were clean and consistent with design expectations; a quick software cross-check of this procedure is sketched below.
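
The same four-case check can be reproduced in a few lines of Python: drive a 2-2-1 threshold network with 0 V / 5 V inputs, record the level after each layer, and classify against a reference near 2.5 V. The weights and bias terms below are assumed, normalized values chosen for illustration, not settings taken from the LTSpice model.

```python
# Software cross-check of the four XOR test cases at voltage level.
# Weights and bias terms are assumed, normalized values; the comparator
# reference is set near 2.5 V as in the simulation.

def neuron(v_inputs, weights, bias, vref=2.5, vcc=5.0):
    v_sum = sum(v * w for v, w in zip(v_inputs, weights)) + bias
    return vcc if v_sum > vref else 0.0

for a, b in [(0.0, 0.0), (0.0, 5.0), (5.0, 0.0), (5.0, 5.0)]:
    h1 = neuron([a, b], [1.0, 1.0], 0.0)     # OR-like hidden neuron
    h2 = neuron([a, b], [-1.0, -1.0], 10.0)  # NAND-like hidden neuron
    y = neuron([h1, h2], [0.5, 0.5], 0.0)    # AND-like output neuron
    print(f"A={a:.0f}V B={b:.0f}V -> hidden=({h1:.0f}V, {h2:.0f}V), output={y:.0f}V")
# Output is ~5 V only for the 01 and 10 cases, matching the simulation results.
```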

Train Your Network - Manually

Training a neural network involves adjusting the weights of the connections between neurons so that the network produces the correct output for given inputs. In our analog implementation, this is done manually by fine-tuning trimmer potentiometers. For the XOR function, specific weight combinations must be set so that the output neuron activates only for input pairs (0,1) and (1,0).

In general, training is the most critical phase of any neural network, as it enables the system to learn patterns, make decisions, and generalize to unseen data. Whether done digitally through algorithms like backpropagation or physically using analog components, training determines the intelligence and accuracy of the network.
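
Because there is no backpropagation here, training amounts to turning one trimmer at a time and keeping the change only if more rows of the XOR truth table come out right. The sketch below is a software analogy of that process, a simple random hill climb over assumed, normalized weights, not the procedure used on the hardware itself; it usually reaches a fully correct table within a few thousand nudges, though, like hand-tuning, it can occasionally need a fresh start.

```python
# Software analogy of manual training: nudge one "trimmer" (weight) at a time
# and keep the change only if the XOR truth table does not get worse. The
# network model and starting weights are assumptions for illustration.

import random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def step(x):
    return 1 if x > 0 else 0

def forward(w, a, b):
    h1 = step(w[0] * a + w[1] * b + w[2])
    h2 = step(w[3] * a + w[4] * b + w[5])
    return step(w[6] * h1 + w[7] * h2 + w[8])

def score(w):
    return sum(forward(w, a, b) == y for (a, b), y in CASES)  # correct rows, 0..4

random.seed(1)
w = [random.uniform(-1.0, 1.0) for _ in range(9)]
for _ in range(50_000):
    if score(w) == 4:
        break
    i = random.randrange(9)                # pick one "trimmer"
    old_value, old_score = w[i], score(w)
    w[i] += random.uniform(-0.5, 0.5)      # give it a small turn
    if score(w) < old_score:
        w[i] = old_value                   # turn it back if things got worse

print("truth-table rows correct:", score(w))
print("weights:", [round(x, 2) for x in w])
```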

Test Your XOR

Use switches to apply all four input combinations:

Input A | Input B | Output (LED)
------- | ------- | ------------
0       | 0       | OFF
0       | 1       | ON
1       | 0       | ON
1       | 1       | OFF

Results

[Image: breadboard prototype (Screenshot 2025-07-10 221440.png)]

The successful construction and demonstration of the analog neural network circuit on a breadboard validates the feasibility of implementing logical functions like XOR using entirely analog components. The image showcases the real-world prototype with trimmer potentiometers, op-amps, and passive components neatly arranged and functioning as intended. Despite the visible complexity of the wiring, the modular design ensured clarity in layout and troubleshooting. The glowing LED output and the circuit’s ability to respond accurately to different input combinations reflect the effectiveness of manual training and precise analog computation. This project not only demonstrates the power of analog hardware for AI applications but also highlights the educational value of hands-on neural network implementation at the circuit level.