How to Build an Artificial Neuron with the Perceptron Model

Did you ever dream of building your own neural network? In this post, we will cover the Perceptron model. I will teach you how to build a single perceptron based on a real biological neuron.

What is a biological neuron?

Deep learning is inspired by biological neurons: it tries to mimic them in an artificial way with computers. Therefore, we should first understand how a biological neuron works.

Here you can see what a neuron looks like; we need to make an artificial abstraction of it to better understand it.

We will simplify a lot and keep only the Dendrites, the Nucleus and the Axon.

In fact, to oversimplify, a neuron accepts input signals through its dendrites, computes an output in the nucleus (contained in the cell body), and then communicates it to another neuron through the axon.

How does a Perceptron model work?

So we have a very simplified biological neuron; now we are going to replace the dendrites, the nucleus and the axon with mathematical concepts.

First, we define a set of inputs going into a single point, which we call the neuron, and pass them through to a single output.

Imagine two inputs, X1 and X2, and that inside the neuron there is a function called f.

We could begin very simply by saying that f is just a sum. The output would then be y = X1 + X2.
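As a minimal sketch of this sum-only neuron (the function and variable names here are my own, not from the post):

```python
# A toy "neuron" whose internal function f is just a sum of its two inputs.
def f(x1, x2):
    return x1 + x2

# Example: two input signals arriving through the "dendrites".
y = f(2.0, 3.0)
print(y)  # 5.0
```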

Add weights to learn

In reality, we would like to be able to modify some parameters in order to learn from the inputs. What we are going to do is multiply X1 and X2 by weights W1 and W2. Note that weights can be positive or negative. Therefore, we now have y = X1W1 + X2W2. We can adjust the weights to get the correct output, and that, in a certain way, is what we call "learning".
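The weighted version can be sketched like this (names are illustrative):

```python
# Neuron with learnable weights: each input is scaled before summing.
def neuron(x1, x2, w1, w2):
    return x1 * w1 + x2 * w2

# Changing the weights changes the output for the same inputs;
# "learning" means tuning w1 and w2 until the output is correct.
print(neuron(1.0, 2.0, 0.5, -0.25))  # 0.5*1.0 + (-0.25)*2.0 = 0.0
```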

Add bias

But what happens if X1 = 0? Then W1 has no effect on the output. To solve this issue, we add a bias term called B to the inputs. So here, for this simple example, we have y = (X1W1 + B) + (X2W2 + B). A good way to think about this is that the weighted inputs must overcome the bias before they have an impact on y.
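Following the formula above, with a bias term added to each weighted input, a sketch might be (names are my own):

```python
# Neuron with weights and a bias term b added to each input,
# mirroring y = (X1*W1 + B) + (X2*W2 + B).
def neuron(x1, x2, w1, w2, b):
    return (x1 * w1 + b) + (x2 * w2 + b)

# Even when x1 = 0, the bias keeps that branch contributing to the output:
print(neuron(0.0, 2.0, 0.5, 0.25, 1.0))  # (0.0 + 1.0) + (0.5 + 1.0) = 2.5
```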

We can expand these concepts of weights and bias to many more inputs.

Therefore, we have managed to model a biological neuron as a very simple perceptron. In mathematical terms, our generalization to n inputs gives:

y = (X1W1 + B) + (X2W2 + B) + ... + (XnWn + B)
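This generalization translates directly into code; a minimal sketch in plain Python (the function and argument names are assumptions, not from the post):

```python
# Generalized perceptron: y is the sum of (Xi*Wi + B) over all inputs,
# as in the formula above.
def perceptron(inputs, weights, bias):
    return sum(x * w + bias for x, w in zip(inputs, weights))

# Two-input example, matching the earlier case:
print(perceptron([1.0, 2.0], [2.0, -1.0], 0.5))  # (2.0 + 0.5) + (-2.0 + 0.5) = 1.0
```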


In this post, you learned how we can mimic a real biological neuron to build an artificial neuron with the Perceptron model.

In the next parts, we will see together how to combine single perceptrons into a full neural network, and we will talk about more complex concepts such as activation functions, cost functions and backpropagation.

Thanks for reading, I hope you enjoyed this post. Do not hesitate to ask me questions in DM. See you soon!