Artificial Intelligence - Neurons, Perceptrons, and Neural Networks - Video
PUBLISHED:  Dec 08, 2013
DESCRIPTION:
Sound levels have been rebalanced compared to the last upload, and a small visual tweak was made. There is no difference in the script or general animation, however.

An animated video providing a brief introduction to neurons, perceptrons, and neural networks.

Script:

"Jordan: Say we want to get a computer to make decisions. How do we do this? Perceptrons are one answer.

What is a perceptron? The perceptron is a machine learning algorithm for the supervised classification of an input into one of two possible outputs. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector describing a given input; the weights are adjusted during training with an update rule such as the delta rule. The learning algorithm for perceptrons is an online algorithm, in that it processes elements in the training set one at a time.
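For readers who want to see that description in code, here is a minimal sketch of a perceptron trained online with the classic error-driven update rule. The AND dataset, learning rate, and epoch count are illustrative choices, not anything taken from the video.

```python
# Minimal perceptron trained online: one example at a time, error-driven updates.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=10):
    """X: (n_samples, n_features) feature vectors; y: labels in {0, 1}."""
    w = np.zeros(X.shape[1])  # one weight per feature
    b = 0.0                   # bias term (plays the role of the threshold)
    for _ in range(epochs):
        for xi, target in zip(X, y):               # online: one example at a time
            prediction = 1 if np.dot(w, xi) + b > 0 else 0
            error = target - prediction            # error signal drives the update
            w += lr * error * xi                   # nudge weights toward the target
            b += lr * error
    return w, b

# Illustrative example: learn the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```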

Wait... this is getting too technical. Why don't we start over...

Albert Einstein is often quoted as saying, "Look deep into nature, and then you will understand everything better." And so, AI researchers tend to look to nature as a template for how to create their intelligent systems. This is the case with perceptrons. A perceptron is an artificial representation of a neuron, similar in function to neurons in the brain. In an actual brain, neurons receive information from the outside world through the five senses and encode it into electrical signals. These signals are fed as input into some of these neurons, like input into a program. If the electrical input exceeds the neuron's internal threshold, the neuron will fire, sending signals to other neurons. Researchers designed the perceptron to emulate this behavior. In a perceptron, we have several inputs, a threshold function, and several outputs. These inputs are binary, in that they can either be on or off, and they're weighted based on their relative importance. When the perceptron receives input, it sums up the weights of all the inputs that are on. If this sum exceeds the pre-specified threshold, then the perceptron will "fire" by activating its outputs.
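As a rough sketch of that summing-and-firing behavior, here is a tiny threshold unit in Python; the particular weights and threshold are hypothetical values chosen purely for illustration.

```python
# A single perceptron "firing": sum the weights of the inputs that are on,
# then compare against a threshold.
def perceptron_fires(inputs, weights, threshold):
    """inputs: list of 0/1 values; weights: relative importance of each input."""
    total = sum(w for x, w in zip(inputs, weights) if x == 1)
    return total > threshold  # fire only if the weighted sum exceeds the threshold

# Illustrative example: three inputs weighted 0.5, 0.3, and 0.9, threshold 1.0
print(perceptron_fires([1, 0, 1], [0.5, 0.3, 0.9], 1.0))  # 0.5 + 0.9 = 1.4 -> True
print(perceptron_fires([1, 1, 0], [0.5, 0.3, 0.9], 1.0))  # 0.5 + 0.3 = 0.8 -> False
```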

Ryan: But this is only half of the story. The neuron itself is rather simple, so where does the vast complexity of the brain come from? How, from this simple act of firing, can we get emotion, personality, even consciousness? It's in the way the neurons themselves are connected. The brain is made up of roughly 100 billion neurons and over 1,000 trillion connections, and these connections are constantly changing. The ability of the brain to change its own structure allows a person to learn, create memories, and change the way they act in the future. And so, to achieve this kind of complexity with perceptrons, we connect them together. Most of the time, the perceptron's outputs feed into another perceptron's input, thus modeling the interconnectedness of neurons in the brain. And, just like the brain learns by altering the connections between neurons, we can simulate learning in a computer by modifying the connections between the perceptrons based on whether the network reaches its specified goal state.
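To make the idea of wiring perceptrons together concrete, here is a small hand-built network of three threshold units whose outputs feed into one another; it computes XOR, something no single perceptron can do. The weights and thresholds are hand-picked for illustration rather than learned.

```python
# Connecting perceptrons: the outputs of two units become the inputs of a third.
def unit(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

def xor_network(a, b):
    or_out = unit([a, b], [1.0, 1.0], 0.5)        # fires if a or b is on
    nand_out = unit([a, b], [-1.0, -1.0], -1.5)   # fires unless both are on
    return unit([or_out, nand_out], [1.0, 1.0], 1.5)  # fires only if both fired

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```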

And so, this is a very high-level description of the perceptron, how it works, and how it parallels the function of the neuron. If you're interested in learning more, there are a lot of great resources out there. Thanks for watching."


Music Source: https://soundcloud.com/glitchhop/lumberjack-by-paurini