In a typical computer, built according to what is called a von Neumann architecture, memory lives in its own isolated module, and a single processor executes instructions and memory updates one at a time, in serial fashion. A different approach to computing is the neural network. A neural network is made up of thousands or even millions of individual “neurons” or “nodes,” and all of its processing is highly parallel and distributed. “Memories” are stored within the complex interconnections and weightings between nodes.
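The contrast can be sketched in a few lines of Python. Everything here is purely illustrative; real hardware is far more complicated:

```python
# Von Neumann style: one processor steps through instructions serially,
# reading and rewriting an isolated memory one cell at a time.
memory = [3, 0, 0]
instructions = [("copy", 0, 1), ("copy", 0, 2)]
for op, src, dst in instructions:   # strictly one instruction at a time
    if op == "copy":
        memory[dst] = memory[src]

# Neural-network style: there is no separate memory bank. The "memory"
# is the pattern of weights on the connections, and in principle every
# node can compute its weighted sum at the same time.
weights = [0.7, -0.2, 0.5]          # learned interconnection strengths
inputs = [1, 1, 0]
activation = sum(w * x for w, x in zip(weights, inputs))

print(memory, activation)           # [3, 3, 3] 0.5
```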
Neural networking is the computing architecture used by animal brains in nature. This is not necessarily because the neural network is an inherently superior mode of processing to serial computing, but because a brain that used serial computing would be much more difficult to evolve incrementally. Neural networks also tend to handle “noisy data” better than serial computers.
In a feedforward neural network, an “input layer” of specialized nodes takes in information from the outside, then sends a signal to a second layer based on what it received. This signal is usually binary: a “yes or no.” Sometimes, to move from a “no” to a “yes,” the node has to receive a certain threshold amount of excitement or stimulation.
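To make this concrete, here is a minimal sketch of such a threshold node; the function name, weights, and threshold value are all invented for illustration, not drawn from any particular system:

```python
# A minimal threshold node ("neuron"): it answers "yes" (1) only if the
# weighted sum of its inputs crosses a threshold, otherwise "no" (0).
def threshold_node(inputs, weights, threshold):
    stimulation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if stimulation >= threshold else 0

# Three binary inputs from the outside world; the weights encode how
# strongly each input excites this particular node.
signal = threshold_node(inputs=[1, 0, 1], weights=[0.6, 0.9, 0.3], threshold=0.8)
print(signal)  # 1: combined stimulation of 0.9 crosses the 0.8 threshold
```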
Data moves from the input layer to the secondary and tertiary layers, and so on, until it reaches a final “output layer,” which displays the results for programmers to analyze. The human visual system, beginning with the retina, works this way. First-level nodes detect simple geometric features in the visual field, such as colors, lines, and edges. Secondary nodes abstract more sophisticated features, such as motion, texture, and depth. The final “output” is what our consciousness registers when we look at the visual field. The initial input is just a complex arrangement of photons that would mean little without the neurological hardware to make sense of it in terms of meaningful qualities, such as the idea of an enduring object.
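A toy sketch of data passing layer by layer might look like the following; the weight values are invented, and a real network would have vastly more nodes per layer:

```python
# Each node in a layer computes a weighted sum over ALL of the previous
# layer's outputs and fires (1) only if that sum crosses a threshold.
def layer_output(inputs, layer_weights, threshold=0.5):
    return [
        1 if sum(x * w for x, w in zip(inputs, node_weights)) >= threshold else 0
        for node_weights in layer_weights
    ]

# Input layer -> hidden layer (3 nodes) -> output layer (1 node).
inputs = [1, 0, 1]
hidden_weights = [[0.2, 0.8, 0.5], [0.9, 0.1, 0.0], [0.1, 0.3, 0.2]]
output_weights = [[0.5, 0.5, 0.5]]

hidden = layer_output(inputs, hidden_weights)
result = layer_output(hidden, output_weights)
print(hidden, result)  # [1, 1, 0] [1]
```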
In neural networks with feedback, often described as backpropagating networks, outputs from later layers can return to earlier layers to constrain their subsequent signals. Most of our senses work this way. The initial data can prompt an “educated guess” at the final result, and later data is then interpreted in the context of that educated guess. In optical illusions, our senses make educated guesses that turn out to be wrong.
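One toy way to picture this: an early guess fed back from a higher layer gets blended with incoming evidence, biasing how that evidence is read. The blending rule and the numbers below are invented purely for illustration:

```python
# Blend raw evidence with a guess fed back from a higher layer.
# prior_weight controls how strongly the guess constrains perception.
def interpret(evidence, prior_guess, prior_weight=0.4):
    return (1 - prior_weight) * evidence + prior_weight * prior_guess

guess = 0.9               # higher layer's educated guess: "probably a face"
ambiguous_evidence = 0.5  # new data that is ambiguous on its own

belief = interpret(ambiguous_evidence, guess)
print(belief)  # 0.66: the guess pulls the interpretation upward, which is
               # also how an optical illusion can "win" over the raw data
```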
Neural networks cannot simply be programmed algorithmically; instead, programmers must configure them through training, or through delicate tuning of individual nodes. For example, training a neural network to recognize faces would require many training runs in which different “facelike” and “unfacelike” objects are shown to the network, accompanied by positive or negative feedback, to coax the network into improving its recognition skills.
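The classic perceptron update rule is one simple, concrete version of this kind of positive and negative feedback. In the sketch below, the features and labels are invented stand-ins for “facelike” qualities:

```python
# Train a single threshold node by feedback: after each example, nudge the
# weights up when the node should have fired but didn't, and down when it
# fired but shouldn't have (the perceptron learning rule).
def train(examples, labels, weights, threshold=0.5, rate=0.1, epochs=20):
    for _ in range(epochs):
        for features, label in zip(examples, labels):
            fired = 1 if sum(x * w for x, w in zip(features, weights)) >= threshold else 0
            error = label - fired  # +1: should have fired; -1: should not have
            weights = [w + rate * error * x for w, x in zip(weights, features)]
    return weights

# Two made-up features per example, e.g. "has eyes", "is symmetrical".
examples = [[1, 1], [1, 0], [0, 1], [0, 0]]
labels   = [1, 0, 0, 0]  # only the example with both features is "facelike"
weights  = train(examples, labels, weights=[0.0, 0.0])
print(weights)  # after training, only the [1, 1] example crosses the threshold
```

After enough rounds of feedback, the weights settle so that only the “facelike” combination of features makes the node fire, which is the coaxing process described above in miniature.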