Posts Tagged ‘ANN’
A neural network, or artificial neural network (ANN), is a massively parallel distributed processor made up of simple processing units that has a natural propensity for storing experiential knowledge and making it available for use. A neural network contains a large number of simple, neuron-like processing elements, and a large number of weighted connections encode the network's knowledge. Though biologically inspired, many neural network models do not attempt to duplicate the operation of the human brain.
The intelligence of a neural network emerges from the collective behavior of its neurons, each of which performs only a very limited operation. Even though each individual neuron works slowly, together they can find a solution quickly by working in parallel. This explains why humans can recognize a visual scene faster than a digital computer, even though an individual brain cell responds far more slowly than a digital cell in a VLSI circuit.
Brain-style computation points to a new direction for building intelligent systems, one fundamentally different from the symbolic approach. By now, more than a dozen well-known neural network models have been built, including the backpropagation net, ART, the Hopfield net, and the Boltzmann machine, each with different performance characteristics.
ANNs represent a technology rooted in many disciplines: neuroscience, mathematics, statistics, physics, computer science, and engineering. They find applications in fields as diverse as modelling, time series analysis, pattern recognition, signal processing, and control, by virtue of an important property: the ability to learn from input data with or without a teacher.
Backpropagation is a kind of neural network. A neural network (or artificial neural network) is a collection of interconnected processing elements, or nodes. The nodes are termed simulated neurons because they attempt to imitate the functions of biological neurons. The nodes are connected via links, which can be compared with the axon-synapse-dendrite connections in the human brain.
How Does Backpropagation Work?
Initially, a weight is assigned at random to each link to determine the strength of one node's influence on another. When the sum of the input values reaches a threshold, the node produces an output of 1; otherwise it outputs 0. By adjusting the weights, the desired output can be obtained; this training process makes the network learn. In other words, the network acquires knowledge in much the same way human brains do, namely by learning from experience. Backpropagation is one of the most powerful artificial neural network techniques for acquiring knowledge automatically.
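The thresholded node described above can be sketched in a few lines of Python. The weights and threshold here are hypothetical values chosen purely for illustration; in a real network they would start random and be adjusted by training.

```python
def threshold_node(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hypothetical fixed weights for demonstration.
print(threshold_node([1, 1], [0.6, 0.6], 1.0))  # weighted sum 1.2 -> 1
print(threshold_node([1, 0], [0.6, 0.6], 1.0))  # weighted sum 0.6 -> 0
```

Adjusting `weights` (and the threshold) is exactly what the training process does to steer the node toward the desired output.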
The backpropagation method is the basis for training a supervised neural network. The output is a real value between 0 and 1, produced by the sigmoid function. The formula for the output is
Output = 1 / (1 + e^(-sum))
As the sum increases, the output approaches 1. As the sum decreases, the output approaches 0.
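The sigmoid formula and its limiting behavior can be verified directly; this is a minimal sketch of the output function as given above:

```python
import math

def sigmoid(total):
    """Output = 1 / (1 + e^(-sum)): squashes any real sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-total))

print(sigmoid(0))    # 0.5, the midpoint
print(sigmoid(10))   # approaches 1 as the sum increases
print(sigmoid(-10))  # approaches 0 as the sum decreases
```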
A multilayer network is a neural network consisting of one or more layers of nodes between the input and the output nodes. The input nodes pass values to the hidden layer, which in turn passes them to the output layer. A network with a layer of input units, a layer of hidden units, and a layer of output units is a two-layer network; a network with two layers of hidden units is a three-layer network, and so on. The multilayer network is the basis for the backpropagation network.
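The forward flow just described, input values passing to the hidden layer and then on to the output layer, can be sketched as below. The weight matrices are hypothetical values for a 2-input, 2-hidden, 1-output network, used only to show the shape of the computation.

```python
import math

def sigmoid(total):
    return 1.0 / (1.0 + math.exp(-total))

def layer_forward(inputs, weights):
    """Each row of `weights` holds one node's connection weights."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row))) for row in weights]

def forward(inputs, hidden_weights, output_weights):
    """Input layer -> hidden layer -> output layer."""
    hidden = layer_forward(inputs, hidden_weights)
    return layer_forward(hidden, output_weights)

# Hypothetical weights for illustration only.
w_hidden = [[0.5, -0.4], [0.3, 0.8]]
w_output = [[1.2, -0.6]]
print(forward([1.0, 0.0], w_hidden, w_output))
```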
As the name implies, there is a backward pass of error to each internal node within the network, which is then used to calculate weight gradients for that node. The network learns by alternately propagating forward the activations and propagating backward the errors as and when they occur.
A backpropagation network can deal with various types of data and can model a complex decision system. If the problem domain involves a large amount of data, the network requires more input or hidden units, which increases the complexity of the model as well as its computational cost, and it takes more time to solve complex problems. To overcome this, a multi-backpropagation network is used.
At the beginning of the training process, weights are assigned to the connections at random. Training is iterative: the training cases are presented to the network repeatedly, the actual outputs are compared with the desired outputs, and the errors are calculated. The errors are then propagated back through the network to adjust the weights. This process repeats until the desired outputs are obtained or the error falls to an acceptable level.
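The whole iterative cycle (random initial weights, forward pass, comparison with the desired output, backward error propagation, weight adjustment) can be sketched as a small training loop. This is an illustrative implementation under simplifying assumptions (one hidden layer, online gradient descent, squared error); the training task, logical OR, and all hyperparameters are chosen only for demonstration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(cases, hidden_n=2, lr=0.5, epochs=2000, seed=0):
    """Train a 1-hidden-layer network by backpropagation on (inputs, target) cases."""
    rng = random.Random(seed)
    n_in = len(cases[0][0])
    # Weights assigned at random (one extra weight per node for a bias input).
    w_h = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden_n)]
    w_o = [rng.uniform(-1, 1) for _ in range(hidden_n + 1)]
    for _ in range(epochs):
        for inputs, target in cases:
            # Forward pass.
            x = list(inputs) + [1.0]  # append bias input
            h = [sigmoid(sum(xi * wi for xi, wi in zip(x, row))) for row in w_h] + [1.0]
            out = sigmoid(sum(hi * wi for hi, wi in zip(h, w_o)))
            # Backward pass: error gradient at the output node,
            # then at each hidden node.
            delta_o = (out - target) * out * (1 - out)
            delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(hidden_n)]
            # Adjust the weights against the gradient.
            for j in range(hidden_n + 1):
                w_o[j] -= lr * delta_o * h[j]
            for j in range(hidden_n):
                for i in range(n_in + 1):
                    w_h[j][i] -= lr * delta_h[j] * x[i]
    def predict(inputs):
        x = list(inputs) + [1.0]
        h = [sigmoid(sum(xi * wi for xi, wi in zip(x, row))) for row in w_h] + [1.0]
        return sigmoid(sum(hi * wi for hi, wi in zip(h, w_o)))
    return predict

# Train on logical OR, a simple linearly separable task.
cases = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
predict = train(cases)
for inputs, target in cases:
    print(inputs, round(predict(inputs), 2), "desired:", target)
```

After training, the outputs approach the desired 0/1 targets, illustrating how repeatedly propagating errors backward drives the error down to an acceptable level.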
Types of backpropagation networks
Static backpropagation is one kind of backpropagation network; it produces a mapping from a static input to a static output. Such networks can solve static classification problems such as optical character recognition (OCR). Recurrent backpropagation is another kind, used for fixed-point learning; NeuroSolutions, for example, is software with this capability. In recurrent backpropagation, activations are fed forward until a fixed value is reached; thereafter the error is computed and propagated backwards. The difference between the two is that the mapping is instantaneous in static backpropagation but not in the recurrent case. Moreover, training a network with fixed-point learning is more difficult than with static backpropagation.
Application of Backpropagation
Researchers Thomas Riga and Angelo Cangelosi (University of Plymouth) and Alberto Greco (University of Genoa) have developed a model of symbol grounding. It treats the brain as a symbol system and explains cognition as the manipulation of symbols governed by rules.
The symbol grounding problem is that of connecting meaning to the symbols or images of objects received as input stimuli. The model uses two learning algorithms: the Kohonen Self-Organizing Feature Map and the backpropagation algorithm.
To perform these tasks, the model has two modules and a retina for input. The first module uses the Kohonen Self-Organizing Feature Map to categorize the images projected on the retina, expressing the intrinsic order of the stimulus set in a bi-dimensional matrix known as the activation matrix.
The second module relates visual input, symbolic input, and output stimuli, and uses the backpropagation algorithm for learning. The error in the output is computed with respect to the training set and propagated back through the network, and the weight distribution is corrected to obtain the correct output. In this process, the grounded symbols constitute the network's knowledge, which is then used to generate and describe new meanings for the symbols.