Back Propagation Learning Routine - Artificial Intelligence

As with perceptrons, the information in the network is stored in the weights, so the learning problem comes down to this question: how do we train the weights to best categorise the training examples? We then hope that this representation provides a good way to categorise unseen examples.

In outline, the back propagation method is similar to the one for perceptrons (a runnable sketch of the whole routine follows the list):

1. We select and fix our architecture for the network, which will contain input, hidden and output units, all of which will use sigmoid functions.

2. We randomly assign the weights between all the nodes. The assignments should be small numbers, typically between -0.5 and 0.5.

3. Each training example is used, one after another, to re-train the weights in the network. The way this is done is given in detail below.

4. After each epoch (a run through all the training examples), a termination condition is checked (also detailed below). Note that, for this method, we are not guaranteed to find weights which give the network the global minimum error, i.e., which perfectly categorise the training examples. Hence the termination condition may have to be expressed in terms of a (possibly small) number of miscategorisations. We will see later that this might not be such a good idea, though.
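
To make the outline concrete, here is a minimal sketch of the routine in Python for a network with a single hidden layer. It is an illustrative implementation, not part of the original text: the network shape, the learning rate and the stopping rule (a maximum number of miscategorisations per epoch) are all assumptions, bias terms are folded in as extra per-unit weights, and the weight updates use the standard gradient-descent rule for sigmoid units.

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    class Network:
        """A fully connected input -> hidden -> output network with
        sigmoid units in the hidden and output layers (step 1)."""

        def __init__(self, n_in, n_hidden, n_out):
            # Step 2: random small weights, typically between -0.5 and 0.5.
            self.w_ih = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                         for _ in range(n_hidden)]
            self.b_h = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
            self.w_ho = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
                         for _ in range(n_out)]
            self.b_o = [random.uniform(-0.5, 0.5) for _ in range(n_out)]

        def forward(self, x):
            hidden = [sigmoid(b + sum(w * xi for w, xi in zip(ws, x)))
                      for b, ws in zip(self.b_h, self.w_ih)]
            output = [sigmoid(b + sum(w * h for w, h in zip(ws, hidden)))
                      for b, ws in zip(self.b_o, self.w_ho)]
            return hidden, output

        def train_example(self, x, target, rate=0.5):
            # Step 3: re-train the weights on one training example.
            hidden, output = self.forward(x)
            # Output deltas: error times the sigmoid derivative o*(1-o).
            d_out = [(t - o) * o * (1 - o) for t, o in zip(target, output)]
            # Hidden deltas: back-propagate the output deltas.
            d_hid = [h * (1 - h) *
                     sum(d * self.w_ho[k][j] for k, d in enumerate(d_out))
                     for j, h in enumerate(hidden)]
            # Gradient-descent weight updates.
            for k, d in enumerate(d_out):
                self.b_o[k] += rate * d
                for j, h in enumerate(hidden):
                    self.w_ho[k][j] += rate * d * h
            for j, d in enumerate(d_hid):
                self.b_h[j] += rate * d
                for i, xi in enumerate(x):
                    self.w_ih[j][i] += rate * d * xi

    def train(net, examples, max_epochs=5000, allowed_errors=0):
        # Step 4: after each epoch, check the termination condition;
        # here, that at most `allowed_errors` examples are miscategorised.
        for epoch in range(max_epochs):
            for x, target in examples:
                net.train_example(x, target)
            errors = sum(1 for x, target in examples
                         if [round(o) for o in net.forward(x)[1]] != target)
            if errors <= allowed_errors:
                break
        return net

For example, calling train(Network(2, 3, 1), data) on the XOR examples data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])] will usually learn the function, though, as step 4 warns, an unlucky random start can leave the search stuck short of the global minimum error.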

 
