

How are weights updated in the batch learning method in neural networks?

Can someone please tell me how I am supposed to build a neural network using the batch method?

I have read that in batch mode, for every sample in the training set, we calculate the error, the delta, and thus the delta weights for each neuron in the network, but instead of updating the weights immediately, we accumulate the delta weights and update the weights only once, before starting the next epoch.
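For concreteness, this is how I currently picture that first description, written as a minimal Python/NumPy sketch for a single layer of sigmoid neurons (the names, shapes, and learning rate are just my own illustration, not code from any source):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_batch_accumulate(X, D, W, lr=0.1, epochs=100):
    # X: (n_samples, n_inputs), D: (n_samples, n_outputs), W: (n_inputs, n_outputs)
    for _ in range(epochs):
        delta_W_sum = np.zeros_like(W)         # accumulated delta weights for this epoch
        for x, d in zip(X, D):
            y = sigmoid(x @ W)                 # forward pass for one sample
            delta = (d - y) * y * (1.0 - y)    # error times the sigmoid derivative
            delta_W_sum += np.outer(x, delta)  # accumulate, but do not update yet
        W = W + lr * delta_W_sum               # a single weight update per epoch
    return W

So every sample contributes to delta_W_sum, but the weights themselves change only once per epoch.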

I have also read somewhere that the batch method is like the online method, the only difference being that one sums the errors over all samples in the training set, takes their average, and then uses that average for updating the weights just as one does in the online method (the difference is just that average), something like this:

for epoch=1 to numberOfEpochs

   for all i samples in training set
      
         calculate the errors in output layer
         SumOfErrors += (d[i] - y[i])
   end
 
   errorAvg = SumOfErrors / number of Samples in training set
       
   now update the output layer with this error
   update all other previous layers
   
   go to the next epoch

end
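And this is roughly how I read that error-averaging loop, again as an illustrative NumPy sketch for a single sigmoid output layer (what to do with the averaged error afterwards is exactly the part I am unsure about):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def average_output_error(X, D, W):
    # X: (n_samples, n_inputs), D: (n_samples, n_outputs), W: (n_inputs, n_outputs)
    sum_of_errors = np.zeros(W.shape[1])
    for x, d in zip(X, D):
        y = sigmoid(x @ W)              # output-layer activations for sample i
        sum_of_errors += (d - y)        # SumOfErrors += (d[i] - y[i])
    error_avg = sum_of_errors / len(X)  # errorAvg = SumOfErrors / number of samples
    return error_avg                    # this averaged error is then used for the
                                        # weight update, just as in the online method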

  • Which one of these is actually the correct form of the batch method?
  • In the case of the first one, doesn't accumulating all the delta weights result in a huge number?