Timeline for "How are weights updated in the batch learning method in neural networks?"
Current License: CC BY-SA 3.0
4 events
when | what | by | license | comment
---|---|---|---|---
Feb 14, 2017 at 12:30 | comment added | Sean Easter | | That's the generally given definition: update the parameters using one subset of the training data at a time. (There are some methods in which mini-batches are randomly sampled until convergence, i.e., the full training set won't be traversed in an epoch.) See if this is helpful; a minimal sketch contrasting the two modes follows the timeline.
Feb 14, 2017 at 10:55 | comment added | Hossein | | Is mini-batch gradient descent the same as batch gradient descent? I'm lost here! If not, what's the difference between them? Correct me if I'm wrong: in batch mode, the whole dataset is read in batches and gradients are calculated; once everything has been read, the gradients are averaged and the parameters are updated. In mini-batch mode, each batch is read, gradients are calculated, and the parameters are updated before the next mini-batch is read, until the epoch is over.
Jan 5, 2016 at 16:23 | vote accept | Hossein | |
Jan 4, 2016 at 21:36 | history answered | Sean Easter | CC BY-SA 3.0 |
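To make the distinction discussed in the comments concrete, here is a minimal sketch contrasting the two update schemes on a toy linear least-squares model. The function names (`batch_update`, `minibatch_epoch`) and the learning-rate/batch-size values are illustrative choices, not taken from the thread; the point is only where the parameter update happens relative to reading the data.

```python
import numpy as np

def gradient(w, X, y):
    """Mean squared-error gradient for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def batch_update(w, X, y, lr=0.1):
    """Batch mode: one update per epoch, using the gradient
    averaged over the entire training set."""
    return w - lr * gradient(w, X, y)

def minibatch_epoch(w, X, y, lr=0.1, batch_size=32, rng=None):
    """Mini-batch mode: the set is split into batches and the
    parameters are updated after each batch, so one epoch
    performs ceil(n / batch_size) separate updates."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(y))
    for start in range(0, len(y), batch_size):
        b = idx[start:start + batch_size]
        w = w - lr * gradient(w, X[b], y[b])
    return w

# Toy data (hypothetical example): y = 3x + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 1))
y = 3 * X[:, 0] + 0.1 * rng.normal(size=256)

w_batch = np.zeros(1)
w_mini = np.zeros(1)
for epoch in range(50):
    w_batch = batch_update(w_batch, X, y)             # 1 update per epoch
    w_mini = minibatch_epoch(w_mini, X, y, rng=rng)   # 8 updates per epoch
print(w_batch, w_mini)  # both approach the true weight, ~3.0
```

Both variants converge here; the practical difference is that mini-batch mode takes many cheaper, noisier steps per pass over the data, which is why it dominates in neural-network training.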