harry lakins

notes about code

neuron_weights is of the structure 

[
layers[neuron[weights]]
]

where each weight is initially a random float. weight_changes has the exact same structure as neuron_weights.

neurons is of the structure

[
layers[neuron]
]

where each neuron entry is that neuron's activated value

...so if there are three neuron layers (neurons[0] #inputs, neurons[1] #hidden, neurons[2] #outputs), there are two weight layers: neuron_weights[0] and neuron_weights[1]
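For concreteness, here is a sketch of what those structures could look like for a hypothetical 3-2-1 network (the layer sizes here are my illustration, not taken from the post):

```python
import random

# Hypothetical 3-2-1 network illustrating the structures described above.
# neurons: one activated value per neuron, per layer.
neurons = [
    [0.0, 0.0, 0.0],  # neurons[0]: inputs
    [0.0, 0.0],       # neurons[1]: hidden
    [0.0],            # neurons[2]: outputs
]

# neuron_weights[i][j][k] is the weight from neuron k in layer i to
# neuron j in layer i+1, so there is one weight layer fewer than
# there are neuron layers.
neuron_weights = [
    [[random.random() for _ in range(3)] for _ in range(2)],  # inputs -> hidden
    [[random.random() for _ in range(2)] for _ in range(1)],  # hidden -> output
]

# weight_changes mirrors neuron_weights exactly, initialised to zero.
weight_changes = [[[0.0] * len(ws) for ws in layer] for layer in neuron_weights]
```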

...the following back prop function gets called for every matrix in a loop elsewhere, just after it has been fed forward (which I have tested and works). So assume the neurons and weights are set and ready to be back propagated.

my back prop code

desired_list = self.get_desired_list(desired_number) #returns list of 0s and a 1 (e.g [0,0,0,0,0,1,0,0,0,0]) for comparison to output

for weight_column in range(len(self.neuron_weights)-1,-1,-1): #loop through weight columns

    e_total = 0

    for neuron_weight_num in range(0, len(self.neuron_weights[weight_column])): #loop through each neuron's group of weights in this column

        neuron_weight_value = self.neurons[weight_column+1][neuron_weight_num]

        act_to_sum_step = neuron_weight_value * (1-neuron_weight_value) #sigmoid derivative: from the activated value back to the pre-sigmoid sum

        
        for singleweight_num in range(0, len(self.neuron_weights[weight_column][neuron_weight_num])): #loop through each single weight to update

            curr_weight_value = self.neuron_weights[weight_column][neuron_weight_num][singleweight_num]
            
            if(weight_column == len(self.neuron_weights)-1): #if output column, step back from desired values

                step_back_error_value = neuron_weight_value - desired_list[neuron_weight_num-1]

                e_total += (0.5*step_back_error_value)**2

            else: #otherwise, sum up previous changes in previous column of weights
                weight_column_to_sum = weight_column + 1

                step_back_error_value  = 0

                for weight_change_neuron_num in range(0, len(self.weight_changes[weight_column_to_sum])):
                    before_change_weight = self.weight_changes[weight_column_to_sum][weight_change_neuron_num][neuron_weight_num]
                    step_back_error_value += before_change_weight

            input_to_weight_neuron_value = self.neurons[weight_column][singleweight_num]
             
            #derivative of activated neuron value to weight
            act_to_weight_val = act_to_sum_step * input_to_weight_neuron_value

            complete_step_back_value = step_back_error_value * act_to_weight_val
             
            #save weight change value for later use (if back prop goes further back)
            self.weight_changes[weight_column][neuron_weight_num][singleweight_num] = complete_step_back_value
            
            #update weight value
            new_w_value = curr_weight_value - (self.learn_rate * complete_step_back_value)

            self.neuron_weights[weight_column][neuron_weight_num][singleweight_num] = new_w_value

print(e_total)

when using a learning rate of 0.5, e_total starts at 2.252 and within a minute gets to 0.4462, and then within 5 mins gets no lower than 0.2.
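One way to check whether a plateau like this comes from a gradient bug is a finite-difference gradient check. This is a generic sanity check, not code from the post; all names below are hypothetical, shown on a single sigmoid neuron with one weight:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def numeric_gradient(loss_fn, weights, i, eps=1e-5):
    """Central finite-difference estimate of d(loss)/d(weights[i])."""
    old = weights[i]
    weights[i] = old + eps
    up = loss_fn()
    weights[i] = old - eps
    down = loss_fn()
    weights[i] = old  # restore the original weight
    return (up - down) / (2 * eps)

# Toy setup: one sigmoid neuron, one weight, squared-error loss.
x, target = 1.0, 0.0
w = [0.5]

def loss():
    return 0.5 * (sigmoid(w[0] * x) - target) ** 2

out = sigmoid(w[0] * x)
# Same chain rule as the back prop above: error * sigmoid' * input.
analytic = (out - target) * out * (1 - out) * x
numeric = numeric_gradient(loss, w, 0)
# The two values should agree to well below 1e-6.
```

Applying the same comparison to each weight of the real network pinpoints exactly which chain-rule term (if any) disagrees with the analytic update.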

UPDATE:

As recommended, I have used my system to try and learn the XOR function - with 2 inputs, 1 hidden layer of 2 neurons, and 1 output. Meaning, the desired_list is now a single-element array, either [1] or [0]. Output values seem to be random, between 0.5 and 0.7, with no clear relation to the desired output. Just to confirm, I have manually tested my feed forward and back prop many times, and they definitely work as explained in the tutorials I've linked.
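For reference, the XOR setup described above amounts to four training samples with single-element targets (my sketch of the data, not code from the post):

```python
# Hypothetical XOR training data matching the 2-2-1 setup described above:
# each sample pairs an input list with a single-element desired_list.
xor_samples = [
    ([0, 0], [0]),
    ([0, 1], [1]),
    ([1, 0], [1]),
    ([1, 1], [0]),
]

# A network that has actually learned XOR should output close to the
# target on every sample; outputs stuck between 0.5 and 0.7 fail this.
for inputs, desired_list in xor_samples:
    assert desired_list == [inputs[0] ^ inputs[1]]
```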
