Solved – How to adjust neural network weights for several inputs

Currently I'm working on solving the XOR problem with a homemade NN in C++. Several (worthy) people have recommended that my weight adjustment formula be:

$$
weight = weight + (error \cdot input)
$$

That's all fine and dandy, but considering there are multiple inputs in XOR, how does one decide which input gets multiplied by the error when computing each new weight? Also, in the future I may need several outputs. In that case, which error do I use to find the new weight?

Remember that you can view this adjustment formula as a matrix operation. Consider an example:

You have 2 inputs and 3 neurons in the first layer. Each connection between an input and a neuron has a weight, so in this case we have a 3×2 weight matrix. The input is a 1×2 row vector, so the error matrix must be 3×1: the outer product $error \cdot input$ is then 3×2, matching the weight matrix. This may sound a little strange, because the error for a single output (as in the XOR example) is a scalar.

Note that in your formula this is not the absolute error; it could be called the gradient (if you're implementing an MLP), which is calculated by:

$\delta_i^L(n) = f'_L(x_i^L(n)) \cdot e_i^L(n)$, with $L$ the output layer;

$\delta_i^k(n) = f'_k(x_i^k(n)) \cdot \sum_j weight_{ji}^{k+1}(n) \cdot \delta_j^{k+1}(n)$, if $k$ is not the output layer $L$ (the sum runs over the neurons $j$ of layer $k+1$).

Notation: $f_k'$ is the derivative of the activation function of the referenced layer $k$; $e_i^L$ is the absolute error ($desired - observed$); $x_i^k$ is the $i$-th input of layer $k$.

Obs: If you're not implementing an MLP, please search for the equivalent gradient for your model.

Finally, your weight adjustment can be realized with your formula by taking the matrix point of view of the data, which helps the NN "decide" which inputs and neurons interact. This approach is valid for multiple outputs as well.
