### Neural networks, part 2

Following on from the previous article, the network was trained with free-running feedback.

In addition, a second output neuron was created, whose output, instead of following Conway’s Game of Life rules, was trained to be the sine of the expected normal, Game-of-Life-ruled output.

The trained network’s output for output 0 is almost the same (8 hidden neurons were used instead of 5).

**The screen capture of outputs 0 and 1:**

*Left: output zero (the normal, Game-of-Life-ruled output); right: the ‘sine’-like output 1 for the same 8 input cells:*

**This network’s complete weights dump:**

    Neuron network array: numInputs: 2, numHidden: 8, numOutputs: 2
    INPUT to HIDDEN weights:
    INPUT[0]: 18.4715 -15.7549 -21.2166 -19.4792 2.0692 -2.9851 -14.6416 -17.5079
    INPUT[1]: -8.0632 -13.0431 4.0268 -12.4184 -7.6292 -8.4492 12.2782 -7.1637
    HIDDEN to OUTPUT weights:
    OUTPUT[0]: 1.8568 1.2939 1.7122 -1.2514 0.8039 -0.7578 1.2156 -0.5770
    OUTPUT[1]: 0.5297 -0.1719 0.1888 -0.7751 0.1120 -0.1462 0.2815 0.4162

This network uses tanh’d outputs on both the hidden and output layers ( tanOutput[n] = tanh(50.0 * NormalOut[n]) ).
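As a quick illustration, the squashing function above can be sketched in Python (the function name and sample pre-activation values are mine, not from the original code):

```python
import math

def tan_output(normal_out):
    # tanh with a gain of 50.0, as quoted above; the steep slope
    # drives even small pre-activations close to -1 or +1
    return math.tanh(50.0 * normal_out)

print(tan_output(0.1))    # close to 1.0
print(tan_output(-0.05))  # close to -1.0
print(tan_output(0.0))    # exactly 0.0
```

The high gain effectively turns the tanh into a near-binary step, which is why output 0 renders as crisp live/dead cells.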

Output 1 shows groups of cells and highlights some interesting shapes that the normal output[0] does not reveal:

---

Now, the same network is trained the same way, but output[1] is graphed with no tanh() function applied. This renders the subtle values of this output in the range -1.0 / 1.0. (The supervision’s expected output[1] rule was: output[1] = ( actual_output[0] + the new 8-cell sum value ) divided by 2.0.)
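A minimal sketch of that supervision rule (the names are mine; it assumes both terms are already in the network’s working range):

```python
def expected_output1(actual_output0, cell_sum):
    # Target for output[1]: the mean of the Game of Life output
    # and the 8-cell sum value, per the rule quoted above
    return (actual_output0 + cell_sum) / 2.0

print(expected_output1(1.0, 0.5))   # 0.75
print(expected_output1(-1.0, 1.0))  # 0.0
```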

---

The network is then modified: we add 1 new input, namely the X coordinate of the 2D plane being rendered. The actual 2D area is a 64×64 cell array, so the 0-64 value for X will be mapped to a -1.0 / +1.0 range for this new input.
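The 0-64 to -1.0 / +1.0 mapping might look like this (a sketch; the original mapping code is not shown in the article):

```python
def map_x(x, size=64):
    # Linearly map a cell's X coordinate in [0, size]
    # to the network's [-1.0, +1.0] input range
    return (x / size) * 2.0 - 1.0

print(map_x(0))   # -1.0
print(map_x(32))  # 0.0
print(map_x(64))  # 1.0
```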

This time, output[1] is again trained against a different rule: we want output[1] to be a copy of the actual X value.

So, the new network has 3 inputs:

- input[0] is the sum of the 8 surrounding cells at time T
- input[1] is the actual value of output[0] at T-1 ( namely, the state of the cell at T )
- input[2] is the X coordinate of the cell being processed ( 0-64 range mapped to -1.0 / +1.0 )

- output[0] is the result of applying the rules of the Game of Life.
- output[1] is expected to be the ‘image’ of the X coordinate, with nothing more.

    Neuron network array: numInputs: 3, numHidden: 10, numOutputs: 2
    INPUT to HIDDEN weights:
    INPUT[0]: -17.4559 0.0378 0.0916 -1.1608 -2.2167 -15.6072 -14.7210 -16.3537 -1.0468 -25.0423
    INPUT[1]: -15.0747 -1.8141 3.3090 -7.1765 1.9960 -8.8084 6.3518 14.2116 6.8359 4.5404
    INPUT[2]: 0.3854 2.4042 2.6311 -0.1052 13.9876 0.0769 0.1231 0.1707 0.6056 -0.0481
    HIDDEN to OUTPUT weights:
    OUTPUT[0]: 1.4326 -0.0190 0.1235 -0.9902 -0.0714 -1.4984 -1.5762 1.4177 -1.1382 1.7470
    OUTPUT[1]: 0.0721 1.9509 1.7133 -0.4680 0.7523 -0.0126 -0.0061 0.0173 -0.8408 0.0440
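For readers who want to reproduce the outputs from a dump like this, here is a sketch of the forward pass it implies. It assumes no bias terms (none appear in the dump) and applies the steep tanh from earlier to both layers; all names are mine:

```python
import math

def forward(inputs, w_in, w_out, gain=50.0):
    # w_in[i][h]  : weight from input i to hidden neuron h
    # w_out[o][h] : weight from hidden neuron h to output o
    # Both layers use tanh(gain * x), as described earlier;
    # bias terms are assumed absent, since none appear in the dump.
    n_hidden = len(w_in[0])
    hidden = [math.tanh(gain * sum(x * w_in[i][h] for i, x in enumerate(inputs)))
              for h in range(n_hidden)]
    return [math.tanh(gain * sum(hidden[h] * row[h] for h in range(n_hidden)))
            for row in w_out]
```

Feeding the three mapped inputs through this with the dumped weight rows (one list per INPUT[i] and per OUTPUT[o]) yields the two saturated outputs per cell.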

A closer look at the weights shows that input[2] is ‘mainly’ linked to hidden neuron 4 (weight of about 13.99), and this hidden neuron 4 is in turn linked to output[1] with a weight of 0.7523.

*For information, the following output is from a network trained with:*

- output[0] following the Game of Life rules
- output[1] outputting the 8 surrounding cells’ summed value, multiplied by the X coordinate of the cell within the area:

---
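The output[1] rule above can be sketched as follows (a guess at the details: the names are mine, and I assume X is mapped to the same -1.0 / +1.0 range as the network input):

```python
def target_output1(cell_sum, x, size=64):
    # Hypothetical target: the 8 surrounding cells' summed value,
    # multiplied by the cell's X coordinate mapped to [-1, +1]
    x_mapped = (x / size) * 2.0 - 1.0
    return cell_sum * x_mapped

print(target_output1(1.0, 64))  # 1.0
print(target_output1(1.0, 32))  # 0.0
```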