### neural networks, part I

A neural network is trained with backpropagation to successfully reproduce the rules of the Game of Life (Conway's rules).

The neural network is described as follows:

- 2 input neurons:
  - Input 1 is the sum of the 8 surrounding cells at time T, remapped from the 0–8 cell count to the 0.0–1.0 range
  - Input 2 is the output neuron's state at T-1 (i.e. the previous output state), in the -1.0 / +1.0 range
- 1 output neuron: the cell's new state for this step, in the -1.0 / +1.0 range
- 5 hidden neurons

- input and output values range from -1.0 to +1.0
- hidden-layer neurons use a sigmoid-like activation function: `hiddenval[k] = tanh(hiddenval[k]);`
- the output neuron uses the same activation with a magnifier: `out[k] = tanh(out[k] * 20.0);`

Convergence was achieved after 72 million epochs, with a hidden-to-output learning rate of 0.03 and an input-to-hidden learning rate of 0.3.

| Weights | H1 | H2 | H3 | H4 | H5 |
| --- | --- | --- | --- | --- | --- |
| INPUT 1 to hidden | -22.13 | -29.4693 | 14.7538 | 21.5601 | -21.9645 |
| INPUT 2 to hidden | -17.1351 | -19.9037 | -6.5521 | -3.7809 | 17.136 |
| Hidden to OUTPUT 1 | 1.8878 | -1.906 | 1.8706 | -1.3286 | 1.5681 |

Needless to say, the Game of Life problem has been posed here as: “the output depends on the sum of the 8 surrounding cells AND the state of the center cell at time T”.

A more complex approach uses 8 input neurons (one per surrounding cell), plus one input neuron for the cell's state at T-1.

(See part III for details about this)

Using the sum of the surrounding cells as an input value is only one approach: it is a ‘simplification’ provided to the neural network described above.

The Game of Life rule has two stages:

- count the surrounding cells that are ‘on’
- from this sum, set the output value:
  - a sum of 0 or 1 is the first solution case (the cell dies)
  - a sum of 2 is the second solution case (the cell keeps its state)
  - a sum of 3 is the third solution case (the cell is alive)
  - a sum of 4 to 8 is the fourth solution case (the cell dies)

We can compute a simple cost overview of the actual network: 2×5 + 5×1 = 15 ‘synapses’ computed per output.

The setup from Part III of this post series gives 9×3 + 3×1 = 30 synapses, so it is clear that summing the surrounding cells and feeding that sum into the network reduces the CPU time needed to compute an output state. The Part III setup, however, is closer to the ‘real’ cellular-automaton model.

*That network just ends up with different weight values (see the last picture for them).*

**Here is the 64×64-pixel output mapping for the output neuron.**

X axis (horizontal) is INPUT 1: the sum of the surrounding cells, remapped from 0.0–8.0 to the 0.0–+1.0 range.

*(left is 0.0, right is +1.0; i.e. +1.0 means all 8 surrounding cells are on)*

Y axis (vertical) is INPUT 2: the network's output at T-1 (the last output state). Top is -1.0, bottom is +1.0.

**Finally, here is a snapshot of the weights used for these results:**

(Top of the image: hidden-to-output neuron weights)

(Bottom: input 1 and input 2 to hidden-neuron weights)