EihiS

August 12, 2015

neural networks, part I

Filed under: Uncategorized — admin @ 11:15 am

A neural network is trained using backpropagation to successfully reproduce the rules of the Game of Life (Conway's rules).

The neural network is described as follows:

  • 2 input neurons:
  1. Input 1 is the sum of the 8 surrounding cells at time T (range 0.0 to 1.0, mapping 0 to 8 cells)
  2. Input 2 is the output neuron's state at T-1 (namely, the previous state of the output) (range -1.0 to +1.0)
  • 1 output neuron: the cell's new state for this epoch (range -1.0 to +1.0)
  • 5 hidden neurons
Network specifications:
  • inputs and outputs range from -1.0 to +1.0
  • the hidden-layer neurons use a sigmoid-like function for their states: { hiddenval[k]=tanh(hiddenval[k]); }
  • the output-layer neuron uses the sigmoid-like function with a magnifier for its output: { out[k]=tanh(out[k]*20.0); }
Convergence was reached many times with various weight sets. Fewer than 5 hidden neurons cannot achieve convergence, and more than 5 hidden neurons does not improve convergence at all.
Here is a weight dump from one achieved convergence, with an overall error below 0.0036%:
It was reached after 72 million epochs, with a hidden-to-output learning rate of 0.03 and an input-to-hidden learning rate of 0.3.
Weights, INPUT 1 to Hidden: -22.13 -29.4693 14.7538 21.5601 -21.9645
Weights, INPUT 2 to Hidden: -17.1351 -19.9037 -6.5521 -3.7809 17.136
Weights, Hidden to OUTPUT 1: 1.8878 -1.906 1.8706 -1.3286 1.5681

Needless to say, the problem of the Game of Life rules has been expressed as: "the output depends on the sum of the 8 surrounding cells AND the state of the center cell at time T".

A more complex approach is to use 8 input neurons (one per surrounding cell), plus one input neuron for the T-1 state of the cell.
(See Part III for details.)
Using the sum of the surrounding cells as an input value is only one approach: it is a 'simplification' provided to the network described above.

The Game of Life rule has two stages:

  1. sum the surrounding cells whose value is 'on'
  2. from this sum, set the output value:
  • a sum of 0 or 1 is the first solution case
  • a sum of 2 is the second solution case
  • a sum of 3 is the third solution case
  • a sum of 4 to 8 is the fourth solution case
We can compute a simple overview of this network: 2×5 + 5×1 = 15 'synapses' computed.
From Part III of this post series, we have 9×3 + 3×1 = 30 synapses. So summing the surrounding cells' values and feeding that sum into the network obviously reduces the CPU time needed to compute an output state, but the setup of Part III is, in a way, the 'real' cellular-automaton model.
The following images illustrate the results of the previous network (with the pre-computed sum of the surrounding cells).
This network simply converged to different weight values (see the last picture).
(Snapshot of the network running on its own, once convergence was reached.)
Here is the 64×64-pixel output MAPPING for the output neuron.
The X axis (horizontal) is INPUT 1:
the sum of the surrounding cells, remapped from (0.0..8.0) to the (0.0..+1.0) range.
(Left is 0.0, right is +1.0; +1.0 means all 8 surrounding cells are on.)

The Y axis (vertical) is INPUT 2:
the network's output at T-1 (the last output state). Top is -1.0, bottom is +1.0.

Finally, here is a snapshot of the weights used for these results:
(Top of image: hidden-to-output neuron weights.)
(Bottom: input 1 and input 2 to hidden-neuron weights.)
// … \\
This is a test network description. (to be continued)

cat{ } { post_717 } { } 2009-2015 EIhIS Powered by WordPress