EihiS

August 13, 2015

neural networks: Z-universe hot notes

Filed under: Uncategorized — admin @ 9:17 am

Using a specially trained network, a 2-dimensional “answer map” was generated.
The map is used to compute each cell’s value for the next generation, as in a classic Conway’s Game of Life.
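The lookup step can be sketched as follows. In this sketch the answer map is indexed by (neighbour sum, previous state), which is an assumed indexing scheme, and for illustration it is filled from the classic Conway rules rather than from a trained network’s output:

```c
#include <string.h>

#define W 8
#define H 8

/* answer map: indexed by [neighbour sum 0..8][previous state 0=dead,1=alive] */
static int answer[9][2];

/* fill the map from the classic Conway rules; a converged network
   would produce an equivalent table */
static void build_map(void)
{
    for (int s = 0; s <= 8; s++) {
        answer[s][0] = (s == 3);           /* birth */
        answer[s][1] = (s == 2 || s == 3); /* survival */
    }
}

/* one generation: every cell's next value is a single map lookup */
static void step(int grid[H][W])
{
    int next[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if (dx || dy)           /* skip the center cell */
                        sum += grid[(y + dy + H) % H][(x + dx + W) % W];
            next[y][x] = answer[sum][grid[y][x]];
        }
    memcpy(grid, next, sizeof(next));
}
```

Once the map is built, stepping the grid never re-evaluates the network: every cell update is one table read.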

I’ve started to take snapshots of the generators and pieces that create good interactions using these special rules.

In the following pictures, full green is -1.0 and full red is +1.0. Stimulation values of the generators (marked as a white dot) are +0.5. Two dots on a picture does not mean that both points are needed; either of the two will produce the ‘action’ picture.
The top picture is at rest; the bottom picture is once fired (in action), some epochs later.

Generator, version B (permanent, not destroyed by the action of firing):

And also… (grey cells are the generator; each grey cell has a value of +0.5)

Generator, version C (destroyed by the firing):
a 3-vertical-cell element, value 0.5 each, with horizontally symmetric ejection

Generator, version C2 (2 vertical cells, same ejection as C)

Generator, version D (destroyed by the firing)
It throws crawlers symmetrically along both axes

Now some asymmetric elements…

Generator A, 4 cells, asymmetric firing:

Generator A3, which disappears in a small, local explosion of activity:
(it also acts on single cells as an attractor in its direction)


August 12, 2015

neural networks, part I

Filed under: Uncategorized — admin @ 11:15 am

A neural network is trained using backpropagation to successfully reproduce the Game of Life rules (Conway’s rules).

The neural network description is:

  • 2 input neurons:
  1. Input 1 is the sum of the 8 surrounding cells at time T (range is 0.0 to 1.0 == 0 to 8 cells)
  2. Input 2 is the output neuron’s state at T-1 (namely, the previous state of the output) (-1.0/+1.0)
  • 1 output neuron: the cell’s new state for this epoch, in the -1.0/+1.0 range
  • 5 hidden neurons
Network specifications:
  • input/output ranges from -1.0 to 1.0
  • hidden layer neurons use a sigmoid-like function for their states: { hiddenval[k]=tanh(hiddenval[k]); }
  • output layer neurons use the sigmoid-like function with a magnifier for their outputs: { out[k]=tanh(out[k]*20.0); }
Many convergences happen with various weights. Using fewer than 5 hidden neurons cannot achieve convergence, and more than 5 hidden neurons does not improve convergence at all.
Here is a weight dump of an achieved convergence, with overall error less than 0.0036%.
It was reached after 72 million epochs, with a hidden-to-output learning rate of 0.03 and an input-to-hidden learning rate of 0.3.
Weights, INPUT 1 to Hidden: -22.13 -29.4693 14.7538 21.5601 -21.9645
Weights, INPUT 2 to Hidden: -17.1351 -19.9037 -6.5521 -3.7809 17.136
Weights, Hidden to OUTPUT 1: 1.8878 -1.906 1.8706 -1.3286 1.5681

Needless to say, the Game of Life problem has been posed as: “the output depends on the sum of the 8 surrounding cells AND the state of the center cell at time T”.

A more complex approach is to use 8 input neurons (the 8 surrounding cells), plus an input neuron for the cell’s T-1 state.
(See part III for details about this.)
Using the surrounding-cell sum as an input value is only one approach: it is a ‘simplification’ provided to the network described above.

The Game of Life rule has two stages:

  1. sum the surrounding cells that are ‘on’
  2. from this sum, set the output value:
  • a sum of 0..1 is the first solution case
  • a sum of 2 is the second solution case
  • a sum of 3 is the third solution case
  • a sum of 4 to 8 is the fourth solution case
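The two stages map directly to code; a minimal sketch of the second stage, using 0/1 states rather than the network’s -1.0/+1.0 range:

```c
/* stage 2: map a neighbour sum (0..8) and the current state to the new state */
static int next_state(int sum, int alive)
{
    if (sum <= 1) return 0;     /* case 1: underpopulation, cell dies */
    if (sum == 2) return alive; /* case 2: cell keeps its current state */
    if (sum == 3) return 1;     /* case 3: survival or birth */
    return 0;                   /* case 4 (4..8): overpopulation, cell dies */
}
```

The network has to learn exactly this 9×2 decision table, which is why the previous-state input matters only in the sum-equals-2 case.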
Fewer than 5 hidden neurons will not converge.
We can compute a simple cost overview of this network: 2×5 + 5×1 = 15 ‘synapses’ computed.
For the network of part III of this post series, we have 9×3 + 3×1 = 30 synapses. So it is obvious that summing the surrounding cells’ values and feeding that sum into the network reduces the CPU time needed to compute an output state, but the setup of part III is, in a way, the ‘real’ cellular automaton model.
The following images illustrate the results of the network described above (pre-computed sum of the surrounding cells).
This network merely has different weight results (see the last picture for the values).
(Snapshot of a self-running grid, once convergence was reached.)
Here is the 64×64 pixel output MAPPING for the output neuron.
The X axis (horizontal) is INPUT 1:
the sum of the surrounding cells,
remapped from (0.0 -> 8.0) to the (0.0; +1.0) range.
(Left is 0.0, right is +1.0; +1.0 means all 8 surrounding cells are on.)

The Y axis (vertical) is INPUT 2:
the network’s output at T-1 (the last output state). Top is -1.0, bottom is +1.0.

Finally, here is a snapshot of the weights used for these results:
(Top of image: hidden-to-output neuron weights.)
(Bottom: input 1 and input 2 to hidden neuron weights.)
This is a test network description. (to be continued)

October 8, 2014

RPi: OpenGL ES 2.0 functions in C

Filed under: Uncategorized — admin @ 9:11 am

This post will be updated when I have enough time.
// list GL capabilities & extensions
#include <stdio.h>
#include <GLES2/gl2.h>

void show_GLcapabilities(void)
{
	printf("GL_RENDERER   = %s\n", (const char *) glGetString(GL_RENDERER));
	printf("GL_VERSION    = %s\n", (const char *) glGetString(GL_VERSION));
	printf("GL_VENDOR     = %s\n", (const char *) glGetString(GL_VENDOR));
	printf("GL_EXTENSIONS = %s\n", (const char *) glGetString(GL_EXTENSIONS));
}

// list GLES parameters
void show_GLparameters(void)
{
	GLint myget[4] = {0};
	// note: glGetIntegerv takes a GLint* — pass the array itself, not its address
	glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, myget); // single value
	printf("GL_IMPLEMENTATION_COLOR_READ_FORMAT = %04x\n", myget[0]);
	glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, myget);   // single value
	printf("GL_IMPLEMENTATION_COLOR_READ_TYPE   = %04x\n", myget[0]);
	glGetIntegerv(GL_COLOR_WRITEMASK, myget);                  // 4 booleans: R,G,B,A
	printf("GL_COLOR_WRITEMASK = %d %d %d %d\n", myget[0], myget[1], myget[2], myget[3]);
	glGetIntegerv(GL_BLEND, myget);
	printf("GL_BLEND = %d\n", myget[0]);
	glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, myget);
	printf("GL_MAX_RENDERBUFFER_SIZE = %d\n", myget[0]);
	glGetIntegerv(GL_MAX_VERTEX_UNIFORM_VECTORS, myget);
	printf("GL_MAX_VERTEX_UNIFORM_VECTORS = %d\n", myget[0]);
	glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_VECTORS, myget);
	printf("GL_MAX_FRAGMENT_UNIFORM_VECTORS = %d\n", myget[0]);
	glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, myget);
	printf("GL_MAX_VERTEX_ATTRIBS = %d\n", myget[0]);
	glGetIntegerv(GL_MAX_VIEWPORT_DIMS, myget);                // 2 values: width, height
	printf("GL_MAX_VIEWPORT_DIMS = %dx%d\n", myget[0], myget[1]);
	glGetIntegerv(GL_VIEWPORT, myget);                         // x, y, width, height
	printf("GL_VIEWPORT = (%d,%d %d,%d)\n", myget[0], myget[1], myget[2], myget[3]);
}
//
// VideoCore debugging
In /opt/vc/bin:
sudo vcdbg help
See: https://github.com/nezticle/RaspberryPi-BuildRoot/wiki/VideoCore-Tools

2009-2015 EIhIS. Powered by WordPress.