EihiS

October 3, 2015

Neuron networks, part III-2

Filed under: Uncategorized — admin @ 8:52 am

Now we use the following settings:

#define _NEU_USES_ASYMETRIC_OUTPUTS
#define _neu_asym_magnify 200.0 // 8.0 is the default; Schmitt-trigger amplifier for tanh
#define _neu_asym_value 0.1 // default 0.1; Schmitt-trigger delta from 0, rising/falling. 0.1: Schmitt-trigger asymmetry
#define _neu_asym_rescale (float)(((float)_neu_asym_magnify + (float)_neu_asym_magnify + (float)_neu_asym_value) / (float)(_neu_asym_magnify))
//
// Hidden and output neurons (post) modes
//
#define _NEU_USES_TANH_HIDDEN
#define _NEU_USES_TANH_OUTPUT
// learning rates (not used in this case; we only run the net forward)
#define _NEU_INITIAL_LR_HO (float)0.08
#define _NEU_INITIAL_LR_IH (float)0.008
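
As an aside (not part of the original NEU.c), here is the trigger math these settings imply, restated as a standalone function. With _neu_asym_magnify = 200.0 and _neu_asym_value = 0.1, _neu_asym_rescale evaluates to (200 + 200 + 0.1) / 200 = 2.0005.

#include <math.h>

// Sketch only: restates the Schmitt-trigger output computed in NEU_calcNet()
// (see part III below). The raw (pre-tanh) output x is shifted by -delta on
// a rising edge and +delta on a falling edge, steepened by the magnify
// factor, squashed by tanh(), then divided by the rescale factor.
static double schmitt_tanh(double x, int rising)
{
    double shifted = rising ? (x - _neu_asym_value) : (x + _neu_asym_value);
    return tanh(shifted * _neu_asym_magnify) / _neu_asym_rescale;
}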
Then, the network weights:
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:-0.0661 0.9147 -0.5076
INPUT[1]:-2.5167 3.7290 -2.4142
INPUT[2]:0.2267 1.8123 -2.5271
INPUT[3]:0.1382 1.8078 -2.5157
INPUT[4]:0.0097 1.8425 -2.5152
INPUT[5]:0.2858 1.8065 -2.5307
INPUT[6]:0.0063 1.8392 -2.5163
INPUT[7]:0.2006 1.8155 -2.5255
INPUT[8]:0.0994 1.8121 -2.5118
INPUT[9]:-0.0390 1.8455 -2.5131
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.0985 1.8016 1.8651

This network’s output is the one for a normal Conway’s Game of Life.

We call NEU_calcNet() for each cell of the world (here, 64x64 pixels).
We compute the cells from 0+1 to the world's width-1, and from 0+1 to the world's height-1 (leaving the border untouched).
We transform the neuron's output using the formula: neuron_new[yy][xx]=sin(NE_outPred[0]);
Notice we use NE_outPred[], which is the tanh'd but not 'triggered' output.
Then, varying the bias value, we get a special automaton able to create organic-like patterns. A sketch of this update loop follows below.
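
Putting it together, here is a sketch of that update loop (not the original code: world[][], neuron_new[][] and step_world() are assumed names, and NEU.c is assumed included with NE_numInputs set to 10; the input layout, cell + bias + 8 neighbours, follows the comments given in part III below):

#define WORLD_W 64
#define WORLD_H 64
double world[WORLD_H][WORLD_W];
double neuron_new[WORLD_H][WORLD_W];

void step_world(void)
{
    int xx, yy, dx, dy, n;
    NE_patNum = 0; // reuse pattern slot 0 as a scratch input row
    for (yy = 1; yy < WORLD_H - 1; yy++)
    {
        for (xx = 1; xx < WORLD_W - 1; xx++)
        {
            trainInputs[0][0] = world[yy][xx]; // input 0: the cell's state
            trainInputs[0][1] = NE_bias_value; // input 1: the bias input
            n = 2;                             // inputs 2..9: the 8 neighbours
            for (dy = -1; dy <= 1; dy++)
                for (dx = -1; dx <= 1; dx++)
                    if (dx != 0 || dy != 0)
                        trainInputs[0][n++] = world[yy + dy][xx + dx];
            NEU_calcNet();
            neuron_new[yy][xx] = sin(NE_outPred[0]); // the sin() transform
        }
    }
}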

Screen captures follow (bias 0.30):

Then, a few epochs later (bias 0.30):

With bias 0.40:


September 27, 2015

Neuron networks, part III

Filed under: Uncategorized, linux — admin @ 8:46 am

The NEU.c header looks like:

// activate extra computation: NE_outPulse[] mode
#define _NEU_USES_PULSES_OUTPUTS
// activate extra computation: NE_outTrigger[], Schmitt-triggered tanh'd output
#define _NEU_USES_ASYMETRIC_OUTPUTS
//
#define _neu_asym_magnify 200.0 // 8.0 is the default; Schmitt-trigger amplifier for tanh
#define _neu_asym_value 0.1 // default 0.1; Schmitt-trigger 'delta' from 0, rising/falling. 0.1: Schmitt-trigger asymmetry
#define _neu_asym_rescale (float)(((float)_neu_asym_magnify + (float)_neu_asym_magnify + (float)_neu_asym_value) / (float)(_neu_asym_magnify))
//
// Hidden and output neurons (post) modes
//
#define _NEU_USES_TANH_HIDDEN
#define _NEU_USES_TANH_OUTPUT
// learning rates
#define _NEU_INITIAL_LR_HO (float)0.08
#define _NEU_INITIAL_LR_IH (float)0.008
// we use some globals :
int NE_numInputs;   // how many inputs
int NE_numPatterns; // how many patterns for learning, if a set is created
int NE_numHidden;   // hidden-layer neuron count
int NE_numOutputs;  // output-layer neuron count
int NE_numEpochs;   // used to count learning epochs (convergence)
//
// NEU variables
int NE_patNum = 0;
double NE_errThisPat[_max_outputs];
double NE_outPred[_max_outputs];
// asymmetric (Schmitt-trigger) outputs
#ifdef _NEU_USES_ASYMETRIC_OUTPUTS
double NE_last_outPred[_max_outputs];	// stores the last output state
double NE_outTrigger[_max_outputs];
#endif
// pulsed-mode output (another, alternate output mode)
#ifdef _NEU_USES_PULSES_OUTPUTS
double  NE_outPredPulsed[_max_outputs];
double NE_outPulse[_max_outputs];
double NE_outPeriod[_max_outputs];
#endif
//
double NE_RMSerror[_max_outputs];
double NE_bias_value = 0.0;	// variable bias - used for one of the inputs
//
double hiddenVal[_max_hidden];	// maximum hidden neuron count
double hiddenPulse[_max_hidden];
double hiddenPeriod[_max_hidden];
//
double weightsIH[_max_inputs][_max_hidden];
double weightsHO[_max_hidden][_max_outputs];
//
double trainInputs[_max_patterns+1][_max_inputs];
double trainOutput[_max_patterns][_max_outputs];
//

And the NEU_calcNet() function (heavy optimizations could be applied):

// calculates the network output: set NE_patNum first. trainOutput[] only has
// to be filled in learning mode, to compute the per-pattern errors.
void NEU_calcNet(void)
{
    int i, j, k;
    // hidden layer
    for (i = 0; i < NE_numHidden; i++)
    {
        hiddenVal[i] = 0.0;
        for (j = 0; j < NE_numInputs; j++)
        {
            // clamp the inputs into [-1.0, 1.0]
            if (trainInputs[NE_patNum][j] > 1.0) trainInputs[NE_patNum][j] = 1.0;
            if (trainInputs[NE_patNum][j] < -1.0) trainInputs[NE_patNum][j] = -1.0;
            hiddenVal[i] = hiddenVal[i] + (trainInputs[NE_patNum][j] * weightsIH[j][i]);
        }
        // uses tanh'd mode?
#ifdef _NEU_USES_TANH_HIDDEN
        hiddenVal[i] = tanh(hiddenVal[i]); // hidden state is tanh'd
#endif
        // uses pulsed mode?
#ifdef _NEU_USES_PULSES_OUTPUTS
        if ((hiddenPulse[i] > -0.001) && (hiddenPulse[i] < 0.001))
        {
            hiddenPeriod[i] = fabs(hiddenVal[i]) * 0.7; // assign new period (fabs, not abs: these are doubles)
            if (hiddenVal[i] > 0.0) { hiddenPulse[i] = 1.0; } else { hiddenPulse[i] = -1.0; } // set corresponding output pulse pos/neg
        }
        else { hiddenPulse[i] *= 0.7 - hiddenPeriod[i]; } // loses amplitude down to < abs 0.001
#endif
    }
    // calculate the output of the network
    // the output neuron is linear
    for (k = 0; k < NE_numOutputs; k++)
    {
        NE_outPred[k] = 0.0;
#ifdef _NEU_USES_PULSES_OUTPUTS
        NE_outPredPulsed[k] = 0.0; // pulsed-mode intermediate output
#endif
        for (i = 0; i < NE_numHidden; i++)
        {
            NE_outPred[k] = NE_outPred[k] + hiddenVal[i] * weightsHO[i][k];
#ifdef _NEU_USES_PULSES_OUTPUTS
            NE_outPredPulsed[k] = NE_outPredPulsed[k] + hiddenPulse[i] * weightsHO[i][k];
#endif
        }
        // calculate the error
        NE_errThisPat[k] = NE_outPred[k] - trainOutput[NE_patNum][k];
        // tanh'd
#ifdef _NEU_USES_TANH_OUTPUT
        // uses asymmetric (trigger) outputs?
#ifdef _NEU_USES_ASYMETRIC_OUTPUTS
        if (NE_outPred[k] - NE_last_outPred[k] > 0.0) // rising edge
        {
            NE_last_outPred[k] = NE_outPred[k];
            NE_outTrigger[k] = (NE_outPred[k] - _neu_asym_value) * _neu_asym_magnify;
        }
        else // falling edge
        {
            NE_last_outPred[k] = NE_outPred[k];
            NE_outTrigger[k] = (NE_outPred[k] + _neu_asym_value) * _neu_asym_magnify;
        }
        // tanh'd final result
        NE_outTrigger[k] = tanh(NE_outTrigger[k]) / _neu_asym_rescale;
#endif
        // tanh'd output, with no asymmetry
        NE_outPred[k] = tanh(NE_outPred[k]);
#endif
#ifdef _NEU_USES_PULSES_OUTPUTS
        // pulse mode: uses NE_outPredPulsed
        if ((NE_outPulse[k] > -0.001) && (NE_outPulse[k] < 0.001))
        {
            NE_outPeriod[k] = fabs(NE_outPredPulsed[k]) * 0.7; // assign new period
            if (NE_outPredPulsed[k] > 0.0) { NE_outPulse[k] = 1.0; } else { NE_outPulse[k] = -1.0; } // set corresponding output pulse pos/neg
        }
        else { NE_outPulse[k] *= 0.7 - NE_outPeriod[k]; } // loses amplitude down to < abs 0.001
#endif
    }
}
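
The pulsed mode can be hard to follow inside the #ifdef blocks, so here is a small standalone demo of just the re-arm/decay mechanics (the 0.7 constants and the 0.001 threshold come from the code above; the activation value 0.5 is an arbitrary illustration):

#include <stdio.h>
#include <math.h>

// Demo only: a pulse re-arms when its amplitude falls below |0.001|, taking
// period = 0.7 * |activation| and amplitude +/-1.0; otherwise it decays
// geometrically by the factor (0.7 - period) on every call.
int main(void)
{
    double activation = 0.5, pulse = 0.0, period = 0.0;
    int step;
    for (step = 0; step < 12; step++)
    {
        if ((pulse > -0.001) && (pulse < 0.001))
        {
            period = fabs(activation) * 0.7;         // here: 0.35
            pulse = (activation > 0.0) ? 1.0 : -1.0; // re-arm
        }
        else
        {
            pulse *= 0.7 - period;                   // decay by a factor of 0.35
        }
        printf("step %2d: pulse = %f\n", step, pulse);
    }
    return 0;
}

With activation 0.5, the decay factor is 0.35 and the pulse re-arms about every 8 steps; stronger activations decay faster (a shorter pulse train), weaker ones ring longer.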

Here is an example of float weight settings (1+1+8 inputs, 1 output):


Neuron network array
numInputs :10
numHidden :3
numOutputs :1
INPUT to HIDDEN weights:
INPUT[0]:0.7249 -0.3254 -0.0132 
INPUT[1]:2.6476 -1.7487 1.8793 
INPUT[2]:1.6000 -2.1098 -0.1300 
INPUT[3]:1.5954 -2.1106 -0.1713 
INPUT[4]:1.5973 -2.1053 -0.0823 
INPUT[5]:1.5956 -2.1127 -0.2124 
INPUT[6]:1.5922 -2.1073 -0.1447 
INPUT[7]:1.5945 -2.1101 -0.1476 
INPUT[8]:1.5912 -2.1103 -0.1890 
INPUT[9]:1.5921 -2.1062 -0.0817 
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.8294 1.8561 -1.0271
// input 0 is the cell's state at this moment.
// input 1 is the bias input. the network works 'out of the box' with 1.0 as a start value.
// inputs 2 to 9 are the values of the 8 surrounding cells.
// output 0 is, obviously, the new cell's value once NEU_calcNet() has executed.
//
// nb: these weights were obtained using a kind of unsupervised network training.
// the following are other working results, in bulk (a sketch that loads the first
// set above into the network's arrays follows the last dump below).
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:-0.0229 0.2218 0.7648
INPUT[1]:-2.0418 1.8457 3.1429
INPUT[2]:0.2120 2.2448 1.8066
INPUT[3]:0.2278 2.2442 1.8049
INPUT[4]:0.1597 2.2419 1.8062
INPUT[5]:0.2398 2.2457 1.8072
INPUT[6]:0.1784 2.2429 1.8051
INPUT[7]:0.1867 2.2433 1.8063
INPUT[8]:0.2036 2.2434 1.8054
INPUT[9]:0.1458 2.2410 1.8064
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.0483 -1.7011 1.6570 

Yet another one:
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:-0.3511 -0.7469 -0.3062
INPUT[1]:-1.9654 -2.7043 -1.9516
INPUT[2]:0.0046 -1.6099 -2.3226
INPUT[3]:0.0888 -1.6007 -2.3241
INPUT[4]:0.0356 -1.6225 -2.3280
INPUT[5]:0.0363 -1.6139 -2.3275
INPUT[6]:-0.0243 -1.6261 -2.3254
INPUT[7]:-0.0636 -1.6227 -2.3232
INPUT[8]:-0.0350 -1.6153 -2.3215
INPUT[9]:-0.0603 -1.6307 -2.3269
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.0404 -1.8403 1.8139
Finally, another one (for this particular set, the EXACT Conway's rules are output with a bias value of 2.3):
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:0.9719 0.4218 -0.4701
INPUT[1]:3.6418 2.0367 -2.4421
INPUT[2]:1.8977 3.3946 -0.1189
INPUT[3]:1.8869 3.4056 0.0242
INPUT[4]:1.8758 3.4251 0.1371
INPUT[5]:1.9026 3.3867 -0.1617
INPUT[6]:1.8715 3.4222 0.2166
INPUT[7]:1.9007 3.3728 -0.1277
INPUT[8]:1.8897 3.4263 0.0333
INPUT[9]:1.8785 3.3993 0.1416
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.1289 -1.1504 1.0407
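
To make the dumps usable, here is a sketch that wires the first weight set (the 1+1+8 example above) into the network's arrays. The function name NEU_setup() and the start bias of 1.0 (taken from the 'out of the box' comment above) are the only assumptions; the values come straight from the dump.

// Sketch only: load the first example weight set above into NEU.c's globals.
void NEU_setup(void)
{
    static const double ih[10][3] = {
        { 0.7249, -0.3254, -0.0132 }, { 2.6476, -1.7487,  1.8793 },
        { 1.6000, -2.1098, -0.1300 }, { 1.5954, -2.1106, -0.1713 },
        { 1.5973, -2.1053, -0.0823 }, { 1.5956, -2.1127, -0.2124 },
        { 1.5922, -2.1073, -0.1447 }, { 1.5945, -2.1101, -0.1476 },
        { 1.5912, -2.1103, -0.1890 }, { 1.5921, -2.1062, -0.0817 }
    };
    static const double ho[3] = { 1.8294, 1.8561, -1.0271 };
    int i, j;
    NE_numInputs = 10;
    NE_numHidden = 3;
    NE_numOutputs = 1;
    for (j = 0; j < NE_numInputs; j++)
        for (i = 0; i < NE_numHidden; i++)
            weightsIH[j][i] = ih[j][i];
    for (i = 0; i < NE_numHidden; i++)
        weightsHO[i][0] = ho[i];
    NE_bias_value = 1.0; // works 'out of the box' with 1.0, per the comment above
}

Call NEU_setup() once before the first NEU_calcNet() call.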
