EihiS

May 21, 2016

Windows Update stuck on "checking for updates" ("recherche de mises à jour")

Filed under: Uncategorized — admin @ 7:36 am

Yes, I disabled automatic updates around the beginning of the year, and when I finally decided to move from Windows 7 to Windows 10 on my Asus K73SV, I had the unpleasant surprise of running into this problem. Having browsed an impressive number of forums where the subject comes up, and having applied several Microsoft 'fixes' that solved nothing, I finally stumbled on the patch that fixes the problem: KB3102810, to download from the Microsoft site, in 32- or 64-bit version. Symptom in my case: one of the 4 processor cores is 100% busy (Windows Update?), and no update is ever found. To apply the patch, the most effective way is:

  • DIY method: copy the patch executable to the desktop, restart the machine, and run it as soon as the Windows desktop is reachable (as early as possible, so that it runs BEFORE the existing Windows Update service kicks in).
  • Admin method: run cmd.exe as administrator, then in the console:
  • net stop wuauserv
  • close the console, run the patch, then restart.
OK, still not working? (my case...)
Download and apply a bunch of patches and 'Fix it' tools recommended by Microsoft:
  • http://download.microsoft.com/download/8/3/D/83DA9B2F-3246-4C1E-996B-1381F667247D/MicrosoftEasyFix50202.msi
  • https://support.microsoft.com/fr-fr/kb/971058
  • http://windows.microsoft.com/fr-fr/windows/troubleshoot-problems-installing-updates#1TC=windows-7
Then, in a cmd.exe console run as administrator:
  1. net stop wuauserv
  2. cd %systemroot%\SoftwareDistribution
  3. ren Download Download.old
  4. regsvr32 %windir%\system32\wups2.dll
  5. net start wuauserv
-> reboot.
This stops the Windows Update service, goes into the SoftwareDistribution directory, renames the Download folder to Download.old, re-registers wups2.dll, and restarts the Windows Update service.
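For convenience, the whole sequence can be wrapped in a small batch file, to be run as administrator. This is only a sketch of the steps above; the file name is arbitrary, and the /s flag just makes regsvr32 silent:

@echo off
rem reset-wu.cmd : resets the Windows Update download cache (sketch of the steps above)
net stop wuauserv
cd /d %systemroot%\SoftwareDistribution
ren Download Download.old
regsvr32 /s %windir%\system32\wups2.dll
net start wuauserv
echo Done. Now reboot.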

October 3, 2015

Neuron networks, part III-2

Filed under: Uncategorized — admin @ 8:52 am

Now we use the following settings:

#define _NEU_USES_ASYMETRIC_OUTPUTS
#define _neu_asym_magnify 200.0 // was 2.0, then 20; 8.0 is the default. Schmitt-trigger amplifier for tanh
#define _neu_asym_value 0.1     // was 0.01; 0.1 is the default. Schmitt-trigger delta from 0, rising/falling: the trigger asymmetry
#define _neu_asym_rescale (float) (( (float) _neu_asym_magnify + (float) _neu_asym_magnify + (float) _neu_asym_value) / (float) (_neu_asym_magnify) )
//
// Hidden and output neurons (post) modes
//
#define _NEU_USES_TANH_HIDDEN
#define _NEU_USES_TANH_OUTPUT
// learning rates (not used for this case, we just calc the net)
#define _NEU_INITIAL_LR_HO (float)0.08
#define _NEU_INITIAL_LR_IH (float)0.008
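For reference, with these settings _neu_asym_rescale evaluates to (200.0 + 200.0 + 0.1) / 200.0 ≈ 2.0005, so the tanh'd trigger output, which lies in the -1.0 / +1.0 range, ends up rescaled into roughly the -0.5 / +0.5 range.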
Then the network weights:
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:-0.0661 0.9147 -0.5076
INPUT[1]:-2.5167 3.7290 -2.4142
INPUT[2]:0.2267 1.8123 -2.5271
INPUT[3]:0.1382 1.8078 -2.5157
INPUT[4]:0.0097 1.8425 -2.5152
INPUT[5]:0.2858 1.8065 -2.5307
INPUT[6]:0.0063 1.8392 -2.5163
INPUT[7]:0.2006 1.8155 -2.5255
INPUT[8]:0.0994 1.8121 -2.5118
INPUT[9]:-0.0390 1.8455 -2.5131
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.0985 1.8016 1.8651

This network's output is the one of a normal Conway's Game of Life.

We call calcNet() for each cell of the world (here, 64×64 pixels).
We compute the cells from 1 to the world's width-1, and from 1 to the world's height-1 (the border cells are skipped).
We transform the neuron's output using the formula: neuron_new[yy][xx]=sin(NE_outPred[0]);
Notice we use outPred[], which is the tanh'd but non-'trigger' output.
Then, varying the bias value, we get a special automaton able to create organic-like patterns (a sketch of the update loop follows).
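A minimal sketch of that per-cell update loop, assuming the world is held in neuron[64][64] / neuron_new[64][64] arrays (names illustrative, not from the original code) and the input layout of part III (input 0 = cell state, input 1 = bias, inputs 2 to 9 = the 8 surrounding cells):

#include <math.h> // for sin()

#define WORLD_W 64
#define WORLD_H 64
double neuron[WORLD_H][WORLD_W], neuron_new[WORLD_H][WORLD_W];

void world_step(void)
{
    // offsets of the 8 surrounding cells
    static const int dx[8] = {-1, 0, 1,-1, 1,-1, 0, 1};
    static const int dy[8] = {-1,-1,-1, 0, 0, 1, 1, 1};
    int xx, yy, n;
    NE_patNum = 0; // reuse pattern slot 0 for every cell
    for (yy = 1; yy < WORLD_H - 1; yy++)      // border cells are skipped
        for (xx = 1; xx < WORLD_W - 1; xx++)
        {
            trainInputs[0][0] = neuron[yy][xx]; // cell state at T
            trainInputs[0][1] = NE_bias_value;  // the variable bias input
            for (n = 0; n < 8; n++)             // the 8 surrounding cells
                trainInputs[0][2 + n] = neuron[yy + dy[n]][xx + dx[n]];
            NEU_calcNet();
            neuron_new[yy][xx] = sin(NE_outPred[0]); // tanh'd, non-trigger output
        }
}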

Screen captures follow (bias 0.30):

Then a few epochs later (bias 0.30):

With bias 0.40:


September 27, 2015

Neuron networks, part III

Filed under: Uncategorized, linux — admin @ 8:46 am

The NEU.c header looks like:

#include <math.h> // for tanh() (and fabs() in CalcNet below)
// activate extra computation : NE_outPulse[] mode
#define _NEU_USES_PULSES_OUTPUTS
// activate extra computation : NE_outTrigger[], Schmitt-trigger tanh'd output
#define _NEU_USES_ASYMETRIC_OUTPUTS
//
#define _neu_asym_magnify 200.0 // was 2.0; 8.0 is the default. Schmitt-trigger amplifier for tanh
#define _neu_asym_value 0.1     // 0.1 is the default. Schmitt-trigger 'delta' from 0, rising/falling: the trigger asymmetry
#define _neu_asym_rescale (float) (( (float) _neu_asym_magnify+ (float) _neu_asym_magnify + (float) _neu_asym_value) / (float) (_neu_asym_magnify) )
//
// Hidden and output neurons (post) modes
//
#define _NEU_USES_TANH_HIDDEN
#define _NEU_USES_TANH_OUTPUT
// learning rates
#define _NEU_INITIAL_LR_HO (float)0.08
#define _NEU_INITIAL_LR_IH (float)0.008
// we use some globals :
int NE_numInputs;   // how many inputs
int NE_numPatterns; // how many patterns for learning, if a set is created
int NE_numHidden;   // hidden layer neuron number
int NE_numOutputs;  // output layer neuron quantity
int NE_numEpochs;   // used to count learning epochs (convergence)
//
// NEU variables
int NE_patNum = 0;
double NE_errThisPat[_max_outputs];
double NE_outPred[_max_outputs];
// asymmetric outputs
#ifdef _NEU_USES_ASYMETRIC_OUTPUTS
double NE_last_outPred[_max_outputs];	// for last state saves
double NE_outTrigger[_max_outputs];
#endif
// pulsed mode output (another , alternate , output mode)
#ifdef _NEU_USES_PULSES_OUTPUTS
double  NE_outPredPulsed[_max_outputs];
double NE_outPulse[_max_outputs];
double NE_outPeriod[_max_outputs];
#endif
//
double NE_RMSerror[_max_outputs];
double NE_bias_value = 0.0 ;	// variable bias - used for 1 of the inputs
//
double hiddenVal[_max_hidden];	// sized for the maximum hidden neuron count
double hiddenPulse[_max_hidden];
double hiddenPeriod[_max_hidden];
//
double weightsIH[_max_inputs][_max_hidden];
double weightsHO[_max_hidden][_max_outputs];
//
double trainInputs[_max_patterns+1][_max_inputs];
double trainOutput[_max_patterns][_max_outputs];
//

And the CalcNet() function (heavy optimizations could be applied):

// calculates the network output: set NE_patNum first. trainOutput[] has to be
// filled only in learning mode, to compute the errors.
void NEU_calcNet(void)
{
    int i, j, k;
    for (i = 0; i < NE_numHidden; i++)
    {
        hiddenVal[i] = 0.0;
        for (j = 0; j < NE_numInputs; j++)
        {
            // clamp the inputs to the -1.0 / +1.0 range
            if (trainInputs[NE_patNum][j] > 1.0) trainInputs[NE_patNum][j] = 1.0;
            if (trainInputs[NE_patNum][j] < -1.0) trainInputs[NE_patNum][j] = -1.0;
            hiddenVal[i] = hiddenVal[i] + (trainInputs[NE_patNum][j] * weightsIH[j][i]);
        }
        // uses tanh'd mode?
#ifdef _NEU_USES_TANH_HIDDEN
        hiddenVal[i] = tanh(hiddenVal[i]); // hidden state is tanh'd
#endif
        // uses pulsed mode?
#ifdef _NEU_USES_PULSES_OUTPUTS
        if ((hiddenPulse[i] > -0.001) && (hiddenPulse[i] < 0.001))
        {
            hiddenPeriod[i] = fabs(hiddenVal[i]) * 0.7; // assign new period (fabs, not abs: these are doubles)
            if (hiddenVal[i] > 0.0) { hiddenPulse[i] = 1.0; } else { hiddenPulse[i] = -1.0; } // set the corresponding output pulse pos/neg
        }
        else { hiddenPulse[i] *= 0.7 - hiddenPeriod[i]; } // loses amplitude down to < 0.001 in absolute value
#endif
    }
    // calculate the output of the network
    // the output neuron is linear
    for (k = 0; k < NE_numOutputs; k++)
    {
        NE_outPred[k] = 0.0;
#ifdef _NEU_USES_PULSES_OUTPUTS
        NE_outPredPulsed[k] = 0.0; // pulsed-mode intermediate output
#endif
        for (i = 0; i < NE_numHidden; i++)
        {
            NE_outPred[k] = NE_outPred[k] + hiddenVal[i] * weightsHO[i][k];
#ifdef _NEU_USES_PULSES_OUTPUTS
            NE_outPredPulsed[k] = NE_outPredPulsed[k] + hiddenPulse[i] * weightsHO[i][k];
#endif
        }
        // calculate the error
        NE_errThisPat[k] = NE_outPred[k] - trainOutput[NE_patNum][k];
        // tanh'd
#ifdef _NEU_USES_TANH_OUTPUT
        // uses asymmetric (trigger) outputs?
#ifdef _NEU_USES_ASYMETRIC_OUTPUTS
        if (NE_outPred[k] - NE_last_outPred[k] > 0.0) // rising edge
        {
            NE_last_outPred[k] = NE_outPred[k];
            NE_outTrigger[k] = (NE_outPred[k] - _neu_asym_value) * _neu_asym_magnify;
        }
        else // falling edge
        {
            NE_last_outPred[k] = NE_outPred[k];
            NE_outTrigger[k] = (NE_outPred[k] + _neu_asym_value) * _neu_asym_magnify;
        }
        // tanh'd final result
        NE_outTrigger[k] = tanh(NE_outTrigger[k]) / _neu_asym_rescale;
#endif
        // tanh'd output, with no asymmetry
        NE_outPred[k] = tanh(NE_outPred[k]);
#endif
        //
#ifdef _NEU_USES_PULSES_OUTPUTS
        // pulse mode: uses NE_outPredPulsed
        if ((NE_outPulse[k] > -0.001) && (NE_outPulse[k] < 0.001))
        {
            NE_outPeriod[k] = fabs(NE_outPredPulsed[k]) * 0.7; // assign new period (fabs, not abs)
            if (NE_outPredPulsed[k] > 0.0) { NE_outPulse[k] = 1.0; } else { NE_outPulse[k] = -1.0; } // set the corresponding output pulse pos/neg
        }
        else { NE_outPulse[k] *= 0.7 - NE_outPeriod[k]; } // loses amplitude down to < 0.001 in absolute value
#endif
    }
}
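For illustration, a minimal single-pattern inference sketch (the helper name and values are assumptions, not from the original code). CalcNet() only reads trainInputs[NE_patNum][] and the weight arrays, so for plain inference we fill one pattern slot and simply ignore NE_errThisPat[]:

void example_single_eval(void)
{
    int j;
    NE_numInputs = 10; NE_numHidden = 3; NE_numOutputs = 1; // 1+1+8 inputs, 1 output
    NE_patNum = 0;
    for (j = 0; j < NE_numInputs; j++)
        trainInputs[0][j] = 0.0; // an all-dead neighbourhood, for instance
    trainInputs[0][1] = 1.0;     // the bias input: 1.0 works 'out of the box'
    NEU_calcNet();
    // NE_outPred[0] now holds the tanh'd output, and NE_outTrigger[0] the
    // Schmitt-trigger variant (when _NEU_USES_ASYMETRIC_OUTPUTS is defined).
}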

Here is an example of float weight settings (1+1+8 inputs, 1 output):


Neuron network array
numInputs :10
numHidden :3
numOutputs :1
INPUT to HIDDEN weights:
INPUT[0]:0.7249 -0.3254 -0.0132 
INPUT[1]:2.6476 -1.7487 1.8793 
INPUT[2]:1.6000 -2.1098 -0.1300 
INPUT[3]:1.5954 -2.1106 -0.1713 
INPUT[4]:1.5973 -2.1053 -0.0823 
INPUT[5]:1.5956 -2.1127 -0.2124 
INPUT[6]:1.5922 -2.1073 -0.1447 
INPUT[7]:1.5945 -2.1101 -0.1476 
INPUT[8]:1.5912 -2.1103 -0.1890 
INPUT[9]:1.5921 -2.1062 -0.0817 
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.8294 1.8561 -1.0271
// input 0 is the cell's state at this moment.
// input 1 is the bias input. the network works 'out of the box' with 1.0 as a start value
// inputs 2 to 9 are the 8 surrounding cells' values
// output 0 is obviously the new cell's value once the CalcNet() function is executed.
//
// nb: these weights have been obtained using a kind of unsupervised network training
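As a sketch of how such a dump maps onto the weight arrays (the helper name is an assumption, not from the original code): each INPUT[j] line lists weightsIH[j][i] for hidden neurons i = 0..2, and the OUTPUT[0] line lists weightsHO[i][0]:

void NEU_load_example_weights(void)
{
    static const double ih[10][3] = {
        { 0.7249, -0.3254, -0.0132}, { 2.6476, -1.7487,  1.8793},
        { 1.6000, -2.1098, -0.1300}, { 1.5954, -2.1106, -0.1713},
        { 1.5973, -2.1053, -0.0823}, { 1.5956, -2.1127, -0.2124},
        { 1.5922, -2.1073, -0.1447}, { 1.5945, -2.1101, -0.1476},
        { 1.5912, -2.1103, -0.1890}, { 1.5921, -2.1062, -0.0817}
    };
    static const double ho[3] = { 1.8294, 1.8561, -1.0271 };
    int i, j;
    NE_numInputs = 10; NE_numHidden = 3; NE_numOutputs = 1;
    for (j = 0; j < NE_numInputs; j++)
        for (i = 0; i < NE_numHidden; i++)
            weightsIH[j][i] = ih[j][i];
    for (i = 0; i < NE_numHidden; i++)
        weightsHO[i][0] = ho[i];
}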
// the following are other working results, in bulk:
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:-0.0229 0.2218 0.7648
INPUT[1]:-2.0418 1.8457 3.1429
INPUT[2]:0.2120 2.2448 1.8066
INPUT[3]:0.2278 2.2442 1.8049
INPUT[4]:0.1597 2.2419 1.8062
INPUT[5]:0.2398 2.2457 1.8072
INPUT[6]:0.1784 2.2429 1.8051
INPUT[7]:0.1867 2.2433 1.8063
INPUT[8]:0.2036 2.2434 1.8054
INPUT[9]:0.1458 2.2410 1.8064
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.0483 -1.7011 1.6570 

Yet another one:
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:-0.3511 -0.7469 -0.3062
INPUT[1]:-1.9654 -2.7043 -1.9516
INPUT[2]:0.0046 -1.6099 -2.3226
INPUT[3]:0.0888 -1.6007 -2.3241
INPUT[4]:0.0356 -1.6225 -2.3280
INPUT[5]:0.0363 -1.6139 -2.3275
INPUT[6]:-0.0243 -1.6261 -2.3254
INPUT[7]:-0.0636 -1.6227 -2.3232
INPUT[8]:-0.0350 -1.6153 -2.3215
INPUT[9]:-0.0603 -1.6307 -2.3269
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.0404 -1.8403 1.8139
Finally, another one (for that particular one, the EXACT Conway's rules are output with a bias value of 2.3):
Neuron network array
numInputs :10
numHidden :3
numOutputs :1

INPUT to HIDDEN weights:
INPUT[0]:0.9719 0.4218 -0.4701
INPUT[1]:3.6418 2.0367 -2.4421
INPUT[2]:1.8977 3.3946 -0.1189
INPUT[3]:1.8869 3.4056 0.0242
INPUT[4]:1.8758 3.4251 0.1371
INPUT[5]:1.9026 3.3867 -0.1617
INPUT[6]:1.8715 3.4222 0.2166
INPUT[7]:1.9007 3.3728 -0.1277
INPUT[8]:1.8897 3.4263 0.0333
INPUT[9]:1.8785 3.3993 0.1416
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.1289 -1.1504 1.0407

August 22, 2015

neuron networks, part 2

Filed under: Uncategorized — admin @ 2:53 pm

Following the previous article, the network was trained with free-running feedback.
In addition, a second output neuron was created, whose output, instead of following the Conway's Game of Life rules, was trained to be the sine of the expected normal, 'Game of Life ruled' output.

The trained network's output for output 0 is almost the same (8 hidden neurons were used instead of 5).

The screen capture of outputs 0 and 1:

Left: output zero (the normal, Game of Life ruled output); right: the 'sine'-like output 1 for the same 8 input cells:

This network's complete weights dump:

Neuron network array
numInputs :2
numHidden :8
numOutputs :2
INPUT to HIDDEN weights:
INPUT[0]:18.4715 -15.7549 -21.2166 -19.4792 2.0692 -2.9851 -14.6416 -17.5079 
INPUT[1]:-8.0632 -13.0431 4.0268 -12.4184 -7.6292 -8.4492 12.2782 -7.1637 
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.8568 1.2939 1.7122 -1.2514 0.8039 -0.7578 1.2156 -0.5770 
OUTPUT[1]:0.5297 -0.1719 0.1888 -0.7751 0.1120 -0.1462 0.2815 0.4162

This network uses tanh'd outputs on both the hidden and output layers ( tanOutput[n]=tanh(50.0*NormalOut[n]) ).

Output 1 shows groups of cells and highlights some interesting shapes that the normal output[0] does not allow one to see:

————-

Now, the same network is trained the same way, but output[1] is graphed with no tanh() function applied. This renders the subtle values of this output in the -1.0 / 1.0 range. (The supervision's expected output[1] rule was: output[1] = (actual output[0] + the new 8 cells' sum value) / 2.0.)

————-

The network is then modified: we add 1 new input, namely the X coordinate of the 2D plane being rendered. The actual 2D area is a 64×64 cell array, so the 0-64 value for X will be mapped to a -1.0 / 1.0 vector for this new input (a sketch of the mapping follows below).
This time, output[1] is trained in an unmonitored manner again. We want output[1] to be a copy of the actual X value.
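A minimal sketch of that mapping (the helper name is an assumption, not from the original code; cell columns are taken as 0..63):

// maps a cell X coordinate in 0..63 onto the -1.0 / +1.0 input range
static double map_x_input(int xx)
{
    return ((double)xx / 63.0) * 2.0 - 1.0;
}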

So, the new network has 3 inputs:

  1. input[0] is the sum of the 8 surrounding cells at T
  2. input[1] is the actual value of output[0] at T-1 (namely, the state of the cell at T)
  3. input[2] is the X coordinate of the cell being processed (0-64 range mapped to -1.0 / +1.0)
The outputs are expected to be:
  1. output[0] is the result of applying the rules of the Game of Life.
  2. output[1] is expected to be the 'image' of the X coordinate, with nothing more.
Remarks: the network is trained with 10 hidden neurons. Convergence is longer, because the input[2] value is seen as an unwanted value for the output[0] problem: the weights from input[2] have to be lowered as much as possible for output[0] to converge to the good value, while that same input[2] value is absolutely needed for a correct output[1]. This makes the overall convergence time longer.
Here is the snapshot after 2,300,000 learning epochs:
The output[1] values are almost the good ones. Some errors can be seen at the fringe. Remember that our hidden and output neurons are tanh'd (the output can't be linear).
LR_IH and LR_HO were 0.05 for this training.
Here is the dump of the network's weights, for information:
Neuron network array
numInputs :3
numHidden :10
numOutputs :2
INPUT to HIDDEN weights:
INPUT[0]:-17.4559 0.0378 0.0916 -1.1608 -2.2167 -15.6072 -14.7210 -16.3537 -1.0468 -25.0423
INPUT[1]:-15.0747 -1.8141 3.3090 -7.1765 1.9960 -8.8084 6.3518 14.2116 6.8359 4.5404
INPUT[2]:0.3854 2.4042 2.6311 -0.1052 13.9876 0.0769 0.1231 0.1707 0.6056 -0.0481
HIDDEN to OUTPUT weights:
OUTPUT[0]:1.4326 -0.0190 0.1235 -0.9902 -0.0714 -1.4984 -1.5762 1.4177 -1.1382 1.7470
OUTPUT[1]:0.0721 1.9509 1.7133 -0.4680 0.7523 -0.0126 -0.0061 0.0173 -0.8408 0.0440

A closer look at the weights shows that input[2] is 'mainly' linked to hidden neuron 4 (weight about 13.99), and this hidden neuron 4 is then linked to output[1] by a 0.7523 value.

For information, the following output is the one of a network trained with:

  1. output[0] following the Game of Life rules
  2. output[1] outputting the 8 surrounding cells' value, multiplied by the X coordinate of the cell within the area:

————-

