What context should I be in? Should I leave the global context as 127 all the time, since I only have one set of data to learn and classify?
Thanks for reaching out!
I'd like to quote a paragraph of the NeuroMem API documentation (https://www.general-vision.com/documentation/TM_NeuroMem_API.pdf):
The following is the description of the function void SetContext(int context, int minif, int maxif):
"...Select the context and associated minimum and maximum influence fields for the next neurons to be committed.
The context can be associated to a sensor, a type of feature extraction, a scale, a combination of these parameters or else. For more details refer to the NeuroMem Technology Reference Guide (http://www.general-vision.com/documentation/TM_NeuroMem_Technology_Reference_Guide.pdf)..."
I believe the word "context" in the General Vision documents refers to what you are trying to classify. I believe it is supposed to be set by you depending on what you are trying to do.
Nevertheless, if you have doubts related to General Vision's API, you can also contact them (http://www.generalvision.com/contact-us/). They might be able to provide you with a more accurate answer.
There can be 127 contexts in the NN. A context is like a grouping of neurons, and if you did have 127 contexts there would only be one neuron per sub-NN! So not very useful!
You may have many things you'd like to classify that are not the same kind of data, for example OCR images and people's voice patterns. One is visual data and one is audio. They might even have different feature vectors (lengths of data pattern), so we would not want them all to share a single NN. But with contexts they can!
You could say:
audio data is context 100,
image data is context 20.
When learning audio, pass in context 100; when learning image data, pass in context 20. Then do the same when testing/classifying: classify image data with context 20, and so on.
PLUS - if both of these contexts only utilise 20 neurons, then adding another context is possible. Maybe this time context 35 is people's faces (image data), so we could have one audio and two image NNs all sharing the same hardware, but as separate sub-NNs!
It's a way of splitting up the NN, making sure you can utilise more of the neurons. It's an unfortunate coincidence that there are 128 neurons, 128 contexts and a feature vector length of 128, as all three are not related at all!
Thanks for sharing this information; we appreciate you posting it on the community. It will be of much help to other users who might have the same doubts.