I'm trying to load network weights from a memory buffer in OpenVINO, but it doesn't seem to work properly. Strangely, this happens with one network but works fine with another, so I'm assuming either (a) there is a bug somewhere or (b) I'm misusing something.
// typecast the char buffer to uint8_t
std::vector<uint8_t> weight_file8U(weight_file.begin(), weight_file.end());

SizeVector dims;
dims.push_back(weight_file.size());

// create the tensor descriptor
TensorDesc td(InferenceEngine::Precision::U8, dims, Layout::ANY);

// create the blob (data() already yields uint8_t*, no cast needed)
InferenceEngine::TBlob<uint8_t>::Ptr wblob =
    InferenceEngine::make_shared_blob<uint8_t>(td, weight_file8U.data(), weight_file8U.size());

// set the weights
networkReader.SetWeights(wblob);
My question is whether the above snippet is the right way to load the network weights.
Another question, not directly related: is it possible to use Valgrind with OpenVINO?
Sorry, I didn't see your question until now. The current coding style is quite different from what you wrote above. You can refer to the sample code in the package; if you still have problems coding it, feel free to ask again.
Dear Wong, Jeremy,
Please look for SetWeights() examples in our repository at https://github.com/opencv/dldt ; a quick grep of the OpenVino Inference Engine source code turns up plenty of them. Right now only the 2019 R1 source code is available, but R2 should be added very soon (OpenVino 2019 R2 was just released). For instance, https://github.com/opencv/dldt/blob/2019/inference-engine/tests/unit/engines/mkldnn/graph/structure/... contains many SetWeights() examples.
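For anyone landing on this thread, the end-to-end flow the question is after would look roughly like this, assuming the 2019-era CNNNetReader API (the buffer names are placeholders, and this is an untested sketch that depends on the OpenVINO SDK; the reader API was reworked in later releases, so check it against your version):

```cpp
InferenceEngine::CNNNetReader reader;

// Parse the model topology (.xml contents) straight from a memory buffer.
reader.ReadNetwork(xml_buffer.data(), xml_buffer.size());

// Attach the weights (.bin contents) wrapped in a U8 blob,
// as in the snippet earlier in the thread.
reader.SetWeights(wblob);

InferenceEngine::CNNNetwork network = reader.getNetwork();
```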
Hope it helps,