Hello.
I have two models loaded onto one Myriad (Movidius 2) device on a USB stick.
Both models have the same input.
Can I supply the input once?
Right now I create two networks, A and B, assign the same data twice, and transfer the same data twice over USB:

```cpp
executable_networkA = core.LoadNetwork(networkA, device_name);
executable_networkB = core.LoadNetwork(networkB, device_name);
infer_requestA = executable_networkA.CreateInferRequest();
infer_requestB = executable_networkB.CreateInferRequest();
input_blobA = infer_requestA.GetBlob(networkA.getInputsInfo().begin()->first);
input_blobB = infer_requestB.GetBlob(networkB.getInputsInfo().begin()->first);
auto *dataA = input_blobA->buffer().as<PrecisionTrait<Precision::FP16>::value_type *>();
auto *dataB = input_blobB->buffer().as<PrecisionTrait<Precision::FP16>::value_type *>();
dataA[..] = data;
dataB[..] = data;
```
Is it possible to have input_blobB point to the on-device (USB/VPU) memory assigned to input_blobA,
and thereby transfer the input data only once?
Thanks, and have a good day.
/Brian
NB: This post was also posted in the Watercooler forum; I have reposted it here.
Aha!
In section "6) Prepare input" we are presented with the shared-blob option:

```cpp
auto roiBlob = InferenceEngine::make_shared_blob(inputBlob, cropRoi);
infer_request2->SetBlob(input_name, roiBlob);
```
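To illustrate the principle behind that call (a minimal stdlib-only sketch; `View` and `roi` are hypothetical stand-ins, not OpenVINO types): `make_shared_blob(inputBlob, cropRoi)` creates a view over a region of an existing blob rather than copying it, so writes through the view land in the parent's memory.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a blob: a non-owning view over memory.
struct View {
    float* data;
    std::size_t size;
};

// Carve a sub-view (an "ROI") out of an existing view: no allocation and no
// copy, so both views touch the same underlying memory.
inline View roi(View parent, std::size_t offset, std::size_t len) {
    return View{parent.data + offset, len};
}
```

A second infer request handed such a view reads whatever the owner of the parent buffer last wrote, which is exactly the "fill once, consume twice" behavior asked about above.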
I will try that, and return, probably next week.
The ROI approach did not work for me. Maybe because I was using batch=4?
I succeeded with another variant of `make_shared_blob()`:

```cpp
input_blob = infer_requests[0].GetBlob(networks[0].getInputsInfo().begin()->first);
..
..
auto *dat = input_blob->buffer().as<PrecisionTrait<Precision::FP16>::value_type *>();
InferenceEngine::Blob::Ptr shared_blob = InferenceEngine::make_shared_blob(
    input_blob->getTensorDesc(),
    dat,
    input_n_channels * input_height * input_width * input_batchsize);
infer_requests[model_indx].SetBlob(networks[model_indx].getInputsInfo().begin()->first, shared_blob);
```
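The effect of that variant can be sketched without OpenVINO (hypothetical `BlobView`, `wrap_existing`, and `run_net` names below; they only mimic the idea, not the real API): the second blob wraps the first blob's existing buffer instead of allocating its own, so the host fills the data once and both requests consume it.

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// Hypothetical stand-in for an input blob: a non-owning view over memory.
struct BlobView {
    float* data;
    std::size_t size;
};

// Wrap existing memory instead of allocating a fresh buffer -- the same idea
// as make_shared_blob(tensorDesc, ptr, size): every view built this way
// aliases the one buffer, so it is filled (and transferred) only once.
inline BlobView wrap_existing(float* ptr, std::size_t n) {
    return BlobView{ptr, n};
}

// Stand-in for running inference: just sums the input.
inline float run_net(const BlobView& in) {
    return std::accumulate(in.data, in.data + in.size, 0.0f);
}
```

Both "networks" see identical data through aliased pointers, which is why writing `dat` once suffices in the snippet above.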
Have a nice one
/brian
Hi brian,
I am glad you could figure this out!
Please, feel free to reach out again if you have additional questions.
Best Regards,
David
