I created a C++ DLL project (VS2017) that calls the Inference Engine to load the network and weights. But when I use a simple example to call the DLL, it always fails in the ReadNetwork function.
For example, in my dll.cpp:
network_reader.ReadNetwork(face_model); ===> always fails at this step.
network_reader.ReadWeights(face_model.substr(0, face_model.size() - 4) + ".bin");
CNNNetwork network = network_reader.getNetwork();
I can get the InferencePlugin, but I can't initialize the network from the DLL.
It seems that network_reader initializes a variable [std::shared_ptr&lt;ICNNNetReader&gt; actual], but it always gets a failed status (-1).
Has anyone successfully used a DLL to call the Inference Engine? Any suggestions for solving this problem?
OpenVino version: [2018.5.456]
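(Editor's note: one common cause of a generic -1 status when the same code runs inside a DLL is path resolution - a DLL runs with the *host* process's working directory, so relative model paths that worked in the standalone sample can stop resolving. A minimal self-contained pre-check; the helper name is my own invention, not OpenVino API:)

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Hypothetical sanity-check helper: verify that the .xml and the
// derived .bin both resolve before handing them to the reader.
// A DLL inherits the host process's working directory, so relative
// paths that worked in the sample .exe may not resolve here.
static bool model_files_exist(const std::string& xml_path) {
    const std::string bin_path =
        xml_path.substr(0, xml_path.size() - 4) + ".bin";
    bool ok = true;
    if (!std::ifstream(xml_path)) {
        std::cerr << "missing: " << xml_path << "\n";
        ok = false;
    }
    if (!std::ifstream(bin_path)) {
        std::cerr << "missing: " << bin_path << "\n";
        ok = false;
    }
    return ok;
}
```

If this prints "missing: ..." when called from the DLL but not from the sample executable, the failure is a path issue rather than an Inference Engine one.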
In my experience it's best to start with an existing OpenVino sample and add changes incrementally. In other words, it's not advisable to write a new Inference Engine application from scratch straightaway. Why not simply copy the classification_sample folder into a new one, modify that code, and then re-run create_msvc2017_solution.bat? If you follow this approach, there is less chance of errors.
Please refer to this online documentation, where you'll find "Common Workflow" for Inference Engine Development:
Thanks for using OpenVino!
Thanks for your reply. Yes, I created my project from the OpenVino sample, and it works well as an executable. But we want to export the project's functions as a DLL for use from C#. So I wrote a simple function in the DLL that only loads the network and weights, to test the whole workflow.
I also referenced the configuration from the OpenVino sample, but it always fails when I use network_reader to load the network.
Does OpenVino have any restrictions on being used from a DLL?
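(Editor's note: a common pattern for a C#-facing DLL like this is to keep a plain C ABI at the boundary and convert any Inference Engine exception into a status code, since C++ exceptions cannot cross into C#. A minimal sketch - the exported function name is hypothetical, and the OpenVino calls are shown as comments so the fragment stands alone:)

```cpp
#include <cstring>
#include <exception>
#include <stdexcept>
#include <string>

#ifdef _WIN32
#define DLL_EXPORT extern "C" __declspec(dllexport)
#else
#define DLL_EXPORT extern "C"
#endif

// Hypothetical exported entry point for P/Invoke: report failures
// through a status code and an error buffer instead of letting C++
// exceptions escape the DLL boundary (C# cannot catch them).
DLL_EXPORT int LoadFaceModel(const char* xml_path,
                             char* err_buf, int err_buf_len) {
    try {
        std::string model(xml_path ? xml_path : "");
        if (model.empty()) throw std::runtime_error("empty model path");
        // network_reader.ReadNetwork(model);
        // network_reader.ReadWeights(model.substr(0, model.size() - 4) + ".bin");
        // CNNNetwork network = network_reader.getNetwork();
        return 0;   // success
    } catch (const std::exception& e) {
        if (err_buf && err_buf_len > 0) {
            std::strncpy(err_buf, e.what(), err_buf_len - 1);
            err_buf[err_buf_len - 1] = '\0';
        }
        return -1;  // failure; details are in err_buf
    }
}
```

With this shape, the C# side can declare the function via DllImport and read the error text when the return value is non-zero, instead of seeing only a bare -1.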
OK. I understand what you mean - you are trying to create a DLL using the Inference Engine libraries (so that the DLL is callable from C#), and you are experiencing errors. Am I correct in understanding your issue?
We released a new version, OpenVino 2019 R1, this week - can you kindly try your code with the new release? Please report the results here. If it still fails for you, I will PM you so that you can send me a zip of your DLL export code.
Thanks for your help, and sorry for the late reply. Your description of the issue is correct.
We found that the library version causes this issue: when we use version [2018.5.445] instead of [2018.5.456], it works well.
But we are experiencing another error related to batch size. We used the official TensorFlow classification model (mobilenet_v1_1.0_224) and set the batch size on the network to more than one. In the DLL, the plugin always fails to load a network whose batch size has been set above one (e.g. setBatchSize(16)). But if we set the batch size to one, it works well (in the DLL).
We also used the "hello classification" sample to load the same model with a batch size greater than one, and that works too.
We have tried the latest release (2019 R1), and it also fails in this case (using the DLL to load the model with a batch size greater than one). Any suggestions for this issue?
Dear kao, mars,
Please read my response to this forum post:
I recommended that the poster experiment with benchmark_app at different batch sizes. If benchmark_app works with different batch sizes, then your code should work with different batch sizes too.
Thanks for your reply. I will try the example to test my code.
Another question: when we try to enable dynamic batching with the config key KEY_DYN_BATCH_ENABLED, the TensorFlow classification models (the official TF models, e.g. mobilenet) always fail with "MKLDNNGraph::CreateGraph: such topology cannot be compiled for dynamic batch!".
But the Intel pretrained Caffe models seem to work well. Does OpenVino have a list of TensorFlow models that support dynamic batch size?
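(Editor's note: for anyone comparing notes on the same error, dynamic batching in this API generation is a two-step configuration - enable it in the plugin config at load time, then shrink the batch per request. A non-standalone sketch of that configuration against the 2019 R1 C++ API:)

```cpp
// Sketch only: enable dynamic batching at load time, then choose a
// per-request batch no larger than the network's configured batch.
std::map<std::string, std::string> config = {
    { InferenceEngine::PluginConfigParams::KEY_DYN_BATCH_ENABLED,
      InferenceEngine::PluginConfigParams::YES } };
network.setBatchSize(16);                        // upper bound
auto executable = plugin.LoadNetwork(network, config);
auto request = executable.CreateInferRequest();
request.SetBatch(4);                             // dynamic batch <= 16
```

The "cannot be compiled for dynamic batch" error is raised at the LoadNetwork step above when the topology contains a layer the CPU plugin cannot re-batch dynamically, independent of how the request is configured afterwards.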