Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

GNAPluginNS::ModelQuantizer exception in [GNAPlugin]

Karol_D_Intel
Employee

Hi All, 

   I'm getting a very strange exception when I try to call LoadNetwork() on my inference engine object. When I catch the exception and print it, all I manage to get is:

[GNAPlugin] in function std::shared_ptr<InferenceEngine::ICNNNetwork> GNAPluginNS::ModelQuantizer<GNAPluginNS::details::QuantPair<GNAPluginNS::details::QuantI16, GNAPluginNS::details::QuantI16>>::quantize<lambda [](std::shared_ptr<InferenceEngine::ICNNNet
../include/details/ie_exception_conversion.hpp:71

The code I'm using for printing the exception is pretty basic:

    try
    {
        // loading the model here
    }
    catch (std::exception& ex)
    {
        printf("Exception: %s", ex.what());
        return 1;
    }

Unfortunately, that does not tell me much :( Any ideas on how I can get more information from the GNA plugin about what is wrong with the model I'm trying to load?
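For reference, here is a minimal self-contained sketch of the snippet above. It assumes the DLDT-era API (CNNNetReader, InferenceEngine::Core) and hypothetical model.xml / model.bin file names. The [GNAPlugin] message shown earlier is the what() string of InferenceEngine::details::InferenceEngineException (the type THROW_IE_EXCEPTION raises), so it can also be caught explicitly ahead of std::exception:

    #include <cstdio>
    #include <exception>

    #include <inference_engine.hpp>
    #include <details/ie_exception.hpp>  // InferenceEngine::details::InferenceEngineException

    int main()
    {
        try
        {
            // Hypothetical file names; substitute your own IR model.
            InferenceEngine::CNNNetReader reader;
            reader.ReadNetwork("model.xml");
            reader.ReadWeights("model.bin");

            InferenceEngine::Core core;
            auto executable = core.LoadNetwork(reader.getNetwork(), "GNA");
        }
        catch (const InferenceEngine::details::InferenceEngineException& ex)
        {
            // Its what() string is the "[GNAPlugin] ... ie_exception_conversion.hpp:71"
            // text shown above.
            printf("IE exception: %s\n", ex.what());
            return 1;
        }
        catch (const std::exception& ex)
        {
            printf("Exception: %s\n", ex.what());
            return 1;
        }
        return 0;
    }

Both handlers print the same what() text here; the value of the sketch is mainly as a self-contained program to step through under a debugger.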

Regards, 

Karol

 

Shubha_R_Intel
Employee

Dear Karol Duzinkiewicz,

You can build the open-source DLDT version of the Inference Engine in the Debug configuration and step through the code (see the build sketch below). That will at least let you root-cause the failure down to the lowest-level Inference Engine code. Unfortunately, the GNA plugin is not open-sourced; if it were, you could debug down into the plugin code as well. Plugin source code is available for virtually all of the other plugins, just not yet for GNA.
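For example (a sketch only, assuming the opencv/dldt repository of that era and the usual build prerequisites such as CMake and a C++ toolchain):

    # clone the open-source DLDT sources and pull in their submodules
    git clone https://github.com/opencv/dldt.git
    cd dldt/inference-engine
    git submodule update --init --recursive

    # configure and build the Inference Engine with debug symbols
    mkdir build && cd build
    cmake -DCMAKE_BUILD_TYPE=Debug ..
    make -j"$(nproc)"

With a Debug build you can set a breakpoint at the throw site reported in the message (ie_exception_conversion.hpp:71) and inspect the full exception before it reaches your catch block.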

Hope it helps,

Thanks,

Shubha
