Application Acceleration With FPGAs
Programmable Acceleration Cards (PACs), DCP, FPGA AI Suite, Software Stack, and Reference Designs

Is there any more detailed source code for LoadNetwork in the Inference Engine?

YJing8
Novice

"ExecutableNetwork Core::LoadNetwork(CNNNetwork network, const std::string & deviceName, const std::map<std::string, std::string> & config)"

This function simply returns "_impl->GetCPPPluginByName(deviceName_).LoadNetwork(network, config_);"


JohnT_Intel
Employee

Hi,

 

LoadNetwork is used to load the network for your application. You may refer to https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1InferencePlugin.html#a0ca00d832aa35ecdefdfb456b62e51d4 for details on each part of the source code.
