Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

load converted facenet model failed.

Wei__Wayne
Beginner

Windows 10 64-bit, Visual Studio 2017 Community.

I'm working on facenet with the OpenVINO R3 and R4 releases. I converted one model (20180402-114759) to test, but when I use the following code to load the model (calling LoadNetwork() in debug mode) I get an internal ie_exception "Unknown exception". When I continue to debug, it jumps into LoadNetwork()'s "if (ret.get() == nullptr) THROW_IE_EXCEPTION << "Internal error: pointer to executable network is null";".

I want to know what is wrong with my code, or did I forget something else?

code:

int FaceRecognitionClass::initialize(std::string modelfile){
    InferenceEngine::PluginDispatcher dispatcher({""});
    //InferencePlugin plugin(dispatcher.getPluginByDevice("CPU")); 
#if 1
    plugin=dispatcher.getPluginByDevice("CPU"); 
#else
    plugin = dispatcher.getPluginByDevice("GPU");
#endif
    cout<< "============Initialize FaceRecognition ================="<<endl;
    try {
        networkReader.ReadNetwork(modelfile);
    }
    catch (const InferenceEngineException &ex) {
        cerr << "Failed to load network: " << ex.what() << endl;
        return 1;
    }

    cout << "Network loaded." << endl;
    auto pos=modelfile.rfind('.');
    if (pos !=string::npos) {
        string binFileName=modelfile.substr(0,pos)+".bin";
        std::cout<<"binFileName="<<binFileName<<std::endl;
        networkReader.ReadWeights(binFileName.c_str());
    }
    else {
        cerr << "Failed to load weights: no extension in " << modelfile << endl;
        return 1;
    }
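For reference, the .bin derivation above is just an extension swap on the model path; a standalone sketch of the same logic:

```cpp
#include <string>

// Derive the weights (.bin) file name from the model (.xml) path by
// replacing everything after the last '.'; returns "" when there is no dot.
std::string weights_path(const std::string &modelfile) {
    auto pos = modelfile.rfind('.');
    if (pos == std::string::npos) return "";
    return modelfile.substr(0, pos) + ".bin";
}
```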

    auto network = networkReader.getNetwork();

    // --------------------
    // Set batch size
    // --------------------
    networkReader.getNetwork().setBatchSize(1);
    size_t batchSize = network.getBatchSize();

    cout << "Batch size = " << batchSize << endl;

    //----------------------------------------------------------------------------
    //  Inference engine input setup
    //----------------------------------------------------------------------------

    cout << "Setting-up input, output blobs..." << endl;

    // ---------------
    // set input configuration
    // ---------------
    //InferenceEngine::InputsDataMap input_info(network.getInputsInfo());
    input_info=network.getInputsInfo();
    InferenceEngine::SizeVector inputDims;
    plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());

    if (input_info.size() != 1) {
        cout << "This sample accepts networks having only one input." << endl;
        return 1;
    }

    for (auto &item : input_info) {
        auto input_data = item.second;
        input_data->setPrecision(Precision::FP32);
        input_data->setLayout(Layout::NCHW);
        inputDims=input_data->getDims();
    }
    cout << "inputDims=";
    for (size_t i=0; i<inputDims.size(); i++) {
        cout << inputDims[i] << " ";
    }
    cout << endl;

    const int infer_width=inputDims[0];
    const int infer_height=inputDims[1];
    const int num_channels=inputDims[2];
    const int channel_size=infer_width*infer_height;
    const int full_image_size=channel_size*num_channels;

    /** Get information about topology outputs **/
    output_info=network.getOutputsInfo();
    InferenceEngine::SizeVector outputDims;
    for (auto &item : output_info) {
        auto output_data = item.second;
        output_data->setPrecision(Precision::FP32);
        output_data->setLayout(Layout::NCHW);
        outputDims=output_data->getDims();
    }
    cout << "outputDims=";
    for (size_t i=0; i<outputDims.size(); i++) {
        cout << outputDims[i] << " ";
    }
    cout << endl;

    //const int output_data_size=outputDims[1]*outputDims[2]*outputDims[3];
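    // (The flat output size is just the product of the dims; a small
    //  stdlib-only sketch of the computation, with made-up dim values:)
    //
    //     #include <cstddef>
    //     #include <functional>
    //     #include <numeric>
    //     #include <vector>
    //
    //     // Flat element count of a blob = product of its dimensions.
    //     std::size_t flat_size(const std::vector<std::size_t> &dims) {
    //         return std::accumulate(dims.begin(), dims.end(),
    //                                static_cast<std::size_t>(1),
    //                                std::multiplies<std::size_t>());
    //     }

```cpp
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// Flat element count of a blob = product of its dimensions.
std::size_t flat_size(const std::vector<std::size_t> &dims) {
    return std::accumulate(dims.begin(), dims.end(),
                           static_cast<std::size_t>(1),
                           std::multiplies<std::size_t>());
}
```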

    // --------------------------------------------------------------------------
    // Load model into plugin
    // --------------------------------------------------------------------------
    cout << "Loading model to plugin..." << endl;

    std::map<std::string, std::string> config;
    config[PluginConfigParams::KEY_DYN_BATCH_ENABLED] = PluginConfigParams::YES;

    executable_network = plugin.LoadNetwork(network, config);

    .........

}
