Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

TF EfficientNet OpenVino model conversion issue

B__M
Beginner
2,019 Views

I am trying to convert the EfficientNet TensorFlow model https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckptsaug/efficientnet-b7.tar.gz from
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet using the following command:

python mo_tf.py --input_meta_graph efficientnet-b7\model.ckpt.meta

But it generates the following error:

[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.output_cut.OutputCut'>): Graph contains 0 node after executing <class 'extensions.front.output_cut.OutputCut'>. It considered as error because resulting IR will be empty which is not usual

Please advise. Thanks.

0 Kudos
1 Solution
6 Replies
Shubha_R_Intel
Employee
2,020 Views

Dear B, M,

I reproduced the same error as you.

It seems to me that the failure occurred during the "freezing" stage. Keep in mind that https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py is a TensorFlow tool, not an OpenVino tool. Everything that happens right before the error is about freezing the graph; see the node listing below. Also note that EfficientNet is not an OpenVino validated and supported TensorFlow model. Here is a Supported OpenVino List of Tensorflow models, and though we've added support for several new models, EfficientNet is not one of them.

That said, if you can manage to use the TensorFlow freeze_graph.py tool to create a frozen .pb from the checkpoint, you may still be able to get Model Optimizer to work. I can't guarantee this, but it's certainly possible, since it looks to me that the failure occurred during the freezing stage. A rough sketch of that approach is included after the node listing below.

save_1/Identity_949
save_1/Identity_95
save_1/Identity_950
save_1/Identity_951
save_1/Identity_952
save_1/Identity_953
save_1/Identity_954
save_1/Identity_955
save_1/Identity_956
save_1/Identity_957
save_1/Identity_958
save_1/Identity_959
save_1/Identity_96
save_1/Identity_960
save_1/Identity_961
save_1/Identity_962
save_1/Identity_963
save_1/Identity_964
save_1/Identity_965
save_1/Identity_966
save_1/Identity_967
save_1/Identity_968
save_1/Identity_969
save_1/Identity_97
save_1/Identity_970
save_1/Identity_971
save_1/Identity_972
save_1/Identity_973
save_1/Identity_974
save_1/Identity_975
save_1/Identity_976
save_1/Identity_977
save_1/Identity_978
save_1/Identity_979
save_1/Identity_98
save_1/Identity_980
save_1/Identity_981
save_1/Identity_982
save_1/Identity_983
save_1/Identity_984
save_1/Identity_985
save_1/Identity_986
save_1/Identity_987
save_1/Identity_988
save_1/Identity_989
save_1/Identity_99
save_1/Identity_990
save_1/Identity_991
save_1/Identity_992
save_1/Identity_993
save_1/Identity_994
save_1/Identity_995
save_1/Identity_996
save_1/Identity_997
save_1/Identity_998
save_1/Identity_999
save_1/RestoreV2
save_1/RestoreV2/shape_and_slices
save_1/RestoreV2/tensor_names
save_1/SaveV2
save_1/SaveV2/shape_and_slices
save_1/SaveV2/tensor_names
save_1/control_dependency
save_1/filename
save_1/filename/input
sub
truediv
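
For reference, here is a minimal sketch of that freezing approach. It is not validated on this model, and the output node name "logits" is an assumption; inspect the graph to find the real output node before using it.

# Sketch only: freeze the EfficientNet-B7 checkpoint (TensorFlow 1.x API),
# then hand the frozen graph to Model Optimizer.
import tensorflow as tf

with tf.Session() as sess:
    # Load the graph definition and restore the trained weights
    saver = tf.train.import_meta_graph("efficientnet-b7/model.ckpt.meta")
    saver.restore(sess, "efficientnet-b7/model.ckpt")
    # Fold variables into constants; "logits" is a hypothetical output node name
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["logits"])
    with tf.gfile.GFile("efficientnet-b7_frozen.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())

# Then run Model Optimizer on the frozen graph instead of the meta graph
# (additional flags such as --input_shape may be needed):
#   python mo_tf.py --input_model efficientnet-b7_frozen.pb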

Hope it helps,

Shubha

0 Kudos
B__M
Beginner
2,020 Views

Thanks for your suggestion.

I was able to get the Model Optimizer to work with the EfficientNet ONNX version, following the instructions at https://github.com/lukemelas/EfficientNet-PyTorch.
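
Roughly, the export follows this pattern (a sketch only; the model variant, input resolution, and the set_swish call are assumptions based on that repository's README):

# Sketch of the PyTorch -> ONNX export for EfficientNet (details are assumptions)
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained("efficientnet-b7")
model.set_swish(memory_efficient=False)  # export-friendly Swish for ONNX tracing
model.eval()

dummy_input = torch.randn(1, 3, 600, 600)  # assumed B7 input resolution
torch.onnx.export(model, dummy_input, "efficientnet-b7.onnx")

# Then convert the ONNX file with Model Optimizer:
#   python mo.py --input_model efficientnet-b7.onnx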

But the hello_classification.exe sample then throws an "Unsupported primitive of type: Squeeze name: 941" error. I have attached the model files. Please advise.

0 Kudos
Shubha_R_Intel
Employee
2,020 Views

Dear B, M,

That's great! Yay for ONNX! Sure, let me debug your problem. I will get back to you on this forum thread. And thanks for your patience!

Shubha

 

0 Kudos
Shubha_R_Intel
Employee
2,021 Views

Dear B, M,

hello_classification.exe is actually incomplete. It is lacking this line:

ie.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>(), "CPU");

Here is a complete version of hello_classification code:

// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>
#include <memory>
#include <string>
#include <samples/common.hpp>

#ifdef UNICODE
#include <tchar.h>
#endif

#include <inference_engine.hpp>
#include <samples/ocv_common.hpp>
#include <samples/classification_results.h>
#include <ext_list.hpp>

using namespace InferenceEngine;

#ifndef UNICODE
#define tcout std::cout
#define _T(STR) STR
#else
#define tcout std::wcout
#endif

#ifndef UNICODE
int main(int argc, char *argv[]) {
#else
int wmain(int argc, wchar_t *argv[]) {
#endif
    try {
        // ------------------------------ Parsing and validation of input args ---------------------------------
        if (argc != 4) {
            tcout << _T("Usage : ./hello_classification <path_to_model> <path_to_image> <device_name>") << std::endl;
            return EXIT_FAILURE;
        }

        const file_name_t input_model{argv[1]};
        const file_name_t input_image_path{argv[2]};
        const std::string device_name{argv[3]};

        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 1. Load inference engine instance -------------------------------------
        Core ie;

        ie.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>(), "CPU");

        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
        CNNNetReader network_reader;
        network_reader.ReadNetwork(fileNameToString(input_model));
        network_reader.ReadWeights(fileNameToString(input_model).substr(0, input_model.size() - 4) + ".bin");
        network_reader.getNetwork().setBatchSize(1);
        CNNNetwork network = network_reader.getNetwork();
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 3. Configure input & output ---------------------------------------------
        // --------------------------- Prepare input blobs -----------------------------------------------------
        InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
        std::string input_name = network.getInputsInfo().begin()->first;

        /* Mark input as resizable by setting of a resize algorithm.
         * In this case we will be able to set an input blob of any shape to an infer request.
         * Resize and layout conversions are executed automatically during inference */
        input_info->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);
        input_info->setLayout(Layout::NHWC);
        input_info->setPrecision(Precision::U8);

        // --------------------------- Prepare output blobs ----------------------------------------------------
        DataPtr output_info = network.getOutputsInfo().begin()->second;
        std::string output_name = network.getOutputsInfo().begin()->first;

        output_info->setPrecision(Precision::FP32);
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 4. Loading model to the device ------------------------------------------
        ExecutableNetwork executable_network = ie.LoadNetwork(network, device_name);
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 5. Create infer request -------------------------------------------------
        InferRequest infer_request = executable_network.CreateInferRequest();
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 6. Prepare input --------------------------------------------------------
        /* Read input image to a blob and set it to an infer request without resize and layout conversions. */
        cv::Mat image = cv::imread(input_image_path);
        Blob::Ptr imgBlob = wrapMat2Blob(image);  // just wrap Mat data by Blob::Ptr without allocating of new memory
        infer_request.SetBlob(input_name, imgBlob);  // infer_request accepts input blob of any size
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 7. Do inference --------------------------------------------------------
        /* Running the request synchronously */
        infer_request.Infer();
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 8. Process output ------------------------------------------------------
        Blob::Ptr output = infer_request.GetBlob(output_name);
        // Print classification results
        ClassificationResult classificationResult(output, {fileNameToString(input_image_path)});
        classificationResult.print();
        // -----------------------------------------------------------------------------------------------------
    } catch (const std::exception & ex) {
        std::cerr << ex.what() << std::endl;
        return EXIT_FAILURE;
    }
    std::cout << "This sample is an API example, for any performance measurements "
                 "please use the dedicated benchmark_app tool" << std::endl;
    return EXIT_SUCCESS;
}

 

If you use it, it will work.

Thanks,

Shubha

0 Kudos
6__liu
Beginner
2,020 Views

Hello. I meet the same problem when I load EfficientNet-b4. Loading resnet18, resnet34, and mobilenet_v2, which were converted from torchvision 0.4.0 via pytorch-onnx, works fine.

 

Here is the error: 

Unhandled exception at 0x00007FFF6686F218 (in openvino_test.exe): Microsoft C++ exception: InferenceEngine::details::InferenceEngineException at memory location 0x000000E8D9BEDAD0.

And the code that triggers it is:

    InferenceEngine::ExecutableNetwork executable_network = ie.LoadNetwork(nw, "CPU",config);
In it, following the tutorial, the config is:

    std::map<std::string, std::string> config = { { PluginConfigParams::KEY_PERF_COUNT, PluginConfigParams::YES } };
 

How can I manually change the config to improve inference performance? And is the line "ie.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>(), "CPU");" suitable for any device with an Intel CPU?

 

0 Kudos
Shubha_R_Intel
Employee
2,020 Views

Dear 6, liu,

I can't guarantee that any Intel CPU found in the universe will work, but see the document below:

OpenVino System Requirements Document

If your CPU is supported according to that document, then yes, it should work with OpenVino.

Thanks,

Shubha

0 Kudos