Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

compile_model() on Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz crashes (SIGSEGV)

freyr
Beginner

Hi,

I am running a code sample on an Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz, where it core dumps at runtime, but the same code runs fine on an AMD EPYC 7K83 64-Core Processor.

My code:

try {
    std::string model_full_path = "xxx";
    ov::Core core;
    auto network = core.read_model(model_full_path);
    compiled_model_ = make_shared<ov::CompiledModel>(core.compile_model(network, "CPU"));
} catch (const ov::Exception& exception) {
    std::cout << exception.what();
    return;
}

// infer code
When I run the code on the Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz, it crashes:

Program received signal SIGSEGV, Segmentation fault.

0x00007ffff33b9775 in ?? () from /lib/libopenvino.so.2310

Backtrace info:
(gdb) bt
#0 0x00007ffff33b9775 in ?? () from /lib/libopenvino.so.2310
#1 0x00007ffff3023c45 in ov::is_cpu_map_available() () from /lib/libopenvino.so.2310
#2 0x00007ffff7697d0c in ?? () from /lib/libopenvino_intel_cpu_plugin.so
#3 0x00007ffff2f82ae0 in InferenceEngine::IInferencePlugin::LoadNetwork(InferenceEngine::CNNNetwork const&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const&, std::shared_ptr<InferenceEngine::RemoteContext> const&) ()
from /lib/libopenvino.so.2310
#4 0x00007ffff2f733c8 in InferenceEngine::IInferencePlugin::LoadNetwork(InferenceEngine::CNNNetwork const&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const&) () from /lib/libopenvino.so.2310
#5 0x00007ffff2fb8f9d in ?? () from /lib/libopenvino.so.2310
#6 0x00007ffff2fa06c2 in ?? () from /lib/libopenvino.so.2310
#7 0x00007ffff2fa54d6 in ?? () from /lib/libopenvino.so.2310
#8 0x00007ffff2f6112c in ov::Core::compile_model(std::shared_ptr<ov::Model const> const&, std::string const&, std::map<std::string, ov::Any, std::less<std::string>, std::allocator<std::pair<std::string const, ov::Any> > > const&) () from /lib/libopenvino.so.2310

 

Can someone help decipher this issue?

Thanks

5 Replies
Megat_Intel
Moderator

Hi Freyr,

Thank you for reaching out to us.

 

The Segmentation Fault (SIGSEGV) error you received occurs when a program tries to access memory it is not allowed to access. To investigate this issue further, could you please provide us with the details below:

  • OS and version
  • How you installed the OpenVINO™ toolkit (e.g., build from source, archive)
  • Full code sample for replication purposes.

 

 

Regards,

Megat


freyr
Beginner

Hi Megat_Intel,

Thanks for the reply.

Here is the full code I ran, along with details of the failure.

class OpenvinoBackend {
public:
    void init() {
        try {
            std::string model_full_path = "xxx";
            ov::Core core;
            auto network = core.read_model(model_full_path);
            compiled_model_ = make_shared<ov::CompiledModel>(core.compile_model(network, "CPU"));
        } catch (const ov::Exception& exception) {
            std::cout << exception.what();
            return;
        }
    }

    void printInfo() {
        auto inputs = compiled_model->inputs();
        for (const ov::Output<const ov::Node> input : inputs) {
            std::cout << "    inputs" << std::endl;
            const std::string name = input.get_names().empty() ? "NONE" : input.get_any_name();
            std::cout << "        input name: " << name << std::endl;
            const ov::element::Type type = input.get_element_type();
            std::cout << "        input type: " << type << std::endl;
            const ov::Shape shape = input.get_shape();
            std::cout << "        input shape: " << shape << std::endl;
        }
        auto outputs = compiled_model->outputs();
        for (const ov::Output<const ov::Node> output : outputs) {
            std::cout << "    outputs" << std::endl;
            const std::string name = output.get_names().empty() ? "NONE" : output.get_any_name();
            std::cout << "        output name: " << name << std::endl;
            const ov::element::Type type = output.get_element_type();
            std::cout << "        output type: " << type << std::endl;
        }
    }

private:
    shared_ptr<ov::CompiledModel> compiled_model_;
};

int main() {
    OpenvinoBackend ovBackend;
    ovBackend.init(); // coredump here
    ovBackend.printInfo();
    return 0;
}

Here is a screenshot of the AUTO plugin debug info:

freyr_0-1697731488572.png

Before init() finishes, it core dumps inside compile_model(). In addition, the ov::Exception handler does not catch the panic; the process core dumps instead.

I want to know whether this issue is caused by the OpenVINO™ toolkit being incompatible with the CPU, or by the system I am using.

 

Megat_Intel
Moderator

Hi Freyr,

Thank you for the information provided.

 

For your information, I tried running the code you provided and received an error while building it. After some minor changes (my edited code is below), it built successfully and ran inference. My results are here:

  • CPU: Intel i7-11700K
  • OS: CentOS 7
  • OpenVINO™ version: 2023.1.0

 done.png

 

On another note, I tried running the inference with an incorrect model and received the error "Segmentation fault (core dumped)". To investigate further, can you please provide us with the model you used?

 core dumped.png

 

If you are unable to share your model, could you try running the code with the face-detection-adas-0001 model to verify whether the model itself is the cause?

 

#include "openvino/openvino.hpp"
using namespace std;
class OpenvinoBackend {
public:
    void init() {
        try {
            std::string model_full_path = "<model_xml_directory>";
            ov::Core core;
            auto network = core.read_model(model_full_path);
            compiled_model = make_shared<ov::CompiledModel>(core.compile_model(network, "CPU"));
        }
        catch (const ov::Exception& exception) {
            std::cout << exception.what();
            return;

        }
    }

    void printInfo() {
        auto inputs = compiled_model->inputs();
        for (const ov::Output<const ov::Node> input : inputs) {
            std::cout << "    inputs" << std::endl;
            const std::string name = input.get_names().empty() ? "NONE" : input.get_any_name();
            std::cout << "        input name: " << name << std::endl;
            const ov::element::Type type = input.get_element_type();
            std::cout << "        input type: " << type << std::endl;
            const ov::Shape shape = input.get_shape();
            std::cout << "        input shape: " << shape << std::endl;
        }
        auto outputs = compiled_model->outputs();
        for (const ov::Output<const ov::Node> output : outputs) {
            std::cout << "    outputs" << std::endl;
            const std::string name = output.get_names().empty() ? "NONE" : output.get_any_name();
            std::cout << "        output name: " << name << std::endl;
            const ov::element::Type type = output.get_element_type();
            std::cout << "        output type: " << type << std::endl;
        }
    }
private:
    shared_ptr<ov::CompiledModel> compiled_model;
};

int main() {
    OpenvinoBackend ovBackend;
    ovBackend.init(); // coredump here
    ovBackend.printInfo();
    return 0;
}

 

 

Regards,

Megat

 

freyr
Beginner
Thank you for your help. I don't think this problem is caused by OpenVINO™, but by the Tencent Cloud system. I asked the relevant personnel and found that their machine's clock frequency is much lower than normal, and there may be further issues.
Anyway, thank you very much for your help.
Megat_Intel
Moderator

Hi Freyr,

Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.

 

 

Regards,

Megat

