Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.
5765 Discussions

Running Efficientdet on NCS2 segfaults during inference.

milani__peter1
New Contributor I
462 Views

Hi

I've been able to train the EfficientDet model and convert it with the OpenVINO Model Optimizer.

This model works well on CPU. However, when I run the same code targeting the MYRIAD plugin for the NCS2 via

ie.LoadNetwork(network, "MYRIAD")

I get a segfault in inferenceRequestPtr.Infer().
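For context, the load-and-infer path is roughly the following (a sketch; model paths and variable names are illustrative, not my exact code):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;

    // Read the IR produced by the Model Optimizer (paths illustrative)
    InferenceEngine::CNNNetwork network =
        ie.ReadNetwork("efficientdet.xml", "efficientdet.bin");

    // Works with "CPU"; segfaults during Infer() with "MYRIAD"
    InferenceEngine::ExecutableNetwork exec =
        ie.LoadNetwork(network, "MYRIAD");

    InferenceEngine::InferRequest::Ptr inferenceRequestPtr =
        exec.CreateInferRequestPtr();
    inferenceRequestPtr->Infer();  // SIGSEGV here on the NCS2
    return 0;
}
```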

Doing a backtrace points to something deep in the Inference Engine. Here is the backtrace:

Thread 1 "detection_pipel" received signal SIGSEGV, Segmentation fault.
0x00007fffad7eac2d in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
(gdb) bt
#0 0x00007fffad7eac2d in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
#1 0x00007fffad7bd58b in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
#2 0x00007fffad7cecb0 in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
#3 0x00007fffad7d7df6 in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
#4 0x00007ffff7e4139b in InferenceEngine::CPUStreamsExecutor::Execute(std::function<void ()>) ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libinference_engine.so
#5 0x00007fffad7d41b1 in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
#6 0x00007fffad7d90ba in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
#7 0x00007fffad7cbb51 in ?? ()
from /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/libmyriadPlugin.so
#8 0x00007ffff7f6c336 in InferenceEngine::InferRequest::Infer() ()
from /opt/hovermap/lib/libdetection_engine.so
#9 0x00007ffff7f647e7 in vision_models::DetectionEngine::inferOnFrame(cv::Mat const&, double const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) () from /opt/hovermap/lib/libdetection_engine.so
#10 0x00007ffff7f8e1dc in vision_models::DetectionPipeline::imageCallback(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&) ()

 

Is there anything I should be doing, in addition to changing the device flag to MYRIAD, to enable the NCS2 to work?


5 Replies
Peh_Intel
Moderator
410 Views

Hi Peter,


In order to reproduce your issue, I tried running the object_detection_sample_ssd demo with the efficientdet-d4 model on CPU and on an Intel® Neural Compute Stick 2 (NCS2). The demo worked well on CPU but ran out of memory when running on NCS2. The segmentation fault you're facing is probably the same out-of-memory issue.


Besides, I also checked the efficientdet-d4 model with the benchmark_app tool and hit the same issue. You may also check your model with the benchmark_app tool, which is located in the following directory:

<installed_dir>/deployment_tools/tools/benchmark_tool
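For example, you can run it against your IR on the MYRIAD device like this (the model filename is illustrative; substitute your own):

```shell
cd <installed_dir>/deployment_tools/tools/benchmark_tool
python3 benchmark_app.py -m efficientdet.xml -d MYRIAD
```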


For your information, the MYRIAD plugin does not officially support EfficientDet models; you may refer to this documentation: https://docs.openvinotoolkit.org/2021.2/openvino_docs_IE_DG_supported_plugins_MYRIAD.html#supported_...



Regards,

Peh


milani__peter1
New Contributor I
402 Views

Hi Peh,

Thanks, I also came to the conclusion that the NCS2 was running out of memory. I therefore trained and tried an efficientdet-d0, which worked successfully on the NCS2. With async inference I was able to achieve about 7.8 FPS.
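The async path looks roughly like this (a sketch of the Inference Engine async API; the overlap with frame preparation is where the throughput gain comes from, and the names are illustrative):

```cpp
// Sketch: asynchronous inference with the OpenVINO 2021 Inference Engine API.
// 'exec' is the ExecutableNetwork loaded on "MYRIAD".
InferenceEngine::InferRequest::Ptr request = exec.CreateInferRequestPtr();

request->StartAsync();  // kick off inference on the NCS2 without blocking
// ... prepare the next frame on the CPU while the stick is busy ...
request->Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
// ... read output blobs, then reuse the request for the next frame ...
```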

I plan to work up the EfficientDet family next, probably starting at d2 or d3; I'll take it from you that d4 is a bridge too far. Lucky this was the case, as I was fast running out of options with Intel OpenVINO.

It seems OpenVINO is running out of officially supported models, at least for TensorFlow: most of the model zoo's supported models are only for TF 1.15, which is deprecated, and it is difficult to set up training environments for them, as many dependencies are becoming obsolete and it is hard to find tags that work for the TF model zoo repo.

cheers

Peter

Peh_Intel
Moderator
393 Views

Hi Peter,


Thanks for sharing your result in the OpenVINO™ Community.


Currently, OpenVINO™ support for TensorFlow 2 models is in preview or Beta. Our development teams are working hard to add support for more models.


Thanks for using OpenVINO™ and please stay tuned for future releases.



Regards,

Peh


milani__peter1
New Contributor I
386 Views

Good to hear, it's sorely needed. That said, I have found the training framework in automl/efficientdet easier to understand, and it records a few more useful metrics.

 

Just for reference, here are the frame rates (these are not as definitive as actual inference times) I've been able to achieve with a single NCS2. Happy to hear if anyone else has been able to do better.

EfficientDet-d0: 7.8 Hz

EfficientDet-d2-640: 2.366 Hz

EfficientDet-d3-640: 1.6 Hz

EfficientDet-d3: 1.0 Hz

Peh_Intel
Moderator
359 Views

Hi Peter,


We appreciate your sharing in the OpenVINO™ Community.


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.



Regards,

Peh

