Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

mo_tf.py crashed with illegal instruction

KLim4
Beginner

I'm running Intel OpenVINO R4 on a freshly installed Ubuntu 16.04 (kernel 4.8.0-36-generic) with an Intel(R) Pentium(R) CPU N4200 @ 1.10GHz (UP Squared board).

ubuntu@ubuntu-UP-APL01:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer$ python3 mo_tf.py --input_model ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb --input_checkpoint ssd_mobilenet_v2_coco_2018_03_29/checkpoint
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
    - Path for generated IR:     /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/.
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     1.4.292.6ef7232d
Illegal instruction (core dumped)

Linux ubuntu-UP-APL01 4.8.0-36-generic #36~16.04.1-Ubuntu SMP Sun Feb 5 09:39:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust smep erms mpx rdseed smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts
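
Note: the flags above list sse4_1 and sse4_2 but no avx. The stock TensorFlow pip wheels (1.6 and later) are compiled with AVX, so just importing TensorFlow on a non-AVX CPU like the N4200 dies with SIGILL before mo_tf.py does any real work. A quick check (a minimal sketch; that the crash happens inside the TensorFlow import is my assumption, not something the log above confirms):

import re
import subprocess

# Does this CPU report AVX at all? (Apollo Lake / N4200 does not.)
flags = open("/proc/cpuinfo").read()
print("AVX present:", bool(re.search(r"\bavx\b", flags)))

# Import TensorFlow in a child process so a SIGILL cannot kill this script.
# A return code of -4 means the child died from signal 4 (SIGILL).
ret = subprocess.run(["python3", "-c", "import tensorflow"]).returncode
print("import tensorflow exit code:", ret)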

Is this a software problem, or a hardware (processor) incompatibility?

KLim4
Beginner

Any update on this?

nikos1
Valued Contributor I

Hi Kim,

The Intel(R) Pentium(R) CPU N4200 should run FP32 inference fine on the CPU, and if you get OpenCL set up it can also run FP16 on the GPU with -d GPU.

I would try running the Model Optimizer again on a different desktop; if it succeeds, copy the IR (.bin and .xml) over and just run inference on the low-power N4200 device.
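
Something like this is enough to smoke-test the copied IR on the UP board (a rough sketch against the 2018 R4 Python API; the IR file names below are placeholders for whatever mo_tf.py produces on your desktop):

import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")  # or "GPU" to try FP16 through OpenCL
net = IENetwork(model="frozen_inference_graph.xml",
                weights="frozen_inference_graph.bin")
exec_net = plugin.load(network=net)

input_blob = next(iter(net.inputs))
# ssd_mobilenet_v2_coco expects 1x3x300x300 (NCHW); zeros are fine for a smoke test
dummy = np.zeros((1, 3, 300, 300), dtype=np.float32)
print(list(exec_net.infer({input_blob: dummy}).keys()))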

Cheers,

Nikos
