Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

NaN result error after a huge network was used

idata
Employee
2,571 Views

I tested two networks with the Python API:

 

・MNIST LeNet

 

https://github.com/ethereon/caffe-tensorflow/tree/master/examples/mnist

 

・OpenPose network (a large network)

 

https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/models/pose/coco/pose_deploy_linevec.prototxt

 

But for the OpenPose network, the result array returned by graph.GetResult() is all NaN.

 

Strangely, after that error I also get an all-NaN result array for the MNIST network, which had worked before.

 

It recovered after I ran the MNIST network again without the "-s 12" option.

 

Are there any restrictions on network size?

 

Does the behavior change once a large network has been loaded?
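
For reference, here is a minimal sketch of what I am doing with the NCSDK Python API (the graph file name, input shape, and preprocessing are placeholders, not my exact code):

    import numpy as np
    from mvnc import mvncapi as mvnc

    # Open the first attached Neural Compute Stick
    devices = mvnc.EnumerateDevices()
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load the graph compiled by mvNCCompile (e.g. with "-s 12")
    with open('graph', mode='rb') as f:
        graph_blob = f.read()
    graph = device.AllocateGraph(graph_blob)

    # Placeholder input; real code loads and preprocesses an image to the network's input size
    img = np.zeros((368, 368, 3), dtype=np.float32)

    graph.LoadTensor(img.astype(np.float16), 'user object')
    output, user_obj = graph.GetResult()
    print(np.isnan(output).any())  # prints True when the NaN problem occurs

    graph.DeallocateGraph()
    device.CloseDevice()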

 

Thanks for reading.

11 Replies
idata
Employee
2,257 Views

@skymapnote The graph file used for inference can be no larger than 320 MB. However, based on what you report, it looks like size may not be the root of the problem. It may instead be related to your other post: https://forums.intel.com/s/question/0D50P00004NM00USAT/float-16-or-32-about-pyexamplesclassificationexamplepy. Please revisit it and see if converting your input data to float16 resolves your NaN issue. Thanks.
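
A minimal sketch of that conversion, assuming img is your already-preprocessed input array and graph is your allocated NCSDK graph object:

    import numpy as np

    # The NCS runs inference in float16, so hand it a float16 tensor
    tensor = img.astype(np.float16)
    graph.LoadTensor(tensor, 'user object')
    output, user_obj = graph.GetResult()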

idata
Employee
2,257 Views

Did you ever manage to get OpenPose working?

idata
Employee
2,257 Views

I'm considering buying a Movidius stick for real-time OpenPose body tracking, and now I understand that this isn't possible (OpenPose can't run on Movidius).

 

Am I missing something?
idata
Employee
2,257 Views

The graph file for COCO at 320x240 input size is 100 MB.

I tried COCO with every available format (float16/32/64, int8/16/32/64),

and all returned the same NaN results.

Has anyone had success with OpenPose (COCO/MPI) on Movidius?
idata
Employee
2,257 Views

@ohadcn Any solution to your problem? I am not using OpenPose, but I am seeing the same NaN outputs for Xception and InceptionV3. Float16 input, still NaN!

idata
Employee
2,257 Views

I am experiencing NaN outputs with a DenseNet implementation as well. Please come up with a solution.

idata
Employee
2,257 Views

Same problem. I am now very disappointed with my MNCS.

idata
Employee
2,257 Views

@antoniocappiello @q914847518 Unfortunately, we haven't validated any DenseNet-based networks. We have noted that our users are interested in support for this, but we can't provide a roadmap/ETA at the moment.

idata
Employee
2,257 Views

I've had the same problem with DenseNet, and it seems to come from the deconvolutional layers; the NaNs and infs appear after the first one. It would be nice to be able to switch to upsampling + convolution instead, but upsampling isn't supported (just in case you're looking for more things for your backlog :smile: ).
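
For concreteness, the substitution I have in mind looks roughly like this in a Keras-style model definition (layer names, filter count, and kernel size are just examples, not the actual DenseNet code):

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = layers.Input(shape=(32, 32, 16))  # example feature map

    # Current decoder step: learned upsampling via a transposed convolution,
    # which is where the NaNs/infs first appear on the NCS
    # x = layers.Conv2DTranspose(64, kernel_size=3, strides=2, padding='same')(inputs)

    # Hoped-for workaround: fixed upsampling followed by a regular convolution
    # (blocked for now because the upsampling layer isn't supported by the NCSDK)
    x = layers.UpSampling2D(size=2)(inputs)
    x = layers.Conv2D(64, kernel_size=3, padding='same')(x)

    model = tf.keras.Model(inputs, x)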

idata
Employee
2,257 Views

@gatli Thanks for bringing this to our attention. We'll definitely keep this in mind.

idata
Employee
2,257 Views

@ohad Were you finally able to get it working?

 

@Tome_at_Intel I am trying to run OpenPose on the Movidius Neural Compute Stick. After I compile the prototxt and caffemodel files with mvNCCompile and then use the network for prediction, it returns NaN values! Am I missing something?
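
For reference, the compile step I'm running is along these lines (the weights file name, SHAVE count, and output name are just examples):

    mvNCCompile pose_deploy_linevec.prototxt -w pose_iter_440000.caffemodel -s 12 -o openpose.graph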