I get "raise Exception(Status(status)) Exception: mvncStatus.TIMEOUT" when I run graph.GetResult(). Does anyone know why?
from mvnc import mvncapi as mvnc
import numpy as np
import cv2
import os
import sys
import tensorflow as tf

IMAGE_PATH = os.path.dirname(os.path.realpath(__file__)) + '/disparity.png'
GRAPH_PATH = os.path.dirname(os.path.realpath(__file__)) + '/model/CCNN.graph'
PAD = 4

def do_initialize() -> (mvnc.Device, mvnc.Graph):
    # ***************************************************************
    # Get a list of ALL the sticks that are plugged in
    # ***************************************************************
    devices = mvnc.EnumerateDevices()
    if len(devices) == 0:
        print('Error - No devices found')
        return (None, None)
    device = mvnc.Device(devices[0])
    device.OpenDevice()
    # Load graph file
    try:
        with open(GRAPH_PATH, mode='rb') as f:
            in_memory_graph = f.read()
    except:
        print("Error reading graph file: " + GRAPH_PATH)
    graph = device.AllocateGraph(in_memory_graph)
    return device, graph

def forward(graph: mvnc.Graph, image_filename: str):
    print("[*] Forwarding....")
    # Load image from disk and preprocess it to prepare it for the network
    disparity = np.expand_dims(np.expand_dims(cv2.imread(image_filename, cv2.IMREAD_GRAYSCALE), 0), -1)
    padded_disparity = np.pad(disparity, ((0, 0), (PAD, PAD), (PAD, PAD), (0, 0)), "reflect")
    # Start the inference by sending it to the device/graph
    graph.LoadTensor(padded_disparity, "user object for this inference")
    output, userobj = graph.GetResult()
    result = (output * 65535.).astype(np.uint16)
    cv2.imwrite('test.png', result[0])
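To make the input shapes concrete, here is the preprocessing from forward() isolated with numpy only; the zero-filled array and its 240x320 size are stand-ins for whatever cv2.imread actually returns, just for illustration:

```python
import numpy as np

PAD = 4

# Stand-in for the (H, W) grayscale image cv2.imread would return;
# the 240x320 size is arbitrary, for illustration only.
image = np.zeros((240, 320), dtype=np.float32)

# Add batch and channel axes: (H, W) -> (1, H, W, 1)
disparity = np.expand_dims(np.expand_dims(image, 0), -1)

# Reflect-pad only the two spatial axes, leaving batch and channel alone
padded = np.pad(disparity, ((0, 0), (PAD, PAD), (PAD, PAD), (0, 0)), "reflect")

print(disparity.shape)  # (1, 240, 320, 1)
print(padded.shape)     # (1, 248, 328, 1)
```

So the tensor handed to LoadTensor is the padded NHWC array, 2*PAD larger than the image along each spatial axis.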
@tommy Which model are you using with this code? I'd like to reproduce the issue with your code if you can provide a link to the model.
After the timeout, is your device still visible in the system when you run the lsusb command? It also looks like there may be a main() that is missing; that would be helpful to have as well. Thanks.
Here you can find the model and the main too: https://github.com/tommyliverani/CCNN
Yes, the device is still visible after the timeout, and I have the same issue using mvNCCheck and mvNCProfile:
[Error 25] Myriad Error: "mvncStatus.TIMEOUT".
@Tome_at_Intel In your opinion, is there any problem in the main()?
@tommy I'm trying to recompile your graph file so that I can reproduce your issue, but I'm not able to. The command I'm using is mvNCCompile CCNN.meta -s 12 -on CCNN/prediction/conv/BiasAdd
and I'm running into an out-of-index error. Can you share how you compiled your graph file?
@Tome_at_Intel mvNCCompile CCNN.meta -s 12 -in=input -on=CCNN/prediction/conv/BiasAdd
@Tome_at_Intel
I discovered that the problems are in the 3x3 convolutions. Are those layers supported?
@tommy 3x3 convolutions are supported (See release notes at https://developer.movidius.com/docs).
The command you used isn't working for me. Which version of the NCSDK are you using?
@Tome_at_Intel ncsdk-1.12.00.01
@Tome_at_Intel I've discovered with mvNCProfile that the layers that raise the timeout have a higher inference time than those in the other networks I've worked on. Could this be the problem?
@tommy If the output of the model is too large, it could cause the timeout issue, in addition to NaNs in some intermediate layers. You can try adding an epsilon value to scale the values down in the batch norm layer. For example:
layer {
  name: "BatchNorm1"
  type: "BatchNorm"
  bottom: "Convolution1"
  top: "Convolution1"
  batch_norm_param {
    eps: 1e-2
  }
}
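The effect of eps can be sketched with numpy: batch norm computes y = (x - mean) / sqrt(var + eps), so a larger eps divides by a larger number and shrinks the normalized outputs. The activation values below are made up purely for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)  # made-up activations
mean, var = x.mean(), x.var()

def batchnorm(x, eps):
    # y = (x - mean) / sqrt(var + eps)
    return (x - mean) / np.sqrt(var + eps)

small_eps = batchnorm(x, 1e-5)
large_eps = batchnorm(x, 1e-2)

# A larger eps yields smaller-magnitude normalized outputs
print(np.abs(small_eps).max() > np.abs(large_eps).max())  # True
```

The shrinkage is mild, but when intermediate values are near the fp16 overflow limit it can be enough to keep them finite.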
You can also use mvNCCheck with the -on option to check the outputs of each layer of your network as well. More information on mvNCCheck can be found at: https://movidius.github.io/blog/mvNCCheck/ (scroll down to the debugging networks section).