I trained my TensorFlow model and converted it to IR (.xml + .bin) successfully. Now I want to use the Movidius Stick 2 to run inference with my model, and I'm wondering how to do it. I found some examples in the original installation package, including face_recognition, road_barrier_detection and so on; each package provides a C++ file that uses the Inference Engine API. Do I need to write a C++ file to use the Movidius Stick 2, or is there another way to use it? I also found another document showing that I need to convert the TensorFlow model to .graph, but it didn't mention how to use this graph model.
You will need to write your own C++ or Python code to initialize the device, load your neural network (.xml + .bin), and get the results. Check out the sample code included with the OpenVINO toolkit.
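For reference, here is a minimal Python sketch of those steps using the Inference Engine Python API (module and class names vary between OpenVINO releases, and model.xml / model.bin are placeholder file names for your own IR files):

import numpy as np
from openvino.inference_engine import IENetwork, IECore

# Read the IR produced by the Model Optimizer (placeholder file names).
ie = IECore()
net = IENetwork(model="model.xml", weights="model.bin")
input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))

# Load the network onto the Neural Compute Stick 2 ("MYRIAD" device).
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Build a dummy input with the expected N, C, H, W shape;
# replace this with your own preprocessed image data.
n, c, h, w = net.inputs[input_blob].shape
dummy = np.zeros((n, c, h, w), dtype=np.float32)

# Run inference and print the raw output tensor.
result = exec_net.infer(inputs={input_blob: dummy})
print(result[output_blob])

The C++ API follows the same sequence of steps, as shown in the sample applications you found.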
I recommend posting your OpenVINO questions on the OpenVINO forum.
Hope this helps!
Thanks Jesus! I also have a question about how to use the .graph file. I found another document showing that I need to convert the TensorFlow model to .graph, but it didn't mention how to use this graph model. Is it another way to use a TensorFlow model?
The .graph file format is used with the Intel Movidius Neural Compute SDK (NCSDK) for the original Neural Compute Stick. Since you are using the Intel Neural Compute Stick 2, use the Model Optimizer provided with the OpenVINO toolkit to convert your trained TensorFlow model to IR format (.xml & .bin).
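As an illustration, a typical Model Optimizer invocation for a frozen TensorFlow graph looks roughly like this (the file paths and input shape below are placeholders for your own model; FP16 is the data type the MYRIAD plugin expects):

python3 mo_tf.py \
    --input_model frozen_model.pb \
    --input_shape [1,224,224,3] \
    --data_type FP16 \
    --output_dir ./ir_model

This writes the .xml and .bin files into the output directory, which you can then load with the Inference Engine as in the earlier sketch.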
Please let me know if you have additional questions regarding the Intel Neural Compute SDK.
Thanks Jesus, I now fully understand the difference between the SDK and OpenVINO, but I have some questions about using multiple sticks. I have posted my question on the computer-vision forum; here is the link: https://software.intel.com/en-us/forums/computer-vision/topic/802145. Could you give me some suggestions?