Hello:
I trained a model with Keras based on VGG16, converted the .h5 to .pb, and tested the .pb model; its results are correct. I then used mo_tf.py to convert the .pb model to .xml and .bin, but when I run inference on the .xml/.bin, the result is sometimes completely different from the .pb model's prediction. I have read through the mo_tf.py parameters and added --input_shape [1,128,128,3], --mean_values [0,0,0] and --scale_values [1,1,1] to the command, and I do the preprocessing of the input picture in my code, but the inference result is still sometimes wrong. Could you give me some idea about this problem?
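For reference, the .h5-to-.pb freezing step was along the lines of the standard Keras/TF 1.x route below (a sketch with placeholder file names; the exact script may differ):

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K
from keras.models import load_model

K.set_learning_phase(0)                      # inference mode: fixes dropout/batch-norm behaviour
model = load_model("vgg16_transfer.h5")      # placeholder path to the trained .h5

sess = K.get_session()
frozen_graph = graph_util.convert_variables_to_constants(
    sess,
    sess.graph.as_graph_def(),
    [out.op.name for out in model.outputs])  # fold variables into constants
tf.train.write_graph(frozen_graph, ".", "frozen_vgg16.pb", as_text=False)
```

The IR was then generated with a command along the lines of `mo_tf.py --input_model frozen_vgg16.pb --input_shape [1,128,128,3] --mean_values [0,0,0] --scale_values [1,1,1]` (the model path is a placeholder).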
Hello Chu,
It is difficult to help as we do not have enough details. It could be, for example, that you need --reverse_input_channels (BGR to RGB), that the mean_values and scale_values are not correct, that the layout is not set properly (NHWC vs. NCHW), that the .pb was not frozen properly, or that the image-processing pipeline has an issue, etc. How different are the results? Here is a similar issue we worked on last week, for some ideas: https://software.intel.com/en-us/forums/computer-vision/topic/802631
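One way to narrow it down is to feed the exact same preprocessed tensor to the .h5 model and to the IR and compare the raw outputs. Here is a rough sketch (file paths, the BGR-to-RGB step and the mean/scale handling are placeholders to adapt to your training pipeline, and the Python API details depend on your OpenVINO version):

```python
import cv2
import numpy as np
from keras.models import load_model
from openvino.inference_engine import IENetwork, IECore

# One shared preprocessing path, so any mismatch comes from the conversion itself
img = cv2.imread("test.jpg")                       # OpenCV loads images as BGR
img = cv2.resize(img, (128, 128))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)         # keep only if training used RGB
img = img.astype(np.float32)                       # apply your training mean/scale here

# Keras / TensorFlow path: NHWC layout
keras_model = load_model("vgg16_transfer.h5")      # placeholder path
keras_out = keras_model.predict(img[np.newaxis, ...])

# Inference Engine path: the IR expects NCHW, so transpose the same tensor
ie = IECore()
net = IENetwork(model="frozen_vgg16.xml", weights="frozen_vgg16.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_name = next(iter(net.inputs))
ir_out = exec_net.infer(inputs={input_name: img.transpose(2, 0, 1)[np.newaxis, ...]})
ir_out = next(iter(ir_out.values()))

print("max abs difference:", np.abs(keras_out - ir_out).max())
```

If the outputs only diverge when the BGR-to-RGB line is removed, then --reverse_input_channels is the likely fix.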
Cheers,
nikos
