Hi,
I'm working on facenet with the OpenVINO R3 release.
I converted two model versions (20180402-114759 and 20170512-110547) to test, but the output matrices are not the same as the original facenet's on the same image.
The following are my test steps:
1. Converted the facenet model to OpenVINO IR:

```shell
~/intel/computer_vision_sdk/deployment_tools/model_optimizer$ python3 ./mo_tf.py \
    --input_model '/home/aim/facenet/20170512-110547/20170512-110547.pb' \
    --freeze_placeholder_with_value "phase_train->False"
```
2. Tested the model output against the original facenet:

```python
# ...
img = cv2.imread("face.jpg")
face = detectBYMTCNN(img)
resized = cv2.resize(face, (160, 160), interpolation=cv2.INTER_CUBIC)
reshaped = resized.reshape(-1, 3, 160, 160)

facenet_net = IENetwork.from_ir(model="2017_facenet.xml", weights="2017_facenet.bin")
facenet_plugin = IEPlugin(device="CPU")
facenet_plugin.add_cpu_extension(...)
facenet_2017_exec_net = facenet_plugin.load(network=facenet_net, num_requests=1)
facenet_res_2017 = facenet_2017_exec_net.infer({'batch_join/fifo_queue': reshaped})
embedding_2017 = facenet_res_2017['normalize']
# original_2017 is the pre-saved embedding from the original facenet model
dist_2017 = np.sqrt(np.sum(np.square(np.subtract(embedding_2017, original_2017))))
print(dist_2017)
```
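For reference, the layout handling I am attempting can be sketched like this. I am not certain whether reshape or transpose is correct for the converted IR (I am assuming it expects NCHW input), and this may itself be part of the problem, since OpenCV returns images in HWC layout:

```python
import numpy as np

def to_nchw(face_hwc):
    # face_hwc: HxWx3 image as returned by cv2 (HWC layout).
    # Transpose the axes to CHW and add a batch dimension; note that a plain
    # reshape(-1, 3, 160, 160) on an HWC array would interleave pixel values
    # rather than reorder the axes.
    chw = np.transpose(face_hwc, (2, 0, 1))
    return np.expand_dims(chw, 0).astype(np.float32)

img = np.zeros((160, 160, 3), dtype=np.uint8)  # stand-in for the cropped face
print(to_nchw(img).shape)  # (1, 3, 160, 160)
```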
The code may not be complete. I think the output matrix should be the same (or at least similar) for the same image between the two models (the original model and the OpenVINO model).
But dist_2017 is larger than 1 (i.e., the two faces are judged to be different), and the 2018 model gives the same result.
So I tested another scenario and found something strange: with the converted model, almost all of my test images are recognized as the same face.
Does anyone get the same issue?
Thanks.
David
Hi David,
Our validation accuracy standard is agreement to 10^(-5) between the TensorFlow blob output and the Inference Engine blob output, and I have just checked that facenet passes this test. The issue is probably in your measurement method: make sure you obtained original_2017 in the same way. Try comparing the TF and IE output blobs elementwise, up to 10^(-5), as we do.
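An elementwise check of the kind I mean could look like this (a minimal sketch; the blob values here are placeholders, not actual facenet outputs):

```python
import numpy as np

def blobs_match(tf_blob, ie_blob, atol=1e-5):
    # Elementwise comparison of the TensorFlow output blob and the
    # Inference Engine output blob, up to an absolute tolerance.
    diff = float(np.max(np.abs(np.asarray(tf_blob) - np.asarray(ie_blob))))
    return diff, diff <= atol

# toy example: two blobs differing by 2e-6, which is within the 1e-5 tolerance
a = np.zeros(128, dtype=np.float32)
diff, ok = blobs_match(a, a + 2e-6)
print(diff, ok)
```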
Best wishes,
Anna
Hi Anna,
I will check that!
Thanks.
David
Hi Anna and David,
I did the same thing as David to run inference with the facenet model, but the distances for both the 2017 and the 2018 model are larger than 1 for the same image on CPU. I also ran facenet's validate_on_lfw to check the accuracy of the IR model, and it is only 0.58. Apparently the embedding result is wrong.
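For context, the same/different decision I am using is just an L2-distance threshold on the embeddings (the 1.1 threshold here is my own assumed typical value for facenet, not something taken from validate_on_lfw):

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    # L2 distance between two (assumed L2-normalized) facenet embeddings;
    # the default threshold of 1.1 is an assumption and should be tuned
    # on a validation set.
    dist = np.sqrt(np.sum(np.square(emb_a - emb_b)))
    return dist, dist < threshold

# toy unit vectors: identical -> distance 0 (same face),
# orthogonal -> distance sqrt(2) ~ 1.414 (different faces)
e1 = np.eye(128)[0]
e2 = np.eye(128)[1]
print(same_person(e1, e1))
print(same_person(e1, e2))
```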
Any advice on this? Thanks.
Hi Feng,
If you have a question, could you create a new post?
Thanks,
Kind Regards,
Monique Jones