Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

[Facenet] different output matrix with R3

Wu__David
Beginner
799 Views

Hi,

I'm working on FaceNet with the OpenVINO R3 release.

I converted two model versions (20180402-114759 and 20170512-110547) to test, but on the same input image the output matrices do not match those of the original FaceNet (or the images are judged not to be the same).

The following are my test steps:

1. Convert the FaceNet model to OpenVINO

~/intel/computer_vision_sdk/deployment_tools/model_optimizer$ python3 ./mo_tf.py --input_model '/home/aim/facenet/20170512-110547/20170512-110547.pb' --freeze_placeholder_with_value "phase_train->False" 

2. Test the model output against the original FaceNet

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

......

img = cv2.imread("face.jpg")
face = detectBYMTCNN(img)
resized = cv2.resize(face, (160, 160), interpolation=cv2.INTER_CUBIC)
reshaped = resized.reshape(-1, 3, 160, 160)
facenet_net = IENetwork.from_ir(model="2017_facenet.xml", weights="2017_facenet.bin")
facenet_plugin = IEPlugin(device="CPU")
facenet_plugin.add_cpu_extension(......)
facenet_2017_exec_net = facenet_plugin.load(network=facenet_net, num_requests=1)
facenet_res_2017 = facenet_2017_exec_net.infer({'batch_join/fifo_queue': reshaped})

embedding_2017 = facenet_res_2017['normalize']

# original_2017 is the pre-saved data using original facenet model
dist_2017 = np.sqrt( np.sum( np.square( np.subtract( embedding_2017, original_2017 ) ) ) )
print( dist_2017 )

The code may not be complete. I think the output matrix should be the same (or at least similar) for the same image between the two models (the original model and the OpenVINO model).

But dist_2017 is larger than 1 (i.e., the two images are judged to be different faces), and the 2018 model gives the same result.
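One thing worth double-checking (my assumption, not something confirmed in this thread): cv2.imread/cv2.resize return the image in HWC layout, and reshape(-1, 3, 160, 160) only reinterprets the memory, it does not move the channel axis to the front the way the Inference Engine's NCHW input expects. A minimal sketch of the difference:

```python
import numpy as np

# Stand-in for a 160x160 BGR face crop as returned by cv2.imread/cv2.resize (HWC layout).
face_hwc = np.arange(160 * 160 * 3, dtype=np.float32).reshape(160, 160, 3)

# reshape() only reinterprets memory -- it does NOT move channels to the front.
wrong = face_hwc.reshape(-1, 3, 160, 160)

# transpose() actually converts HWC -> CHW before adding the batch dimension.
right = face_hwc.transpose(2, 0, 1)[np.newaxis, ...]

print(wrong.shape == right.shape)    # True
print(np.array_equal(wrong, right))  # False
```

If the original FaceNet pipeline also prewhitens the image and uses RGB order, the same preprocessing would need to be replicated before the OpenVINO inference for the embeddings to be comparable.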

 

So I tested another scenario and found something strange: with the converted model, almost all of my test images are recognized as the same face.

 

Does anyone get the same issue?

 

Thanks.

David

6 Replies
Anna_B_Intel
Employee

Hi David, 

Our validation accuracy standard is agreement to 10^(-5) when comparing the TF blob output and the Inference Engine blob output, and I just checked that facenet passes this test. The issue is probably in your measurement method: make sure original_2017 was obtained in the same way. Try comparing the TF and IE output blobs element-wise up to 10^(-5), as we do.
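A minimal sketch of that element-wise check (the blob names and shapes here are placeholders, not the actual FaceNet outputs):

```python
import numpy as np

# Placeholder blobs -- substitute the real TF output and the IE output here.
tf_blob = np.random.rand(1, 128).astype(np.float32)
ie_blob = tf_blob + np.random.uniform(-1e-6, 1e-6, tf_blob.shape).astype(np.float32)

# Element-wise comparison up to 10^-5, as in the validation described above.
max_abs_diff = np.max(np.abs(tf_blob - ie_blob))
print("max abs diff:", max_abs_diff)
print("match within 1e-5:", bool(np.allclose(tf_blob, ie_blob, atol=1e-5)))
```

If the maximum absolute difference is far above 1e-5, the discrepancy is in the conversion or preprocessing rather than in the distance computation.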

Best wishes, 

Anna

Wu__David
Beginner

Hi Anna, 

I will check that!

 

Thanks.

David

Yixiong__Feng
Beginner

Hi Anna and David,

I did the same thing as David to run inference with the facenet model, but the distances for both the 2017 and 2018 models are larger than 1 for the same image on CPU. I also ran FaceNet's validate_on_lfw to check the accuracy of the IR model, and it is only 0.58. Apparently the embedding result is wrong.

Any advice on this? Thanks.

Yixiong__Feng
Beginner
Dear Anna and David: I did the same thing as David with FaceNet, but dist_2017 and dist_2018 are both larger than 1 for the same image. I also validated the result on LFW, and it only reaches Accuracy: 0.61617. Apparently the embedding from the IR is wrong. Do you have any advice? Thanks
Monique_J_Intel
Employee

Hi Feng,

If you have a question, can you create a new post?

Thanks,

Kind Regards,

Monique Jones

Gdeep
Beginner
Why are we using the FaceNet model? Is there any difference between FaceNet and face-detection-retail-0004?