Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Face recognition using NCS2

KDeep
Beginner

Hi,

I am using the NCS2 stick with the latest OpenVINO toolkit (l_openvino_toolkit_p_2019.1.144) in an Ubuntu 16.04 environment. Can we implement face recognition on the NCS2 using OpenCV and TensorFlow? I've gone through many links where only face detection was implemented; I need face recognition. Is there any source I can go through?

 

Thanks & regards,

Deepika

 

14 Replies
Shubha_R_Intel
Employee

Dear kvgr, deepika,

Yes, we support TensorFlow FaceNet; here is the TensorFlow FaceNet documentation.

And here is the original GitHub FaceNet repo:

https://github.com/davidsandberg/facenet

Thanks,

Shubha

 

KDeep
Beginner

Hi Shubha,

Thanks for the reply. To my knowledge, this code will work on the host machine; I need source that works on the NCS2 (MYRIAD). I used the same kind of code, modified with the Inference Engine API, so that alignment and detection worked on the NCS2 stick, but I am stuck on the recognition phase.

Thanks,

Deepika

Shubha_R_Intel
Employee

Dearest kvgr, deepika,

If you follow the documentation I referred you to above, you can convert a FaceNet TensorFlow model to work on the NCS2. Just add the --data_type FP16 switch to your mo_tf.py command, since the NCS2 supports FP16 only.
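For example, the conversion command would look something like this (a rough sketch only; the .pb path and output directory are placeholders for your own frozen FaceNet graph):

python3 mo_tf.py \
    --input_model 20180402-114759.pb \
    --input_shape [1,160,160,3] \
    --freeze_placeholder_with_value "phase_train->False" \
    --data_type FP16 \
    --output_dir ./facenet_fp16 \
    --model_name facenet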

Hope it helps,

Thanks !

Shubha

 

KDeep
Beginner

Hi Shubha,

Yes, I've converted the model with the FP16 data type. As per the referred document, there is a script named predict.py; in that script, execution is done with TensorFlow graphs, so I need a script that runs on the NCS2 stick. Basically, I am stuck at the prediction step.

Thanks,

Deepika

Shubha_R_Intel
Employee

Dear kvgr, deepika,

I see no mention of predict.py in the Model Optimizer FaceNet OpenVINO document. If you are talking about https://github.com/davidsandberg/facenet/blob/master/contributed/predict.py, it's understandable why the NCS2 would not work with that script: the NCS2 VPU is likely not recognized by that predict.py, while I'm sure a CPU is.

If you want to run inference on the NCS2, then you must use OpenVINO.
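Very roughly, loading the converted IR and running it on the MYRIAD device looks like this (a minimal sketch using the 2019.x Python API; the model paths and image file are placeholders):

from openvino.inference_engine import IENetwork, IEPlugin
import cv2

# Placeholder paths - point these at your converted FaceNet IR
net = IENetwork(model="facenet.xml", weights="facenet.bin")
plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net)

input_name = next(iter(net.inputs))          # query the real input layer name
n, c, h, w = net.inputs[input_name].shape    # e.g. 1, 3, 160, 160

# Prepare one aligned face crop in NCHW order
img = cv2.imread("face.jpg")
img = cv2.resize(img, (w, h))
img = img.transpose((2, 0, 1)).reshape((n, c, h, w))

result = exec_net.infer({input_name: img})
embedding = next(iter(result.values()))      # the FaceNet embedding vector
print(embedding.shape)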

Hope it helps,

Thanks,

Shubha

 

KDeep
Beginner

Hi Shubha,

Yes, I have used OpenVINO. Here is the script which I'm trying to execute:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import pickle
import sys
import time
import PIL
import cv2
import numpy as np
import tensorflow as tf
from scipy import misc
import facenet
from align import detect_face
from openvino.inference_engine import IENetwork, IEPlugin
from openvino import inference_engine as ie

img_path=sys.argv[1]
modeldir = '/home/icsltd/Desktop/deepika/facenet-master/facenet-master/src/20180402-114759'
#classifier_filename = '/home/icsltd/Desktop/deepika/facenet-master/facenet-master/src/old_version.pkl'
classifier_filename ='/home/icsltd/Desktop/deepika/facenet-master/facenet-master/src/aligned_out_new/classifier-svm.pkl'
npy=''
train_img='/home/icsltd/Desktop/dataset/dataset'
face_path = '/opt/intel/openvino_2019.1.144/deployment_tools/tools/model_downloader/Retail/object_detection/face/sqnet1.0modif-ssd/0004/dldt/face-detection-retail-0004-fp16.xml'
weights_file = face_path[:face_path.rfind('.')] + '.bin'
net = ie.IENetwork(face_path, weights_file)
plugin = ie.IEPlugin("MYRIAD")
input_name = list(net.inputs.keys())[0]
output_name = list(net.inputs.keys())[0]
exec_net = plugin.load(net)

with tf.Graph().as_default():
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.6)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False))
    with sess.as_default():
        pnet, rnet, onet = detect_face.create_mtcnn(sess, modeldir)

        minsize = 20  # minimum size of face
        threshold = [0.6, 0.7, 0.7]  # three steps's threshold
        factor = 0.709  # scale factor
        margin = 32
        frame_interval = 3
        batch_size = 1000
        image_size = 160
        input_image_size = 160

        HumanNames = os.listdir(train_img)
        HumanNames.sort()

        classifier_filename_exp = os.path.expanduser(classifier_filename)
        with open(classifier_filename_exp, 'rb') as infile:
            (model, class_names,_) = pickle.load(infile)
            print('model===================================',model,class_names)
        # video_capture = cv2.VideoCapture("akshay_mov.mp4")
        c = 0


        print('Start Recognition!')
        prevTime = 0
        # ret, frame = video_capture.read()
        frame = cv2.imread(img_path,0)

        #frame = cv2.resize(frame, (0,0), fx=0.5, fy=0.5)    #resize frame (optional)

        curTime = time.time()+1    # calc fps
        timeF = frame_interval

        if (c % timeF == 0):
            find_results = []

            if frame.ndim == 2:
                frame = facenet.to_rgb(frame)
            frame = frame[:, :, 0:3]
            bounding_boxes, _ = detect_face.detect_face(frame, minsize, pnet, rnet, onet, threshold, factor)
            nrof_faces = bounding_boxes.shape[0]
            print('Face Detected: %d' % nrof_faces)

            if nrof_faces > 0:
                det = bounding_boxes[:, 0:4]
                img_size = np.asarray(frame.shape)[0:2]

                cropped = []
                scaled = []
                scaled_reshape = []
                bb = np.zeros((nrof_faces,4), dtype=np.int32)

                for i in range(nrof_faces):
                    emb_array = np.zeros((1, 512))

                    bb[0] = det[0]
                    bb[1] = det[1]
                    bb[2] = det[2]
                    bb[3] = det[3]

                    #inner exception
                    if bb[0] <= 0 or bb[1] <= 0 or bb[2] >= len(frame[0]) or bb[3] >= len(frame):
                        print('face is too close')
                        break

                    cropped.append(frame[bb[1]:bb[3], bb[0]:bb[2], :])
                    cropped = facenet.flip(cropped, False)
                    cropped = PIL.Image.fromarray(cropped)
                    scaled.append(np.array(cropped.resize((image_size, image_size) ,PIL.Image.BILINEAR)))
                    #scaled.append(misc.resize(cropped, (image_size, image_size), interp='bilinear'))
                    feed_dict = cv2.resize(scaled, (input_image_size,input_image_size),
                                           interpolation=cv2.INTER_CUBIC)
                    
                    n, c, h, w = net.inputs[input_name].shape
                    print( feed_dict.shape)
                    if feed_dict.shape[:-1] != (h, w):
                        feed_dict=np.resize(feed_dict,(n, c, h, w))
                    print(feed_dict)    
                    feed_dict={'data':feed_dict}
                    print(feed_dict)  
                    #t = time.time()
                    emb_array = exec_net.infer(inputs=feed_dict)
                    print(emb_array)
                    emb_outputs = list(emb_array.values())[0]
                    print(emb_outputs.shape)
                    emb_outputs=np.resize(emb_outputs,(200,7))
                    print(emb_outputs.shape)
                    #total_time += time.time() - t
                    #print('emb_array[0, :]====================',emb_array)
                    #print('embeddings========================',embeddings)
                    predictions = model.predict_proba(emb_outputs)
 
                    print('prediction==================================',predictions)
                    best_class_indices = np.argmax(predictions, axis=1)
                    # print(best_class_indices)
                    best_class_probabilities = predictions[np.arange(len(best_class_indices)), best_class_indices]
                    print(best_class_probabilities)
                    cv2.rectangle(frame, (bb[0], bb[1]), (bb[2], bb[3]), (0, 255, 0), 2)    #boxing face
                
                    #plot result idx under box
                    text_x = bb[0]
                    text_y = bb[3] + 20
                    print('Result Indices: ', best_class_indices[0])
                    #print(HumanNames)
                    for H_i in HumanNames:
                       
                        if HumanNames[best_class_indices[0]] == H_i and best_class_probabilities > 0.43:
                            print('H_i')
                            result_names = HumanNames[best_class_indices[0]]
                            print('result==================',result_names)
                            cv2.putText(frame, result_names, (text_x, text_y), cv2.FONT_HERSHEY_COMPLEX_SMALL,
                                        1, (0, 0, 255), thickness=1, lineType=1)
            else:
                print('Unable to align')
        cv2.imshow('Image', frame)
        cv2.imwrite('output/'+img_path.split('/')[-1],frame)
        if cv2.waitKey(2000) & 0xFF == ord('q'):
            sys.exit("Thanks")
cv2.destroyAllWindows()

Here is the error I got. I think I made a small mistake in the above script, so I'm not getting the expected shape; I got (200, 7). Could you help me solve this?

Traceback (most recent call last):
  File "face_recognition_image_test.py", line 126, in <module>
    predictions = model.predict_proba(emb_outputs)
  File "/usr/local/lib/python3.6/dist-packages/sklearn/svm/base.py", line 620, in _predict_proba
    X = self._validate_for_predict(X)
  File "/usr/local/lib/python3.6/dist-packages/sklearn/svm/base.py", line 474, in _validate_for_predict
    (n_features, self.shape_fit_[1]))
ValueError: X.shape[1] = 7 should be equal to 512, the number of features at training time

 

 

KDeep
Beginner

Hi Shubha,

Sorry, I made an error in the classification itself, and the same error carried over into the script I mentioned above. Please go through the attachment and let me know the changes. The attached file has classifier_train, which is the original, and classifier_train_test, which is modified. I tried to run the script on the stick, but the embeddings are not being generated properly for the dataset.

 

thanks

deepika

Shubha_R_Intel
Employee

Dear kvgr, deepika,

Unfortunately, I don't have the bandwidth to debug your code project. Here is my advice: compare your code to the code in deployment_tools\inference_engine\samples\object_detection_demo. Here is the documentation on object_detection_demo.

What image size does Model Optimizer think you are using? Add --log_level DEBUG to see the image size. Make sure that the image size is correct and matches what the FaceNet model was trained on.

The Facenet MO Doc clearly states the following:

Batch joining pattern transforms to placeholder with model default shape if --input_shape or --batch/-b was not provided. Otherwise, placeholder shape has custom parameters.

--freeze_placeholder_with_value "phase_train->False" to switch graph to inference mode

--batch/-b is applicable to override original network batch

--input_shape is applicable with or without --input

other options are applicable

 

Did you pass in an --input_shape? If you didn't, then Model Optimizer may assume something which may not be correct, and of course this will affect the accuracy of OpenVINO inference.
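A quick way to double-check (a small sketch; the IR paths are placeholders) is to print the input shape the converted IR actually expects and compare it against the 160x160 input size used elsewhere in this thread:

from openvino.inference_engine import IENetwork

# Placeholder paths - point these at your converted FaceNet IR
net = IENetwork(model="facenet.xml", weights="facenet.bin")
for name, info in net.inputs.items():
    print(name, info.shape)   # expect something like [1, 3, 160, 160]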

Thanks,

Shubha

 

 

KDeep
Beginner

Hi Shubha,

Thanks, that was helpful. I verified the shapes and converted the model as required, but the output is not as accurate as I expected. With the scripts I executed on the host machine, I got face-recognition accuracy of around 90%; using the stick, I am barely getting 30%. Then I checked with the sample script which is attached, and I found that after executing on the stick (exec_net.infer) I am getting zeros (0), maybe because the image data was missing. Could you confirm this and let me know the solution?

 

Thanks,

Deepika.

Shubha_R_Intel
Employee

Dear kvgr, deepika,

Please review the following posts. My answer to your question is similar:

https://software.intel.com/en-us/forums/computer-vision/topic/813268

https://software.intel.com/en-us/forums/computer-vision/topic/813255

Thanks for using OpenVINO!

Shubha

KDeep
Beginner

Hi Shubha,

I used this command to convert the model; I didn't understand how to pass the mean values and scale values:

sudo python3 mo_tf.py --input_model /home/icsltd/Desktop/deepika/facenet-master/facenet-master/src/20180402-114759/20180402-114759.pb --input_shape=[1,160,160,3] --freeze_placeholder_with_value "phase_train->False" --data_type FP16 --output_dir /home/icsltd/Desktop/deepika/facenet-master/facenet-master/ --model_name facenet

I got the same result.

I've gone through the above links, but I didn't get full clarity on preprocessing for the model. If there is any source/link that I can go through, could you forward it to me?

Thanks

deepika.

KDeep
Beginner

Hi Shubha,

I resolved the error with reference to the link below; it was helpful.

https://software.intel.com/en-us/forums/computer-vision/topic/802451

I need to know how much accuracy the OpenVINO toolkit (2019.1.144) gives for face recognition (using the MYRIAD plugin and MTCNN).

Thanks

Deepika.

Shubha_R_Intel
Employee

Dear kvgr, deepika,

This is great news. Thank you for sharing!

So you solved the problem by:

"The only solution is to train the model on Inception ResNet v2, or to use the Compute Stick Myriad 1 until Intel allows support for Inception ResNet v1 on the new Myriad devices."

Got it.

Thanks for sharing with the OpenVINO community!

Shubha

D__yugendra
Novice

Hey, kvgr Deepika,

I am using the FaceNet model with the CPU and the NCS2 stick. I have converted the model from .pb to .xml and .bin. I am using OpenVINO 2020.1.023 (l_openvino_toolkit_p_2020.1.023) in an Ubuntu 16.04 environment, but I am facing a problem. Please advise.

Thank you.

I am getting this error:

model=================================== SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape='ovr', degree=3, gamma='auto', kernel='linear',
  max_iter=-1, probability=True, random_state=None, shrinking=True,
  tol=0.001, verbose=False) ['Akshay Kumar', 'Nawazuddin Siddiqui', 'Salman Khan', 'Shahrukh Khan', 'Sunil Shetty', 'Sunny Deol', 'bounding boxes 51219.txt']
Start Recognition!
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 0
Unable to align
Face Detected: 1
(160, 160, 3)
Traceback (most recent call last):
  File "inference_Facenet_March.py", line 189, in <module>
    emb_array = exec_net.infer({'data':feed_dict})
  File "ie_api.pyx", line 420, in openvino.inference_engine.ie_api.ExecutableNetwork.infer
  File "ie_api.pyx", line 608, in openvino.inference_engine.ie_api.InferRequest.infer
  File "ie_api.pyx", line 610, in openvino.inference_engine.ie_api.InferRequest.infer
  File "ie_api.pyx", line 735, in openvino.inference_engine.ie_api.InferRequest._fill_inputs
AssertionError: No input with name data found in network
Segmentation fault (core dumped)
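One thing worth checking (an assumption, since the full script is not shown here): the AssertionError suggests the converted IR's input layer is not actually named 'data'. The real name can be queried from the network instead of being hardcoded, along these lines (model paths are placeholders):

from openvino.inference_engine import IENetwork, IECore

ie = IECore()
net = IENetwork(model="facenet.xml", weights="facenet.bin")   # placeholder paths
input_name = next(iter(net.inputs))                           # the IR's real input name
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# feed_dict is the preprocessed NCHW blob prepared earlier in the script
# emb_array = exec_net.infer({input_name: feed_dict})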
 
