Hi everyone,
We are excited to announce two new FaceNet examples (tensorflow/facenet and apps/video_face_matcher) that you can try out at https://github.com/movidius/ncappzoo.
Make sure that you have version 1.12 of the NCSDK installed on your machine (http://github.com/movidius/ncsdk/releases/latest).
You will also have to download the FaceNet model from David Sandberg's GitHub at https://github.com/davidsandberg/facenet. After accessing the site, scroll down to "Pre-trained Models" and click the "20170512-110547" link to download a zip file of the model. After downloading, place the zip file in the ncappzoo/tensorflow/facenet directory.
Thanks!
Hi,
I ran the above example and it worked fine. I want to leverage the above example to do my own training on a different image set. How can I achieve custom face detection?
Thanks!
Hi @mov_neural ,
The facenet example in the ncappzoo here: https://github.com/movidius/ncappzoo/tree/master/tensorflow/facenet isn't trained on specific faces, so it can't infer a classification for one of those faces. Instead, it is trained to find and quantify landmarks on faces in general. So if you have an image of a particular face you'd like to recognize, you can get an inference from that face and save the result as the control output. Once you have the control output, you can compare it with the inference output of any other face to determine whether the face matches the control output within some threshold. The example linked above includes Python code that demonstrates this; a minimal sketch of the idea follows.
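Something like this, assuming you already have the two embeddings from NCS inferences (the threshold value here is an assumption you would tune against your own images):

```python
import numpy as np

FACE_MATCH_THRESHOLD = 1.2  # assumed value; tune for your own data

def is_match(control_embedding, test_embedding):
    # Total squared difference between the control and test embeddings
    distance = np.sum(np.square(control_embedding - test_embedding))
    return distance < FACE_MATCH_THRESHOLD
```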
Neal
Hi @neal_at_intel
I've been trying to get this to run both on a Raspberry Pi and in an Ubuntu VM for about a day now. It's been a good learning experience, but I'm still not there. (I'm not asking for help in getting started, I still have learning and reading to do.)
In the meantime, though, it would be very helpful if you or someone else could post some details about what kind of performance one could expect on the Movidius NCS, e.g. how many milliseconds of NCS compute time are needed per frame.
I notice that the code in tensorflow/facenet/run.py resizes the images to 160x160 pixels - what kind of performance could one expect with something like 640x480 pixels?
Hi @nealatintel:
Not sure if I got it right. The results seem different from what I get using TensorFlow directly (https://github.com/compustar/ncappzoo/tree/ncsdk2/tensorflow/facenet):
compare_nc.py
Images:
0: elvis-presley-401920_640.jpg
1: neal_2017-12-19-155037.jpg
2: president-67550_640.jpg
3: trump.jpg
4: valid.jpg
Distance matrix
0 1 2 3 4
0 0.0000 0.6212 0.6725 0.7981 0.5387
1 0.6212 0.0000 0.8101 0.7106 0.5050
2 0.6725 0.8101 0.0000 0.6509 0.6273
3 0.7981 0.7106 0.6509 0.0000 0.6946
4 0.5387 0.5050 0.6273 0.6946 0.0000
compare_tf.py
Images:
0: elvis-presley-401920_640.jpg
1: neal_2017-12-19-155037.jpg
2: president-67550_640.jpg
3: trump.jpg
4: valid.jpg
Distance matrix
0 1 2 3 4
0 0.0000 1.4255 1.3354 1.3078 1.4498
1 1.4255 0.0000 1.5454 1.4255 0.6372
2 1.3354 1.5454 0.0000 1.2032 1.4949
3 1.3078 1.4255 1.2032 0.0000 1.4904
4 1.4498 0.6372 1.4949 1.4904 0.0000
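For reference, I assume both scripts compute pairwise Euclidean distances between the 128-d embeddings, along these lines (a sketch, not the actual script code):

```python
import numpy as np

def distance_matrix(embeddings):
    # embeddings: array of shape (n_images, 128), one row per image
    emb = np.asarray(embeddings)
    n = len(emb)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist[i, j] = np.sqrt(np.sum(np.square(emb[i] - emb[j])))
    return dist
```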
Any idea?
Shane
Hi @Johan,
For performance information on the facenet model, you might want to look at the ncappzoo/apps/benchmarkncs project (https://github.com/movidius/ncappzoo/tree/master/apps/benchmarkncs). This project outputs FPS numbers for the networks in the repository that take images as input. If multiple NCS devices are plugged in, it will give numbers for one device and for multiple devices.
As for the image resizing, the images are resized to match what the network expects. To get performance data on other image sizes, the network would need to be retrained with different-sized inputs. I'm not sure what the performance would be for that, but if you take it on, please reply back with the numbers you see, and also open a Pull Request for the ncappzoo with your new network! In the meantime, you can time a single inference yourself along the lines of the sketch below.
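A rough way to measure per-frame NCS compute time with the NCSDK 1.x Python API; the graph filename is an assumption, and the zeroed input stands in for a real preprocessed frame:

```python
import time
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load the compiled graph file (filename is an assumption)
with open('facenet_celeb_ncs.graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

img = np.zeros((160, 160, 3), dtype=np.float16)  # stand-in for a real frame

start = time.time()
graph.LoadTensor(img, None)    # send the frame to the NCS
output, _ = graph.GetResult()  # blocks until the embedding comes back
print('Inference took %.1f ms' % ((time.time() - start) * 1000))

graph.DeallocateGraph()
device.CloseDevice()
```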
Hi @shaneng,
I'm not sure off the top of my head what the difference is. If I get some time, I will investigate more closely, but in the meantime, if you or anyone else have any revelations, please reply back.
I was looking for a version of facenet that I could use on the Pi and that uses the latest version 2 SDK.
It looks like the examples use version 1; I can tell by the initialization.
The largest difference is how it loads the graph:
devices = mvnc.EnumerateDevices()
graph = device.AllocateGraph(graph_in_memory)
So is there an updated version that works with the version 2 SDK?
@chicagobob123 There is a facenet example for NCSDK2 available; however, the example uses the ncapi2_shim, which is a wrapper that lets NCSDK1 code run on NCSDK2. Look at lines 6 through 8 and you can see the import for the ncapi2_shim. This allows you to quickly enable your NCSDK1 apps to use NCSDK2.
More information on the ncapi2_shim wrapper can be found at https://github.com/movidius/ncappzoo/tree/ncsdk2/ncapi2_shim. Information on converting NCSDK1 code to NCSDK2 code without using the ncapi2_shim can be found at https://movidius.github.io/ncsdk/ncapi/python_api_migration.html and https://movidius.github.io/ncsdk/ncapi/c_api_migration.html.
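The import pattern is a minimal sketch like this (the relative path is an assumption about where your script sits inside the ncappzoo tree):

```python
import sys

# Make the shim importable; adjust the relative path for your app's location
sys.path.insert(0, '../../ncapi2_shim')
import mvnc_simple_api as mvnc  # NCSDK1-style API, backed by NCSDK2

# Existing NCSDK1-style code then works unchanged, e.g.:
devices = mvnc.EnumerateDevices()
```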
I see how you wrapped the API. Good to know.
Thanks.
Hi, I have read the code and found something. It uses cv2.resize() to convert the 640x480 image to 160x160 directly, without detecting and aligning faces, so the results may be bad.
I want to connect MTCNN to FaceNet, but compiling MTCNN generates two graph files while compiling FaceNet generates one, so it seems I would need three compute sticks. Should I build the complete connected network first and then generate a single graph? Could you help me with how to do this? Thank you so much.
Following along, I tried using the SDK, and the results with a live camera were not good; a blank wall was a face according to it.
Still looking into this example. I have to give a dog-and-pony show on this by Monday.
Could you please explain more about what is happening in this example, compared to the steps explained by Adam Geitgey?
1. Encode a picture using the HOG algorithm to create a simplified version of the image. Using this simplified image, find the part of the image that most looks like a generic HOG encoding of a face.
2. Figure out the pose of the face by finding the main landmarks in the face. Once we find those landmarks, use them to warp the image so that the eyes and mouth are centered.
3. Pass the centered face image through a neural network that knows how to measure features of the face. Save those 128 measurements.
4. Looking at all the faces we've measured in the past, see which person has the closest measurements to our face's measurements. That's our match!
My guess is that "only" step 3 ("Pass the centered face image through a neural network") is performed!?
Has anyone made this workflow complete? Like this, but running mostly on the Movidius?
https://github.com/ageitgey/face_recognition
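For reference, the complete four-step workflow in that library looks roughly like this (the image filenames are placeholders; steps 1 and 2 run in dlib on the CPU):

```python
import face_recognition

# Steps 1-3: detect, align, and encode the known face
known = face_recognition.load_image_file('known_person.jpg')
known_encoding = face_recognition.face_encodings(known)[0]

# Step 4: compare every face found in a new image against the known encoding
unknown = face_recognition.load_image_file('unknown.jpg')
for encoding in face_recognition.face_encodings(unknown):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print('Match!' if match else 'No match')
```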
Hi,
I want to build my own model like this, but for hands.
Can anyone help, please?
Hi,
I bought an NCS a few days ago for my pet project, and I was unpleasantly surprised by how poorly video_face_matcher_multipleFace performed. It could not distinguish me from my 6-year-old daughter. I started searching and found this post; @swe_jb's comment was the most interesting. _Disclaimer: I know nothing about ML and Python, so the code may be suboptimal and factually incorrect._
I found https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/ and copied the Haar cascades approach into video_face_matcher_multipleFace. The code can be found in the branch "ncsdk1-haar-cascades-video_face_matcher_multipleFace" (actual changes in commit 1fa478018545c587d8d4f7e375ff44e6ff1d8a2e). I can say it works much, much better: I now get a min_distance around 0.08-0.3 for the same set of validated_images. The drawback of this method is performance: I ran all my tests on a Raspberry Pi 3 B+, and the improved preprocess_image() takes 800-900 ms, so in real use the app runs at 0.6-0.9 FPS, which is very slow for me. The gist of the change is sketched below.
@swe_jb mentioned HOG, so I hope to find an example I can integrate and test. But so far I'm thinking about selling the NCS :(
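The cropping step described above, as a rough sketch (the cascade file ships with OpenCV; the helper name and the 160x160 target size are assumptions based on the facenet example):

```python
import cv2

# Haar cascade bundled with OpenCV for frontal face detection
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def preprocess_with_detection(frame):
    # Detect faces on a grayscale copy of the frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found; caller should skip this frame
    x, y, w, h = faces[0]  # use the first detected face
    crop = frame[y:y + h, x:x + w]
    return cv2.resize(crop, (160, 160))  # FaceNet's expected input size
```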
@Tome_at_Intel @neal_at_intel
Hi, I am trying to use the NCS to run inference with an FCN.
In my situation, the inference does not need to be real-time, so I set the input image size larger.
But every time the width or height exceeds 300, an error occurs: "dispatcherEventReceive:236 dispatcherEventReceive() Read failed -4".
Is it possible to run inference on an image whose width or height is larger than 300? 400x200 would also help.
Thanks!
Hi, I am trying to use facenet based on https://github.com/movidius/ncappzoo/tree/master/tensorflow/facenet. They trained the model for a single person, but I need to train it for multiple people. Is there any way to achieve this?
Hi,
How can we find the face location or coordinates?