I am struggling to run the security-picam.py example.
The error I am getting is "AttributeError: 'numpy.ndarray' object has no attribute 'show'".
Note that when I comment out the offending line (137) then I get the screenshots appearing in the _capture_ directory. But no annotated video appears.
I am using a Raspberry Pi 3 B+, a Pi Cam, and version 1 of the SDK. This is running in a terminal on the Raspbian Desktop. Note that the security-cam.py example runs fine on my Ubuntu machine.
Any suggestions for how to fix this? I just want to get this running with a Pi Camera :smile:
@MarkWest1972 , good catch! I developed security-picam.py on, and have always run it on, a headless system, so I never ran into this issue. I am unable to get to this code immediately, but try rerunning the script after commenting out lines 136 and 137. This should let it run, although the results won't be visualized on the display.
Based on the error you are reporting, I suspect the issue happens when there are zero detections, in which case `img` is a numpy array rather than an image object. Try converting `img` to an image object after line 86 and before line 100.
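To illustrate the suggested fix, here is a minimal sketch of the conversion, assuming the mismatch is simply that `img` is still a raw numpy array when `show()` is called. The zero-filled array is just a stand-in for a real captured frame:

```python
import numpy as np
from PIL import Image

# Stand-in for a captured frame with zero detections; in the real script
# `img` comes out of the camera/inference pipeline.
img = np.zeros((480, 640, 3), dtype=np.uint8)

# Only numpy arrays need converting; a PIL image passes through untouched.
if isinstance(img, np.ndarray):
    img = Image.fromarray(img)

# img is now a PIL.Image.Image, so img.show() no longer raises
# "AttributeError: 'numpy.ndarray' object has no attribute 'show'".
```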
Thanks for your reply!
Your assumption seems to be correct. Converting _img_ to a _PIL.Image.Image_ object removes the error, but there is still no image displayed on the Raspbian desktop…
Additional information: after some googling I installed ImageMagick. Now the show() command works, but instead of a real-time video I get lots of snapshots popping up on my desktop. All I want is the same behavior that the security-cam.py example shows when I run it on my Ubuntu machine…
Oh I agree with you 100% @chicagobob123 - it's a great example! Apologies to @AshwinVijayakumar if I came across as negative!
My only issue is that the Pi Camera version doesn't display the annotated images as a video when running in the Raspbian Desktop. This isn't strictly needed functionality, but it would be nice for comparing performance with the USB camera version.
Anyway I'm working on fixing this (I just need a free evening). As soon as I do I'll post the results here.
I will let you in on something I found out while working on this. The USB web camera performs better because GPU-to-system-memory transfer on the Pi is slow. Pi Cam frames land in GPU memory first, get copied to system memory where you modify them, and are then sent back to GPU memory for display. USB camera frames come straight into system memory, where you modify them before sending them to the GPU for display.
That's roughly what I have seen and found out.
@chicagobob123 Very interesting!
I got the annotated video display running with a Pi Camera and this seems to confirm what you are saying.
How I did it
First I made sure that OpenCV was installed on the Pi, along with the relevant dependencies.
My installation of the NCS SDK is API-only (and version 1), so I didn't have OpenCV installed. Building from source is a pain on the Pi, so I elected to install a pre-compiled version. This took around 5 minutes.
```shell
sudo apt-get install python-opencv
sudo pip3 install opencv-python==18.104.22.168
sudo apt-get install libjasper-dev
sudo apt-get install libqtgui4
sudo apt-get install libqt4-test
```
Then I made a copy of the security-cam.py file and replaced its main function with the following code.
```python
def main():
    device = open_ncs_device()
    graph = load_graph( device )

    # Main loop: Capture live stream & send frames to NCS
    with picamera.PiCamera() as camera:
        with picamera.array.PiRGBArray( camera ) as frame:
            while( True ):
                camera.resolution = ( 640, 480 )
                camera.capture( frame, ARGS.colormode, use_video_port=True )

                img = pre_process_image( frame.array )
                infer_image( graph, img, frame.array )

                # Clear PiRGBArray, so you can re-use it for next capture
                frame.seek( 0 )
                frame.truncate()

                # Display the frame for 5ms, and close the window so that the
                # next frame can be displayed. Close the window if 'q' or 'Q'
                # is pressed.
                if( cv2.waitKey( 5 ) & 0xFF == ord( 'q' ) ):
                    break

    close_ncs_device( device, graph )
```
Finally I added the following import statements to the head of the new file
```python
import picamera
import picamera.array
```
Running the new file results in the annotated video being displayed, albeit at a seemingly slower rate than the USB camera version.
Note that I'm not an expert in using the Pi Camera so it might be possible to tune this for better performance. YMMV :)
EDIT : Added missing instructions to add import statements.
One more thing: I see also that all code using the 'camera' variable **apart from that in the main method** can also be removed from the new file, as this code is no longer required.
EDIT : Added text in bold.
In case it is of interest to anyone reading this thread, I'm getting around 5 FPS with a Raspberry Pi Camera by using the imutils library (and specifically the VideoStream class) for handling the video stream. This class also allows one to easily switch between a USB and Raspberry Pi Camera.
From what I can tell the improvement is due to threading. More information is available here.
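To make the threading gain concrete, here is a minimal sketch of the pattern VideoStream uses internally: a daemon thread keeps grabbing frames so the main loop always reads the latest one without blocking on capture I/O. Note that `FakeCamera` and `ThreadedStream` are illustrative stand-ins, not the imutils API:

```python
import threading
import time

class FakeCamera:
    """Stand-in source that 'captures' an incrementing frame number,
    simulating the per-frame latency of a real camera."""
    def __init__(self):
        self.n = 0
    def read(self):
        time.sleep(0.01)  # simulate sensor/transfer latency
        self.n += 1
        return self.n

class ThreadedStream:
    """Grabs frames on a background thread, like imutils' VideoStream."""
    def __init__(self, camera):
        self.camera = camera
        self.frame = None
        self.stopped = False
        self.thread = threading.Thread(target=self._update, daemon=True)
    def start(self):
        self.thread.start()
        return self
    def _update(self):
        while not self.stopped:
            self.frame = self.camera.read()  # capture happens off the main loop
    def read(self):
        return self.frame  # latest frame, returned immediately
    def stop(self):
        self.stopped = True
        self.thread.join()

vs = ThreadedStream(FakeCamera()).start()
time.sleep(0.1)      # let a few frames accumulate in the background
latest = vs.read()   # non-blocking: no capture wait in the main loop
vs.stop()
```

The main loop can now spend its whole time budget on inference and display rather than waiting for the camera, which is why the slow Pi Cam capture path benefits so much more than the USB path.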
Here's a quick benchmark I did at 640x480:

- Non-threaded: 4.48 FPS
- Threaded: 4.82 FPS

- Non-threaded: 1.4 FPS
- Threaded: 4.15 FPS
At 320x240 I got 5.13 FPS with the Raspberry Pi Camera by using threading!
Those numbers look a little quicker than I got. I think I was averaging about 3.x FPS at 640x480 using the Pi Cam. I did not try the web cam because I was confined by form factor - I wanted something small and compact.
Did you try threading? If not, it might help. Good luck in any case!
One last thing - how did you work out the reason for the difference in USB vs PiCam performance? Is there a blog post or article out there somewhere? I'm thinking about writing up my experiences…
There was a post someplace in the Pi forums where they spilled the beans on how it worked: the Pi Camera going to GPU memory and USB going to basic system memory. It was telling. I am going to switch hardware platforms - a Latte Panda or something that has more get-up-and-go…
…that is, when I can actually successfully RE-TRAIN an InceptionV3 network and use it on the stick. So far, no such luck.