I am using a D435 camera with the pyrealsense2 library.
Intel provides a short code snippet for streaming frames and extracting the depth value of a pixel; see: https://github.com/IntelRealSense/librealsense#ready-to-hack
I rewrote that code in Python. I want to know: is it calculating the distance to an object at the centre of the image, or over the whole image?
When I placed an object at the centre of the camera's view, it reported the distance correctly. But when I moved the object to the left or right of the camera, it no longer picked up the distance. I am not sure if I am doing something wrong. Is there a way to measure distance across the whole image, and not just at the centre?
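The ready-to-hack snippet only queries the single pixel at the centre of the frame, which would explain the behaviour described above. A minimal sketch of querying the whole depth image instead is below; the `closest_depth` helper is my own illustration (not part of the SDK), and it assumes the depth frame has been converted to a metric NumPy array, with 0 meaning "no depth data" as librealsense reports it:

```python
import numpy as np

def closest_depth(depth_m):
    """Return (distance_in_meters, (row, col)) of the nearest valid
    depth pixel, or None if the frame has no valid depth at all.

    depth_m: 2-D array of depths in meters; 0 means no data.
    """
    valid = depth_m > 0
    if not valid.any():
        return None
    # Mask out invalid pixels so argmin ignores them.
    masked = np.where(valid, depth_m, np.inf)
    idx = np.unravel_index(np.argmin(masked), masked.shape)
    return masked[idx], idx

# Streaming loop (requires a connected camera), sketched from the
# ready-to-hack example; kept commented out here so the helper above
# can be tried without hardware:
#
# import pyrealsense2 as rs
# pipe = rs.pipeline()
# profile = pipe.start()
# scale = profile.get_device().first_depth_sensor().get_depth_scale()
# while True:
#     depth = pipe.wait_for_frames().get_depth_frame()
#     depth_m = np.asanyarray(depth.get_data()) * scale
#     print(closest_depth(depth_m))
```

Instead of the nearest pixel you could of course compute a mean over a region, or call `depth_frame.get_distance(x, y)` for any pixel of interest rather than only the centre one.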
Hi NeuroGalaxy,
Thank you for your interest in the Intel RealSense D435 camera.
What do you mean by " is it calculating the distance to an object from the centre of the image or the whole image?"
Each stream of images provided by this SDK is also associated with a separate 3D coordinate space, specified in meters, with the coordinate [0,0,0] referring to the center of the physical imager.
You may find more information here:
https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0
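The pinhole deprojection described on that wiki page can be sketched as follows. This is a hand-rolled illustration of the math (the SDK's own `rs2_deproject_pixel_to_point` additionally handles lens distortion, which is ignored here); `fx`, `fy`, `ppx`, `ppy` would come from the stream's intrinsics object:

```python
def deproject_pixel(u, v, depth, fx, fy, ppx, ppy):
    """Map a pixel (u, v) with depth in meters to a 3D point (x, y, z)
    in the camera's coordinate space, assuming no lens distortion.

    fx, fy: focal lengths in pixels; ppx, ppy: principal point.
    """
    x = (u - ppx) / fx * depth
    y = (v - ppy) / fy * depth
    return x, y, depth

# A pixel at the principal point deprojects straight along the z-axis:
# deproject_pixel(320.0, 240.0, 1.0, 600.0, 600.0, 320.0, 240.0)
# gives (0.0, 0.0, 1.0), i.e. [0, 0, 0] is the center of the imager
# and depth is measured along z.
```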
Regarding your second question: if you take a look at the picture below (from the user manual, page 58), the red dot represents minZ, the minimum distance at which the camera can detect depth. If your object is too close to the camera and you move it to the right or left, there is a good chance that it falls outside of the depth FOV (in the invalid depth band), where you cannot get distance values.
Regards,
Alexandra