Hello everyone and administrators,
Since I bought an Intel RealSense D435, I have had several questions I am curious about. I hope someone can help me find answers.
First, I was curious about the location of the camera's "virtual origin".
I found some answers on the Community which told me the origin is located between the left and right imagers, 4.2 mm behind the glass cover, consistent with the data sheet and the friendly suggestions from other members.
Secondly, I wonder how the RGB camera works. According to the D435 specifications, the depth FOV and the RGB FOV are different. Is the final image aligned between the two cameras, and why are there two different FOVs?
Also, is the depth camera (or depth FOV) only used for measuring depth, while the RGB camera only captures 2D images? In other words, which camera is responsible for the 3D image?
If I have to work within a single FOV, which region should I follow? (By the way, I am about to build an automated guided vehicle.)
If anyone has a thought to inspire me, I would like to hear it.
Regards
Jeff
Regards
Jeff
Hi JeffXia1024,
Thank you for your interest in the Intel RealSense D435 camera.
Regarding your first question, you should take a look at section 4.7.1 of the data sheet:
https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/Intel-RealSense-D400-Series-Datasheet.pdf
Also, pages 33 and 35 list the sensors used for depth and color. The depth sensor is the OV9282 and the color sensor is the OV2740: two different sensors with different FOVs. The depth cameras are used only for determining depth for 3D images. The color sensor is used to capture 2D images and to provide color texture for the 3D images captured by the depth sensors.
Section 4.2 of the data sheet lists the different streams from the camera. To use and align the streams, you must use the Intel RealSense SDK 2.0, found here: https://github.com/IntelRealSense/librealsense. The SDK can align the depth and color streams, as demonstrated in the align example:
https://github.com/IntelRealSense/librealsense/tree/master/examples/align
The align example aligns depth frames to their corresponding color frames; in other words, it reconstructs the depth image as if it had been "captured" from the origin and with the dimensions of the color sensor. Then, the original color frame and the re-projected depth frame (which are aligned at this stage) can be used to determine the depth value of each color pixel.
Let me know if this helps.
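To make the alignment step concrete, here is a minimal pure-Python sketch of what the SDK's align processing block does under the hood: deproject a depth pixel into a 3D point using the depth sensor's intrinsics, transform it into the color sensor's frame via the depth-to-color extrinsics, then project it onto the color image. The intrinsic and extrinsic values below are made up for illustration; on a real camera they come from the SDK's calibration queries (e.g. `get_intrinsics()` / `get_extrinsics_to()`).

```python
def deproject(pixel, depth_m, fx, fy, ppx, ppy):
    """Depth pixel + depth in meters -> 3D point in the depth sensor's frame."""
    u, v = pixel
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return (x, y, depth_m)

def transform(point, rotation, translation):
    """Apply depth-to-color extrinsics (row-major 3x3 rotation + translation)."""
    x, y, z = point
    return tuple(
        rotation[3 * i] * x + rotation[3 * i + 1] * y + rotation[3 * i + 2] * z
        + translation[i]
        for i in range(3)
    )

def project(point, fx, fy, ppx, ppy):
    """3D point in the color sensor's frame -> color pixel (sub-pixel floats)."""
    x, y, z = point
    return (x / z * fx + ppx, y / z * fy + ppy)

# Hypothetical intrinsics (fx, fy, ppx, ppy) and extrinsics for a 640x480 pair.
depth_intr = (386.0, 386.0, 320.0, 240.0)
color_intr = (615.0, 615.0, 320.0, 240.0)
rot = (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)  # identity rotation
trans = (0.015, 0.0, 0.0)                             # ~15 mm sideways offset

# Map one depth pixel (at 1 m range) into the color image.
point_d = deproject((400, 240), 1.0, *depth_intr)
point_c = transform(point_d, rot, trans)
u, v = project(point_c, *color_intr)
```

Doing this for every depth pixel (and resolving occlusions) yields a depth image expressed in the color sensor's origin and dimensions, which is exactly what the align example produces.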
Regards,
Alexandra
Hi,
So the aligned image has the same origin as the original color image? But where is the origin of the original color image? I can only find the origin of the depth frame in the data sheet.
Regards