Hi there,
I'm following the instructions at https://01.org/developerjourney/recipe/intel-realsense-robotic-development-kit (Intel RealSense Robotic Development Kit | Developer Journey) to test a RealSense R200 on a UP board with ROS.
I can get the depth image stream by rosrun image_view image_view image:=/camera/depth/image_raw.
But the image is too dark, as shown in the attached image: I can only see a very dark shadow of the hand (and in two).
Could you please give any advice on how to fix this issue?
Many thanks!
A dark / totally black depth image is often related to a component called the IR emitter in the camera which helps the IR camera to deal with exposure. If the lighting in a location is too bright or too dim, the IR emitter can cause the IR sensor to become saturated, producing the kind of dark images that you are experiencing.
You may get better results if you disable the IR emitter using an SDK instruction. Full details are in this post:
If you are using the Robotics Kit, you will likely need to use the Librealsense (i.e., Linux) version of the emitter enable / disable instruction.
Alternatively, instead of programming you could try altering the lighting conditions in the location where you are using the Robotics Kit.
Thank you very much for your suggestions. I'll try and share the results later.
Since librealsense was mentioned, I tried the tutorial at https://github.com/IntelRealSense/librealsense/tree/master/doc/stepbystep. I modified the file to get all of the color, IR, and depth images. The depth file looks okay.
Under the same lighting conditions, I ran the ROS test and the depth image was still dark.
So far, I have verified that the librealsense depth image works well under my lighting conditions. The question now is how to configure ROS to get the same performance. You mentioned "emitter enable / disable"; is there any reference on how to do that in ROS?
Many thanks!
I'm not aware of an instance where ROS has been used to control the functions of the camera itself, and my research didn't turn up any either. I believe the changes to the camera configuration should be made in the Librealsense part of the robotic setup.
It looks like your hand might be too close to the camera. With the default settings, the R200 only works beyond about 50 cm. Do you get a better image if you move further away?
I tried disabling the emitter and verified that r200_emitter_enabled is 0. Are there any other options I should try? Many thanks!
$ rosrun dynamic_reconfigure dynparam dump /camera/driver camera_driver.yaml
$ cat camera_driver.yaml
!!python/object/new:dynamic_reconfigure.encoding.Config
dictitems:
  color_backlight_compensation: 1
  color_brightness: 56
  color_contrast: 32
  color_enable_auto_white_balance: 1
  color_gain: 32
  color_gamma: 220
  color_hue: 0
  color_saturation: 128
  color_sharpness: 0
  color_white_balance: 6500
  enable_depth: true
  groups: !!python/object/new:dynamic_reconfigure.encoding.Config
    dictitems:
      color_backlight_compensation: 1
      color_brightness: 56
      color_contrast: 32
      color_enable_auto_white_balance: 1
      color_gain: 32
      color_gamma: 220
      color_hue: 0
      color_saturation: 128
      color_sharpness: 0
      color_white_balance: 6500
      enable_depth: true
      groups: !!python/object/new:dynamic_reconfigure.encoding.Config
        dictitems:
          R200_Depth_Control: !!python/object/new:dynamic_reconfigure.encoding.Config
            dictitems:
              groups: !!python/object/new:dynamic_reconfigure.encoding.Config
                state: []
              id: 1
              name: R200_Depth_Control
              parameters: !!python/object/new:dynamic_reconfigure.encoding.Config
                state: []
              parent: 0
              r200_dc_estimate_median_decrement: 5
              r200_dc_estimate_median_increment: 5
              r200_dc_lr_threshold: 12
              r200_dc_median_threshold: 235
              r200_dc_neighbor_threshold: 90
              r200_dc_preset: 5
              r200_dc_score_maximum_threshold: 420
              r200_dc_score_minimum_threshold: 27
              r200_dc_second_peak_threshold: 70
              r200_dc_texture_count_threshold: 8
              r200_dc_texture_difference_threshold: 80
              state: true
              type: ''
            state: []
        state: []
      id: 0
      name: Default
      parameters: !!python/object/new:dynamic_reconfigure.encoding.Config
        state: []
      parent: 0
      r200_auto_exposure_bottom_edge: 479
      r200_auto_exposure_left_edge: 0
      r200_auto_exposure_right_edge: 639
      r200_auto_exposure_top_edge: 0
      r200_emitter_enabled: 0
      r200_lr_auto_exposure_enabled: 0
      r200_lr_exposure: 164
      r200_lr_gain: 400
      state: true
      type: ''
    state: []
  r200_auto_exposure_bottom_edge: 479
  r200_auto_exposure_left_edge: 0
  r200_auto_exposure_right_edge: 639
  r200_auto_exposure_top_edge: 0
  r200_dc_estimate_median_decrement: 5
  r200_dc_estimate_median_increment: 5
  r200_dc_lr_threshold: 12
  r200_dc_median_threshold: 235
  r200_dc_neighbor_threshold: 90
  r200_dc_preset: 5
  r200_dc_score_maximum_threshold: 420
  r200_dc_score_minimum_threshold: 27
  r200_dc_second_peak_threshold: 70
  r200_dc_texture_count_threshold: 8
  r200_dc_texture_difference_threshold: 80
  r200_emitter_enabled: 0
  r200_lr_auto_exposure_enabled: 0
  r200_lr_exposure: 164
  r200_lr_gain: 400
state: []
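As an aside, when comparing dumps like this before and after a change, I found it handy to pull out just the r200_* keys with a small throwaway helper of my own (this is not a ROS tool, just a text scrape of the dump):

```python
import re

def r200_params(dump_text, prefix="r200_"):
    """Collect the first occurrence of each '<prefix>name: value' pair
    from a dynparam YAML dump; repeats inside nested groups are skipped."""
    params = {}
    for match in re.finditer(r"(%s\w+):\s*(\S+)" % re.escape(prefix), dump_text):
        params.setdefault(match.group(1), match.group(2))
    return params
```

Running it over the dump above gives a flat dict, so for example the emitter setting comes back as `'r200_emitter_enabled': '0'`.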
Hi ArkTx,
Have you tried running the R200 calibrator tool? It may not be the issue, but it's always worth a try. You can download the application here: https://downloadcenter.intel.com/download/24958/Intel-RealSense-Camera-Calibrator-for-Windows-. Keep in mind that you'll need a Windows system to run it.
Regards,
Pablo M.
Hi Pablo,
Thank you very much for your suggestion! I'll install it and try.
Here is my progress. I followed the instructions at https://01.org/developerjourney/recipe/intel-realsense-robotic-development-kit (Intel RealSense Robotic Development Kit | Developer Journey) to use "rosrun image_view image_view image:=/camera/depth/image_raw" to display the depth image. It was dark. But when I tried "rosrun rqt_image_view rqt_image_view" to display the depth image, it worked. Then I found that rqt_image_view scales (normalizes) the image, which I thought should have already been done in /camera/depth/image.
https://github.com/ros-visualization/rqt_common_plugins/blob/groovy-devel/rqt_image_view/src/rqt_image_view/image_view.cpp
else if (msg->encoding == "16UC1" || msg->encoding == "32FC1") {
  // scale / quantify
  double min = 0;
  double max = ui_.max_range_double_spin_box->value();
  if (msg->encoding == "16UC1") max *= 1000;
  if (ui_.dynamic_range_check_box->isChecked())
  {
    // dynamically adjust range based on min/max in image
    cv::minMaxLoc(cv_ptr->image, &min, &max);
    if (min == max) {
      // completely homogeneous images are displayed in gray
      min = 0;
      max = 2;
    }
  }
  cv::Mat img_scaled_8u;
  cv::Mat(cv_ptr->image-min).convertTo(img_scaled_8u, CV_8UC1, 255. / (max - min));
  cv::cvtColor(img_scaled_8u, conversion_mat_, CV_GRAY2RGB);
But I still don't know why the command in the tutorial is expected to display a normalized depth image directly.
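For what it's worth, the scaling step in that C++ snippet can be reproduced in plain Python (my own sketch, not part of rqt_image_view; the function and variable names are mine):

```python
def scale_depth_to_8bit(pixels, max_range_m=10.0, encoding="16UC1", dynamic_range=True):
    """Mimic rqt_image_view's depth scaling: map raw depth values to 0..255.

    pixels: flat list of raw depth values (millimetres for 16UC1,
            metres for 32FC1). Returns a list of 8-bit intensities.
    """
    lo = 0.0
    hi = max_range_m * (1000.0 if encoding == "16UC1" else 1.0)
    if dynamic_range:
        # dynamically adjust range based on min/max in image
        lo, hi = min(pixels), max(pixels)
        if lo == hi:
            # completely homogeneous images are displayed in gray
            lo, hi = 0.0, 2.0
    scale = 255.0 / (hi - lo)
    return [max(0, min(255, int((p - lo) * scale))) for p in pixels]
```

For example, a frame whose raw 16UC1 values span 600 to 3000 mm fills the full 0..255 range with dynamic_range on, but tops out at 76 out of 255 against the fixed 10 m default range, which would explain why an un-normalized display looks almost black.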
Hi ArkTx,
Just to clarify: when using "rosrun rqt_image_view rqt_image_view" you're getting the image you expected from the beginning, right? The lighter image, I mean. I'm not sure why this is happening, but you might find some new information here:
http://wiki.ros.org/rqt_image_view
https://github.com/ros-perception/image_pipeline/issues/158#issuecomment-157524991
Regards,
Pablo M.
