I am testing with an R200 on Linux using the librealsense library, and displaying the result in OpenCV. The image I get is quite noisy and doesn't give much useful information.
Any help or clues on how to improve these values?
Not trying to encourage spamming, but the question seems sincere enough. Some methods I've tried are the EnhancedPhotography feature, capturing multiple samples and combining them into a single image, and common image-processing techniques like median or Gaussian blur.
The basic steps for accessing the 2D array of values in a depth image are: 1. initialize a PXCImage::ImageData object so that data.planes contains a buffer array for the image; 2. call AcquireAccess to fill planes with the depth information; 3. depending on the specifics of the language you're using, grab the depth data by casting it to a 16-bit integer pointer, e.g. (uint16_t*)data.planes[0], or some other method; 4. process the image.
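Steps 3 and 4 can be sketched without the SDK: given any plane pointer filled by an acquire call, reinterpret it as 16-bit depth and iterate. The plane pointer and dimensions here stand in for the SDK's ImageData fields.

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of steps 3-4: cast an acquired plane buffer to 16-bit depth
// values and scan it (here, finding the farthest reading).
uint16_t max_depth(const void* plane, size_t width, size_t height)
{
    const uint16_t* depth = static_cast<const uint16_t*>(plane);  // step 3: cast
    uint16_t best = 0;
    for (size_t i = 0; i < width * height; ++i)  // step 4: process
        if (depth[i] > best) best = depth[i];
    return best;
}
```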
It's just a general overview of what I've had to do to improve depth quality. I've struggled to get EnhancedPhotography working with my R200, but if it works for you I'd recommend trying it first; it promises better results than anything I've accomplished with blurs or simple hole-filling algorithms.
Hope this helps
Thanks Steven D. for your answer.
Actually, I am working with librealsense. I will try something similar, but this is the image I am getting right now, with this code:
const uint16_t * depth_frame = reinterpret_cast<const uint16_t *>(dev->get_frame_data(rs::stream::depth));
cv::Mat depth16(480, 640, CV_16U, (void*)depth_frame);
// librealsense reports depth in millimetres, so clip to 0.5 m - 5 m
unsigned short min = 500, max = 5000;
double scale_ = 255.0 / (max - min);
cv::Mat img0;
depth16.convertTo(img0, CV_8UC1, scale_, -min * scale_);
cv::Mat depth_show;
cv::applyColorMap(img0, depth_show, cv::COLORMAP_JET);
and this is the image...
Thanks for the screenshot. I'm not familiar with the eccentricities of librealsense, but here are a few more suggestions.
The depth image normally retrieved is adjusted to focus on different elements in a scene. To get a clearer image, try removing small objects from the scene, placing the camera about three feet from your testing space, and making sure each object you are aiming at is in clear view of the camera. Because of the infrared technique used to capture a depth image, every solid object casts a "depth shadow" that prevents capturing data behind it. Small objects cast extra shadows, and the capture prioritizes larger objects in the scene. (An interesting test is to stream a depth video and point the camera at a corner of the ceiling; you should be able to see the effect I'm talking about.)
Beyond these simple ideas, it looks like the image is constrained to only red and blue values; try spreading the depth range across the colors in between. Looking at my own output, each individual depth unit seemed to map pretty closely to centimeters in physical space. With a little tweaking, it should be possible to create a spectrum that shows better detail of the captured scene.
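One way to spread the range, as a sketch: clip the depth to a near/far window and rescale it over the full 0-255 range, so a colormap applied afterwards uses its whole spectrum instead of collapsing to the two end colours. The 500 mm / 5000 mm clip values are illustrative assumptions.

```cpp
#include <cstdint>

// Map a depth reading (millimetres) onto the full 0-255 range
// between a chosen near and far clip distance.
uint8_t depth_to_byte(uint16_t mm, uint16_t near_mm = 500, uint16_t far_mm = 5000)
{
    if (mm <= near_mm) return 0;    // everything closer than the near clip
    if (mm >= far_mm)  return 255;  // everything beyond the far clip
    return static_cast<uint8_t>(255u * (mm - near_mm) / (far_mm - near_mm));
}
```

Applying this per pixel before a colormap gives each centimetre of depth its own shade rather than lumping most of the scene into the extremes.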
Hope this helps.