I’ve been playing with the RealSense camera in an effort to get a point cloud to project on a light-field display. I have an existing application that can take multiple sensor streams from Xbox Kinects, or a higher-density stream from a Kinect One. It is a thread-pool/task-based application where, for every processing frame, a set of tasks is queued up to capture, filter, fuse, and render point clouds. I can’t seem to get a good depth image from the RealSense sensor, even though I’ve tried to duplicate the data access shown in the demos. What I see is a depth image that temporally fades in and out. The pixels don’t appear to be valid depth values, and I have to multiply them by 32 to even see them, so I’m not sure what is going on. I’ve included the TaskSensorRealSense.cpp code, a few screenshots, and a link to an AVI that shows what I am seeing. The Kinect One stream is on the left and the RealSense stream is on the right, with the image shown over the depth. The Kinect One point cloud is being rendered in the background.
https://www.youtube.com/watch?v=rPBcFL49uWs&feature=youtu.be
The operator() method calls the two methods that grab the RGB and depth values. I'm just trying to get good data out at the moment. Any ideas on what is going on?
Thanks
P.S. The code follows the images.
// TaskSensorRealSense.cpp
// Thomas Burnett
// December 4, 2014
//---------------------------------------------------------------------

// Includes
#include <boost/threadpool.hpp>
#include <pcl/io/pcd_io.h>

#include "Sensors/pvSensor.h"
#include "Tasks/pvTaskSensorRealSense.h"

using namespace pv;

//---------------------------------------------------------------------
// Globals
//---------------------------------------------------------------------

//---------------------------------------------------------------------
// fetchImage
//---------------------------------------------------------------------
void TaskSensorRealSense::fetchImage(PXCImage *pI)
{
  if (pI)
  {
    PXCImage::ImageInfo info = pI->QueryInfo();
    PXCImage::ImageData imgData;

    if (pI->AcquireAccess(PXCImage::ACCESS_READ,PXCImage::PIXEL_FORMAT_RGB24,&imgData) == PXC_STATUS_NO_ERROR)
    {
      _image.create(info.height,info.width,CV_8UC3);

      {
        // Tight copy; assumes the stride equals info.width * 3
        unsigned char *pS    = (unsigned char *)imgData.planes[0];
        unsigned char *pD    = (unsigned char *)_image.data;
        unsigned char *pDEnd = pD + (info.height * info.width * 3);

        while (pD < pDEnd)
        {
          *pD++ = *pS++;
          *pD++ = *pS++;
          *pD++ = *pS++;
        }

        pI->ReleaseAccess(&imgData);
      }
    }
  }
}

//---------------------------------------------------------------------
// fetchDepth
//---------------------------------------------------------------------
void TaskSensorRealSense::fetchDepth(PXCImage *pI)
{
  if (pI)
  {
    PXCImage::ImageInfo info = pI->QueryInfo();

    if (info.format == PXCImage::PIXEL_FORMAT_DEPTH)
    {
      PXCImage::ImageData imgData;

      if (pI->AcquireAccess(PXCImage::ACCESS_READ,PXCImage::PIXEL_FORMAT_DEPTH,&imgData) == PXC_STATUS_NO_ERROR)
      {
        _depth.create(info.height,info.width,CV_16UC1);

        {
          unsigned short *pS    = (unsigned short *)imgData.planes[0];
          unsigned short *pD    = (unsigned short *)_depth.data;
          unsigned short *pDEnd = pD + (info.height * info.width);

          while (pD < pDEnd)
            *pD++ = *pS++ * 32;  // scaled by 32 just to make the values visible
          //  *pD++ = *pS++;     // raw copy

          pI->ReleaseAccess(&imgData);
        }
      }
    }
  }
}

//---------------------------------------------------------------------
// operator()
//---------------------------------------------------------------------
void TaskSensorRealSense::operator()()
{
  if (_pRS->AcquireFrame(true) == PXC_STATUS_NO_ERROR)
  {
    PXCCapture::Sample *pSample = _pRS->QuerySample();

    if (pSample->depth)
      fetchDepth(pSample->depth);

    if (pSample->color)
      fetchImage(pSample->color);

    _pRS->ReleaseFrame();

    _pSensor->update(_image,_depth,_cloud);
  }
}

//---------------------------------------------------------------------
// init
//---------------------------------------------------------------------
int TaskSensorRealSense::init(void)
{
  int rc = -1;

  _pRS = PXCSenseManager::CreateInstance();

  if (_pRS)
  {
    PXCVideoModule::DataDesc desc = {};

    desc.deviceInfo.streams = PXCCapture::STREAM_TYPE_COLOR | PXCCapture::STREAM_TYPE_DEPTH;

    _pRS->EnableStreams(&desc);
    _pRS->Init();

    rc = 0;
  }

  return rc;
}

//---------------------------------------------------------------------
// TaskSensorRealSense
//---------------------------------------------------------------------
TaskSensorRealSense::TaskSensorRealSense(void) :
  TaskSensor(),
  _pRS(0)
{
}

//---------------------------------------------------------------------
// ~TaskSensorRealSense
//---------------------------------------------------------------------
TaskSensorRealSense::~TaskSensorRealSense(void)
{
}
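One caveat I should flag in the copy loops above: they assume the stride returned by AcquireAccess() equals the row width times the pixel size. If imgData.pitches[0] differs from that, a row-wise copy is needed instead. A minimal sketch of a pitch-aware variant of the color copy (requires <cstring> for memcpy; the variable names match fetchImage() above):

// Pitch-aware row copy for the color image; the tight copy in fetchImage()
// assumes imgData.pitches[0] == info.width * 3, which may not hold at every
// resolution.
for (pxcI32 y = 0; y < info.height; y++)
{
  unsigned char *pS = (unsigned char *)imgData.planes[0] + y * imgData.pitches[0];
  unsigned char *pD = _image.ptr<unsigned char>(y);

  memcpy(pD, pS, info.width * 3);
}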
The camera works reliably at distances of less than 1.5 meters; I hope you are within that range.
PIXEL_FORMAT_DEPTH provides depth with millimeter precision; however, a special depth value is assigned to indicate low-confidence pixels. You can dig into this using QueryDepthLowConfidenceValue(). The typical causes are being too close to or too far from the camera, and objects with low IR reflectivity.
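A minimal sketch of how that check could be wired into your depth copy loop (this assumes the device handle is fetched from the sense manager after Init(); the variable names are illustrative, not from the SDK samples):

// Mask out pixels the camera flags as low-confidence before using them.
// Assumes _pRS is the initialized PXCSenseManager from your init() method.
PXCCapture::Device *pDevice = _pRS->QueryCaptureManager()->QueryDevice();
pxcU16 lowConf = pDevice->QueryDepthLowConfidenceValue();

while (pD < pDEnd)
{
  unsigned short d = *pS++;
  *pD++ = (d == lowConf) ? 0 : d;  // keep valid millimeter values, zero the rest
}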
Please check whether your issue is along the above lines.
After further investigation, it appears that I get the weird depth values only when the Kinect One is running at the same time.
There you go; I hope that finding resolves your issue. :)
Hi. Because the RealSense camera operates by sweeping infrared light across the scene, it is very possible that another infrared source could interfere with its operation, depending on the camera type. Because the sensor cannot predict the state of the other camera's infrared output, the result is unpredictable values in the depth map, usually striping or dead zones. There are clever tricks that can be employed to get around this limitation if you are interested; one is sketched below. Otherwise, it is often safest to avoid pointing multiple cameras at the same scene.
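If you do need both cameras live, one of those tricks is to time-multiplex the emitters so only one infrared source is lit at a time. A rough, untested sketch against the RSSDK, assuming an F200-class (IVCAM) device; the power values and sequencing here are illustrative only:

// Alternate the RealSense laser with the Kinect One capture so the two IR
// sources never illuminate the scene simultaneously. Assumes an F200-class
// device; SetIVCAMLaserPower() takes a value in the 0..16 range on it.
PXCCapture::Device *pDevice = _pRS->QueryCaptureManager()->QueryDevice();

pDevice->SetIVCAMLaserPower(0);    // RealSense emitter off
// ... capture the Kinect One frame here ...

pDevice->SetIVCAMLaserPower(16);   // RealSense emitter back on
if (_pRS->AcquireFrame(true) == PXC_STATUS_NO_ERROR)
{
  // ... fetch depth and color as in TaskSensorRealSense::operator()() ...
  _pRS->ReleaseFrame();
}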