<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to calculate pointcloud from raw depth frame data in Items with no label</title>
    <link>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642105#M14529</link>
    <description>&lt;P&gt;If you send the intrinsics over too, you can use rs2_deproject_pixel_to_point (&lt;A href="https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/rsutil.h#L49"&gt;https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/rsutil.h#L49&lt;/A&gt;) on each pixel in the depth image to generate the point cloud. The intrinsics won't change once the stream has started, so you only need to send them once.&lt;/P&gt;</description>
    <pubDate>Mon, 14 Jan 2019 17:17:11 GMT</pubDate>
    <dc:creator>jb455</dc:creator>
    <dc:date>2019-01-14T17:17:11Z</dc:date>
    <item>
      <title>How to calculate pointcloud from raw depth frame data</title>
      <link>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642103#M14527</link>
      <description>&lt;P&gt;I'm trying to stream the camera output over a network to render it on a different machine. I want to render the pointcloud, but the result of calculating the pointcloud is much larger than just the depth frame data, so I want to stream the depth frame first and calculate the pointcloud on the other machine. I can stream the raw data no problem, but I don't know how to get it back into a format that the calculate() function will work with on the other end. Since frame.get_data() returns a const pointer, I can't just make a new frame and stick the data into it. I assume that since the frame instances maintain their own memory for the data, I can't just try to serialize them. &lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm really stuck on this. Trying to stream the pointcloud even at 1 fps would need almost a gigabit per second!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your help. &lt;/P&gt;</description>
      <pubDate>Sat, 12 Jan 2019 04:21:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642103#M14527</guid>
      <dc:creator>JShan16</dc:creator>
      <dc:date>2019-01-12T04:21:59Z</dc:date>
    </item>
    <item>
      <title>Re: How to calculate pointcloud from raw depth frame data</title>
      <link>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642104#M14528</link>
      <description>&lt;P&gt;In January 2018, Intel did a demo at the Sundance festival where they captured point cloud data on a PC with the camera and sent the data to a separate PC for postprocessing. They likely did this over a network like you are trying to do. Some of the technical details are in the link below.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://realsense.intel.com/intel-realsense-volumetric-capture/"&gt;https://realsense.intel.com/intel-realsense-volumetric-capture/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also recommend watching Intel's recent RealSense presentation on 'Deep learning for VR / AR', which features use of 'tiny networking'.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://realsense.intel.com/webinars/"&gt;https://realsense.intel.com/webinars/&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 12 Jan 2019 04:54:15 GMT</pubDate>
      <guid>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642104#M14528</guid>
      <dc:creator>MartyG</dc:creator>
      <dc:date>2019-01-12T04:54:15Z</dc:date>
    </item>
    <item>
      <title>Re: How to calculate pointcloud from raw depth frame data</title>
      <link>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642105#M14529</link>
      <description>&lt;P&gt;If you send the intrinsics over too, you can use rs2_deproject_pixel_to_point (&lt;A href="https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/rsutil.h#L49"&gt;https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/rsutil.h#L49&lt;/A&gt;) on each pixel in the depth image to generate the point cloud. The intrinsics won't change once the stream has started, so you only need to send them once.&lt;/P&gt;</description>
      <pubDate>Mon, 14 Jan 2019 17:17:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642105#M14529</guid>
      <dc:creator>jb455</dc:creator>
      <dc:date>2019-01-14T17:17:11Z</dc:date>
    </item>
    <item>
      <title>Re: How to calculate pointcloud from raw depth frame data</title>
      <link>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642106#M14530</link>
      <description>&lt;P&gt;Thanks, I think this is exactly what I needed! &lt;/P&gt;</description>
      <pubDate>Wed, 16 Jan 2019 04:24:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Items-with-no-label/How-to-calculate-pointcloud-from-raw-depth-frame-data/m-p/642106#M14530</guid>
      <dc:creator>JShan16</dc:creator>
      <dc:date>2019-01-16T04:24:52Z</dc:date>
    </item>
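    <!--
    A minimal sketch of the approach jb455 describes: send the rs2_intrinsics over the network once, then apply the rsutil.h deprojection math to every pixel of the raw Z16 depth buffer on the receiving machine. The Intrinsics struct, function names, and depth_scale parameter below are illustrative stand-ins rather than the librealsense API; in a real receiver you would pass the received rs2_intrinsics straight to rs2_deproject_pixel_to_point, which additionally handles the camera's distortion model (this version assumes an undistorted stream).

    ```cpp
    #include <array>
    #include <cstdint>
    #include <vector>

    // Minimal pinhole intrinsics, mirroring the rs2_intrinsics fields the
    // undistorted deprojection math needs (assumption: RS2_DISTORTION_NONE).
    struct Intrinsics {
        int width, height;  // depth image size in pixels
        float ppx, ppy;     // principal point (optical centre), in pixels
        float fx, fy;       // focal lengths, in pixels
    };

    // Deproject one depth pixel into a 3D point in the camera frame: the same
    // math rs2_deproject_pixel_to_point performs when there is no distortion.
    static void deproject(float point[3], const Intrinsics& in,
                          float px, float py, float depth_m) {
        float x = (px - in.ppx) / in.fx;
        float y = (py - in.ppy) / in.fy;
        point[0] = depth_m * x;
        point[1] = depth_m * y;
        point[2] = depth_m;
    }

    // Rebuild the point cloud from a raw Z16 depth buffer received over the
    // network. depth_scale converts raw units to metres (queried once from
    // the depth sensor and sent alongside the intrinsics).
    static std::vector<std::array<float, 3>> to_pointcloud(
            const uint16_t* raw, const Intrinsics& in, float depth_scale) {
        std::vector<std::array<float, 3>> cloud;
        cloud.reserve(static_cast<size_t>(in.width) * in.height);
        for (int v = 0; v < in.height; ++v) {
            for (int u = 0; u < in.width; ++u) {
                float d = raw[v * in.width + u] * depth_scale;  // metres
                float p[3];
                deproject(p, in, static_cast<float>(u),
                          static_cast<float>(v), d);
                cloud.push_back({p[0], p[1], p[2]});
            }
        }
        return cloud;
    }
    ```

    This keeps the wire format small: one intrinsics struct up front, then width * height * 2 bytes per depth frame instead of width * height * 12 bytes of XYZ floats.
    -->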
  </channel>
</rss>

