Hello,
I am using the RealSense ROS wrapper with a RealSense D435 and am publishing the compressed color and aligned depth topics, which are received by Unity.
I want to create the point cloud in Unity (like the point cloud example scene in the Unity wrapper, which works when the RealSense camera is plugged into the Unity computer's USB port) from the received ROS color and aligned depth message data, not from the USB stream.
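For reference, this is roughly how I turn the compressed color message into a texture on the Unity side (just a sketch; `msg` stands for the received sensor_msgs/CompressedImage):

```csharp
using UnityEngine;

// Sketch: decode a sensor_msgs/CompressedImage payload (JPEG or PNG bytes)
// received over the ROS bridge into a Unity texture.
Texture2D colorTex = new Texture2D(2, 2); // dimensions are replaced by LoadImage
colorTex.LoadImage(msg.data);             // decompresses the JPEG/PNG in place
```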
Is there a way I could do this by leveraging the Unity wrapper?
Any help would be appreciated.
Thanks!
I'm not certain how to publish ROS data to the RealSense Unity wrapper. There is, though, a free ROS plugin for Unity available from the Unity Asset Store that enables ROS data to be published into Unity.
Store: https://assetstore.unity.com/packages/tools/physics/ros-107085
Documentation
Thanks for your response. That is in fact exactly the plugin I am using to get ROS data into Unity.
However, my question is not about how to get the data into Unity. Rather, once the depth and color data are already in Unity, is there a way with the Unity wrapper to generate a point cloud using only the already received aligned depth and color data?
I looked very carefully at your problem, but it is hard to see how to get the Unity wrapper to make use of your ROS data. Part of the problem is the way the Unity wrapper renders data such as images and point clouds: instead of showing the raw data from the camera, it converts the data into a 'material' shader object and displays that.
Unless you could convert your ROS data into a shader and load that into the RealSense point cloud generator in place of the default 'PointCloudMat' shader, I'm not sure how you could get your data into the wrapper's point cloud sample program.
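Just to illustrate the scale of that conversion, getting the raw ROS bytes as far as a texture that a material could sample might look something like this (a sketch only; `width`, `height`, `depthBytes` and `pointCloudMaterial` are placeholders, and "_DepthTex" is my own name, not a real property of the wrapper's shader):

```csharp
using UnityEngine;

// Sketch: upload raw 16-bit (Z16) aligned-depth bytes into a texture that
// a point cloud material could sample. "_DepthTex" is a placeholder name.
var depthTex = new Texture2D(width, height, TextureFormat.R16, false);
depthTex.LoadRawTextureData(depthBytes); // byte[] payload of the depth image
depthTex.Apply();                        // push the pixels to the GPU
pointCloudMaterial.SetTexture("_DepthTex", depthTex);
```

Even then, that still leaves the question of how the point cloud vertices would be generated from the depth values, which is where the wrapper normally relies on the SDK.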
Yes, there is a way: use the Software Device feature in the RealSense SDK. I did it by creating a wrapper class that used two software sensors whose video streams carried all the intrinsic values from the camera_info topics. I then used an instance of the RealSense Syncer class, in place of the pipeline that has WaitForFrames() called on it, to feed framesets into the point cloud rendering pipeline. It's possible!
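In outline, it looked something like the sketch below. This is from memory, so treat it as a sketch of the C# wrapper's software-device API rather than exact code: the resolutions, stream UIDs, `timestamp`, `frameNumber` and the pixel buffers are placeholders, and `depthIntrinsics`/`colorIntrinsics` are `Intrinsics` structs filled from the camera_info topics (for the row-major K matrix: fx = K[0], fy = K[4], ppx = K[2], ppy = K[5]).

```csharp
using Intel.RealSense;

// Build a virtual device with one depth and one color software sensor.
var dev = new SoftwareDevice();
var depthSensor = dev.AddSensor("Depth");
var colorSensor = dev.AddSensor("Color");

// The stream profiles carry the intrinsics parsed from camera_info.
var depthProfile = depthSensor.AddVideoStream(new SoftwareVideoStream
{
    type = Stream.Depth, index = 0, uid = 100,
    width = 640, height = 480, fps = 30,
    bpp = 2, format = Format.Z16,
    intrinsics = depthIntrinsics
});
var colorProfile = colorSensor.AddVideoStream(new SoftwareVideoStream
{
    type = Stream.Color, index = 0, uid = 101,
    width = 640, height = 480, fps = 30,
    bpp = 3, format = Format.Rgb8,
    intrinsics = colorIntrinsics
});
depthSensor.AddReadOnlyOption(Option.DepthUnits, 0.001f); // depth in mm
dev.SetMatcher(Matchers.Default);

// The Syncer stands in for the pipeline: each sensor callback submits
// single frames, and the Syncer hands back matched framesets.
var syncer = new Syncer();
depthSensor.Open(depthProfile);
colorSensor.Open(colorProfile);
depthSensor.Start(syncer.SubmitFrame);
colorSensor.Start(syncer.SubmitFrame);

// For every received ROS depth/color pair, inject the raw pixels:
depthSensor.AddVideoFrame(depthPixels, 640 * 2, 2, timestamp, depthProfile, frameNumber);
colorSensor.AddVideoFrame(colorPixels, 640 * 3, 3, timestamp, colorProfile, frameNumber);

// ...and wherever the sample called pipeline.WaitForFrames():
using (var frames = syncer.WaitForFrames())
{
    // feed the frameset into the point cloud rendering code as before
}
```

Because the rest of the point cloud sample only ever consumes framesets, nothing downstream of WaitForFrames() had to change.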
Hello, could you share your solution with me? I have the same problem and am trying to do what you suggest, but I have not succeeded yet.
