When I used the D415 to record changes in facial expression, I found that there was no clear facial state in the result. What is the reason for this? Are there too few facial points, is it a limitation of the device, or does it need further algorithmic processing? Thanks.
Hi MartyG,
I can already get a point cloud the way you described, but the results are not good. I guess it is because the color and depth are not aligned pixel by pixel.
Because of the correspondence between color and point cloud coordinates, I am not sure whether it is easy to find, for a given position in the color image, the corresponding point in the point cloud.
In order to extract the point cloud accurately, I would still very much appreciate your help.
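To make the question concrete, this is roughly the lookup I have in mind, assuming the MATLAB wrapper's realsense.align and realsense.pointcloud classes behave like the wrapper's bundled examples, and assuming (this is only my guess) that the vertices of a point cloud built from depth aligned to color are stored in the color frame's row-major pixel order:
% Sketch: look up the 3D point that corresponds to a color pixel.
% Assumes depth has been aligned to the color stream first.
pipe = realsense.pipeline();
pipe.start();
align_to = realsense.align(realsense.stream.color);
pcl_obj = realsense.pointcloud();

fs = pipe.wait_for_frames();
aligned = align_to.process(fs);        % depth now shares the color pixel grid
depth = aligned.get_depth_frame();
color = aligned.get_color_frame();

points = pcl_obj.calculate(depth);
vertices = points.get_vertices();      % flat list of XYZ vertices, in meters

W = color.get_width();
row = 100; col = 200;                  % example color pixel (1-based)
idx = (row - 1) * W + col;             % row-major index (my assumption)
xyz = vertices(idx, :);                % candidate 3D point for that pixel

pipe.stop();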
Thanks!
Hello fwang22,
Thank you MartyG for your inputs!
As for using a frame instead of a bag file, I believe this is not available, since the enable_device_from_file function requires a bag file.
I am not aware of another method of using an external file, but let me look into this and respond in this thread.
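As a minimal sketch of the bag file route (assuming the MATLAB wrapper exposes enable_device_from_file on realsense.config in the same way as the C++ API; the file name below is only a placeholder):
% Sketch: play back a recorded .bag file instead of a live camera.
% 'recording.bag' is a placeholder path.
cfg = realsense.config();
cfg.enable_device_from_file('recording.bag');
pipe = realsense.pipeline();
profile = pipe.start(cfg);

fs = pipe.wait_for_frames();           % frames are now served from the bag
depth = fs.get_depth_frame();
color = fs.get_color_frame();

pipe.stop();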
Thank you,
Eliza
Thank you very much. Due to time constraints, I do not have time to study carefully how to convert a bag file into a point cloud, so I wonder if I could ask for your help.
At the same time, I would like to ask whether the resulting point cloud is in the form of n*3 (where n is the total number of points) or m*n*3 (where n is the width and m is the height)?
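If it turns out to be the flat n*3 form, I assume it could be reshaped into m*n*3 roughly like this (my guess based on the wrapper's pointcloud example, so please correct me if this is wrong):
% Sketch: capture one depth frame and reshape the flat vertex list into
% an m*n*3 grid; the row-major storage order is my assumption.
pipe = realsense.pipeline();
pipe.start();
fs = pipe.wait_for_frames();
depth = fs.get_depth_frame();

pcl_obj = realsense.pointcloud();
points = pcl_obj.calculate(depth);
vertices = points.get_vertices();      % (W*H)-by-3 list of XYZ, in meters

W = depth.get_width();
H = depth.get_height();
grid = reshape(vertices, W, H, 3);     % column-major reshape of row-major data
grid = permute(grid, [2 1 3]);         % m-by-n-by-3 (height, width, xyz)

pipe.stop();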
Hello,
This thread may help you extract and then manipulate frames from bag files: https://github.com/IntelRealSense/librealsense/issues/1290#issuecomment-404903998.
If you want to change the resolution of the streams in MATLAB, use cfg.enable_stream, like this:
% Make Config object to request specific stream settings
cfg = realsense.config();
cfg.enable_stream(realsense.stream.depth, 1280, 720, realsense.format.z16);
% Make Pipeline object to manage streaming
pipe = realsense.pipeline();
% Start streaming with the configuration above
profile = pipe.start(cfg);
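Once the pipeline above has started, a frameset can be pulled and its depth frame inspected, for example (a minimal sketch following the pattern of the wrapper's depth example):
% Sketch: pull one frameset from the configured pipeline and check the
% depth frame's resolution.
fs = pipe.wait_for_frames();
depth = fs.get_depth_frame();
fprintf('Depth frame: %d x %d\n', depth.get_width(), depth.get_height());
pipe.stop();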
Regards,
Jesus G.
Hello Jesus G,
I have finished extracting the color and depth frames. There is also align.m, which can be used to align the color and depth frames, but how should it be used? Is the alignment done frame by frame?
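Something like the loop below is what I imagine, assuming realsense.align wraps the align processing block and is applied to each frameset in turn (my guess, so please correct me if that is not how align.m is meant to be used):
% Sketch: align each frameset's depth to the color stream, one frameset
% at a time (frame by frame), which is my guess at the intended usage.
pipe = realsense.pipeline();
pipe.start();
align_to = realsense.align(realsense.stream.color);

for i = 1:30                           % process 30 framesets as an example
    fs = pipe.wait_for_frames();
    aligned = align_to.process(fs);    % alignment applied per frameset
    depth = aligned.get_depth_frame(); % depth now matches the color pixels
    color = aligned.get_color_frame();
end

pipe.stop();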
Thanks!
Hello,
RealSense support has moved to a new location and this forum is no longer being monitored. If you need further assistance with this issue, please open a new case at http://support.intelrealsense.com.
Regards,
Jesus G.
