
After aligning RGB to depth, the points calculated from the depth image are not evenly distributed but clumped into a small pile

tguan
New Contributor I
tguan
New Contributor I

I calculate the points from the aligned depth image, and the resulting cloud is divided into pieces; if I do not align the depth image to color, the cloud looks fine.

# encoding: utf-8
__author__ = '关彤'
# RGB-D images and point-cloud data
import pyrealsense2 as rs
import numpy as np
import cv2

pc = rs.pointcloud()
points = rs.points()

# Create a pipeline
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
profile = pipeline.start(config)

# Get the depth sensor's depth scale
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: ", depth_scale)

# Remove the background beyond a clipping threshold
clipping_distance_in_meters = 1  # 1 meter
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)

# Streaming loop
while True:
    frames = pipeline.wait_for_frames()
    # Get frameset of color and depth
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    if (not depth) or (not color):
        continue

    # Align the depth frame to the color frame
    aligned_frames = align.process(frames)

    # Get aligned frames
    depth_frame = aligned_frames.get_depth_frame()  # 640x480 depth image aligned to color
    depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
    color_frame = aligned_frames.get_color_frame()

    # Validate that both frames are valid
    if not depth_frame or not color_frame:
        continue

    # Point cloud
    # Tell the pointcloud object to map to this color frame
    # pc.map_to(color_frame)
    # Generate point clouds from the aligned and the raw depth frames
    points = pc.calculate(depth_frame)
    testpoints = pc.calculate(depth)

    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())

    # Zero out pixels beyond the clipping distance, then colorize both depth images
    depth_image2 = np.where((depth_image > clipping_distance) | (depth_image <= 0), 0, depth_image)
    depth_image2 = cv2.applyColorMap(cv2.convertScaleAbs(depth_image2, alpha=0.03), cv2.COLORMAP_BONE)
    depth_image = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_BONE)

    cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
    cv2.imshow('Align Example', color_image)
    cv2.imshow('Align Example2', depth_image2)
    key = cv2.waitKey(1)
    if key == ord('q'):
        points.export_to_ply("point3.ply", color_frame)
        testpoints.export_to_ply("point0.ply", depth)
        break

pipeline.stop()
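For reference, pc.calculate deprojects each depth pixel through the pinhole model of the frame's stream intrinsics, so the aligned and unaligned clouds are computed with different intrinsics. Below is a minimal NumPy sketch of that deprojection on a toy 4x4 depth image; the intrinsic values (fx, fy, ppx, ppy) are hypothetical stand-ins for what a real camera reports via depth_frame.profile.as_video_stream_profile().intrinsics.

```python
import numpy as np

def deproject_depth(depth_m, fx, fy, ppx, ppy):
    """Deproject a depth image (in meters) to an HxWx3 array of 3-D points
    using the standard pinhole model: X = (u - ppx) / fx * Z, etc."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return np.dstack((x, y, depth_m))

# Toy example: a flat surface 1 m away, seen by a hypothetical camera
depth = np.ones((4, 4), dtype=np.float32)
pts = deproject_depth(depth, fx=600.0, fy=600.0, ppx=2.0, ppy=2.0)
# The pixel at the principal point (2, 2) lands on the optical axis: (0, 0, 1)
```

If the intrinsics used for deprojection do not match the frame actually being deprojected (e.g. depth-stream intrinsics applied to a depth image that has been aligned to color), the resulting cloud comes out distorted, which may explain the uneven clumping described above.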

tguan
New Contributor I

The two kinds of point clouds look like this:

Eliza_D_Intel
Employee
Hello tguan, Thank you for your interest in the Intel RealSense Technology. Could you please let us know what your output looks like? Thank you and best regards, Eliza
Eliza_D_Intel
Employee
Hello tguan, Could you please let us know if you require further assistance on this subject? Thank you, Eliza