I'm trying to use the D435 camera to get color images and work with them in OpenCV, but I'm having problems reading the images. This is the Python code I'm using:
import cv2

# Open the camera (the D435 color stream shows up as video device 6 on this machine)
cap = cv2.VideoCapture(6)
while True:
    key = cv2.waitKey(1)
    ret, frame = cap.read()
    if ret:
        cv2.imshow("res", frame)
    if key == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
When I run the code I get two color images that are entirely green, and after those two I get the error message "VIDIOC_DQBUF: Resource temporarily unavailable" and no more images.
The script works with another camera that isn't a D435. I've tried two different D435s and they both give the same faulty response. When I use the same cameras and the same Python script on another computer, it works. I'm positive the D435 is connected to a USB 3 port. Any help is appreciated.
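In case the default format negotiation is part of the problem, one variation worth trying is forcing the V4L2 backend and an explicit pixel format and resolution. This is only a sketch; the device index 6 and the YUYV 640x480 @ 30 fps mode are assumptions about my setup, not something I have confirmed works:

import cv2

# Open the camera through the V4L2 backend explicitly (assumed index 6)
cap = cv2.VideoCapture(6, cv2.CAP_V4L2)
# Request a concrete pixel format, resolution and frame rate instead of the defaults
# (YUYV at 640x480 @ 30 fps is an assumption, not a confirmed working mode)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'YUYV'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)
while True:
    ret, frame = cap.read()
    if ret:
        cv2.imshow("res", frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()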
Solved the problem by using the SDK.
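For anyone who finds this later, a minimal color-only capture through pyrealsense2 looks roughly like the sketch below; the 640x480 @ 30 fps BGR8 mode is just an example and not the only option.

import pyrealsense2 as rs
import numpy as np
import cv2

# Configure only the color stream (the mode is an example, adjust for your setup)
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        # Convert the frame to a numpy array that OpenCV can display
        color_image = np.asanyarray(color_frame.get_data())
        cv2.imshow("color", color_image)
        if cv2.waitKey(1) == ord('q'):
            break
finally:
    pipeline.stop()
    cv2.destroyAllWindows()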
I got the same issue. Normally I can read the RealSense camera with OpenCV, but today the frames are green, and after reading three of them the message is "VIDIOC_DQBUF: Resource temporarily unavailable".
Code:
import cv2
import numpy as np

# Create a VideoCapture object and read from input file
# If the input is the camera, pass 0 instead of the video file name
cap = cv2.VideoCapture(2)
print('hello')

# Check if camera opened successfully
if not cap.isOpened():
    print("Error opening video stream or file")

# Read until video is completed
while cap.isOpened():
    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret:
        print("can reading file")
        # Display the resulting frame
        cv2.imshow('Frame', frame)
        # Press Q on keyboard to exit
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break
    # Break the loop
    else:
        break

# When everything done, release the video capture object
cap.release()
# Closes all the frames
cv2.destroyAllWindows()
Terminal result:
hello
can reading file
can reading file
can reading file
select timeout
VIDIOC_DQBUF: Resource temporarily unavailable
I already tested the same code on other computers and it works there. I also tried different RealSense cameras, but got the same results. :(
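In case it helps anyone comparing setups: since the color sensor does not always show up at the same /dev/video index, a quick probe like the sketch below (the upper bound of 10 is arbitrary) shows which indices actually open and deliver a frame.

import cv2

# Probe the first few video device indices and report which ones deliver a frame.
# The upper bound of 10 is arbitrary; raise it if you have more /dev/video nodes.
for index in range(10):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        ret, _ = cap.read()
        print("index", index, "opened, frame read:", ret)
    else:
        print("index", index, "could not be opened")
    cap.release()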
Anyway, the SDK still works, and the code is below. But I really want to figure out the problem in OpenCV.
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

###############################################
##       Open CV and Numpy integration       ##
###############################################

import pyrealsense2 as rs
import numpy as np
import cv2

cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)

try:
    while True:
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        scaled_depth = cv2.convertScaleAbs(depth_image, alpha=0.08)
        depth_colormap = cv2.applyColorMap(scaled_depth, cv2.COLORMAP_JET)

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.imshow('RealSense', images)
        k = cv2.waitKey(1) & 0xFF
        if k == 27:
            break
finally:
    # Stop streaming
    pipeline.stop()
References: http://robotsforroboticists.com/realsense-usage-robotics-slam/
