Hi,
Like many others, I am noticing a high CPU load with my D435 RealSense device. Running the capture example program with the display sections of the code commented out (just running the pipe) takes 90% of my i5 CPU. I need to replicate my solution at a relatively significant volume, so the cost of the CPU is of some consideration.
Given that I am running at 1280x720 resolution and 30 fps, the data rate for the depth and color streams combined amounts to around 150 megabytes/second. One of my USB 3.0 machine vision cameras produces 5-megapixel frames at 30 fps, which amounts to the same data rate, yet the CPU load for that camera is only 20%. So there is certainly an issue with my D435.
I have the following questions:
1) Is there a solution in the works for significantly reducing the CPU load? If so, any heads-up as to when we should expect it would be appreciated.
2) In the absence of a "good" solution to reduce the CPU load, my backup plan is to avoid streaming the color image; my application is strictly 3D and I just need the depth data. Is it possible to configure the D435 so that it outputs ONLY the depth data? If so, how?
Thanks,
Mo
Which RealSense SDK version are you using, please? A bug fix for high CPU load was included in the recent 2.12.0 version.
https://github.com/IntelRealSense/librealsense/releases/tag/v2.12.0
Currently I have no other information available about reducing CPU load, unfortunately.
If you need a depth script, there is one on the front page of the Librealsense documentation. Scroll down to the section titled 'Ready to hack'.
https://github.com/IntelRealSense/librealsense
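For reference, the 'Ready to hack' example is roughly along these lines (paraphrased here, so check the README for the exact code). It starts a pipeline with the default configuration and reads the distance to the centre pixel of each depth frame:

#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    // Create a pipeline and start streaming with the default configuration
    rs2::pipeline p;
    p.start();

    while (true)
    {
        // Block until a coherent set of frames arrives
        rs2::frameset frames = p.wait_for_frames();

        // Get the depth frame and query the distance to the centre pixel
        rs2::depth_frame depth = frames.get_depth_frame();
        float dist = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);
        std::cout << "The camera is facing an object " << dist << " meters away\r";
    }
    return 0;
}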
Your prompt response is appreciated, as always.
I reviewed the documentation related to "Ready to hack" and noticed that the code snippet is just pulling in the depth range. I can understand that eliminating the readout of the color image can reduce the CPU load. However, this is only part of the problem. I am looking for some kind of "trick" that would tell the D435 not to even output the color image, to reduce the load on the driver that has to capture the data from the USB bus.
Mo
I think a large part of the CPU load is converting the native YUYV colour image to RGB, so not asking for a colour image should reduce the load.
To configure the pipeline to only get depth images, you can do something like:
#include <librealsense2/rs.hpp>

rs2::pipeline pipe;
rs2::config config;
config.enable_stream(RS2_STREAM_DEPTH);   // request only the depth stream
pipe.start(config);
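If you also want to pin the depth profile explicitly, enable_stream accepts a resolution, format and frame rate as well (the values below are just an example; use whatever your application needs):

config.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);   // width, height, format, fps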
I also was baffled by the consistently high CPU usage.
On studying the behaviour, I discovered that if the CPU has nothing else to do, RealSense will use up any available processing power.
My application uses a good chunk of 4 processors, yet adding the D435 camera didn't slow it down at all... but the CPU sticks pedal-to-the-metal at 100%.
It seems like Intel have the knowledge and smart compilers to use their own hardware efficiently. 100% CPU doesn't necessarily mean there's a problem; it can also mean that the program has no bottleneck.
I wish I could write a (non-trivial, multi-tasked) program that consistently used 100% CPU >;-)
