Cornelius Buerkle and Fabian Oboril are both research scientists at Intel Labs specializing in robotics, perception, and safety.
Highlights
- Novel 3D perception algorithms from Intel Labs can segment the ground surface and distinguish static from dynamic objects.
- These algorithms enable safe human-robot interaction for both stationary and mobile robots.
- The safety algorithm is available through the Intel Robotics SDK or packaged with RealSense cameras.
Intel Labs researchers have developed a new set of safety concepts for mobile and stationary robots that enhance the robot’s operating capabilities while ensuring the robot always maintains a safe state. While the use of smart and autonomous robots offers great promise, it can be challenging to ensure the safe operation of these systems and prevent human harm, especially when robots and humans share the same workspace in environments such as healthcare, manufacturing, and retail. Environment perception is a critical component for ensuring that every safety-relevant object is properly detected, allowing the robot to adapt its behavior accordingly.
This novel safety concept uses 3D distance sensors, such as RealSense depth cameras, to quickly create a reconfigurable 3D virtual safety zone for robust object detection. Artificial intelligence (AI) is not required, making these perception algorithms safety certifiable and executable on low-power embedded hardware. Working in close collaboration with RealSense, the team ensured that these novel algorithms will be supported by the upcoming next generation of RealSense cameras.
Figure 1. Intel Labs researchers create environments for a stationary robot (left) and a mobile robot (right). The scene is segmented into static and dynamic components.
With these algorithms, mobile robots can drive over ramps yet still stop in front of stationary obstacles lying on the ground. Demonstrated at the 2022 IEEE International Conference on Intelligent Transportation Systems (ITSC), this safety behavior is hardly achievable even with state-of-the-art 2D safety sensors. Furthermore, the safety field around a stationary cobot cell can be quickly reconfigured in a task- and situation-dependent way, which is impossible when light curtains or physical barriers are used to separate robots and humans, as presented at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) and at Embedded World 2025.
3D Safety for Autonomous Mobile Robots
Safety of mobile robots is usually ensured using 2D distance sensors such as LiDAR sensors, which use a single scan line to measure the distance to surrounding objects at a certain height above the ground. This is a low-cost, well-established approach, but it works well only if robots navigate on nearly flat surfaces. However, with the increasing capabilities of mobile robots, many industries want to deploy them in environments with non-flat surfaces, such as warehouses with ramps for loading and unloading trucks. With the traditional approach, the 2D safety sensors must be mounted high enough that ramps do not trigger false alerts about nearby obstacles. This comes at a cost: no obstacle below the mounting height will be detected. A similar problem occurs with approaches that filter 3D data by height.
Researchers at Intel Labs have developed a novel perception algorithm that overcomes this limitation by avoiding height-based filtering of distance data from 3D sensors. The algorithm segments the sensor data into three classes: data points that belong to the ground surface, which the robot can drive over; data points that belong to objects above the robot’s maximum height, such as a bridge; and data points that belong to potentially critical objects. All measurements belonging to potentially critical objects are then filtered and clustered to assess whether the robot can stop in front of these objects given its current speed and response time. If this safety requirement is met, the robot can move as intended; otherwise, the algorithm can enforce an emergency stop.
Figure 2. Top: The Intel Labs 3D perception algorithm can handle ramps and obstacles at the same time. Bottom: The processing steps of the Intel Labs safety approach for mobile robots.
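To make the stopping criterion concrete, here is a minimal, illustrative sketch of the final safety check: given the clustered critical points, verify that the robot can come to a full stop before reaching the nearest one. This is not Intel’s implementation; all parameters such as response time, deceleration, and margin are assumptions for illustration.

```python
# Minimal sketch (not Intel's implementation) of the stopping-distance check.
import numpy as np

def min_stopping_clearance(speed_mps: float,
                           response_time_s: float,
                           max_decel_mps2: float) -> float:
    """Distance travelled during the response time plus the braking distance."""
    return speed_mps * response_time_s + speed_mps ** 2 / (2.0 * max_decel_mps2)

def is_motion_safe(critical_points_xy: np.ndarray,
                   speed_mps: float,
                   response_time_s: float = 0.2,
                   max_decel_mps2: float = 1.5,
                   safety_margin_m: float = 0.3) -> bool:
    """Return True if the robot can stop before the closest critical cluster.

    critical_points_xy: Nx2 obstacle points in the robot frame (metres), i.e.
    the points left over after ground and overhead segmentation.
    """
    if critical_points_xy.size == 0:
        return True  # nothing critical in the field of view
    closest = np.min(np.linalg.norm(critical_points_xy, axis=1))
    required = min_stopping_clearance(speed_mps, response_time_s, max_decel_mps2)
    return closest > required + safety_margin_m

# Example: at 1.2 m/s, an obstacle 1.0 m ahead triggers an emergency stop.
obstacles = np.array([[1.0, 0.1]])
if not is_motion_safe(obstacles, speed_mps=1.2):
    print("emergency stop")
```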
A byproduct of this algorithm is the ability to perform ground removal on a given set of data points from a distance sensor, a very common task for mobile robots during path planning. A corresponding sample application is available in the Intel Robotics SDK for autonomous mobile robots (AMRs).
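The SDK sample’s internals are not reproduced here; as a rough stand-in, the sketch below performs ground removal with a single RANSAC plane fit using the open-source Open3D library. A single plane fit only handles flat ground and, unlike the Intel Labs algorithm, does not cope with ramps.

```python
# Generic ground removal via RANSAC plane fitting (stand-in, not the SDK sample).
import open3d as o3d

def remove_ground(pcd: o3d.geometry.PointCloud,
                  distance_threshold: float = 0.03) -> o3d.geometry.PointCloud:
    """Fit one dominant plane and return the point cloud without its inliers."""
    _, inliers = pcd.segment_plane(distance_threshold=distance_threshold,
                                   ransac_n=3,
                                   num_iterations=200)
    return pcd.select_by_index(inliers, invert=True)

# Usage with a depth-camera point cloud loaded from disk:
# pcd = o3d.io.read_point_cloud("scan.pcd")
# obstacles_only = remove_ground(pcd)
```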
3D Safety for Stationary Industrial Cobots and Robots
Stationary robots, another important class of robots, typically consist of robot arms with manipulators and are used in many industrial domains. Traditionally, safety of these robots is ensured through physical separation with fences or light curtains. However, in the dawning era of collaborative robots working jointly with humans, physical separation is no longer viable. Hence, new, more versatile safety approaches are needed.
Therefore, Intel Labs researchers created a novel perception approach optimized for stationary robots. The approach uses data from 3D distance sensors, such as RealSense depth cameras or 3D LiDAR sensors. Leveraging statistical sampling, the novel HistoDepth algorithm can robustly segment the working area of a robot into elements that are stationary and dynamic.
Figure 3. The HistoDepth algorithm provides a continuous update of background estimation and identification of dynamic measurements.
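The published details of HistoDepth are not reproduced here. As a simplified, assumption-based sketch of the general idea, the following uses a continuously updated per-pixel depth background model (a plain running average rather than the histogram-based statistical sampling the name suggests) to flag dynamic measurements.

```python
# Simplified per-pixel depth background model (illustrative, not HistoDepth itself).
import numpy as np

class DepthBackgroundModel:
    """Running per-pixel estimate of the static (background) depth in metres."""

    def __init__(self, learning_rate: float = 0.02, threshold_m: float = 0.10):
        self.background = None
        self.learning_rate = learning_rate
        self.threshold_m = threshold_m

    def update(self, depth_m: np.ndarray) -> np.ndarray:
        """Return a boolean mask of pixels considered dynamic in this frame."""
        valid = depth_m > 0.0  # zero often encodes "no measurement"
        if self.background is None:
            self.background = depth_m.astype(np.float64).copy()
            return np.zeros_like(depth_m, dtype=bool)
        deviation = np.abs(depth_m - self.background)
        dynamic = valid & (deviation > self.threshold_m)
        # Update the background only where the scene looks static, so moving
        # objects are not absorbed into the background estimate.
        static = valid & ~dynamic
        self.background[static] += self.learning_rate * (
            depth_m[static] - self.background[static])
        return dynamic

# Per frame: dynamic_mask = model.update(depth_frame_in_metres)
```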
Knowing their 3D positions allows the dynamic data points to be clustered and filtered. Whenever a dynamic object is detected near the robot, the robot’s operating speed can be reduced or the robot halted entirely. The challenge is to distinguish the robot’s own moving arm from other dynamic entities in the environment. By correlating information about the robot’s pose with the sensor data, this challenge can be robustly addressed.
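One simple way to realize such a correlation (an illustrative assumption, not necessarily the implemented mechanism) is to discard dynamic points that lie close to the arm’s links, whose 3D positions are known from the robot controller via forward kinematics:

```python
# Illustrative self-filtering of the robot arm from the dynamic points.
import numpy as np

def filter_robot_self_points(dynamic_points: np.ndarray,
                             link_positions: np.ndarray,
                             link_radius_m: float = 0.12) -> np.ndarray:
    """Keep only dynamic points farther than link_radius_m from every robot link.

    dynamic_points: Nx3 points classified as dynamic, in the same frame as
    link_positions (Mx3 centres of the arm's links from forward kinematics).
    """
    if dynamic_points.size == 0:
        return dynamic_points
    # Pairwise distances between each dynamic point and each link centre.
    dists = np.linalg.norm(
        dynamic_points[:, None, :] - link_positions[None, :, :], axis=2)
    keep = np.all(dists > link_radius_m, axis=1)
    return dynamic_points[keep]
```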
The algorithm provides self-adapting mechanisms to automatically adjust to changes in the stationary environment. Built-in continuous monitoring of sensor data quality also provides self-checking capabilities, ensuring that the system can identify degradation in the sensor data.
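As an illustration of such a self-check (an assumed mechanism, not necessarily the one implemented), one could track the fraction of invalid depth pixels per frame and report degradation when it stays above a configured bound for a full window of frames:

```python
# Illustrative sensor-quality self-check based on the ratio of invalid pixels.
from collections import deque
import numpy as np

class DepthQualityMonitor:
    def __init__(self, max_invalid_ratio: float = 0.25, window: int = 30):
        self.max_invalid_ratio = max_invalid_ratio
        self.history = deque(maxlen=window)

    def degraded(self, depth_m: np.ndarray) -> bool:
        invalid_ratio = float(np.mean(depth_m <= 0.0))
        self.history.append(invalid_ratio)
        # Report degradation only when the whole window is above the bound,
        # so single noisy frames do not trigger a safety reaction.
        return (len(self.history) == self.history.maxlen and
                all(r > self.max_invalid_ratio for r in self.history))
```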
Other Perception Algorithm Applications
This perception algorithm is not bound to a single sensor technology or to the stationary robot use case. With the same approach, elevator or train door safety could be improved by preventing doors from closing while objects are still within the door frame. In addition, traffic could be monitored to make urban intersections safer. Presented at the 2023 ITSC, this robust LiDAR-based traffic monitoring offers a smart way to manage the ever-growing number of road users, mitigate traffic congestion, and improve road safety.