Essential Sensors for Safe AMRs
The pivotal role they play in sensor fusion and machine learning for tomorrow’s material handling workflow.
Sensors for autonomous mobile robots (AMRs) are so much more than they used to be. No longer just for safely moving from A to B and stopping to avoid collisions, they now contribute to dynamic, collaborative workflows that predict and adjust to sitewide operational performance. Misplaced materials? Unplanned equipment failure? Unexpected bottlenecks or shortages? Today, in coordination with machine learning and system-wide intelligence, sensors enable new levels of perception and prediction for safe navigation and operational efficiency.
Sensors on an AMR monitor everything from equipment health to environmental conditions. Take localization, for example: the real-time determination of an AMR's position in space, over time, relative to nearby objects, using a reference map. Localization is a necessary first step to safely navigating a warehouse or manufacturing floor. Navigation then combines localization with the AMR's vector (speed and direction), motor control, route planning, and fleet management. But navigation is only as good as its localization algorithm and the sensors feeding that calculation.
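To make the hand-off from localization to navigation concrete, here is a toy sketch of a waypoint-following step: given the AMR's estimated pose and the next point on its planned route, it computes a speed and turn command. The function name, gains, and limits are illustrative only, not any particular AMR's control scheme.

```python
# A toy illustration of how a localization estimate feeds navigation: given the
# AMR's current pose and the next waypoint on a planned route, compute a simple
# speed/heading command. Gains and limits here are arbitrary example values.
import math

def drive_command(pose, waypoint, max_speed=1.0, turn_gain=1.5):
    """pose = (x, y, heading in rad); waypoint = (x, y). Returns (speed, turn rate)."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - pose[2]
    # Wrap the error to [-pi, pi] so the robot turns the short way around.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    speed = min(max_speed, distance)          # slow down as the waypoint nears
    turn_rate = turn_gain * heading_error     # steer toward the waypoint
    return speed, turn_rate

print(drive_command(pose=(0.0, 0.0, 0.0), waypoint=(2.0, 1.0)))
```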
Self-localization means the AMR uses its onboard sensors to determine its own position. For example, a 2D LiDAR can measure the contours of nearby objects and compare them against a locally stored reference map. Another method uses optical or magnetic sensors so the AMR can follow lines or routes marked on the floor.
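As a rough illustration of contour-based self-localization, the sketch below scores candidate poses near an initial guess by how many 2D LiDAR points land on occupied cells of a stored occupancy-grid map. The grid format, function names, and search ranges are assumptions made for the example, not a production scan matcher.

```python
# Minimal sketch of 2D LiDAR scan-to-map matching for self-localization.
# Assumes a pre-built occupancy grid (the "reference map") and a scan already
# converted to x/y points in the robot frame; names and ranges are illustrative.
import numpy as np

def score_pose(scan_xy, grid, resolution, pose):
    """Count how many scan points land on occupied map cells for a candidate pose."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    # Transform scan points from the robot frame into the map frame.
    world = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    cells = np.floor(world / resolution).astype(int)
    inside = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[1]) &
              (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[0]))
    cells = cells[inside]
    return grid[cells[:, 1], cells[:, 0]].sum()

def localize(scan_xy, grid, resolution, guess, search=0.2, step=0.05):
    """Brute-force search around an initial guess; return the best-scoring pose."""
    best_pose, best_score = guess, -1
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            for dth in np.radians(np.arange(-5, 6, 1)):
                pose = (guess[0] + dx, guess[1] + dy, guess[2] + dth)
                s = score_pose(scan_xy, grid, resolution, pose)
                if s > best_score:
                    best_pose, best_score = pose, s
    return best_pose
```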
External localization relies on infrastructure-based sensors, with no active elements installed on the asset itself. Examples include an overhead camera that tracks the position of a QR code, or a wireless triangulation method, such as Ultra-Wideband (UWB), that locates and tracks tags.
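Here is a hedged sketch of the wireless triangulation idea: given ranges from a tag to three or more fixed UWB anchors at known positions, a least-squares solve recovers the tag's position. The anchor coordinates and range values below are invented purely for illustration.

```python
# Sketch of UWB-style trilateration: estimate a tag's 2D position from measured
# ranges to fixed anchors. Anchor coordinates and ranges are made-up numbers.
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from >= 3 anchor positions (x, y) and measured ranges."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, d0 = anchors[0, 0], anchors[0, 1], ranges[0]
    # Subtracting the first anchor's range equation removes the quadratic terms,
    # leaving a linear system A @ [x, y] = b.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d0**2 - ranges[1:]**2
         + anchors[1:, 0]**2 - x0**2
         + anchors[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with three anchors at known facility positions (meters).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
ranges = [5.0, 8.06, 5.0]            # measured distances (m) from tag to each anchor
print(trilaterate(anchors, ranges))  # ~ (3.0, 4.0) for these made-up numbers
```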
For tomorrow’s more dynamic and collaborative workflows, a hybrid combination of onboard and external sensors and technologies is increasingly used. This is particularly valuable where extra safety and performance are required, such as when collaborating with humans or accommodating frequent workflow changes. And this is where sensor fusion and machine learning can take flight and really shine.
But first, let’s examine some of the sensor technologies contributing to localization. LiDAR stands for “light detection and ranging.” This method of environment perception is based on measurement data from laser scanners that detect points in the surroundings. Localization software then compares the measured points to contours within a map to achieve <10 mm accuracy*. To refine this estimate, wheel odometry and Inertial Measurement Unit (IMU) data can be used to cross-check the LiDAR readings and further improve positional accuracy. Additionally, high-resolution digital LiDAR scanning techniques such as safe HDDM® improve performance 4x in bright and dusty environments.
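To show the cross-checking idea in its simplest possible form, here is a 1-D Kalman-style fusion step that blends an odometry-based prediction with a LiDAR position fix, weighted by how confident each source is. Real AMR stacks fuse full 2D or 3D poses; the variances and measurements here are made-up example numbers.

```python
# A minimal, illustrative 1-D Kalman-style fusion of an odometry prediction and a
# LiDAR position fix. The variances and measurements below are invented purely to
# show the cross-checking idea, not tuned values from a real sensor.
def fuse_position(x_prev, var_prev, odom_delta, odom_var, lidar_pos, lidar_var):
    # Predict: dead-reckon forward using wheel odometry (uncertainty grows).
    x_pred = x_prev + odom_delta
    var_pred = var_prev + odom_var

    # Update: blend in the LiDAR map-matching fix, weighted by relative confidence.
    gain = var_pred / (var_pred + lidar_var)
    x_new = x_pred + gain * (lidar_pos - x_pred)
    var_new = (1.0 - gain) * var_pred
    return x_new, var_new

# One fusion step: odometry says we moved 0.50 m; LiDAR says we are at 10.46 m.
x, var = fuse_position(x_prev=10.0, var_prev=0.01,
                       odom_delta=0.50, odom_var=0.02,
                       lidar_pos=10.46, lidar_var=0.005)
print(round(x, 3), round(var, 4))  # estimate pulled toward the more certain LiDAR fix
```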
Here’s an attribute comparison of Ultrasonic, Radar, and LiDAR sensors – commonly used in autonomous vehicle applications:
Another powerful sensor technology is the vision camera. Examples include 2D methods, such as RGB and infrared, and 3D methods, such as stereoscopic, structured light, and time-of-flight (ToF). RGB sensors capture the visible wavelength spectrum in separate red, green, and blue measurements. Infrared sensors capture longer, non-visible infrared wavelengths and are useful in low-light conditions and for detecting heat signatures. Stereoscopic cameras, by contrast, measure distance to objects by triangulating depth across two separate cameras. Structured light cameras similarly determine depth by projecting a pattern onto the object and measuring its deformation. And ToF cameras determine depth by measuring the time it takes for emitted infrared light to reflect off an object and return to the sensor.
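For a sense of the math behind two of those 3D methods, the snippet below applies the standard stereo relationship (depth = focal length × baseline ÷ disparity) and the time-of-flight relationship (depth = speed of light × round-trip time ÷ 2). The focal length, baseline, disparity, and timing values are example figures only.

```python
# Illustrative depth formulas behind two of the 3D camera methods mentioned above.
# Focal length, baseline, disparity, and round-trip time are example values only.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereoscopic: depth from the pixel shift of a feature between two cameras."""
    return focal_px * baseline_m / disparity_px

def tof_depth(round_trip_s):
    """Time-of-flight: depth from how long emitted light takes to return."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

print(stereo_depth(focal_px=700, baseline_m=0.12, disparity_px=35))  # 2.4 m
print(tof_depth(round_trip_s=20e-9))                                 # ~3.0 m
```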
3D depth data from LiDAR and vision cameras allows highly detailed 3D point clouds to be created, unlocking the tremendous potential of machine learning. And with advancements in edge computing, localization algorithms and classification inference can run on the LiDAR or camera sensor itself, reducing latency, improving performance, and simplifying deployment. For example, objects can be detected and identified in just milliseconds for enhanced perception and navigation, allowing more complex workflows and higher speeds.
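As a simplified illustration of how such a point cloud is formed, the sketch below back-projects a depth image through a pinhole camera model to produce XYZ points ready for downstream machine-learning classification. The intrinsics and the randomly generated depth frame are placeholders; a real sensor would supply calibrated values and measured depths.

```python
# Sketch of how a depth image becomes a 3D point cloud that a machine-learning
# model can classify. The camera intrinsics and the random depth image are
# placeholders for values a calibrated sensor would provide.
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project each pixel's depth through a pinhole camera model."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)  # (h*w, 3) XYZ points

depth = np.random.uniform(0.5, 4.0, size=(480, 640))    # fake depth frame, meters
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3) -- ready for downsampling and classification
```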
What if we combined this perception and classification data with infrastructure sensor data and warehouse management systems? This is where the magic really begins. New levels of safety, performance, and operational visibility, built on sensor fusion and machine learning, are now in development in coordination with international safety standards.
*depending on the sensor combination and environment
To find out more about MHI’s MAG Industry Group: https://www.mhi.org/mag
For further articles/podcasts from MAG:
Integrating Mobile Robots Into Your Operations
Building Sustainability Through Mobile Automation
Podcast: Energizing Mobile Automation
Top Misconceptions Of Mobile Automation
Podcast: MAG – How To Get Started With Mobile Automation
Podcast: Sensors Revolutionizing Automated Material Movement: Efficiency And Safety Enhanced
Powering Tomorrow’s Mobile Automation