Building the next generation of indoor autonomous navigation.

By the idealworks Development Team

At idealworks, we’re building the next generation of AMRs from the ground up — combining hardware, software, and simulation to get industrial goods from A to B. Born out of the complex environment of BMW Group’s worldwide factory ecosystem, our custom stack of hardware and software is designed to meet the most demanding criteria for safety, performance, flexibility, and reliability. At BMW Group, we began the first deployment of fully autonomous AMRs performing production-critical functions. To do so, our pipeline, from software development all the way through to deployment, makes use of cutting-edge software and market-leading hardware systems.


Our pipeline for developing the idealworks technology stack starts early on in simulation. A simulation-first approach allows us to test and validate all aspects of our technology development in a way that is iterative and time-efficient. Over the last two years, we’ve focused on developing a simulation pipeline that supports and enhances our hardware development cycles, giving our teams the ability to test and refine both our hardware and software stacks simultaneously.

Much like in autonomous driving, we use digital simulations to build representative scenarios our robots may encounter, and to experiment with edge cases to solve complex problems in real time. By building realistic digital environments and leveraging the latest in game engine technology, we create fully immersive worlds for our robots to navigate and learn within. These simulations are used to test perception models, localization performance, sensor input, and a range of other features crucial to optimizing the performance of AMRs in highly complex and dynamic industrial environments. To hear more about how we leverage the latest simulation technology, check out a talk from our developers at last year’s GTC conference.

An example of a simulated environment, a playground for autonomous AMRs to play in.

By building a custom interface between the software powering our autonomous systems, our hardware, and our simulation, our developers are able to close the loop on development in simulation, pushing commands to and receiving feedback from real robots, simulated robots, or both at the same time. This fully integrated approach to software development allows us to validate changes to our core software against both real robots operating in our test environment and hundreds or thousands of robots in our digitally simulated environments.
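As a rough illustration of the idea, the sketch below shows one way such a closed loop could be structured in Python: a common backend interface that both a real robot driver and a simulated robot can implement, so the same command and feedback paths are exercised against either. The names (RobotBackend, SimulatedRobot, send_command, read_feedback) and the toy motion model are purely illustrative assumptions, not our actual interface.

```python
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class VelocityCommand:
    linear: float   # forward speed in m/s
    angular: float  # turn rate in rad/s


@dataclass
class RobotFeedback:
    x: float        # position in the map frame, meters
    y: float
    heading: float  # orientation, radians


class RobotBackend(ABC):
    """Common interface so the same control code can drive a real or a simulated robot."""

    @abstractmethod
    def send_command(self, cmd: VelocityCommand) -> None: ...

    @abstractmethod
    def read_feedback(self) -> RobotFeedback: ...


class SimulatedRobot(RobotBackend):
    """Toy unicycle model standing in for a robot in the simulated world."""

    def __init__(self, dt: float = 0.1) -> None:
        self.dt = dt
        self.state = RobotFeedback(0.0, 0.0, 0.0)

    def send_command(self, cmd: VelocityCommand) -> None:
        # Integrate the command over one time step.
        self.state.x += cmd.linear * self.dt * math.cos(self.state.heading)
        self.state.y += cmd.linear * self.dt * math.sin(self.state.heading)
        self.state.heading += cmd.angular * self.dt

    def read_feedback(self) -> RobotFeedback:
        return self.state


def run_step(backends, cmd: VelocityCommand):
    """Push the same command to every backend (real, simulated, or a mix) and collect feedback."""
    for backend in backends:
        backend.send_command(cmd)
    return [backend.read_feedback() for backend in backends]


# Example: drive a small fleet of simulated robots forward while turning slightly.
fleet = [SimulatedRobot() for _ in range(3)]
print(run_step(fleet, VelocityCommand(linear=0.5, angular=0.1)))
```

A real-robot backend would implement the same two methods on top of the vehicle’s drivers, which is what lets a single test run mix physical and simulated robots.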

In both the digital and the physical world, we leverage NVIDIA’s Isaac SDK for software development and deployment, helping us build code optimized to run efficiently on NVIDIA hardware. Partnering with NVIDIA has allowed us to streamline our development approach, with both our simulation and our on-robot software stacks running on NVIDIA hardware and the NVIDIA Isaac SDK as our robotics middleware. During the 2019 GTC keynote, NVIDIA CEO Jensen Huang commented on our integrated development approach:

“Developed on the NVIDIA Isaac SDK, the robots utilize a number of powerful deep neural networks, addressing perception, segmentation, pose estimation and human pose estimation to perceive their environment, detect objects, navigate autonomously and move objects. These robots are trained both on real and synthetic data using NVIDIA GPUs to render ray-traced machine parts in a variety of lighting and occlusion conditions to augment real data.”

A major advantage of integrating simulation into our development pipeline early on was the ability to home in on the optimal hardware for industrial indoor environments. Although we share many technologies with autonomous outdoor navigation, AMRs operating in factories, warehouses, and assembly lines face their own set of unique and complex challenges. To achieve a robust solution and meet the stringent demands of production-dependent deliveries, we focused on a multifaceted hardware solution combining lidar and RGBD camera input.

Two diagonally opposed lidars form the main sensor system for our iw.hubs. SICK Microscan3 Pro lidars are used to build accurate and detailed maps of the surrounding environment, enabling precise measurements of distance and object size all around the robots from up to 60 meters away. Lidar also helps our robots navigate challenging environments with low light or excessive glare. Together, these two lidars give our robots a reliable and accurate spatial understanding and assist in complex maneuvers such as docking and charging. Furthermore, to operate in live production areas we must meet stringent safety standards — the SICK lidars help us do so by creating a 360-degree safety field that is independent of the robot’s core navigation stack. Custom fields are designed into the robot’s operation, ensuring that objects and associates entering the field are protected and avoided.
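The certified protective fields themselves are configured and enforced on the SICK scanners, independent of our software; the sketch below only illustrates the geometry of how two diagonally opposed scanners, each covering a wide arc, combine into full 360-degree coverage around the robot. The mounting poses, field dimensions, and example scans are made-up values for illustration.

```python
import math

# Hypothetical mounting poses for two diagonally opposed scanners (x, y, yaw in the robot frame).
FRONT_LEFT = (0.4, 0.3, math.radians(45))
REAR_RIGHT = (-0.4, -0.3, math.radians(225))


def scan_to_robot_frame(ranges, angle_min, angle_step, mount):
    """Convert one scanner's polar returns into Cartesian points in the robot frame."""
    mx, my, myaw = mount
    points = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue  # no return in this beam
        beam = angle_min + i * angle_step
        points.append((mx + r * math.cos(myaw + beam),
                       my + r * math.sin(myaw + beam)))
    return points


def protective_field_violated(points, half_length=1.0, half_width=0.6):
    """True if any merged return falls inside a rectangular field around the robot."""
    return any(abs(x) < half_length and abs(y) < half_width for x, y in points)


# Example: merge two (shortened) scans and check the field.
scan_a = scan_to_robot_frame([2.0, 1.5, 0.4], -math.radians(135), math.radians(135), FRONT_LEFT)
scan_b = scan_to_robot_frame([3.0, float("inf"), 2.2], -math.radians(135), math.radians(135), REAR_RIGHT)
print(protective_field_violated(scan_a + scan_b))
```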

Complementing our lidar system, we use a single forward-facing LIPS AE400 stereo camera to help our robots understand their surroundings and detect obstacles. Powered by Intel RealSense technology, the IP67-rated LIPS AE400 captures both standard RGB images and depth data, giving a more detailed view of surrounding objects and the environment. Using a combination of RGB and depth data, the robots can detect unique assets, such as dollies and charging stations, and skillfully maneuver into place to perform their missions.
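As a simple illustration of why the depth channel matters here: once a detector has located an asset in the RGB image, a single depth reading lets the robot turn that 2D detection into a 3D position in the camera frame via the standard pinhole model. The intrinsics below are placeholder numbers; a calibrated RealSense-based camera provides its own values.

```python
def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a metric depth reading into a 3D point
    in the camera frame using the pinhole camera model.
    fx, fy, cx, cy are the focal lengths and principal point in pixels."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m


# Example: the centre pixel of a detected dolly at 2.3 m, with illustrative intrinsics.
print(pixel_to_point(640, 360, 2.3, fx=615.0, fy=615.0, cx=640.0, cy=360.0))
```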

These different sensors are designed to work together, complementing each other to build a cohesive and robust vision system. More than that, though, the sensors on the iw.hub provide a platform to improve performance through edge AI and computer vision. Using the NVIDIA Isaac SDK, inputs from the onboard sensors are fed through our vision and navigation stack, giving the vehicle the ability to understand its surroundings and make decisions autonomously. Taking input from the LIPS camera, the iw.hub uses an algorithm based on autoencoders and a convolutional neural network (CNN) to interpret 2D images as 3D pose estimates, helping it to identify obstacles and understand their intent. What does this mean for our users? The iw.hub can recognize the obstacles, vehicles, and pedestrians it may encounter throughout its work, identify what they are (a forklift, a bicycle), and predict how they might move in relation to its own path. When asked about the role of these systems in developing AMRs, Jensen Huang said:
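To make the shape of such a pipeline concrete, here is a deliberately simplified PyTorch sketch: a small convolutional encoder followed by a regression head that maps an RGB frame to a 6-DoF pose (three translation and three rotation parameters). It omits the autoencoder stage described above and is not the network running on the iw.hub; the layer sizes and pose parameterization are illustrative assumptions only.

```python
import torch
import torch.nn as nn


class PoseNet(nn.Module):
    """Minimal sketch of a CNN that regresses a 6-DoF pose from a single RGB image."""

    def __init__(self) -> None:
        super().__init__()
        self.encoder = nn.Sequential(                  # convolutional feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # global pooling -> fixed-size embedding
        )
        self.pose_head = nn.Sequential(                # regression head
            nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 6),                          # 3 translation + 3 rotation parameters
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.pose_head(self.encoder(image))


# One forward pass on a dummy 224x224 RGB frame.
model = PoseNet()
pose = model(torch.randn(1, 3, 224, 224))
print(pose.shape)  # torch.Size([1, 6])
```

In practice, a network like this would be trained on the mix of real and synthetic, ray-traced imagery described in the keynote quote above, and its pose estimates would feed downstream prediction and planning.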

“BMW Group’s use of NVIDIA’s Isaac robotics platform to reimagine their factory is revolutionary. BMW Group is leading the way to the era of AI factories, harnessing breakthroughs in AI and robotics technologies to create the next level of highly customizable, just-in-time, just-in-sequence manufacturing.”

Together, these complex systems of multi-sensor input, CNNs, and AI algorithms culminate in a more intelligent, safe, and aware AMR — contributing to a seamless integration into the industrial environment and complex logistics workflows.
