How Starship Delivery Robots know where they are going

(plus how to make your very own 1:8 scale papercraft robot model)

By Joan Lääne, Mapping Specialist, Starship Technologies

Every September, when the new school year starts, many first-graders are a little afraid of the unknown: not only starting school and meeting new people, but also the journey they need to make each day. They must learn and remember how to navigate the world and find their way to and from the classroom by themselves. A parent can make this easier by accompanying the child on the first few trips back and forth to get them familiar with the path, usually pointing out interesting landmarks along the way, such as tall or bright buildings or signs on the pathway. Eventually it becomes trivial for the child to go to school and remember the way. The child will have formed a mental map of the world and how to navigate it.

Starship Technologies provides a convenient last-mile delivery service with fleets of sidewalk delivery robots navigating the world each day. Our robots have completed over 100,000 deliveries. To get from point A to point B, a robot needs to plan a route ahead of time, which in turn requires some sort of map. Even though there are already many publicly available mapping systems such as Google Maps and OpenStreetMap, they are designed with car navigation in mind and mostly focus on mapping roads. Since our delivery robots travel on sidewalks, they need an accurate map of where it is safe to drive and where to cross streets, just as a child needs a mental map of how to get to school safely and on time every day. So how is this map generated?

The first step in creating a map for the delivery robots is scouting the area of interest and generating a preliminary 2D map on top of satellite imagery, in the form of simple interconnected lines representing sidewalks (green), crossings (red), and driveways (purple), as illustrated in the image below.
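
To make the idea concrete, here is a minimal sketch, in Python, of how such a preliminary 2D map could be represented. The class names, coordinates, and lengths are invented for illustration and are not Starship's actual data model; the point is simply that each drawn line becomes a typed edge between two nodes placed on the satellite imagery.

```python
from dataclasses import dataclass
from enum import Enum


class PathType(Enum):
    SIDEWALK = "sidewalk"   # drawn in green
    CROSSING = "crossing"   # drawn in red
    DRIVEWAY = "driveway"   # drawn in purple


@dataclass(frozen=True)
class Node:
    """A point of the preliminary map, placed on top of the satellite imagery."""
    id: int
    lat: float
    lon: float


@dataclass(frozen=True)
class Edge:
    """A line segment connecting two nodes, labeled with what it represents."""
    start: int          # Node id
    end: int            # Node id
    path_type: PathType
    length_m: float     # measured length of the segment in meters


# A toy map: two stretches of sidewalk joined by a street crossing.
nodes = [Node(0, 37.7749, -122.4194), Node(1, 37.7751, -122.4194),
         Node(2, 37.7751, -122.4190), Node(3, 37.7753, -122.4190)]
edges = [Edge(0, 1, PathType.SIDEWALK, 22.0),
         Edge(1, 2, PathType.CROSSING, 35.0),
         Edge(2, 3, PathType.SIDEWALK, 22.0)]
```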

The system treats this map as a node graph that can be used to generate a route from point A to point B. It can identify the shortest and safest path for the robot to take, and also calculate the distance and time it would take to drive that route. The advantage of this process is that it can all be done remotely, before any robots physically arrive on site.
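
As a hedged sketch of the routing step (a standard shortest-path search, not necessarily the planner Starship actually uses), the example below runs Dijkstra's algorithm over a toy sidewalk graph. The crossing penalties and driving speed are invented for the example; they simply show how the "cheapest" route can balance distance against safety while still yielding a distance and time estimate.

```python
import heapq

# Adjacency list: node -> list of (neighbor, length_m, path_type).
GRAPH = {
    "A": [("B", 40.0, "sidewalk")],
    "B": [("A", 40.0, "sidewalk"), ("C", 12.0, "crossing"), ("D", 90.0, "sidewalk")],
    "C": [("B", 12.0, "crossing"), ("E", 35.0, "sidewalk")],
    "D": [("B", 90.0, "sidewalk"), ("E", 30.0, "crossing")],
    "E": [("C", 35.0, "sidewalk"), ("D", 30.0, "crossing")],
}

# Illustrative cost multipliers: crossings cost "more" than their length alone,
# so the planner avoids them unless they save a lot of distance.
PENALTY = {"sidewalk": 1.0, "crossing": 3.0, "driveway": 1.5}
ROBOT_SPEED_MPS = 1.5  # assumed average driving speed, meters per second


def plan_route(graph, start, goal):
    """Dijkstra over the sidewalk graph; returns (path, distance_m, eta_s)."""
    queue = [(0.0, 0.0, start, [start])]  # (cost, real distance, node, path so far)
    best_cost = {start: 0.0}
    while queue:
        cost, dist, node, path = heapq.heappop(queue)
        if node == goal:
            return path, dist, dist / ROBOT_SPEED_MPS
        for nbr, length, kind in graph[node]:
            new_cost = cost + length * PENALTY[kind]
            if new_cost < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_cost
                heapq.heappush(queue, (new_cost, dist + length, nbr, path + [nbr]))
    return None, float("inf"), float("inf")


path, distance_m, eta_s = plan_route(GRAPH, "A", "E")
print(path, f"{distance_m:.0f} m", f"{eta_s / 60:.1f} min")
```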

The next step involves showing the robots what the world looks like. As in the parent-and-child analogy, the robots need a little bit of hand-holding the first time they explore an area. When the robot first drives, the cameras and a multitude of sensors on the robot collect data about the world around it. These include thousands of lines that come from detecting the edges of different features, for example buildings, streetlight poles, and rooftops. The server can then build a 3D world map offline from these lines, which the robot can then use. Like the child, the robot now has a model of the world with guide posts, and it can understand where it is at any given time.
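
To give a feel for how guide posts let a robot work out where it is, here is a deliberately simplified sketch. The real system matches 3D line features; this toy uses 2D point landmarks and a brute-force pose search, so treat it as an illustration of the principle rather than the actual algorithm.

```python
import math

# Landmark positions in the map frame (e.g. pole bases, building corners).
MAP_LANDMARKS = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0), (5.0, 4.0)]

# The same landmarks as the robot currently sees them, in its own body frame.
# These were generated for a true pose of x=3 m, y=2 m, heading=30 degrees.
OBSERVED = [(-3.598, -0.232), (5.062, -5.232), (9.062, 1.696),
            (0.402, 6.696), (2.732, 0.732)]


def transform(point, pose):
    """Move a point from the robot frame into the map frame for a candidate pose."""
    x, y, theta = pose
    px, py = point
    return (x + px * math.cos(theta) - py * math.sin(theta),
            y + px * math.sin(theta) + py * math.cos(theta))


def fit_error(pose):
    """How badly the observations match the map if the robot were at `pose`."""
    total = 0.0
    for obs in OBSERVED:
        wx, wy = transform(obs, pose)
        total += min(math.hypot(wx - lx, wy - ly) for lx, ly in MAP_LANDMARKS)
    return total


def localize():
    """Brute-force search over candidate poses; the real system is far smarter."""
    best_pose, best_err = None, float("inf")
    for xi in range(31):                  # x from 0.0 to 6.0 m in 0.2 m steps
        for yi in range(21):              # y from 0.0 to 4.0 m in 0.2 m steps
            for deg in range(0, 360, 5):  # heading in 5 degree steps
                pose = (xi * 0.2, yi * 0.2, math.radians(deg))
                err = fit_error(pose)
                if err < best_err:
                    best_pose, best_err = pose, err
    return best_pose


print(localize())  # prints a pose close to the true (3.0, 2.0, 0.524 rad)
```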

Since our robots need to cover different areas at the same time to complete all their deliveries, the maps from individual trips need to be put together into one unified 3D map of a given area. The unified map is created piece by piece, processing the different parts of the new area until eventually the map looks like a huge completed jigsaw puzzle. The server puts this map together based on the line data the robots collected earlier. For example, if the same rooftop was detected by two robots, the software figures out how the two pieces connect with the rest of the map. Every colored line in the image below represents a single mapping trip added to the map.
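
As a toy illustration of the jigsaw idea (the real pipeline presumably solves a much larger optimization problem over the collected line data), the sketch below fuses landmark detections from several mapping trips: detections that land close enough together, such as the same rooftop corner seen by two robots, are merged into a single feature of the unified map. The coordinates and merge radius are invented.

```python
import math


def merge_trips(trips, merge_radius_m=1.0):
    """Greedily fuse landmark detections from many trips into one unified list.

    `trips` is a list of trips; each trip is a list of (x, y) landmark detections
    already expressed in a shared global frame. Detections within `merge_radius_m`
    of an existing unified landmark are averaged into it.
    """
    unified = []  # each entry is [sum_x, sum_y, count]
    for trip in trips:
        for (x, y) in trip:
            for entry in unified:
                cx, cy = entry[0] / entry[2], entry[1] / entry[2]
                if math.hypot(x - cx, y - cy) <= merge_radius_m:
                    entry[0] += x
                    entry[1] += y
                    entry[2] += 1
                    break
            else:
                unified.append([x, y, 1])
    return [(sx / n, sy / n) for sx, sy, n in unified]


# Two trips that both saw the same rooftop corner near (12, 5), plus some
# features only one of them saw.
trip_a = [(12.1, 5.0), (3.0, 7.5)]
trip_b = [(11.9, 5.1), (20.4, 2.2)]
print(merge_trips([trip_a, trip_b]))
# -> three unified landmarks: the shared corner plus the two unique ones
```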

The final step of the mapping process, before the robots can drive fully autonomously, is to calculate exactly where the sidewalk is and how wide it is. This is done by processing the camera images the robot recorded while exploring the area, and by incorporating the previously created 2D map based on the satellite imagery.

During this process, more detail is added to the map to accurately define the safe zones where the robots can drive.
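
Here is a small hedged sketch of one way to picture the resulting safe zone (a simplification of whatever richer geometry the real map stores): the sidewalk is modeled as a centerline with a measured width, and a position counts as safe if it lies within half that width of the centerline.

```python
import math


def point_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment and clamp to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)


def in_safe_zone(position, centerline, width_m):
    """True if `position` lies within the sidewalk corridor.

    `centerline` is the polyline of the sidewalk center, `width_m` the measured
    sidewalk width at this stretch (kept constant here for simplicity).
    """
    half_width = width_m / 2.0
    return any(point_segment_distance(position, a, b) <= half_width
               for a, b in zip(centerline, centerline[1:]))


sidewalk = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0)]  # an L-shaped stretch, meters
print(in_safe_zone((5.0, 0.6), sidewalk, width_m=2.0))  # True: on the pavement
print(in_safe_zone((5.0, 2.5), sidewalk, width_m=2.0))  # False: off to the side
```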

Of course, the world around us is not static. There are daily and seasonal changes in the landscape, as well as construction and renovation, which change the way the world looks. How might this affect the mapped areas for the robots? In practice, the robot's software handles small to medium changes in a mapped area quite well. The 3D models are robust and filled with such vast quantities of data that a tree cut down here or a building torn down there typically does not pose a challenge to the robot's ability to localize itself or use the map. In addition, as the robots drive around each day they continue to gather data that is used to update the 3D maps over time. But if an area is completely reshaped, or new sidewalks are built, the solution is simple: the map is updated using new data gathered by a robot, and afterwards other robots can drive autonomously in the same area again as if nothing had happened. Keeping maps up to date is crucial to keeping the robots driving safely and autonomously.
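
One common way to keep a feature map fresh, shown here as a hedged sketch rather than a description of Starship's actual update logic, is to track how consistently each landmark is re-observed: features that keep being seen gain confidence, while features that should have been visible but were not (say, a felled tree) lose confidence and are eventually dropped from the map. The scores and thresholds below are made up for the example.

```python
def update_landmark_scores(scores, expected_ids, observed_ids,
                           hit_bonus=1, miss_penalty=1, drop_below=-3):
    """Update per-landmark confidence scores after one drive through an area.

    `scores` maps landmark id -> integer confidence. `expected_ids` are the map
    landmarks the robot should have seen on this drive; `observed_ids` are the
    ones it actually matched. Newly seen features start at zero confidence.
    """
    for lid in observed_ids:
        scores[lid] = scores.get(lid, 0) + hit_bonus
    for lid in expected_ids - observed_ids:
        scores[lid] = scores.get(lid, 0) - miss_penalty
    # Drop landmarks that have gone unseen too many times (e.g. a felled tree).
    return {lid: s for lid, s in scores.items() if s > drop_below}


scores = {"tree_17": 2, "rooftop_4": 5}
for _ in range(6):  # six drives where the tree is never seen again
    scores = update_landmark_scores(scores, expected_ids=set(scores),
                                    observed_ids={"rooftop_4"})
print(scores)  # the tree's score has dropped below the threshold and it is gone
```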

As you can no doubt tell by now, I really enjoy playing around with the concepts of three-dimensional space. Ever since I played the first 3D first-person shooter computer game (Wolfenstein 3D), the world of 3D in the digital domain has been an interest of mine. I wanted to make my own 3D worlds for computer games, so I found ways to edit existing game levels. Later, I also tried my hand at 3D computer modelling, which I found interesting. With the popularization and affordability of 3D printers, I started physically printing models too. But long before that, during school summer breaks, I loved making papercraft models of different buildings and vehicles. It was an easy and cheap way to create something with my own hands, and it was fascinating to see how a 2D layout on a piece of paper, with a little cutting, folding, and gluing, can turn into a 3D model. Basically, creating the papercraft of a 3D object, or “unfolding” it, is in a sense the reverse of mapping: it is creating the 2D layout of the surface of a 3D object.

Since I have a passion for papercraft, I decided to create a model of our Starship delivery robot. The goal of making this model is to enable others who enjoy the same things I do to create their own version of our delivery robot. Creating a paper model is a fun challenge, and once done it makes for a nice decorative item too. As with generating 3D maps for the robots, making a papercraft model requires precision, accuracy, and spatial thinking about how all the parts fit together. It also takes a good bit of patience.

I have created some instructions for you to build your own papercraft delivery robot, and I’d love to see your efforts. Have fun and good luck making your own delivery robot paper model!

Please post a picture of your robot on Instagram and tag @StarshipRobots so I can find them!

Please find the Starship delivery robot papercraft model and instructions here.

© Starship Technologies. The design of the Starship® delivery robot and aspects of the technologies described are proprietary and protected by copyright and other intellectual property laws.
