The Future of Map-Making is Open and Powered by Sensors and AI

This article investigates the future of map-making and the role of sensors, artificial intelligence, and machine learning within it.

By Philipp Kandal, Telenav.

The tools of digital map-making today look nothing like those we had even a decade ago. Driven by grassroots energy and passion combined with technological innovation, we have seen a rapid evolution marked by three inflection points: the dawn of consumer GPS, the availability of high-resolution aerial imagery at scale, and, lastly, the shift to large-scale, AI-powered map-making tools in which we find ourselves today.

Automatically detecting salient features from open street-level imagery could accelerate map-making by a factor of 10

For OpenStreetMap (OSM), the availability of affordable and accurate consumer GPS devices was a key enabler in 2004, when Steve Coast and an emergent community of trailblazers (literally!) biked around, captured GPS traces, and created the first version of the map using rudimentary tooling.

The wide availability of high-resolution, up-to-date aerial and satellite imagery became the next map-making game changer around 2009-2010. It empowered people worldwide to contribute to the map, not just in places they knew, but anywhere in the world where imagery was available. This led to the rapid growth of mappers worldwide and the further expansion of a global map, aiding notable humanitarian support efforts, such as the enormous mapping response immediately following the 2010 earthquake in Haiti.

Fast forward to today, and we find ourselves in the midst of yet another massive change in map-making, this time fueled by the ubiquity of sensors, artificial intelligence (AI), and machine learning (ML). The three-pronged combination of mature software frameworks, a thriving developer and research community, and commoditized GPU-based hardware enables an unprecedented wave of AI-powered technology for consumers and businesses alike.

It did not take long for the map-making community to harness this power and begin applying it to ortho- and street-level imagery to automatically generate observed changes to the map. Placed in the hands of the human mapping community, these detections will, without a doubt, reduce the effort to create and enhance maps by a factor of 10.

At Telenav, we jumped on this trend early, building and growing OpenStreetCam, as have others with a stake in OSM, such as Facebook.

An important element, however, has been holding back a more rapid adoption and perfection of machine learning-based map generation: the lack of openness in the space. For various reasons, both data and software have largely been kept in silos and have not been open to contributions by the community. In our view, creating an open ecosystem around new map-making technology is vital – openness and creativity are what made OSM a success in the first place, because mappers could capture what they deeply cared about.

We are convinced that an open ecosystem around machine learning for map-making is the only way to ensure that this technology can be embraced and appropriated by the community. To that end, Telenav is opening up three key components of the OpenStreetCam stack:

  • A training set of images. We have invested more than five man-years in creating a training set of street-level images focused on common road signs. The set consists of well over 50,000 images, which will be available to anyone under a CC-BY-SA license. We will continue to manually annotate images to double this set by the end of 2018, by which time it will be the largest set of truly open images.
  • Core machine-learning technology. Currently, our stack detects more than 20 different types of signs and traffic lights. We will continue to develop the system to add features important to navigation and driving use cases, such as road markings, including lanes.
  • Detection results. Lastly, we will release all results from running the stack on the more than 140 million street-level images already in OpenStreetCam to the OSM community as a layer to enhance the map; a sketch of how such a layer might be consumed follows below.
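
To make that last item concrete, here is a minimal Python sketch of how a mapper's tooling might filter such detection results into a review layer. The CSV layout, column names, and confidence threshold are illustrative assumptions for this sketch, not the actual Telenav.AI output format; see the repository for the real pipeline.

    # Hypothetical sketch: turning raw sign detections into a simple map layer.
    # The CSV layout, column names, and threshold are illustrative assumptions,
    # not the actual Telenav.AI output format.
    import csv
    from collections import defaultdict

    CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; tune against validation data

    def load_detections(path):
        """Read detections from a CSV with assumed columns:
        image_id, sign_type, confidence, lat, lon."""
        detections = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["confidence"] = float(row["confidence"])
                row["lat"], row["lon"] = float(row["lat"]), float(row["lon"])
                detections.append(row)
        return detections

    def group_confident_signs(detections):
        """Keep only high-confidence detections and group them by sign type,
        the shape a map-editor review layer might consume."""
        layer = defaultdict(list)
        for d in detections:
            if d["confidence"] >= CONFIDENCE_THRESHOLD:
                layer[d["sign_type"]].append((d["lat"], d["lon"]))
        return layer

    if __name__ == "__main__":
        layer = group_confident_signs(load_detections("detections.csv"))
        for sign_type, points in layer.items():
            print(f"{sign_type}: {len(points)} candidate map features")

The threshold reflects the intent of such a layer: machine detections are suggestions for human mappers to review, not automatic edits, so in practice the cut-off would be tuned per sign type against validated ground truth.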

You can find everything mentioned above in the Telenav.AI repository on GitHub.

Our hope is that opening our stack and data will enable others to enhance both the training sets and the detection networks, and to put them to new, creative uses that fulfill the needs and wants of the diverse map-maker and map-user communities.

Additionally, by openly licensing the data and software, we want to make sure that the next era of map-making with OSM remains open and accessible to everyone and fosters the creation of a new generation of mappers.

To celebrate this milestone and to empower the community to run their own improvements to this stack on suitable hardware that would otherwise be cost-prohibitive, we are kicking off a competition around our training data and software stack, aimed at improving the quality of detections.

The winners will be able to run their detections on our cloud infrastructure against the more than 140 million images currently on OpenStreetCam, and of course release the improved and enhanced detection stack for all mappers to improve OSM. (Oh, and there's $10,000 in prize money as well!)

In the longer term, we will release more parts of the map-making technology stack we are building, both to further enable OSM's growth and expansion and to help it play, over time, a central role in powering autonomous driving.

So, stay tuned for more from Telenav!

Original. Reposted with permission.
