Autonomous Vehicles Need Superhuman Perception for Success
Michael Milford, Associate Professor at Queensland University of Technology (QUT), is a leading robotics researcher working to improve perception and more in autonomous vehicles, conducting his research at the intersection of robotics, neuroscience and computer vision.
By Leonie Philipp, Re.Work.
For self-driving cars and other smart transport to be successfully integrated into the real world, the safety of passengers and pedestrians must be ensured. In the world of intelligent machines, perception answers the question: what is around me? This situational awareness is paramount for the safe operation of autonomous vehicles in real-world environments.
Scientists working in this field point to robotic perception as fundamental in equipping machines with a semantic understanding of the world, so that they can reliably identify objects, make informed predictions and act on them.
Michael's research models the neural mechanisms in the brain underlying tasks like navigation and perception in order to develop new technologies, with a particular emphasis on challenging application domains where current techniques fail such as all-weather, any-time positioning for autonomous vehicles.
As the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam draws nearer, we spoke to Michael to gain insight into the recent advancements in robotic perception in autonomous systems and the challenges that lie ahead.
How did you begin your work in autonomous systems?
I’ve always been fascinated by the application of intelligence to autonomous systems, ever since my undergraduate university days. I think working with intelligent systems that are actually deployed and evaluated on embodied systems such as autonomous robots and vehicles provides a crucial “real-world” sanity check for what parts of theory are correct and which need more work. These insights can then close the loop back to the underlying theory and help improve it even further, moving us closer to truly intelligent autonomous systems.
What key factors have enabled recent advancements in the perception of challenging environments?
On a practical side, it’s the relatively recent realization by all the major players in this space that autonomous vehicles represent an unprecedented commercial opportunity at a huge scale, and the associated influx of resources and talent that is now working on making significant advances for problems like perception in challenging environments.
Sensors like cameras are also rapidly improving to make this problem more tractable. You can now get a consumer camera for a couple of thousand dollars that sees better in the dark than you do, and this technology is still getting better. So some “traditional” perception problems like seeing in the dark are being largely solved by improved sensing technology.
Then there is the software and algorithmic side of things. Humans are very good at dealing with challenging or “corner-case” perceptual situations – and now that we’re gathering such large amounts of data from self-driving car platforms, we’re able to train deep neural networks to mimic or even surpass human ability in this regard. We’re working in this space for autonomous vehicle navigation, and will present a paper, “Deep Learning Features at Scale for Visual Place Recognition,” at the IEEE International Conference on Robotics and Automation in Singapore next month.
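At its core, visual place recognition of the kind the paper describes amounts to comparing a compact feature vector extracted from the current camera view against feature vectors of previously seen places. The sketch below is a minimal, hypothetical illustration of that matching step (it is not the paper's method): it uses random vectors as stand-ins for deep network features and finds the most similar reference place by cosine similarity.

```python
import numpy as np

def best_match(query_feat, ref_feats):
    """Return (index, similarity) of the reference place whose
    L2-normalized feature vector is most similar to the query's."""
    q = query_feat / np.linalg.norm(query_feat)
    refs = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sims = refs @ q  # cosine similarity of the query to each reference
    return int(np.argmax(sims)), float(np.max(sims))

# Toy example: 4 reference "places", each described by a 128-D feature.
# In a real system these would come from a trained deep network.
rng = np.random.default_rng(0)
ref_feats = rng.normal(size=(4, 128))
query = ref_feats[2] + 0.1 * rng.normal(size=128)  # noisy view of place 2
idx, sim = best_match(query, ref_feats)
```

Because random high-dimensional vectors are nearly orthogonal, the noisy query matches its true place with a similarity close to 1 while unrelated places score near 0 – which is why a simple nearest-neighbour lookup over learned features can be surprisingly robust.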
Which key challenges need to be solved for perception to improve and progress?
Corner-cases are a big one. It’s hard to get a real autonomous vehicle to repeatedly experience dangerous real-life situations like a pedestrian jumping out in front of the car. So instead you have to do it formally through a theoretical approach, or with extensive high-fidelity simulation. It’s also relatively easy to get an autonomous system to 90% or even 99% reliability, but solving that last 1% of corner-cases is proving to be pathologically difficult. In constrained circumstances like highway driving, the autonomous cars from the 1980s were already reasonably reliable.
It’s also sobering that you can take a refugee from a country where they’ve seen relatively few cars and never driven, and train them up to be a competent driver in a new country in a short period of time. The new human driver is able to take a lifetime of general learning about the physics of the world and how things in the environment interact, and rapidly adapt it to the specific task of driving. This is very different to the approach being used by many of the learning-intensive corporations and start-ups in self-driving cars, who are training using “millions of miles” of data. This doesn’t mean their approach won’t work, but it is in some ways very different to how we humans likely do it.
Are there any additional applications of spatial mapping and visual recognition in autonomous systems?
Having robust visual recognition for autonomous robots and vehicles isn’t just about navigation. There are so many other situations – recognizing other vehicles, pedestrians, even recognizing the intent of a person by looking at the expression on their face. Recognition of all these situations and more is a critical capability for autonomous driving. Precise spatial mapping is also a critical component of other major applications like infrastructure monitoring. If you know where you are to sub-millimetre precision (something we’re working on in our lab), you can then accurately track changes in environments over time, such as tracking crack propagation on a road surface, airframe, ship hull or concrete wall in an industrial plant.
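The reason sub-millimetre localization matters here is that once two surveys of the same structure are registered to the same coordinates, change detection reduces to per-point differencing. The following is a toy sketch of that idea (the threshold and data are illustrative assumptions, not values from the interview): it flags the points of a surface profile that have moved by more than a tolerance between two visits.

```python
import numpy as np

def changed_points(scan_t0, scan_t1, threshold_mm=0.5):
    """Given two surface scans registered to the same coordinates,
    return the indices where the surface moved more than threshold_mm."""
    delta = np.abs(scan_t1 - scan_t0)
    return np.flatnonzero(delta > threshold_mm)

# Toy example: a 10-point surface profile in mm; between the two
# surveys a "crack" deforms the surface at points 3 and 4.
scan_t0 = np.zeros(10)
scan_t1 = scan_t0.copy()
scan_t1[3:5] += 1.2  # 1.2 mm of deformation
idx = changed_points(scan_t0, scan_t1)
```

The hard part in practice is not the differencing but the registration: if localization error exceeds the deformation you want to detect, real change and positioning noise become indistinguishable – hence the emphasis on sub-millimetre precision.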
What developments can we expect to see in autonomous vehicles in the next 5 years?
This is the million / billion / trillion dollar question. I think the key to making any prediction is to first acknowledge that we’re particularly bad at predicting how technology propagates and what effects it has. That said, I can see no reason why we won’t have carefully controlled ride-sharing fleets of fully autonomous vehicles in the central city areas of affluent, tech-friendly Western cities like the centre of San Francisco. There are a number of reasons why this is completely feasible.
Firstly, it’s a high density, lucrative area, so you can afford to, if necessary, “cheat” and add significant external infrastructure to make the self-driving car problem easier, like external cameras and an external safety control system that intervenes if the car’s on-board systems fail to react to a hazardous situation.
Secondly, it’s a relatively low speed environment, so even when the cars muck up, the likely impact per incident will be far less. A car hitting someone at 20 miles per hour is far less likely to cause a fatality than one travelling at 65 miles per hour.
Thirdly, the ride sharing fleets can be turned off arbitrarily to avoid conditions in which they might fail – for example, adverse weather like snow or heavy rainfall. This is a pain for the consumer, but in the interim you’d still have human-driven ride sharing to fill in these gaps in coverage. This is the sort of strategy start-ups like NuTonomy are likely adopting, until new research can solve these adverse perception problems.
Outside of your field, what area of machine learning advancements excites you most?
My core passion is to understand and then develop truly intelligent systems. I think deep learning is an exciting new field, which needs to become more deeply integrated with computational neuroscience. When sophisticated hierarchical deep nets start being designed with significant influence from our knowledge of actual neural circuits (and there are several projects around the world already starting to do that), that’s where I think we’ll start seeing mind-blowing advances in intelligence.
I’m also excited about the potential for making breakthroughs in intelligence research in the self-driving car domain. I got into navigation intelligence research because it is a tangible task under which to examine the problem of spatial intelligence and one where you might make more rapid advances than studying something more abstract. Driving is the next logical step – a more general but still constrained domain that requires some degree of true intelligence. I’m hoping that we can work with some of the major corporations and start-ups in this space to not only develop technology that enables self-driving cars, but also to make new breakthroughs into understanding our own intelligence, and how we can replicate and improve on it to make useful autonomous systems.
Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.
Original. Reposted with permission.