Robots Need “Common Sense” AI to Work Out Our Uncertain World
At the Machine Intelligence Summit in Berlin last week, Jeremy Wyatt, Professor of Robotics and Artificial Intelligence at University of Birmingham, was asked a few questions about his work in mobile robot task planning and manipulation.
By Nikita Johnson, Founder at Re.Work.
Jeremy Wyatt is Professor of Robotics and Artificial Intelligence in the Computer Science department at the University of Birmingham, where his group creates algorithms that enable robots to work in uncertain and unfamiliar worlds.
With extensive experience in robotics and AI, Jeremy's work has been featured in the media worldwide, and he has published over 90 papers, edited three books, and coordinated two major international research projects in robotics, CogX and PacMan, among other achievements. One of his goals is to endow a robot with explicit representations of what it does and doesn't know, and of how its knowledge changes under the actions it can perform. This allows robots to plan in challenging environments about which they know little.
At the Machine Intelligence Summit in Berlin last week, Jeremy presented advances in mobile robot task planning and manipulation, with an overview of the field and examples of work from his lab, including machine vision, common sense reasoning and robotic grasping. I asked him a few questions to learn more about his work in this area, and to find out what we can expect for AI and robotics in the future.
Tell us more about your research at the University of Birmingham.
We develop algorithms across a broad range of problems in intelligent robotics. This includes methods for task planning, manipulation, long-lived robots, whole-body control, machine vision, and machine learning. My own interests are largely in manipulation and task planning. Our philosophy is to enable the robot to know what it knows, and how its knowledge changes. This enables robots to explore and act effectively in incomplete and uncertain environments.
What do you feel are the leading factors enabling recent advancements in robotics and AI?
There has recently been good progress on low-level perception and action. This has been most notable in areas such as vision and speech, where large amounts of data are available. The driver is clearly the realisation that the neural network techniques of the 1980s worked much better on large data sets than anyone ever dared to dream. In robotics, similar machine learning techniques are also applicable, but there we will benefit from machine learning techniques that work with smaller amounts of data. In robotics we have also got much better at robot localisation and mapping. This is again down to advances in probabilistic AI and machine learning. The result is reliable SLAM, the basic technology underpinning every mobile robot from autonomous UAVs to self-driving cars. AI planning techniques have also scaled well to large closed worlds, but they need to do much better in less structured worlds.
What present or potential future applications of AI and robotics excite you most?
Application-wise, the most beneficial application of AI would be live translation from speech. I think this is closer than most people suspect. Current translation programmes are buggy, but they are getting better. When this is cracked, it will be transformational to business and personal communication worldwide. I am waiting to order my Babel fish. Regarding robotics, I see agriculture and logistics as the next big beneficiaries of recent advances in robotics. High-value service robotics (nuclear, oil, defence, medicine, mining) in general is going to continue to be where most of the growth is in the next ten years.
What are the biggest technical challenges to robotic advancements and applications in the real world?
There are two: verification and manipulation. First, if we want autonomous drones, cars, etc., that move in human-occupied spaces, they need to be safety certified. It is not entirely clear how to do this, and strictly speaking the verification problem is next to impossible. So we will need workarounds, not a head-on attack. Second, general manipulation in unstructured environments is much harder than most people realise. Here there are clear challenges in the performance and reliability of complex manipulators. Grasping is close to being solved sufficiently well for a wide range of commercial tasks, but general manipulation could still be a long way off.
What developments can we expect to see in robotics and AI in the next 5 years?
I expect speech recognition to become reliable (<1% error rates) for large-vocabulary, speaker-independent recognition. I would also expect robot grasping in unstructured settings, such as logistics picking, to be solved, though not necessarily with the speed and reliability of humans. In the basic research domain I do expect to see laboratory advances in dexterous manipulation. This will include assembly and sequencing of manipulation operations. Robot manipulation is about 10-15 years behind robot mobility, but we are advancing. This won't be commercial in 5 years though. Finally, I will be disappointed if I'm still reading articles about self-driving cars, rather than sitting in one that can drive itself, and that I can buy, even if it's only allowed off-road.
The next Machine Intelligence Summit will take place in New York on 9-10 November, to explore how AI will impact transport, manufacturing, healthcare, retail and more. Early Bird tickets are now available for this event - for more information and to register, visit the event page here.
See the full events list here for events focused on AI, Deep Learning and Machine Intelligence taking place in London, Amsterdam, Boston, San Francisco, New York and Singapore.
Original. Reposted with permission.