Anticipating the next move in data science – my interview with Thomson Reuters

Like chess, Big Data is a combination of science, art and play; Gregory Piatetsky-Shapiro of KDnuggets helps data devotees discover winning moves - my Thomson Reuters interview.



Thomson Reuters has a series, AI Experts, where they interview thought leaders from different areas - including technology executives, researchers, robotics experts and policymakers - on what we might expect as we move towards AI.

As part of that series, I recently spoke with Paul Thies of Thomson Reuters; here are excerpts from the interview:

Anticipating the next move in data science


Thomson Reuters: For timely information concerning developments in data science, data mining and business analytics, KDnuggets is widely regarded as a leading outlet in the field. Created in 1993 by founder, editor and president Gregory Piatetsky-Shapiro, it is frequently cited by industry watchers as one of the top sources of data science news and influence.

Thomson Reuters: What are some use cases of data science that you find to be particularly valuable to organizations in this age of Big Data?

GREGORY: The areas where people typically apply data science are, probably not surprisingly, customer relationship management (CRM) and consumer analytics. Data science allows you to predict consumer behavior better; the improvements in prediction are usually incremental, but those incremental improvements can translate into significant revenue. In the last couple of years, thanks to revolutionary advances in deep learning, we are seeing amazing progress in new areas connected to image and speech understanding, such as radiology. It was reported recently that deep learning systems have exceeded the accuracy of human radiologists in diagnosing cancer. Generally speaking, in areas where there is a very large amount of labeled data, deep learning has already been achieving human or superhuman levels of performance.

We are also seeing advances in applications in cybersecurity and speech recognition, particularly with smartphones - now, when we talk to smartphones, they frequently understand us better than people on the telephone do. Smart speakers have made their way into roughly a quarter of all homes in the United States, and their understanding of speech is increasingly accurate. Machine translation has become amazingly good in many areas. So these applications of data science and machine learning are growing at a very fast rate. Basically, in any area where you have a lot of data, you can benefit from data science and machine learning.

Thomson Reuters: If we become more heavily reliant on AIs to perform predictive behaviors, does that leave a role for humans in terms of predictive management?

GREGORY: I see different developments in the near term and the long term. In the near term, I can see people working together with AI; one example would be radiologists reviewing and approving the results of medical tests with an AI. But I'd like to use chess as an illustration.

I'm a chess player and follow the game closely. In 1997, IBM's Deep Blue program (developed after several years of effort) defeated then-World Champion Garry Kasparov. After that, people organized human-computer teams, and there were tournaments where human-computer teams played against unassisted humans and against computers. For a period of time afterwards, the human-computer teams were better than either humans or AIs alone.

However, AI chess programs improve much faster than humans do. The human-computer teams were soon inferior to pure computer teams - there was no advantage in adding a human grandmaster to a computer.

In 1997, it took IBM several years to develop the algorithms and special software to defeat Kasparov. Last year, Google DeepMind developed a program called AlphaZero (so called because it started learning chess, Go and shogi using zero human knowledge). It just played games against itself and used a method called reinforcement learning to improve. This program took only four hours of self-play to reach and exceed world-champion level in chess - and the world champion now means not a human but a computer program. For Go (which is a more difficult game and very popular in Asia), it took about three days.

Games like Go and chess are easier to master than what happens in the real world because they have well-defined rules and limited, finite boards, but we see similar developments in other areas; I already mentioned that AIs have exceeded medical doctors in radiology. In other domains it will probably take longer, but if there is sufficient data, then an AI can learn to perform at a superhuman level.

If there is insufficient data, then there are methods like reinforcement learning that allow agents to actively experiment in the world and learn from their own experiments; they behave somewhat like children. I have a one-year-old granddaughter, and I enjoy watching how she explores everything around her. A reinforcement learning agent behaves in this way, except it can learn much faster than children (a minimal code sketch of the idea appears after this excerpt). There are also methods for transfer learning - learning something in one domain and applying that knowledge to other domains.

I can see a role for humans in managing AIs in the short term, but in the long term it will go towards full automation, because humans will not be able to perform at the same level as an AI. As an example, Google developed a self-driving car that had a steering wheel. The idea was that the human backup driver would take over if there was a problem. However, Google saw in testing that the human in the car could not react fast enough in case of an emergency. As a result, Google removed the steering wheels from those self-driving cars and replaced them with big "stop" buttons.

Even asking a human to push a stop button in an emergency may not always work. We saw a tragic confirmation of this recently when a self-driving Uber car ran over a pedestrian walking a bicycle. The car was confused by the bicycle, and there was not enough time for the human to take over.

I think that's essentially what will happen long term with the human role in predictive management: humans will need a large "stop" button for when they don't understand something, and even the stop button may not be enough if the AI system is sufficiently autonomous. Humans will not have the ability to manage it long term. Of course, what "long term" means may differ - for some areas it may be two years from now, for others it may be 50 or 100 years - and there will be many roles for humans to play in the meantime.
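
(An aside, not in the interview: reinforcement learning, which I mention above, is easy to illustrate in code. Below is a minimal tabular Q-learning sketch on a made-up five-position "corridor" toy problem. Everything in it - the toy environment, the constants, the 500-episode budget - is invented purely for illustration and is not how AlphaZero or any production system works; the point is only that the agent starts out knowing nothing and improves from the outcomes of its own moves.)

```python
# Illustrative only: tabular Q-learning on a tiny made-up "corridor" environment.
# The agent starts at position 0 and must reach position 4; it knows nothing about
# the task and improves purely from its own trial-and-error experience.
import random

N_STATES = 5            # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]      # move left or right
ALPHA = 0.1             # learning rate
GAMMA = 0.9             # discount factor for future reward
EPSILON = 0.1           # exploration rate (how often the agent tries a random move)

# Q[state][action_index] = current estimate of long-term reward
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action in the toy environment; reward 1 only when reaching the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def choose_action(state):
    """Epsilon-greedy: mostly exploit what has been learned, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    best = max(Q[state])
    return random.choice([a for a in range(len(ACTIONS)) if Q[state][a] == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        a = choose_action(state)
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the best estimated value of the next state.
        best_next = max(Q[next_state])
        Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
        state = next_state

# Print the learned greedy action for each non-goal position.
print([ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[s][a])] for s in range(N_STATES - 1)])
```

Running this should settle on "move right" (+1) from every position - the agent discovers the winning move purely from its own experiments, with no human examples to learn from.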


Here is the rest of the interview: Anticipating the next move in data science.
Here is the Thomson Reuters series AI Experts.

I will add (not in the interview) that the big challenge for society is to come up with solutions for situations that may arise in perhaps 50 years, when perhaps half of all people are unemployed because of automation and AI. The good scenario may include universal basic income, retraining, more leisure time, AI-generated technology that solves climate change, etc. The bad scenario - well, there are many apocalyptic movies about it.

However, given how poorly human society has been dealing with global climate change, which experts have been warning about since the 1980s, it seems likely that society and policymakers will not do much until the problem becomes severe. It is now up to the younger generation to take on this problem.



Related: