
Is an AI/machine-driven world better than a human-driven world?

On the positive side, AI offers the prospect of self-driving cars and other benefits, and through education humans can evolve and improve. The risks include loss of jobs, growing inequality, and dealing with superintelligence.


I recently spoke at a panel/debate at the World Government Summit in Dubai and also attended the AI Roundtable organized by the AI Society at the Harvard Kennedy School of Government.

The topic discussed was “Is a machine-driven world better?” (part of the Brave Conversations discussion). I was on the side of the machines. Not an easy debate – someone referred to us as the ‘evil team’!

After the panel, I shared my thoughts with Gregory Piatetsky-Shapiro. Gregory motivated me to also consider the contra perspective, i.e. the risks of an AI-driven world. So in this blog I decided to take up the challenge of looking at the issue from both sides.

Note that these are my personal views. They are not based on the panel discussion.

There is a lot of interest in AI in Dubai – the UAE has a Minister for AI – and the discussion at the Summit was of a very high quality. Here is a summary of my five-minute opening talk on why a machine-driven world is better.

Why is an AI-driven world better than a human-driven world?

Do you think we should ban self-driving cars?

If you do not think so (and no one did) then the debate is over.

Because then we believe implicitly that a machine-driven world is better.

We trust it with our lives.

We trust it with the lives of our children.

But then we could ask:

Do we think that a world in which AI is dramatically more intelligent than humans would be a better place?

I still think yes.

1) Because human beings will also evolve faster to meet the threat

And I have more faith in the evolution of human intelligence.


Ironically, because of robots.

There is a precedent for this.

In times of threat, humans have evolved rapidly – for example, developing larger, more complex brains in response to rapid climate change (see “How climate change may have shaped human evolution”).

But how can you guarantee that humans will evolve faster?

Through education.

Instead of the old 3 Rs (Reading, wRiting, and aRithmetic), we need a new set of 3 Rs: AI literacy, data literacy, and human literacy (adapted from the book Robot-Proof by Joseph Aoun, which lists tech literacy, data literacy, and human literacy as the three new Rs).

OK, so let’s take it a step further.

2) How can we prosper in this world dominated by AI?

Most people do not fully realize the opportunities presented by AI – both to people and to countries.

There are three reasons why AI is so disruptive today. First, AI is rapidly becoming valuable as a skill. Second, AI is ‘eating’ many functions. These first two are easy to understand intuitively. The third is IP (intellectual property): AI is one of the few forms of defensible IP, and this is an opportunity for both individuals and governments. Worldwide, the AI talent pool is estimated to be only 22,000 people.

So, we need to replace the fear of AI with a curiosity for AI.

3) And how much time have we got?

According to one of the most authoritative books on AI, Deep Learning by Goodfellow and Bengio, we currently have the capacity to model the brain of a cat. If we extrapolate current trends, we should be able to model a human brain by 2056 – at which point we will have to start thinking about Asimov’s Laws of Robotics.

So, to recap, here is why a machine-driven world would be better than a human-driven world:

  1. If you accept self-driving cars, you have already accepted a machine-driven world.
  2. Humans will evolve, and do so very rapidly, so it’s not a dichotomy.
  3. We can guarantee our evolution through education.
  4. How can we prosper in this world? Look at the opportunities. Go from fear to curiosity.
  5. How much time have we got? Until about 2056.

Dubai World Government Summit

Now, let’s switch hats.

So, here are my thoughts arguing for the opposite perspective.

What are the risks of an AI driven world?

Let’s start with Alvin Toffler, widely regarded as one of the most influential thinkers. In his 1970 book Future Shock he defines "future shock" as a certain psychological state of individuals and entire societies caused by "too much change in too short a period of time". Toffler predicted many things accurately, including the post-industrial society, back in the 1970s (i.e. pre-Internet). But even he had not factored in the impact of AI! Toffler's central idea is that modern man experiences shock from rapid change. Hence, the biggest impact of AI, as I see it, is that the rate of change will increase dramatically. We are then at risk of living in a Future Shock 2.0 (not the best phrase, but it captures the sentiment).

There are three ways this AI-driven rapid rate of change could impact us directly:

  1. Loss of Jobs
  2. Inequality and the loss of income
  3. Dealing with Super Intelligence

There is also a fourth – dealing with Emotional and General AI.

Let’s start with the Loss of Jobs.

1) Loss of Jobs

Loss of jobs due to AI is the easiest risk for us to understand, because we are experiencing it already.

Gartner says that by 2020 artificial intelligence will create more jobs than it eliminates: 2.3 million jobs created in 2020, versus 1.8 million eliminated. But the issue is that the jobs eliminated will be very different from the jobs created. For example, Fukoku Mutual, an insurance firm in Japan, is replacing 34 claims adjusters with AI.

The AI is expected to save about $1.2 million (140 million yen) in wages annually. Sectors with the biggest deployment of AI, such as banking and financial services, will also be the ‘ground zero’ of AI-driven job losses (Deutsche Bank CEO John Cryan has predicted a ‘bonfire of industry jobs’, and he could be right). For many companies and countries, the risks of not participating in AI are worse.

For example, a recent Australian Government report on AI says:

“Australia should double its pace of artificial intelligence and robotics automation to reap a $2.2 trillion opportunity by 2030, while also urgently preparing to support more than 3 million workers whose jobs may be at risk”

2) Loss of Income and Income inequality

Related to the loss of jobs is the loss of income and income inequality. Yuval Harari outlines this risk best in the article “Are we about to witness the most unequal societies in history?”

We are at risk of creating two classes of people: ‘superhumans’ and a huge underclass of ‘useless’ people. Once the masses lose their economic and political power, inequality levels could spiral alarmingly. As some groups increasingly monopolise the fruits of globalisation, billions are left behind.

In feudal times, hierarchy was accepted as the norm, enforced (or even encouraged) by class divisions and religion. In modern times, both free-market capitalism and socialism tried to balance this by creating a sense of equality. Neither was perfect, but the masses were important to both. AI, however, makes the masses redundant – whether they are soldiers or workers.

Once AI is smarter even than the human elite, all humanity could become redundant.

3) Dealing with Superintelligence

This then leads us to the question of superintelligence and how we deal with it. As discussed above, it is expected that by around 2050 we will be able to model the brain of a human being (estimates vary depending on whom you ask).

In his well-known book Superintelligence, Nick Bostrom argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant life form on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans. Both Elon Musk and Bill Gates base their concerns on this idea. The implications of introducing a second intelligent species to Earth are far-reaching and hard to predict, and that reality may come about sooner rather than later. In my view, we would still need to get past 2050, i.e. modelling the human brain – but in real terms, that date is not far away.

4) Dealing with Emotional and General AI

In my discussions over dinner with colleagues (Sophie Maclaren of the Saïd Business School, and others from Oxford University), I thought of a fourth risk. We are currently not discussing General AI, i.e. we are not speaking of self-aware or conscious systems. But the late Marvin Minsky’s work points to the interplay of emotions and AI. In his book The Emotion Machine, Minsky argued that there is no fundamental difference between humans and machines, and that humans are machines whose "intelligence" emerges from the interplay of the many unintelligent but semi-autonomous agents that comprise the brain.

Minsky’s main argument is that emotions are "ways to think" about the different "problem types" that exist in the world, and that the brain has rule-based mechanisms (selectors) that turn emotions on to deal with various problems. This idea is also explored in Ray Kurzweil’s book How to Create a Mind.

An emotional machine may not be a threat in itself since it could learn empathy. However, engaging with an emotional machine could be an unpredictable situation – and hence a potential risk.


Thanks to the World Government Summit for this forum. Dubai has long been known for its trade and industry; I now also see it as a place for intellectual discussion on a global scale.

Many thanks to our panel.

Bio: Ajit Jaokar is a Data Scientist and teaches the Data Science for Internet of Things course at the University of Oxford. This blog represents his personal views. Ajit spoke at the World Government Summit in Dubai and attended the AI Roundtable organized by the AI Society at the Harvard Kennedy School.