13 Forecasts on Artificial Intelligence

Once upon a time, Artificial Intelligence (AI) was the future. Today, though, we want to see even beyond that future. This article tries to explain where AI might stand five years from now, based on today’s emerging trends and developments in IoT, robotics, nanotech, and machine learning.




I. Overview

We have discussed several AI topics in previous posts, and by now the extraordinary disruptive impact AI has had over the past few years should seem obvious. What everyone is wondering, however, is where AI will be in five years’ time. I therefore find it useful to describe a few emerging trends we are starting to see today, as well as to make a few predictions about future developments in machine learning. The following list is not meant to be exhaustive or set in stone; it comes from a series of personal considerations that might be useful when thinking about the impact of AI on our world.

II. The 13 Forecasts on AI

1. AI is going to require less data to work. Companies like Vicarious or Geometric Intelligence are working toward reducing the data burden needed to train neural networks. The amount of data required nowadays represents the major barrier to the spread of AI (and the major competitive advantage), and the use of probabilistic induction (Lake et al., 2015) could solve this major problem on the road to AGI. A less data-intensive algorithm might eventually use the concepts it has learned and assimilated in richer ways, whether for action, imagination, or exploration (see the toy sketch below).
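A toy illustration of the “less data” idea: a classifier that learns each class from a single labeled example and classifies new points by distance to those stored examples. This is only a minimal nearest-prototype sketch, not the probabilistic program induction of Lake et al. (2015); the feature vectors and class names below are illustrative assumptions.

```python
# One-shot classification: one stored example per class instead of
# thousands of training samples. A minimal sketch of the "less data"
# idea, not Lake et al.'s Bayesian Program Learning.
import numpy as np

rng = np.random.default_rng(0)

# A single example per class (e.g., feature vectors extracted from images).
prototypes = {
    "cat": rng.normal(loc=0.0, scale=1.0, size=8),
    "dog": rng.normal(loc=3.0, scale=1.0, size=8),
}

def classify(x):
    """Assign x to the class whose single stored example is closest."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# A new, unseen point near the "dog" example is classified correctly,
# even though the "training set" contained just two labeled points.
query = prototypes["dog"] + rng.normal(scale=0.3, size=8)
print(classify(query))  # -> "dog"
```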

2. New types of learning methods are the key. The new incremental learning technique developed by DeepMind, built around transfer learning, allows a standard reinforcement-learning system to build on top of knowledge previously acquired (something humans can do effortlessly). MetaMind, instead, is working toward multitask learning, where the same ANN is used to solve different classes of problems and where getting better at one task makes the network better at another (see the sketch below). A further advancement MetaMind is introducing is the dynamic memory network (DMN), which can answer questions and deduce logical connections from a series of statements.
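A minimal sketch of the multitask idea, assuming a generic shared-trunk architecture (an illustration, not MetaMind’s actual model): two task-specific heads share one representation, so gradient updates from either task improve the parameters both tasks rely on.

```python
# Multitask learning sketch: one shared trunk, two task heads.
# Training on task "a" also updates the trunk that task "b" uses,
# which is how improving at one task can help the other.
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, 3)  # e.g., 3-way classification
        self.head_b = nn.Linear(hidden, 1)  # e.g., scalar regression

    def forward(self, x, task):
        h = self.trunk(x)  # representation shared across tasks
        return self.head_a(h) if task == "a" else self.head_b(h)

net = MultitaskNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(8, 16)
loss_a = nn.functional.cross_entropy(net(x, "a"), torch.randint(0, 3, (8,)))
loss_b = nn.functional.mse_loss(net(x, "b"), torch.randn(8, 1))
(loss_a + loss_b).backward()  # both losses flow into the shared trunk
opt.step()
```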

3. AI will eliminate human biases, and will make us more “artificial”. Human nature will change because of AI. Simon (1955) argues that humans do not make fully rational choices because optimization is costly and because their computational abilities are limited (Lo, 2004). What they do instead is “satisficing”, i.e., choosing what is at least satisfactory to them (the sketch below contrasts the two decision rules). Introducing AI into daily life would probably put an end to satisficing. Becoming, once and for all, free of computational-effort constraints will finally answer the question of whether behavioral biases exist and are intrinsic to human nature, or whether they are only shortcuts for making decisions in limited-information environments or constrained problems. Lo (2004) states that the satisficing point is reached through evolutionary trial and error and natural selection: individuals make a choice based on past data and experience, and make their best guess. They learn by receiving positive or negative feedback, and they create heuristics to solve those issues quickly. However, when the environment changes, there is some latency or slow adaptation, and old habits no longer fit the new circumstances; these are behavioral biases. AI would shrink those latency times to zero, virtually eliminating any behavioral bias. Furthermore, by learning over time from experience, AI is setting itself up as a new evolutionary tool: we usually do not evaluate all the alternatives because we cannot see all of them (our knowledge space is bounded).
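The contrast between optimizing and satisficing can be made concrete with a toy decision rule; the aspiration level and utilities below are illustrative assumptions, not part of Simon’s or Lo’s formulations.

```python
# Satisficing vs. optimizing (after Simon, 1955), as a toy decision rule:
# the satisficer stops at the first option above an aspiration level,
# while the optimizer inspects every option before choosing.
import random

random.seed(42)
options = [random.random() for _ in range(1000)]  # utility of each option

def satisfice(opts, aspiration=0.9):
    """Return the first 'good enough' option and the search cost paid."""
    for i, utility in enumerate(opts):
        if utility >= aspiration:
            return utility, i + 1  # (utility chosen, options inspected)
    return max(opts), len(opts)    # fall back to exhaustive search

def optimize(opts):
    """Exhaustive search: the best option, at the cost of seeing them all."""
    return max(opts), len(opts)

print(satisfice(options))  # good-enough choice after a few inspections
print(optimize(options))   # best possible choice, 1000 inspections
```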

4. AI can be fooled. AI nowadays is far from perfect, and many researchers are focusing on how AI can be deceived or cheated. A first method for misleading computer vision has recently been demonstrated, based on so-called adversarial examples (Papernot et al., 2016; Kurakin et al., 2016). Intelligent image-recognition software can indeed be fooled by subtly modifying a picture in such a way that the AI software classifies the data point as belonging to a different class. Interestingly enough, the same modification would not trick a human mind. The sketch below shows the basic idea.
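A minimal sketch in the spirit of the fast gradient sign method (one of the early adversarial-example techniques); the untrained toy model, the random “image”, and the epsilon value are purely illustrative.

```python
# Adversarial-example sketch: nudge every input pixel slightly in the
# direction that increases the model's loss. The perturbation is tiny
# for a human, but it can change the model's predicted class.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
y = torch.tensor([3])                             # its true label

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # gradient of the loss with respect to the input pixels

epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # perturbed image

print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```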

5. There are risks associated with AI development. It is becoming mainstream to look at AI as potentially catastrophic for mankind. If (or when) an ASI is created, this intelligence will largely exceed human intelligence, and it will be able to think and do things we cannot predict today. In spite of this, though, we think there are a few risks associated with AI in addition to the notorious existential threat. There is the risk that we will not be able to understand and fully comprehend what the ASI builds and how, whether it is positive or negative for the human race. Secondly, in the transition period between narrow AIs and AGI/ASI, an intrinsic liability risk will emerge: who would be responsible in case of mistakes or malfunctioning? Furthermore, there is, of course, the risk of who will hold the power of AI and how that power will be used. In this sense, we truly believe that AI should be run as a utility (a public service for everyone), leaving humans some degree of decision power to help the system manage the rare exceptions.

6. Real general AI will likely be a collective intelligence. It is quite likely that an ASI will not be a single terminal able to make complex decisions, but rather a collective intelligence. A swarm or collective intelligence (Rosenberg, 2015; 2016) can be defined as “a brain of brains”. So far, we have simply asked individuals to provide inputs, and then aggregated those inputs after the fact into a sort of “average sentiment” intelligence. According to Rosenberg, the existing methods for forming a human collective intelligence do not even allow users to influence each other, and when they do, they allow the influence to happen only asynchronously, which causes herding biases. An AI, on the other hand, will be able to fill the connectivity gaps and create a unified collective intelligence, very similar to the ones other species have. Good inspirational examples from the natural world are bees, whose decision-making process closely resembles the human neurological one. Both use large populations of simple excitable units working in parallel to integrate noisy evidence, weigh alternatives, and finally reach a specific decision. According to Rosenberg, this decision is achieved through a real-time closed-loop competition among sub-populations of distributed excitable units. Every sub-population supports a different choice, and consensus is reached not by majority or unanimity, as in the average-sentiment case, but by a “sufficient quorum of excitation” (Rosenberg, 2015). An inhibition mechanism against the alternatives proposed by other sub-populations prevents the system from reaching a sub-optimal decision. A toy simulation of this mechanism follows below.
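A toy simulation of such a quorum mechanism, loosely inspired by Rosenberg’s description: sub-populations accumulate noisy evidence for rival options while inhibiting one another, and the first to reach a quorum of excitation wins. All parameters are illustrative assumptions.

```python
# Quorum-based collective decision: sub-populations of excitable units
# integrate noisy evidence for their option and are inhibited by the
# excitation of rival sub-populations; the decision is made when one
# sub-population reaches a sufficient quorum of excitation.
import random

random.seed(7)

def swarm_decide(qualities, quorum=50.0, inhibition=0.02, noise=1.0):
    excitation = [0.0] * len(qualities)
    while max(excitation) < quorum:
        total = sum(excitation)
        for i, quality in enumerate(qualities):
            evidence = quality + random.gauss(0, noise)  # noisy support
            rivals = total - excitation[i]               # cross-inhibition
            excitation[i] = max(0.0, excitation[i] + evidence
                                - inhibition * rivals)
    return max(range(len(qualities)), key=lambda i: excitation[i])

# Three options of different underlying quality: the best one typically
# wins, not by majority vote but by first reaching the quorum.
print(swarm_decide([0.2, 0.5, 1.0]))  # usually prints 2
```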

7. AI will have unexpected socio-political implications. The first socio-economic implication usually associated with AI is the loss of jobs. Even if on one hand this is a real problem (and, in many respects, an opportunity), we believe the issue should be approached with several further nuances in mind. First, jobs will not be destroyed; they will simply be different. Many services will disappear because data will be analyzed directly by individuals instead of corporations, and one of the major impacts AI will have is the full decentralization of knowledge. A more serious concern, in our opinion, is the two-fold consequence of this revolution. First of all, relying on ever-smarter systems will cause more and more human beings to lose their expertise in specific fields. This suggests that AI software should be designed with a sort of double feedback loop, integrating the human and the machine approaches. Connected to this first risk, the second concern is that humans will be reduced to mere “machine technicians”, because we will believe AI to be better at solving problems and probably infallible. This downward spiral would make us less creative, less original, and less intelligent, and it would exponentially widen the human-machine discrepancy. We already experience systems that make us smarter when we use them, and systems that make us feel terrible when we do not. We want AI to fall into the first category, and not to become the new “smartphone phenomenon” on which we entirely depend. Finally, the world is becoming more and more robot-friendly, and we are already acting as interfaces for robots rather than the opposite. The increasing leading role played by machines, and their greater power to influence us compared with our ability to influence them, could eventually make humans the “glitches”.

On the geopolitical side, instead, we think the impact AI might have on globalization could be huge: there is a real possibility that optimized factories, run by AI systems controlling operating robots, could be relocated back to developed countries. The classic low-cost economic rationale for running businesses in emerging countries would indeed disappear, and it is not clear whether this will level out the differences between countries or widen the existing gaps between emerging and developed economies.

8. Real AI should start asking “why”. So far, any machine learning system is pretty good at detecting patterns and helping decision makers in their processes, and since many of the algorithms are still hard-coded, they can still be understood. However, even if clarifying the “what” and the “how” is already a great achievement, AI cannot yet understand the “why” behind things. Hence, we should design a general algorithm able to build causal models of the world, both physical and psychological (Lake et al., 2016). The sketch below illustrates why pattern detection alone cannot answer “why”.
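A tiny illustration of the gap between “what” and “why”: in the simulation below, a hidden confounder makes X and Y strongly correlated even though X has no causal effect on Y, and only an intervention (do(X=1)) reveals the difference. The probabilities are illustrative assumptions.

```python
# Correlation ("what") vs. causation ("why"): a hidden common cause Z
# drives both X and Y, so observing X tells us a lot about Y even though
# forcing X changes nothing. A causal model distinguishes the two.
import random

random.seed(1)
N = 100_000

def sample(do_x=None):
    z = random.random() < 0.5                    # hidden common cause
    x = (random.random() < (0.9 if z else 0.1)) if do_x is None else do_x
    y = random.random() < (0.9 if z else 0.1)    # y depends only on z
    return x, y

obs = [sample() for _ in range(N)]               # observational data
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

do = [sample(do_x=True) for _ in range(N)]       # interventional data
p_y_do_x1 = sum(y for _, y in do) / N

print(f"P(Y=1 | X=1)     = {p_y_given_x1:.2f}")  # ~0.82: strong correlation
print(f"P(Y=1 | do(X=1)) = {p_y_do_x1:.2f}")     # ~0.50: no causal effect
```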