13 Forecasts on Artificial Intelligence

Once upon a time, Artificial Intelligence (AI) was the future. But today, we want to see even beyond that future. This article tries to explain how people are thinking about the future of AI over the next five years, based on today's emerging trends and developments in IoT, robotics, nanotech, and machine learning.

9. AI is pushing the limits of privacy and data leakage prevention. AI is shifting the privacy game to an entirely new level. New privacy measures have to be created and adopted, more advanced than secure multi-party computation (SMPC) and faster than homomorphic encryption. Recent research shows how differential privacy can solve many of the privacy problems we face daily, but some companies are already looking one step ahead; an example is Post-Quantum, a quantum cybersecurity startup.
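Differential privacy, mentioned above, works by adding calibrated noise to the answer of a query so that no single individual's record can be inferred from the result. A minimal sketch of the classic Laplace mechanism in Python (illustrative only; the dataset, query, and epsilon values are made up for the example):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution (inverse CDF)."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon=0.1):
    """Answer 'how many records satisfy predicate?' with epsilon-DP.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity = 1), so Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: count salaries above 50k without exposing any individual record.
salaries = [32_000, 48_000, 55_000, 61_000, 72_000, 90_000]
noisy = private_count(salaries, lambda s: s > 50_000, epsilon=0.5)
print(f"noisy count: {noisy:.1f}  (true count: 4)")
```

Smaller epsilon means stronger privacy but noisier answers; the analyst sees only the perturbed count.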

10. AI is changing IoT. AI allows IoT to be designed as a completely decentralized architecture in which even single nodes can run their own analytics (i.e., "edge computing"). The classic centralized model suffers from the server/client paradigm: every device is identified, authenticated, and connected through cloud servers, which requires an expensive infrastructure. A decentralized approach to IoT networking, or a standardized peer-to-peer architecture, can solve this issue, reduce costs, and prevent a single node failure from bringing down the entire system.
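To make the edge-computing idea concrete, here is a minimal sketch of a node that analyzes its own sensor readings locally and only escalates anomalies, instead of streaming raw data to a central server (the `EdgeNode` class and the 3-sigma threshold are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class EdgeNode:
    """A sensor node that analyzes readings locally ("edge computing")
    and forwards only anomalies, instead of streaming raw data upstream."""
    window: list = field(default_factory=list)
    window_size: int = 20

    def ingest(self, reading: float):
        """Return the reading if anomalous, else None (handled locally)."""
        if len(self.window) >= self.window_size:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(reading - mu) > 3 * sigma:
                return reading          # escalate to peers / cloud
        self.window = (self.window + [reading])[-self.window_size:]
        return None                     # stays on the node

node = EdgeNode()
readings = [20.0, 20.1, 19.9] * 7 + [95.0]
uploads = [r for r in readings if node.ingest(r) is not None]
print(uploads)  # only the outlier leaves the node: [95.0]
```

Only one value out of twenty-two ever leaves the device, which is exactly the traffic reduction that makes a decentralized IoT topology cheaper than the server/client model.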

11. Robotics is going mainstream. I believe that AI development is going to be constrained by advancements in robotics, and I also believe the two connected fields have to advance pari passu in order to achieve a proper AGI/ASI. Looking at the following figure, it is clear that neither our research nor our collective consciousness would consider an AI general or super-intelligent without a "physical body".


Search trends for robotics and other AI-related fields (created with the CBInsights Trends tool)

Other evidence confirming this trend includes: i) the recent spike in robotics patent applications, which according to IFI Claims reached more than 3,000 in China, with roughly the same number spread across the USA, Europe, Japan, and South Korea; ii) the price trend of the Robo Stox ETF, shown in the next figure.



Robo Stox ETF price trend for the period 2013–2016.

12. AI might have a real barrier to development. The real barrier on the road to AGI today is not the choice of algorithms or the data we use (or at least not only that), but rather a structural issue. Hardware capacity, physical communications (e.g., the internet), and device power are the bottlenecks for creating an AI fast enough, and this is why I believe departments such as Google Fiber exist. This is also why quantum computing is becoming extremely relevant. Quantum computing allows us to perform computations that Nature does instantly but that would take an extremely long time on traditional computers. It relies on properties of quantum physics: traditional computers state every problem in terms of strings of zeros and ones, whereas qubits identify quantum states in which a bit can be zero and one at the same time. Hence, according to Frank Chen (partner at Andreessen Horowitz), transistors, semiconductors, and electrical conductivity are replaced by qubits, which can be represented as vectors, and by new operations different from traditional Boolean algebra.
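To make the "qubits as vectors" idea concrete, here is a toy state-vector sketch in Python: a qubit is a two-component vector, and the Hadamard gate turns a definite zero into an equal superposition of zero and one (a classical simulation for illustration, not real quantum hardware):

```python
import math

# A qubit is a 2-component vector (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measurement yields 0 with probability |alpha|^2.
ZERO = (1.0, 0.0)   # the classical bit 0
ONE = (0.0, 1.0)    # the classical bit 1

def hadamard(q):
    """Apply the Hadamard gate: maps a basis state into an equal
    superposition of 0 and 1 ('zero and one at the same time')."""
    a, b = q
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(q):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = q
    return (abs(a) ** 2, abs(b) ** 2)

plus = hadamard(ZERO)
print(probabilities(plus))  # roughly (0.5, 0.5): equal chance of 0 or 1
```

Gates like Hadamard are linear operations on these vectors, which is what replaces the Boolean algebra of traditional logic circuits.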

A common way to explain the difference between traditional and quantum computing is the phonebook problem. The traditional approach to looking up a number in a phonebook scans entry by entry until it finds the right match, which takes on the order of N steps for N entries. A basic quantum search algorithm (known as Grover's algorithm) instead relies on the "quantum superposition of states": it holds all entries in superposition at once and amplifies the probability of the right answer, finding it in roughly the square root of N steps.
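The phonebook intuition can be sketched with a toy classical simulation of Grover's algorithm: after roughly (pi/4)*sqrt(N) rounds of "flip the marked entry's phase, then invert all amplitudes about their mean", nearly all probability concentrates on the marked entry (an illustrative simulation; the function name and parameters are mine):

```python
import math

def grover_search(n_items: int, marked: int) -> list:
    """Toy state-vector simulation of Grover's algorithm.

    Start in a uniform superposition over all entries, then repeat
    ~ (pi/4)*sqrt(N) rounds of (oracle phase flip + inversion about
    the mean); the marked entry's probability grows close to 1.
    """
    amps = [1 / math.sqrt(n_items)] * n_items       # uniform superposition
    rounds = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(rounds):
        amps[marked] = -amps[marked]                # oracle: flip marked phase
        mu = sum(amps) / n_items                    # inversion about the mean
        amps = [2 * mu - a for a in amps]
    return [a * a for a in amps]                    # measurement probabilities

probs = grover_search(64, marked=13)
print(f"P(marked) = {probs[13]:.3f} after only ~6 rounds, vs ~64 classical scans")
```

For 64 entries the marked item is found with near-certainty after about 6 rounds, whereas the entry-by-entry scan needs up to 64 lookups; that square-root speedup is the whole point of the phonebook example.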

Building a quantum computer would be a revolutionary scientific breakthrough, but according to Chen it is currently extremely hard to do. The most relevant issues are the extremely low temperatures required by the superconducting materials the computer would be built with; the short coherence time, i.e., the window in which the quantum computer can actually perform calculations; the time needed to perform single operations; and, finally, the energy difference between the right and the wrong answers, which is so small that it is hard to detect. All these problems shrink the market to no more than a handful of companies working on quantum computing: giants such as IBM and Intel have been working on it for years, and startups such as D-Wave Systems (which sold one of its machines to Google and NASA in 2013), Rigetti Computing, QxBranch, 1QBit, Post-Quantum, ID Quantique, Eagle Power Technologies, Qubitekk, QC Ware, Nano-Meta Technologies, and Cambridge Quantum Computing Limited are laying the foundations for the field.

13. Biological robots and nanotech are the future of AI applications. We are witnessing a series of incredible innovations at the intersection of AI and nanorobotics. Researchers are working toward creating entirely artificial creatures as well as hybrids, and they have even tried to develop biowires (i.e., electrical wires made from bacteria) and organs-on-chips (i.e., miniature functional pieces of human organs, made from human cells, that can replicate some of the organ's functions; Emulate is the most advanced company in this space). Bio-bot research is also testing the boundaries of materials, and soft robots made entirely of soft components have recently been created. The BAE Systems corporation is likewise pushing the limits of computing by trying to create a "chemical computer" (the Chemputer), a machine that would use advanced chemical processes "to grow" complex electronic systems.

— Francesco Corea

III. Read More & References

What is an Artificial Connectome?



Kurakin, A., Goodfellow, I. J., Bengio, S. (2016). "Adversarial Examples in the Physical World". Technical report, Google, Inc. Available at arXiv:1607.02533.

Lake, B. M., Salakhutdinov, R., Tenenbaum, J. B. (2015). “Human-level concept learning through probabilistic program induction”. Science, 350(6266): 1332–1338.

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., Gershman, S. J. (2016). “Building Machines That Learn and Think Like People”. Available at arXiv:1604.00289.

Lo, A. W. (2004). “The Adaptive Markets Hypothesis: Market Efficiency from an Evolutionary Perspective”. Journal of Portfolio Management 30: 15–29.

Papernot, N., McDaniel, P. D., Goodfellow, I. J., Jha, S., Celik, Z. B., Swami, A. (2016). “Practical black-box attacks against deep learning systems using adversarial examples”. CoRR, abs/1602.02697.

Rosenberg, L. B. (2015). “Human Swarms, a real-time method for collective intelligence”. Proceedings of the European Conference on Artificial Life: 658–659.

Rosenberg, L. B. (2016). “Artificial Swarm Intelligence, a Human-in-the-Loop Approach to A.I.”. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16): 4381–4382.

Simon, H. A. (1955). “A Behavioral Model of Rational Choice”. The Quarterly Journal of Economics, 69 (1): 99–118.

Original post. Reposted with permission.

Bio: Francesco Corea is a Decision Scientist and Data Strategist based in London, UK.