Four Broken Systems & Four Tech Trends for 2018

We may be well into 2018, but here is a set of tech trends to look forward to, along with four systems that revealed themselves to be inappropriate, inaccurate or outright broken in 2017.

By Carolina Ödman-Govender, Physicist

I was invited to speak at a Future Females event about 2018 Tech Trends. Future Females is a movement of female entrepreneurs, current and aspiring, the sort of women I deeply admire, so I was really honoured to be asked to come and speak. What a propitious way to end the year, too, to be asked to think about the future. Here are some thoughts on this vast topic.

Disclaimer: Coming from astrophysics, I am biased towards data science, statistics and the family of algorithms currently grouped under the machine learning and artificial intelligence umbrella.

How to approach the topic of Tech Trends? There are endless reports, publications, articles, domain-specific analyses and infographics about this. How to navigate this sea of buzzwords and extract some understanding of what it means for us in the coming year, or beyond? My starting point was to ask:


What is broken? What does not work in 2017, that should work better in 2018?

While this list could be very long, I came up with four systems that revealed themselves to be inappropriate, inaccurate or outright broken in 2017.

1. Social and political engineering

In 2017, social media networks demonstrated that they are not, as previously professed, a unifying force creating bridges between people across the world. Social media platforms have in fact been quite the contrary, polarising communities, fracturing societies and surfacing deep ideological divides, defended with the spite and aggression that was first identified in YouTube comments back in the day. It turns out that South Africa itself is not small enough to go under the radar of Twitter bots designed to fuel partisan opinions.

We’ve seen PR companies shift public opinion and fuel political unrest, and foreign countries use social media platforms to influence votes and elections in other nations. We’ve seen political forecasters, from traditional pollsters to cutting-edge statistical analysts, get it all wrong.

Clearly, this isn’t working.

2. Traditional (tech) education

Each year there are a number of voices describing how the education system in general is broken. While that may be true, and way too big to address here, I thought it would be interesting to focus on tech education, because some very interesting things are happening there.

The issue is that many corporates don’t feel that they can hire fresh graduates from universities, as they cannot hit the ground running as much as those corporates would like. Graduate unemployment in South Africa is an issue. There are some efforts. Sol Plaatjie University in Kimberley opened in 2014 with a focus on Data Science. Massive Open Online Course platforms like Coursera, edX, Udacity and others are now offering not just courses, but specialisations, or nano-degrees, which bundle a group of skills that are in high demand. Those online degrees are increasingly taken seriously in the job market, to the point where many big companies, like Microsoft, put a number of accreditation courses on such platforms.

An additional aspect that makes innovation in this space interesting is the emergence of different business models, and the use of incentives other than just making money. The examples I have in mind are the Insight Data Science Fellowships, Ecole 42, and its South African version, the most excellent we think code_. These are tech education institutions where the students do not bear the cost of their tuition; the corporates who need the talent they produce do. This changes the game tremendously in terms of accessibility. Ecole 42 and we think code_ in particular have no prerequisites in terms of qualifications whatsoever. That’s right: someone can have dropped out of school at 15 and still get in, as long as they pass the online problem-solving skills test. It is not a test of knowledge, but a test of how people approach problems, something that most schools are not teaching.

3. Data stewardship

We have seen some of the biggest data breaches in history this year, and some of the biggest cover-ups revealed as well. Whether at Uber, Equifax or Yahoo, at least one of those breaches most likely affects each one of us. In South Africa, we have seen a data breach reveal ID numbers and other personal information, which could be used for identity theft.

The issue here is that the incentives, ownership and risk are all distributed the wrong way. We all benefit from the system whereby large companies use our data to sell targeted advertising, in exchange for which we willingly give our data away so we can use social networks, web platforms and search engines for free. The problem comes when there is a breach. While those large companies may own the data, they don’t bear the risk that comes with a breach. That risk — of identity theft, of credit card abuse, and more — is still borne by ourselves, the consumers. Therefore, the incentive to apply proper data security measures capable of withstanding the ever smarter hacks does not lie with the big companies who own and keep the data. Entire reports exist on the stolen data economy; it turns out that an identity isn’t very expensive on the dark web.

Those identities for sale are not the big companies’ info, they’re yours and mine. Clearly this isn’t working.

The issue is so big that some are investing heavily in attempts to rectify the situation; there is big money in this.

4. AI applications propagate the bias of their creators

Artificial Intelligence algorithms in use today are mainly built on machine learning algorithms that are programmed and then trained on data. This leaves room for biases in both the algorithmic and training phases, the latter being probably the most revealing. The biases of AI have been thoroughly documented in 2017 by many major publications. (Please click on all these links to find out more — some articles may be behind paywalls).
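The training-phase bias described above can be sketched in a few lines. This is a toy illustration with deliberately invented numbers, not real data: a naive frequency-based model trained on a skewed set of labelled examples will simply reproduce the skew, tagging every scientist as male no matter who is actually tweeting.

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: (occupation word, label).
# 90% of the "scientist" examples happen to be labelled male.
training = [("scientist", "male")] * 90 + [("scientist", "female")] * 10

# A naive frequency model: predict the majority label seen for a word
counts = Counter(label for word, label in training if word == "scientist")
predict = counts.most_common(1)[0][0]

print(predict)  # the model tags *every* scientist as male
```

The model is not malicious; it is faithfully summarising a biased sample, which is exactly why the data fed to these systems matters as much as the algorithm.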

Two examples below show how pervasive those biases are.

The first image is a screen grab of a Google image search for “face front” with the additional filters of colour images and Creative Commons licensing, allowing me to show the images elsewhere. Looking for a face, I got one statue, one drawing, one building, TWO animals, but only one black person. This essentially means that when searching for an image of a face, a black face appears no more frequently than errors like animals and buildings! This is not just a poor representation of the world’s faces, but an utterly biased signal-to-noise ratio.

In the second case, a friend of mine played with an online AI service built at Cambridge University, which derives demographics from analysing tweets or Facebook posts. Needless to say, my friend, I, and a whole bunch of female scientists were immediately tagged as male, raising the justified question: “Are generalisations justifiable in the age of AI?”.

The answer is of course no. Therefore, this is clearly not working. And if we are going to trust our AI, we both need to improve it and to bear in mind the edge cases that the AI is going to miss. We have to stay smarter than AI (and we can).

That covers some of the big broken things of 2017 in my humble and humanly biased opinion. Now onto some interesting technology trends that I (and many others) foresee.


How is technology likely to impact our lives differently in 2018?

Again, this is not an exhaustive list. I have just identified four interesting trends that I’m eager to see develop. There is a bias in this list, as this is mainly technology that is visible to the consumer, if not direct consumer technology. There are fascinating tech developments in industrial and research areas that are not directly visible to us, and that’s an entire other world to explore.

1. Working and living with robots

Robots have been standard tools for many scientists for a few years now, and they keep advancing scientific research. From a robotic eel used to find the source of water pollution to stealth robotic zebrafish that are accepted into the school, robots are being used at the cutting edge in creative ways. Closer to us, robotic surgery is becoming a reality for many doctors (with the natural pains of transition), and therefore also for patients. Even more interesting is the rise of affordable, natural-looking, individually-fitted smart prosthetics. With 3D printing, prosthetics no longer need to look like a shop-window mannequin, and they can be fitted to any individual. With advances in neuroprosthetics, this is now truly living with robots.

We will also see robots entering our physical space more, I think. Beyond robots at the end of phone numbers, or regulating our social media feeds, we will start seeing them. Any autonomous vehicle is in fact a robot. Whether a self-driving car, shuttle or truck, or a drone, indoors or outdoors, robots are becoming visible, and they will soon be part of our living environment.

2. The vanishing interface

In the past we would go out and buy some piece of technology to access something: a walkman for music, a game console to play games, or a computer to read email. Now this is disappearing more and more. Instead of having to purchase more technology, we stick to a few items that we have lived with for decades and that appear to be here to stay: cars, watches, telephones, entertainment systems. The interface to everything we like starts to live inside those objects instead, and navigates between them seamlessly. We listen to our music from our phone, but the car takes it over when we start driving, and it’s the car that picks up the next phone call, etc.

The interface is now both vanishing and everywhere, and this raises some very interesting opportunities and challenges for the next few years, from interface/interaction design to data and privacy. I look forward to seeing how this trend will develop in 2018.

3. AI as a commodity

Artificial Intelligence is unavoidable. It will penetrate most of what we do. There are some very interesting creative applications of AI, from optimising refugees’ chances of gaining employment based on where they are relocated, to your ordinary chat bot. The difference now is that artificial intelligence is becoming commoditized, and therefore accessible to anyone to include in their products and services. This commoditization has three main facets:

  • Open Source Software
    Tensorflow, Spark, PyBrain, OpenNN, etc. Some of the best AI and machine learning libraries are open source. No more hiding behind proprietary fine-tuned algorithms. Between this and the wealth of free online courses, nearly anyone can learn to apply machine learning and artificial intelligence to their own business.

  • Application Programming Interfaces — APIs
    If one doesn’t have lots of data to train a machine learning algorithm on, for example, then there are pre-trained engines that one can readily access through APIs. And each call to, say, analyse a picture or some text is not very expensive at all anymore. All the big companies have a comprehensive set of AI APIs ready to be used: Microsoft Azure’s Cognitive Services, Google’s Cloud services for speech, vision and natural language, or Amazon’s Rekognition, for example.

  • Computing at the edge
    This is a particularly interesting one in my opinion. Previously, if you looked around the world of connected devices, you could almost say that the internet of things was mainly a bunch of connected sensors, collecting data to be analysed centrally somewhere else, hence the need for the connection. Now chip makers around the world have woken up to the GPU revolution: the fact that Graphics Processing Units (typically found in gaming computers) are more effective for machine learning and artificial intelligence than normal CPUs, the central processing units that sit in laptops, cellphones and such. Last year saw a number of announcements by major chip makers of a new kind of chip designed for artificial intelligence: neuromorphic computing (called neuromorphic in reference to the neural networks that most artificial intelligence uses). Chips being chips, they tend to come in the billions and be cheap. This is going to bring affordable AI hardware capability right to the edge, where the sensor sits. I believe that is going to change how we think about AI applications in the world and I’m excited to see where this leads.
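To illustrate how low the barrier to entry has become, here is a minimal sketch, using nothing but the Python standard library and made-up 2-D data, of the kind of classifier that the open-source libraries above implement at scale: a nearest-centroid model, one of the simplest machine learning techniques there is.

```python
import math

# Made-up 2-D training points for two hypothetical classes
data = {
    "cat": [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)],
    "dog": [(4.0, 4.2), (4.1, 3.9), (3.8, 4.0)],
}

# "Training" here is just computing one centroid (mean point) per class
centroids = {
    label: (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))
    for label, pts in data.items()
}

def classify(point):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(point, centroids[lbl]))

print(classify((1.1, 0.9)))  # near the "cat" cluster
print(classify((4.0, 4.0)))  # near the "dog" cluster
```

Real libraries add optimisation, GPUs and scale, but the conceptual core is this accessible, which is exactly why the combination of open code and free courses is so powerful.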

4. Blockchain

Finally I must mention this elephant-in-the-room technology, of which I am far from a specialist. I understand the principles upon which it works, but don’t ask me about the specific differences between cryptocurrencies and exchanges. Blockchain is the technology that underlies cryptocurrencies, and those cryptocurrencies have been observed to offer an attractive alternative in places where central authorities are no longer trusted. They also attract organisations with nefarious purposes. This brings me to the two features of blockchain technology that make it such a game changer:

  • Trusted decentralisation to replace untrusted centralised systems
  • Uniqueness of digital artefacts

The first point is why cryptocurrencies are so attractive in countries where the central economic authority is in difficulty or untrusted. It can have applications in contracts, supply chains, etc. The second point is unprecedented: the ability to tell an original from a copy has never before been possible with digital artefacts. This can guarantee the authenticity of pieces of art, limit the copying of copyrighted material, etc.
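The first property can be sketched in a few lines of Python. This is a toy illustration, not a real consensus protocol: each block stores the hash of its predecessor, so tampering with any historical record invalidates every hash after it, which is what lets mutually distrusting parties verify a shared ledger without a central authority.

```python
import hashlib

def block_hash(record, prev_hash):
    """Hash a block's record together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

# Build a tiny chain of three made-up records
GENESIS = "0" * 64
chain, prev = [], GENESIS
for record in ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"]:
    prev = block_hash(record, prev)
    chain.append((record, prev))

def verify(chain):
    """Recompute every hash; any edit to history breaks the chain."""
    prev = GENESIS
    for record, h in chain:
        if block_hash(record, prev) != h:
            return False
        prev = h
    return True

print(verify(chain))  # True: the untampered chain checks out
chain[0] = ("Alice pays Bob 500", chain[0][1])  # rewrite history
print(verify(chain))  # False: the stored hashes no longer match
```

Real blockchains add distributed consensus, signatures and incentives on top, but this hash-linking is the mechanism behind both trusted decentralisation and the uniqueness of digital artefacts.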

I will leave this here without further development, but I think it is important not to be blinded by all the hype surrounding cryptocurrencies, and instead to see the technology for its uniqueness and look at what it can enable in any industry. 2018 will undoubtedly see some of those interesting applications emerge from the noise of cryptocurrency value fluctuations.


In Conclusion

As this was addressed to entrepreneurs, I decided to think a little about where opportunity lies. My impression is that technology and a problem to solve aren’t enough; you also need the right model of economic incentive, and there is space for a lot of creativity there, which I tried to illustrate.

If you have time to read on:

One last point I’d like to highlight here is the ‘Beyond CSR’ thinking of economic incentives.

There are two examples of win-win situations that have nothing charitable about them in this article. The first one is the business model of we think code_ (and Ecole 42). By funding the school, the companies that hire their graduates can be seen as selfishly looking after their interests, i.e. fixing the scarcity of talent that they need, but in fact it goes way beyond that. In South Africa, someone with a job often supports an entire family, often an extended family too, especially if they entered the coding school without major qualifications, and this contributes to alleviating the dreadful effects of poverty. So beyond the win-win of the model itself, there’s an additional positive side-effect.

The second example is that of AI applied to relocate refugees to increase their chances of finding employment. While this example is not from the private sector, the reward/currency of academia is scientific publications and this clearly led to at least one. But it also applied cutting-edge technology for the benefit of people whose interests often come last. Moreover, the outcome, higher chance of employment, directly results in better integration, and seeing that integration goes both ways, leads to more tolerance too. So beyond the win-win of this project, there is an additional positive side-effect.

The bottom line is that by focusing beyond just a single beneficiary, one can be creative around a generative chain of value, wealth and wellness creation. Technology or not, that is unlikely to ever change.

Bio: Carolina Ödman-Govender is a Physicist of the Bayesian confession. EPFL & Cambridge Alum. Also proud mama and happy wife.

Original. Reposted with permission.