Chaos is needed to keep us smart with Machine Learning

This post analyses why the chaotic nature of our lives can be used to improve machine learning algorithms.

By Smrati Gupta, CA Technologies.

I like how machine learning has found its way into our lives and how it makes interaction with technology far more fluid and natural than before. I love how YouTube suggests my next videos based on what I watched previously. I love that Google Maps knows I would like to go to yoga at 6 am and to work at 9 am. I like that I don't need to go through the list of movies on Netflix in alphabetical order, because it knows which ones I would like the most. I like the power of machine learning-based software to predict, learn, recommend and self-correct. As we interact more with these background algorithms, they become better tuned to provide us tailor-made products. It is like walking into a restaurant and being offered a shortlist of dishes tailored to my taste without my asking for it. Oh wait: we already have that, right?



We like this angle of technology because it makes our lives easier. It reduces the pain of sifting through information that does not interest us and is noise from our perspective. What is noise for me may not be noise for you, and hence machine learning removes noise from our lives in a relative manner. This "relative noise reduction" helps us trust Artificial Intelligence (AI) more. The acceleration in the adoption of technology driven by the power of AI is a self-sustaining cycle: as our trust in technology grows, we use it more, which improves the technology further. Once we trust the technology, we allow it to advise us, to compete with us, to help us and to educate us. We let AI learn our tastes and take it from there.

The "relative noise reduction" paradigm of AI reduces the outliers you see in your interactions with technology. However, the noise that you no longer see is quietly narrowing your frontiers. As a data scientist, I am happy to see my algorithm achieve high accuracy, recommend well and self-correct. But these algorithms learn from what they see, from things that exactly or partially resemble the history. They do not and cannot propose the things my brain would propose in an exploratory process, without any proof of obvious interest from history. And that brings us to a "side effect" of AI: the unconscious trimming of creativity. I am interested in watching astronomy-related documentaries, and Google, Facebook, Instagram, YouTube, LinkedIn and even Pinterest seem to be aware of this. Somehow, a chain reaction has been triggered that funnels me to a gazillion resources around astronomy. How are we allowing the users of AI-powered technology to broaden their horizons by showing them something absolutely out of the blue once in a while? How are we ensuring that the power of human nature and our ability to learn are embedded in the recommendations we design? There seems to be an inherent notion in this philosophy that you must be a "master of one trade". I would rather be a "jack of all trades".

The chaos that we capture in our lives is important, particularly when we expose ourselves to knowledge resources. While machine learning recommendations through algorithms like collaborative filtering give the user tailored exposure to what the system thinks is useful, they also impose a structure on the noise and, over time, remove the noise completely. Once we are no longer exposed to noisy recommendations, our periphery of exposure is bound by history. It is therefore important that the paradigm of recommendations through data science, or selective exposure to content on the pretext of "tailor-made solutions", remains noisy enough that we can embrace the creativity that is natural to humans. When you detect an outlier in your data, the common wisdom is to treat it as a rarity and not base conclusions on it, yet outliers contain valuable information. Our algorithms should learn from the outliers in history, and every spike that we ignore on the pretext of data cleaning may be the most valuable element of creativity in the process of machine learning. In the true sense, that would be the paradigm for a machine to be "learning" as a human being does.
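The idea of keeping recommendations "noisy enough" can be sketched in code. Below is a minimal, illustrative item recommender in the spirit of collaborative filtering, with an `epsilon` of deliberate chaos: with a small probability it recommends a uniformly random unseen item instead of the history-scored favourite. The ratings data, user and item names, and the `epsilon` parameter are all hypothetical assumptions for illustration, not anything from this post.

```python
import random

# Hypothetical user -> {item: rating} data, purely for illustration.
RATINGS = {
    "ana":  {"astro_doc": 5, "space_talk": 4, "cooking_show": 1},
    "ben":  {"astro_doc": 4, "space_talk": 5, "jazz_history": 5},
    "carl": {"cooking_show": 5, "jazz_history": 4, "pottery_101": 5},
}

def similarity(u, v):
    """Cosine similarity between two users over their co-rated items."""
    common = set(RATINGS[u]) & set(RATINGS[v])
    if not common:
        return 0.0
    num = sum(RATINGS[u][i] * RATINGS[v][i] for i in common)
    den = (sum(r * r for r in RATINGS[u].values()) ** 0.5 *
           sum(r * r for r in RATINGS[v].values()) ** 0.5)
    return num / den

def recommend(user, epsilon=0.2, rng=random):
    """Recommend one unseen item.

    With probability `epsilon`, ignore history entirely and pick a
    random unseen item (the deliberate noise); otherwise return the
    item with the highest neighbour-weighted score.
    """
    seen = set(RATINGS[user])
    unseen = {i for u in RATINGS if u != user for i in RATINGS[u]} - seen
    if not unseen:
        return None
    if rng.random() < epsilon:
        return rng.choice(sorted(unseen))  # serendipitous pick
    scores = {
        i: sum(similarity(user, u) * RATINGS[u].get(i, 0)
               for u in RATINGS if u != user)
        for i in unseen
    }
    return max(scores, key=scores.get)
```

With `epsilon=0` this collapses to a purely history-bound recommender, which is exactly the narrowing effect described above; tuning `epsilon` up trades short-term accuracy for exposure to things the user's history would never have surfaced.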


Original. Reposted with permission.