
KDnuggets Home » News » 2020 » Jun » Opinions » Why Do AI Systems Need Human Intervention to Work Well? ( 20:n23 )

Why Do AI Systems Need Human Intervention to Work Well?


All is not well with artificial intelligence-based systems during the coronavirus pandemic. No, the virus does not impact AI – however, it does impact humans, without whom AI and ML systems cannot function properly. Surprised?



By Laduram Vishnoi, CEO & Founder at Acquire

Each one of us has experienced artificial intelligence (AI) in our daily lives. From customized Netflix recommendations to personalized Spotify playlists to voice assistants like Alexa managing shopping lists and appliances – all these examples show how integral AI-enabled systems have become to our lives.

On the business front, most organizations are heavily investing in AI/ML capabilities. Whether it is the automation of critical business processes, building an omnichannel supply chain, or empowering customer-facing teams with chatbots, AI-based systems have significantly reduced manual work and costs for businesses, leading to higher profitability. 

Despite these successes, a recent MIT Technology Review article by Will Douglas Heaven indicated that all is not well with artificial intelligence-based systems during the coronavirus pandemic. No, the virus does not impact AI – however, it does impact humans, without whom AI and ML systems cannot function properly.

Surprised? 

Well, we can't blame you. If you have been using machine learning algorithms for quite some time to handle your inventory, customer support, and other such functions, your systems are likely to be well-trained and working efficiently without human intervention. Yet, that statement is only partially true, and that’s because your machine learning algorithms are not trained on the ‘new normal’ that has emerged during the pandemic.

 

As we all know, the pandemic has changed the world completely, including the patterns of supply and demand, and buyer behavior in general. For example, it took only a few days for the top items searched on Amazon across the globe to be dominated by COVID-19 related products like toilet paper, face masks, and hand sanitizer. Things like phone chargers and Lego, which had ruled the roost for ages, were quickly dethroned. Such drastic changes have also impacted artificial intelligence, as machine-learning models trained on normal behavior are suddenly facing massive deviations, and many are not working as they should.

 

Noteworthy AI Failures Before the Pandemic

 
AI applications have been refined considerably over the past few years. Yet, there have been several setbacks along the way when machines have not worked as they should have – for one reason or another. To start with, IBM's "Watson for Oncology," which was supposed to revolutionize cancer treatment, turned out to be a deeply flawed product. Before being shelved, it was found to give incorrect medical advice that could potentially worsen patients' conditions. According to one source, the problem lay in the fact that Watson was trained on a small number of "synthetic cancer cases" rather than real patient data. Even the recommendations were based on a few cancer specialists' expertise rather than any written guidelines or evidence. 

Another popular (or unpopular) case was Amazon's hiring engine, which turned out to be biased against women. The model was trained on resumes submitted to Amazon over ten years and benchmarked against the resumes of its present engineering employees. In essence, the model trained itself to prefer men for the job. Some people familiar with the issue also reported that the system penalized resumes containing the term "women's" and downgraded resumes from two all-women's colleges.

Coming to the present time of the pandemic, one can take the hypothetical example of a company that sells disinfectants. If the retailer depended on an automated inventory management system, the chances are that the forecasts the company relied on (generated by its predictive algorithms from historical user behavior) no longer matched the actual demand surge caused by the coronavirus – leading to severe demand and supply issues. In fact, with supply chains being disrupted the world over and new demand patterns surfacing for various companies, it is time to rethink the AI-enabled models employed for sales and budget forecasting. As the 'new normal' emerges during the current economic and social upheaval, the data and assumptions drilled into these ML models are no longer up to date and can lead to grave mistakes.
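A mismatch like the one in the disinfectant example can often be caught with a simple drift check before the forecasts are acted upon. Below is a minimal illustrative sketch in Python – the function name, the z-score approach, and all of the numbers are hypothetical, not taken from any real retailer's system:

```python
import statistics

def demand_drift(history, recent, threshold=3.0):
    """Flag when recent demand deviates from the historical baseline.

    Compares the mean of a recent window against the historical mean,
    measured in historical standard deviations (a simple z-score check).
    Returns (drifted, z_score).
    """
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > threshold, z

# Hypothetical daily unit sales: stable for months...
history = [20, 22, 19, 21, 23, 20, 18, 22, 21, 20]
# ...then a pandemic-driven surge the model never saw in training.
recent = [180, 240, 310]

drifted, z = demand_drift(history, recent)
# drifted is True here: recent demand sits far outside the baseline,
# so the forecast should be escalated to a human rather than trusted.
```

A real deployment would use a proper statistical test and a longer window, but the point stands: a cheap automated sanity check plus a human escalation path is far safer than letting a stale model restock shelves on its own.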

 

Human Intervention is Crucial for AI Success

 
Machine-learning systems are only as good as the data they are trained upon. This means the current black swan event is the perfect trigger to reimagine the training sets fed to our AI-ML systems. Many experts believe that AI should be trained not only on simple worst-case scenarios but also on watershed events in human history like the Great Depression of the 1930s, the 2007-08 financial crisis, and the present pandemic.
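One simple way to act on that advice is to make sure crisis-period records are deliberately represented when a training set is assembled. The sketch below is purely illustrative – the function, the 30% crisis share, and the data are hypothetical assumptions, not a prescription from any particular framework:

```python
import random

def build_training_set(normal, crisis, crisis_fraction=0.3, size=1000, seed=42):
    """Assemble a training sample that deliberately includes crisis-period data.

    Draws `crisis_fraction` of the examples from crisis-period records so the
    model is exposed to extreme regimes, not just 'normal' behavior.
    """
    rng = random.Random(seed)
    n_crisis = int(size * crisis_fraction)
    sample = rng.choices(crisis, k=n_crisis) + rng.choices(normal, k=size - n_crisis)
    rng.shuffle(sample)  # avoid ordering effects during training
    return sample

# Hypothetical records: ordinary-period observations vs. crisis-period ones.
normal_records = list(range(100))
crisis_records = list(range(1000, 1100))

training_set = build_training_set(normal_records, crisis_records)
```

Oversampling rare regimes this way is a blunt instrument – it trades some accuracy on 'normal' days for robustness on abnormal ones – which is exactly the kind of trade-off that should be made consciously by a human, not left implicit in whatever data happened to be collected.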

Human oversight can also help overcome the shortcomings of AI to a large extent. At present, most people are concerned about their health and that of their loved ones. Once again, social media has come in handy for both consuming and propagating news about the pandemic. Yet, many people can't discern real news from fake news, which can have severe consequences in the real world. All of us are only too aware of the allegations against Facebook for its possible influence on the US elections through algorithms that pushed fake news. However, human oversight can help curb the spread of fake news, as it is up to readers to click through to the sources, verify the story, and report fake news to the platform to prevent its spread. 

 

The Way Ahead

 
Today, even as humans look to AI to immortalize themselves, they cannot leave AI to function independently without human oversight, because machines are, in the end, machines: they possess no moral or social compass. At best, AI is as good as the data it is trained upon, which, in turn, may reflect the biases, thought processes, or moral compass of its creators. To guard against such failures, it is necessary to train AI on disparate data sets and to keep human checks in place to maintain the delicate balance.

 
Bio: Laduram Vishnoi (@laduramvishnoi) is CEO and Founder at Acquire. He loves to share his research and development on Artificial intelligence, machine learning, neural network and deep learning.
