Why Data Scientists Expect Flawed Advice From Google Bard

First reported by Reuters, Bard returned an inaccurate response, leading to a drop in Alphabet’s (GOOGL) stock price by as much as 9% on the day of the demonstration. For many in the data community, this did not come as a surprise; here’s why.



During a recent promotional event, Google offered the world its first glimpse of Bard, the tech giant’s attempt to dislodge the immensely popular AI chatbot, ChatGPT. And while this event was likely designed to generate buzz around Google’s foray into AI chat, it quickly became a public demonstration of what can go wrong with these mysterious AI-based chat services.



What’s Wrong With Bard’s Ad?


The issue: When asked what information one might share with a 9-year-old about the stunning James Webb Space Telescope (JWST), Bard claimed the telescope was “used to take the very first pictures of a planet outside the Earth's solar system.” Observers quickly flagged the error: the first pictures of a planet outside Earth’s solar system were taken in 2004 by the European Southern Observatory’s Very Large Telescope (VLT).


What Is Google Bard?


Google CEO Sundar Pichai calls Bard a “conversational AI service.” If that sounds familiar, you’ve likely encountered one of the many online influencers talking about ChatGPT in the past few months.

Opinions vary widely about this AI breakthrough. Those in favor discuss revolutionary plans or share excitement about leveraging the chatbot to grow a business quickly. One of the most popular use cases for ChatGPT is automating repetitive tasks. On the other hand, critics warn against misplaced faith in its abilities, such as the assumption that the chatbot can replace human research and critical thinking. Google Bard will spark similar conversations.


What Is LaMDA?


Google’s Bard is based on LaMDA, a language model the company rolled out in 2021, which itself is built on the company’s popular open-source neural network architecture, Transformer.

Interestingly, LaMDA was trained using dialogue, allowing it to learn how to respond to human conversations more naturally than previous language models. (If the name LaMDA sounds familiar, maybe you recall when a Google engineer believed LaMDA had become sentient.)

Announced in May of 2021, Google’s LaMDA debut was a chance to discuss the benefits of their model, with the announcement specifically focused on the factuality of its responses. Two years later, during the Bard announcement, reporters quickly discovered a factual error. What will this mean for Google’s implementation? We will have to wait and see.


How Is Google Bard Different From LaMDA?


Given how similar the two sound, some have asked, what makes this new product different? Google modestly claims its new service adds the “breadth of the world’s knowledge” to LaMDA’s already-impressive conversational skills. It also promises to advance the model’s capabilities to offer more guidance and research than an ordinary chatbot. In short, Bard relies on LaMDA. LaMDA doesn’t rely on Bard.

Also note that Google plans to open the service to developers via an API, extending its reach well beyond a standalone chatbot. And while users can already drive free-flowing conversations with LaMDA, Google Bard promises to elevate the conversation by including the same information the company provides with its omnipresent Google Search service.

Counterintuitively, Bard’s factual misstep shouldn’t undercut the promised value of this new service. Google’s search results never promised absolute accuracy. Searchers should always consider their sources. That fact remains, even with the release of an impressive chatbot.


Ways to Use Google Bard


Data scientists aren’t the only people excited to use Google Bard. Pichai imagines this service will help anyone who uses Google. His promise? Bard simplifies complex topics. Consider the following use cases:


Find Simple Answers to Complex Questions


Imagine you want to take a vacation. Google’s existing search easily answers questions when you know where you want to go and helps you find prices for specific dates. But Pichai likens this new chatbot to a conversation with a friend.

You should be able to start a conversation about a project at work, break down a complex research article into the most important takeaways, and then ask for recommendations on applying those takeaways over the next few weeks.

Bard could also augment training on sales teams, offer dinner recommendations when you host a picky eater, and discuss the best parts of the latest Marvel movie. Google already implements AI to augment its products, but this process will expand as the new chatbot evolves. All this, and you’d never have to leave your familiar search engine.


Automate Routine Tasks


Google already integrates with marketplaces, allowing you to purchase goods and make reservations directly from their interface. Google Bard could allow you to do this via conversation. Alexa might be able to start your Spotify playlist, but imagine working in tandem with an AI to curate the soundtrack to your next road trip while you drive to work.


Join a Community Conversation


Social networks often foster online arguments, rewarding controversial takes and providing trolls with additional exposure. This has led many to abandon popular 2010s-era platforms. Google Bard could curate conversations for niche communities, connecting curious minds while filtering out hate speech and misinformation.


Potential Pitfalls when Using Google Bard


Unsurprisingly, this new chatbot has garnered much attention, at least in part, because of its broad potential for applications. It's equally exciting for a board-game fanatic as it is for a project manager. Pichai omits a few important points in his press releases, and these became more obvious with the recent misattribution of extraordinary space photography. Here are a few more points to consider.


Chatbots Don’t Have to Tell the Truth


Popular conversational agents like ChatGPT have built-in ethics guidance. For example, it shouldn’t make off-color jokes or deliberately try to mislead you. But that’s not always the case with AI chat systems.

Check out this recent study about the use of AI in the board game Diplomacy, which focused on the ability to negotiate and form alliances. The study also emphasized the use of penalties for any party that breaks a promise. It’s encouraging to see teams focused on building AI strategies that promote trustworthy communication, but consider what that means for unwary data scientists.

The further we progress with chatbot technology, the more we should consider the motivation of those providing the service. In situations where negotiations take place (financial services, trading, salary negotiations), a chatbot may deliberately mislead the user.


Machine Learning Doesn’t Negate Bias


The data science community is already well aware of biases in machine learning. Like their human counterparts, conversational chatbots regularly show confirmation bias and prejudice bias. Google Bard will undoubtedly account for some of these issues, but we should anticipate flaws from the newest third-party applications when released.
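As a minimal sketch of that point (hypothetical data, nothing to do with Bard's internals), consider how a skewed training set alone biases a model. A naive classifier that simply learns the majority label reports impressive accuracy while failing the minority group entirely:

```python
from collections import Counter

# Hypothetical, deliberately imbalanced training labels:
# 95 "approve" examples, only 5 "deny" examples.
train_labels = ["approve"] * 95 + ["deny"] * 5

# A naive "model" that always predicts the most common training label.
majority_label = Counter(train_labels).most_common(1)[0][0]

def predict(_features):
    return majority_label

# Evaluate on a test set with the same skew.
test_labels = ["approve"] * 95 + ["deny"] * 5
predictions = [predict(None) for _ in test_labels]

accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)
deny_recall = sum(
    p == t for p, t in zip(predictions, test_labels) if t == "deny"
) / 5

print(accuracy)     # 0.95 -- looks impressive on paper
print(deny_recall)  # 0.0  -- the minority class is never predicted
```

A real language model is vastly more complex, but the lesson scales: high aggregate scores can hide systematic failures on underrepresented groups.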

And, while Google hires some of the best data scientists in the world, the tech giant is not without its faults. Remember the ethics researcher who raised some inconvenient questions? She voiced concerns about environmental costs, inscrutable models, misdirected research efforts, and the potential for propagating misinformation.


All Data Sets Are Flawed


At their core, chatbots rely on natural language processing (NLP) models. But no data set ever represents the full truth. Deep learning may help provide more natural answers for conversation partners and, at its best, mitigate some of the limitations of its data set. But, despite what your favorite influencer may say, no chatbot is God.

To quote George Box: “. . . there is no need to ask the question ‘Is the model true?’ If ‘truth’ is to be the ‘whole truth’ the answer must be ‘No.’ The only question of interest is ‘Is the model illuminating and useful?’”


How to Access Google Bard


Google has made Bard available to “trusted testers,” but the company promises a wider release within weeks. That means developers should get access to Google’s lightweight version before the snow melts in Michigan’s Upper Peninsula.

What does that mean for updates to Google Search? We’ll have to wait and see. Google already uses AI to improve visual search results. I expect to see many changes this year and next. I look forward to exploring Bard’s potential as a tool for business.


How Can I Get Started with Data Science for Bard?


If you want to get started with data science, we ranked some of our favorite courses. These vary in depth and scope, so be sure to find the option that suits your skill level. Those considering a career shift may consider learning the industry from scratch, though many people choose to learn a programming language like Python as an entry point to the field.


Learning Data Science from Scratch


All beginners should familiarize themselves with mathematics, specifically statistics for data science. These skills serve the scientist in programming, data evaluation, data storytelling, and most other parts of the job. Determining the relevancy of a data trend relies on understanding statistical significance as a starting point.


Learning Programming as a Data Scientist


Data science covers a variety of functions for a business, so data scientists learn several languages and tools. Popular skills include Python, R, SQL, Hadoop, and Apache Spark. For a deeper dive into the subject, we also discussed how to learn data science.
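As a hypothetical first exercise, the aggregation you would express in SQL with GROUP BY can be written in plain Python with only the standard library, which is one reason the language is such a common entry point:

```python
from collections import defaultdict

# Hypothetical sales records: (region, amount).
sales = [
    ("north", 120.0), ("south", 80.0), ("north", 95.5),
    ("east", 60.0), ("south", 110.0), ("east", 40.0),
]

# Equivalent of: SELECT region, SUM(amount) FROM sales GROUP BY region;
totals = defaultdict(float)
for region, amount in sales:
    totals[region] += amount

print(dict(totals))  # {'north': 215.5, 'south': 190.0, 'east': 100.0}
```

Libraries like pandas (or Spark, at scale) wrap this same idea in a single call, so the concept transfers directly across the tools listed above.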


Implementing AI as a Professional


For professionals interested in applying machine learning, natural language processing, and other modern techniques in their own fields, check out Stanford’s Artificial Intelligence Professional Program.


Concluding Thoughts


If you’re excited about Google Bard, you’re not alone. If you’re nervous about it, scared of it, or confused by it, you’re not alone. Data enthusiasts understand how these chatbots generate their advice, and we can expect similar errors from any technology built on large language models.
Jim Markus manages Hackr.io and a portfolio of sites at VentureKite. He hosted Frugal Living, a popular finance podcast, and co-created The Ink & Blood Dueling Society, a theatrical writing event that appeared at conventions across the United States. He is also an award-winning game designer.