
ChatGPT and Google's Bard: Are we looking for answers in all the wrong places?

Feb. 9, 2023 Hi-network.com
Image: Getty Images

A new era of searching the internet is underway, driven by impressive advances in AI. Just a few short months after its launch, OpenAI's conversational chatbot ChatGPT has Google rethinking its foundational service, and it's created an opening for other technology companies like Microsoft to gain new ground. 

It's no surprise that a conversational tool like a chatbot could disrupt the search business when you think about how the market has evolved. 

In Depth: These experts are racing to protect AI from hackers. Time is running out

Google, the world's dominant search engine for about two decades, says its mission is "to organize the world's information and make it universally accessible and useful." 

The world's information, however, continues to accumulate at a dizzying pace. The research firm IDC last year predicted that the amount of data created on an annual basis will reach more than 221,000 exabytes by 2026. That's more than double the amount of data created in 2022. 
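For a sense of what that forecast implies, here is a quick back-of-the-envelope calculation in Python. The 2022 baseline is an assumption derived from the "more than double" claim above, not a separate IDC figure:

```python
# Implied growth rate behind the IDC forecast quoted above.
# Assumption: "more than double" means the 2022 total was at most
# half of the 2026 forecast; the exact 2022 figure isn't given here.

forecast_2026_eb = 221_000                 # exabytes, per IDC
baseline_2022_eb = forecast_2026_eb / 2    # upper bound implied by "more than double"

years = 2026 - 2022
implied_cagr = (forecast_2026_eb / baseline_2022_eb) ** (1 / years) - 1
print(f"Implied annual growth: at least {implied_cagr:.1%}")  # roughly 19% per year
```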

A search engine that indexes websites is certainly an effective way to organize all that information, but it's not necessarily the best way to make it useful. 

Also: The best AI chatbots: ChatGPT and other fun alternatives to try

In fact, it's so easy to collect and organize data that it can be a challenge just to sift through your own data, or the data you're searching at work. Do you remember how long it took the last time you had to dig through your company's HR platform just to figure out how to file an expense report?

These kinds of challenges present opportunities for the next iteration of search.

"When people think of Google, they often think of turning to us for quick factual answers, like 'how many keys does a piano have?'" Google CEO Sundar Pichai wrote in a blog post this week, introducing Google's own experimental AI chatbot, Bard. "But increasingly, people are turning to Google for deeper insights and understanding -- like, 'is the piano or guitar easier to learn, and how much practice does each need?' Learning about a topic like this can take a lot of effort to figure out what you really need to know, and people often want to explore a diverse range of opinions or perspectives."

Pichai added: "AI can be helpful in these moments, synthesizing insights for questions where there's no one right answer."

The problem, however, is that these subjective insights, neatly packaged in a conversational format, typically have to be grounded in some kind of truth. 

Also: How to start using ChatGPT

As Sabrina Ortiz explained, these conversational chatbots are designed to converse with people -- not necessarily to deliver accurate answers. OpenAI trained its language model using Reinforcement Learning from Human Feedback (RLHF): human AI trainers provided the model with conversations in which they played both parts, the user and an AI assistant. Instead of asking for clarification on ambiguous questions, the model simply guesses at what your question means, which can lead to unintended responses.
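To make that training setup concrete, here is a minimal toy sketch of the RLHF idea in Python. This is not OpenAI's pipeline: the candidate replies, the features, and the simulated preference rule are all invented for illustration.

```python
# Toy RLHF sketch: fit a reward model from simulated human preference
# pairs, then pick the reply the learned reward scores highest.
import random

random.seed(0)

# Hypothetical candidate replies to an ambiguous question, with made-up
# features: how much the reply hedges/asks for clarification, and length.
candidates = [
    {"text": "It depends -- do you mean learning piano or guitar?", "hedges": 1.0, "length": 0.4},
    {"text": "A piano has 88 keys.",                                "hedges": 0.0, "length": 0.2},
    {"text": "The piano was invented around 1700.",                 "hedges": 0.0, "length": 0.3},
]

def human_prefers(a, b):
    """Simulated trainer label: prefer replies that hedge on ambiguity."""
    return a if a["hedges"] >= b["hedges"] else b

# Reward model: a linear score over the toy features.
weights = {"hedges": 0.0, "length": 0.0}

def reward(reply):
    return sum(weights[f] * reply[f] for f in weights)

# Preference-learning loop: sample a pair, ask the (simulated) human
# which reply is better, nudge the reward model toward the winner.
for _ in range(200):
    a, b = random.sample(candidates, 2)
    winner = human_prefers(a, b)
    loser = b if winner is a else a
    for f in weights:
        weights[f] += 0.01 * (winner[f] - loser[f])

print(max(candidates, key=reward)["text"])  # the clarifying reply wins
```

Notice the point the article is making: nothing in this loop checks any reply against the facts -- the learned reward only reflects what the trainers happened to prefer.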

This has already led the developer question-and-answer site Stack Overflow to at least temporarily ban ChatGPT-generated responses to questions. "The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce," Stack Overflow moderators explained.

OpenAI itself acknowledges, "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers."

Google's Bard seemingly attempts to address this issue by allowing its models to tap into recently created data from external sources. Bard is based on LaMDA, a large language model developed by Google. LaMDA's developers, as Tiernan Ray noted, specifically focused on improving what they call "factual groundedness." They did this by allowing the program to call out to external sources of information beyond what it had already processed during its development, the so-called training phase.
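In code, that grounding pattern looks roughly like the sketch below. The names `search_external_source` and `generate` are hypothetical placeholders standing in for a real retrieval call and a real language-model call, not Google APIs:

```python
# Sketch of "factual groundedness": fetch fresh evidence from outside
# the model, then prompt the model to answer from that evidence.

def search_external_source(query: str) -> list[str]:
    # Placeholder: a real system would query a search index or API.
    return [
        "NASA, Sept. 2022: Webb takes its first direct image of an exoplanet.",
        "The first direct image of any exoplanet was taken in 2004.",
    ]

def generate(prompt: str) -> str:
    # Placeholder for the language-model call.
    return "Webb took its first direct exoplanet image in 2022; the first ever came in 2004."

def grounded_answer(question: str) -> str:
    evidence = search_external_source(question)
    prompt = (
        "Answer using only the sources below; note any ambiguity.\n"
        + "\n".join(f"- {s}" for s in evidence)
        + f"\nQuestion: {question}"
    )
    return generate(prompt)

print(grounded_answer("What new discoveries has the James Webb Space Telescope made?"))
```

The catch, as the demo below shows, is that the answer is only as good as whatever the retrieval step brings back.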

Also: The best AI writers: ChatGPT and other interesting alternatives to try

However, Google's recent Bard demo, which went publicly wrong, illustrates exactly why tapping external sources of information is risky business, particularly for AI models that prioritize coherence over accuracy. In response to the question, "What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?" Bard said that the telescope took the first-ever image of an exoplanet -- which isn't right. 

How did Bard end up making this inaccurate statement? It probably has to do with the quality of the external information available on the topic. As any computer scientist knows, "garbage in, garbage out."

And indeed, NASA's own materials about the James Webb Space Telescope -- no doubt trying to portray the telescope in the best light possible -- were ambiguous. In September 2022, the agency wrote, "For the first time, astronomers have used NASA's James Webb Space Telescope to take a direct image of a planet outside our solar system." To clarify, this was the first time this specific telescope took a direct image of an exoplanet -- but another telescope did so as early as 2004. 

One immediate way to address these chatbot shortcomings is to offer as much transparency as possible. Microsoft's new version of the Bing search engine, which runs on a next-generation OpenAI large language model, cites its sources alongside its answers. 
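Here is a minimal sketch of what carrying citations alongside an answer might look like; the structure is illustrative, not Bing's actual format:

```python
# Attach each answer's sources so readers can verify the claims.
from dataclasses import dataclass

@dataclass
class CitedAnswer:
    text: str
    sources: list[str]

    def render(self) -> str:
        refs = "\n".join(f"[{i + 1}] {url}" for i, url in enumerate(self.sources))
        return f"{self.text}\n\nSources:\n{refs}"

answer = CitedAnswer(
    text="Webb's first direct image of an exoplanet was released in September 2022 [1].",
    sources=["https://www.nasa.gov"],  # illustrative source
)
print(answer.render())
```

Nothing here verifies the claim; it only makes the claim checkable -- which is the point of the transparency approach.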

Also: ChatGPT took an MBA exam. Here's how it did

So, where does this leave us? As always, it helps for users to approach these tools with a skeptical eye and a clear understanding of how they work -- something Microsoft itself points out.

"Bing aims to base all its responses on reliable sources -- but AI can make mistakes, and third party content on the internet may not always be accurate or reliable," the Bing FAQ section reads. "Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate. Use your own judgment and double check the facts before making decisions or taking action based on Bing's responses."

Our new digital friends may want to be helpful, but it would be unwise to rely on them just yet.

See also

  • How to use ChatGPT to write Excel formulas
  • How to use ChatGPT to write code
  • ChatGPT vs. Bing Chat: Which AI chatbot should you use?
  • How to use ChatGPT to build your resume
  • How does ChatGPT work?
  • How to get started using ChatGPT

Hot tags: Artificial Intelligence, Innovation
