Google Bard AI provides false information and causes huge losses

Google’s ChatGPT rival Bard recently committed a critical error. The AI misrepresented the James Webb Space Telescope in front of a general audience. Sadly, all generative AIs have this problem.

Google has just unveiled Bard out of concern that ChatGPT, which is now part of Bing, may help Microsoft dominate the search engine industry. This clever chatbot will soon be included in the search engine, able to generate text and respond to queries from online users.


Bard AI is wrong

Google has shared several real-world interactions on its website and social media platforms to give users an overview of what Bard can do. One screenshot in particular shows a conversation about the James Webb Space Telescope. The following query is directed at Bard: “What discoveries from the James Webb Space Telescope can I tell my nine year old about?”.

The chatbot responds by listing three pieces of information. The telescope, according to Bard, “took the first ever photos of a planet outside our own solar system,” for instance. That is wholly false. In reality, the first image of an exoplanet was captured in 2004, some 17 years before the James Webb Telescope was put into operation. The details are available on the official NASA website.

On the other hand, the James Webb telescope recently discovered an Earth-sized exoplanet. According to The Verge, a number of astronomy enthusiasts clarified the situation on Twitter.

This error illustrates a broader problem with generative AI. Based on the data available to them, chatbots occasionally produce false information. ChatGPT, for instance, builds its responses around the queries posed by online users. If your request rests on a false assumption, there is a good chance the response will include some made-up information. OpenAI itself warns Internet users on its website: “ChatGPT is not connected to the internet and can sometimes produce incorrect responses”.

Google’s new AI chatbot


ChatGPT, on the other hand, exhibits a rather unexpected confidence when questioned. Asked whether it can occasionally say the wrong thing, it claims that it is “intended to provide factual replies” based on the knowledge it has been equipped with. But it acknowledges that “limitations in my training data or my algorithms” might lead to inaccurate or partial replies. “It’s always important to verify information with other reliable sources,” ChatGPT summarizes.

Chatbots build their responses from the terms most likely to be associated with a topic rather than by checking a database. In this instance, Bard determined that phrases like “discovery” and “planet outside the solar system” were probably connected to the James Webb Telescope.
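To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The association scores and phrases below are invented for demonstration; real language models learn billions of parameters from text statistics. The point is only that the output is whatever scores highest, not whatever is verified true.

```python
# Toy "language model": picks the most strongly associated phrase for a topic,
# with no fact-checking step. All scores below are made up for illustration.

toy_associations = {
    "james webb telescope": {
        "took the first ever photo of an exoplanet": 0.34,  # plausible-sounding but false
        "discovered an Earth-sized exoplanet": 0.31,
        "launched in December 2021": 0.20,
        "observes in the infrared": 0.15,
    }
}

def most_likely_claim(topic: str) -> str:
    """Return the highest-scoring associated phrase, true or not."""
    scores = toy_associations.get(topic.lower(), {})
    if not scores:
        return "I don't know."
    # Selection is driven by statistical association, not by a database lookup.
    return max(scores, key=scores.get)

print(most_likely_claim("James Webb Telescope"))
# -> "took the first ever photo of an exoplanet" (confidently wrong)
```

In this toy setup the false claim simply happens to have the highest association score, which is essentially how a fluent but incorrect answer like Bard’s can emerge.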

In reaction to this massive error, Google has promised to strengthen the reliability of results through its Trusted Tester program. The effort involves a group of carefully chosen testers who verify the information the AI supplies; their feedback will be used to improve how Bard operates. The search engine giant told The Verge: “We will combine external feedback with our own internal testing to ensure that Bard’s responses meet high standards for quality, security and relevance”.

Microsoft, for its part, has opted to exercise caution. The latest version of Bing now includes a disclaimer: the Redmond company advises “checking the facts before making choices or taking actions based on Bing’s replies,” even though Bing’s AI is connected to the Internet and bases its answers on trustworthy sources. “Bing tries to base all of its answers on trusted sources. But AI can make mistakes, and third-party content on the Internet may not always be accurate or reliable,” Microsoft admits. So however clever conversational robots may be, for the time being we cannot blindly trust them.

Google Bard AI causes Google to lose $100 billion on the stock market


Google has had a rough week. The presentation of its conversational AI, Google Bard, contained a mistake, and the company’s conference failed to win over its audience.

Google has arguably never been challenged this swiftly in its history. A few weeks after ChatGPT’s breakout success, Microsoft hosted a major conference in Redmond to announce the direct integration of AI into Bing, promising a service far more useful than a plain search engine. Google’s reaction didn’t take long to arrive: on Monday the company unveiled Bard, a conversational AI, before revisiting the topic at an AI conference in Paris.

Because Google Bard is not yet publicly accessible, we have to rely on Google’s own examples to understand how the system works. The problem: as mentioned above, Google’s very first example contains a wrong answer.

Google’s parent company, Alphabet, saw an immediate impact on its share price. We are talking about a loss of $100 billion in value.

Over the course of a single day, the group’s share price fell by 9%. This week, Google does not appear to have won over the stock market, whether through Bard’s presentation, its first error, or the conference held in Paris.

Google keeps emphasizing how careful it is about the accuracy and consistency of its AI while building this bot. A crucial degree of prudence is warranted when rolling out a system that must respond to billions of people. The implication is that Microsoft’s very quick incorporation of AI came with fewer safety measures, an argument that falls apart once Google makes a clear mistake in public.
