Users warned against trusting Google's AI chatbot Bard with personal data

Directions shared as company moves to uphold its long-standing policy of safeguarding its information

By Web Desk
The Google logo is seen during the Google I/O annual developers conference at Shoreline Amphitheatre in Mountain View, California on May 10, 2023. — AFP

Google's parent company Alphabet has reportedly warned its employees against feeding any confidential information to Bard, the tech giant's AI chatbot, even as it markets the product worldwide, Reuters reported Thursday, citing sources.

The directions were shared as the company moved to uphold its long-standing policy of keeping its information safe.

The chatbots, including OpenAI's ChatGPT, are programs built on generative artificial intelligence that can hold human-like conversations and answer a wide range of prompts.

According to researchers, human reviewers may read the chats, and the AI itself could reproduce data it absorbed during training, creating a risk of information leaks.

A photo shows the US technology Google group logo during the Vivatech technology startups and innovation fair at the Porte de Versailles exhibition centre in Paris, on June 14, 2023. — AFP

Reuters reported that Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate.

Google's parent said Bard can make undesired code suggestions but still helps programmers, adding that it aimed to be transparent about the limitations of its technology.

The recent worries highlight how the technology giant is trying to avert harm to its business from the product it launched to compete with ChatGPT.

ChatGPT logo is seen in this illustration taken, on February 3, 2023. — Reuters

By February, Google told staff testing Bard before its launch not to give it internal information, Insider reported.

Now Google is rolling out Bard to more than 180 countries and in 40 languages, billing it as a springboard for creativity, and its warnings extend to its code suggestions.

Google told Reuters it has had detailed conversations with Ireland's Data Protection Commission and is addressing regulators' questions, after a Politico report Tuesday that the company was postponing Bard's EU launch this week pending more information about the chatbot's impact on privacy.

Information leaks

A Google privacy notice updated on June 1 also states: "Don’t include confidential or sensitive information in your Bard conversations."

Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag sensitive data and restrict it from flowing externally.
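To picture how such tag-and-restrict tooling might work, here is a minimal sketch of an outbound-prompt filter. The patterns, function names, and blocking policy are illustrative assumptions for this article, not Cloudflare's actual product or any vendor's rules.

```python
import re

# Illustrative patterns a company might tag as sensitive before a prompt
# leaves its network. These regexes are assumptions for demonstration only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_tag": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def tag_sensitive(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def guard_outbound_prompt(text: str) -> str:
    """Block a prompt that contains tagged data; otherwise pass it through."""
    hits = tag_sensitive(text)
    if hits:
        raise ValueError(f"Prompt blocked: contains tagged data ({', '.join(hits)})")
    return text

if __name__ == "__main__":
    # A harmless prompt passes; one containing tagged data is blocked.
    print(guard_outbound_prompt("Summarise this public press release."))
    try:
        guard_outbound_prompt("CONFIDENTIAL: our API key is sk-abcdefghijklmnop1234")
    except ValueError as err:
        print(err)
```

In practice, such a filter would sit at the network edge or in a browser plugin, so employees' prompts are screened before reaching a public chatbot.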

Google and Microsoft are also offering conversational tools to business customers that come with a higher price tag but do not absorb data into public AI models. The default setting in Bard and ChatGPT is to save users' conversation history, which users can opt to delete.

Yusuf Mehdi, Microsoft's consumer chief marketing officer, said: "It makes sense that companies would not want their staff to use public chatbots for work."

"Companies are taking a duly conservative standpoint," he said explaining how Microsoft's free Bing chatbot compares with its enterprise software. "There, our policies are much more strict."