Following OpenAI’s ChatGPT and Microsoft’s Bing Chat, Google responded by going global with its own experimental AI chatbot: Bard.
Released in March 2023 and now available in more than 180 countries, Bard has faced some challenges along the way – the most recent being the postponement of its European launch over privacy concerns. We’ll get into that shortly.
But how exactly does Bard work? What differentiates it from the ChatGPT we already know? What privacy and security challenges does it face? How is it expected to develop in the future? These are the questions we’ll answer throughout this article.
How does Google Bard work?
It works similarly to ChatGPT, in the sense that it’s a generative AI that accepts prompts and performs text-based tasks, like writing code, answering questions, and producing summaries and other forms of written content.
Google Bard’s initial version was powered by a lightweight version of the language model LaMDA (Language Model for Dialogue Applications), but it has recently evolved to use Google’s most advanced language model, PaLM 2 (Pathways Language Model), which is said to perform better in reasoning tasks, including logic, code and maths.
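To give a concrete sense of what “accepting prompts and performing text-based tasks” looks like in practice, here is a minimal sketch using the Python client for Google’s PaLM API (the google-generativeai package), which exposes PaLM 2 text models to developers. The API key, model name and parameters below are illustrative assumptions, and the client interface may change as the product evolves.

```python
# Minimal sketch: sending a text prompt to a PaLM 2 model via
# Google's generative AI Python client (pip install google-generativeai).
# The API key and model name are placeholders, not real credentials.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder key

response = palm.generate_text(
    model="models/text-bison-001",  # a PaLM 2 text model name (assumed)
    prompt="Summarise the plot of Hamlet in two sentences.",
    temperature=0.7,        # higher values give more varied wording
    max_output_tokens=256,  # cap the length of the reply
)

print(response.result)  # the generated text (None if the request was blocked)
```

Note that this is the developer-facing PaLM API rather than Bard itself – Bard is the consumer chatbot built on top of the same family of models.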
How is it different from ChatGPT?
There are a few key differences between the two AI chatbots, namely:
- Large Language Model (LLM)
While Google Bard relies on PaLM 2, ChatGPT uses the Generative Pre-trained Transformer 4 (GPT-4). Both technologies are built to detect and replicate patterns of human speech, but they operate differently, and performance comparisons between them are still ongoing.
- Data source
ChatGPT was built to answer based on information published online up to 2021, while Bard is able to pull real-time information directly from the internet. This gives Bard an advantage in terms of how up to date its knowledge is.
- Content production
ChatGPT is more sensitive to the language styles requested by users, meaning it can generate long answers in a specific style that meets the user’s requirements. Bard, on the other hand, mostly provides short statements and links to other online sources where users can find more information – it basically helps users navigate Google Search more efficiently. It has also recently been updated to include images in its answers.
Will it be used to support Google Search?
That is the plan, yes. When Google announced the launch of Bard, they said that the goal was to bring this AI innovation into their products, starting with Search.
The idea is to implement Bard’s features in Search, in a way that helps users consume information in easier-to-digest formats, instead of having to piece it together from multiple sources.
It’s clear, however, that Bard is not meant to be a replacement for Search, but a feature that complements it.
Privacy concerns
Google has yet to release Bard in the European Union (EU), since Ireland’s Data Protection Commission blocked its launch over privacy concerns. According to the Dublin-based data regulator, which acts as Google’s supervisory authority under the General Data Protection Regulation (GDPR), the tech company has so far not provided sufficient information about how Bard protects Europeans’ privacy.
Google said they’re addressing this issue. “We said that we wanted to make Bard more widely available, including in the European Union, and that we would do so responsibly, after engagement with experts, regulators and policymakers. As part of that process, we’ve been talking with privacy regulators to address their questions and hear feedback”, a spokesperson assured.
According to Alter Solutions’ Data Protection Officer (DPO), Inés Chenouf, “the protection of personal data remains an issue that digital companies pay little attention to”. “Personal data protection is a fundamental element to be taken into account when talking about Artificial Intelligence. In the specific case of Bard, the invasion of privacy could be a consequence of machine learning. In other words, this tool has the ability to learn from data, using algorithms, so its application could mean the misuse of personal data. This creates a point of friction with the GDPR, particularly with the principles of confidentiality and transparency. In addition, the intensive use of data can lead to biases that could adversely affect users”, she states.
Given this scenario, Google will need to ensure transparency when it comes to the collection and use of people’s data, if it wants Bard to be released in the EU. “This means minimising data, setting up ethics committees and bringing the American giant closer to local supervisory authorities, in order to limit risks and establish a relationship of trust with European users”, Inés Chenouf explains. “This is all the more important following the European Commission’s decision, on July 10th 2023, to adopt a new adequacy decision on data transfers from the EU to the USA. It is a safe bet that the data used by Bard may be transferred to US subsidiaries. This is why the protection of personal data and, by extension, compliance with the GDPR is such a key issue”.
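To make the idea of data minimisation more concrete, here is a simple, hypothetical sketch that redacts obvious personal identifiers from a prompt before it is sent to any third-party chatbot. The patterns and function names are illustrative assumptions only, not part of any Google or Bard API, and real GDPR compliance involves far more than this.

```python
import re

# Hypothetical illustration of data minimisation: strip obvious
# personal identifiers (emails, phone numbers) from a prompt before
# it leaves the company's systems. The principle: send only the
# data the task actually needs.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimise(prompt: str) -> str:
    """Replace personal identifiers with neutral placeholders."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

raw = "Draft a reply to jane.doe@example.com, phone +33 6 12 34 56 78."
print(minimise(raw))
# -> Draft a reply to [EMAIL], phone [PHONE].
```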
Cybersecurity matters
From a cybersecurity point of view, chatbots like Google Bard are quite secure, because they don’t expose traditional technologies like SQL databases directly to user input – the area where most classic vulnerabilities, such as injection attacks, come from. This means that most security concerns around these tools will come from privacy matters.
For companies, specifically, the best solution for these privacy issues is to run your AI on-premises, on your own servers, using a self-hosted Large Language Model.
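As a rough illustration of what running an LLM on-premises can look like, here is a minimal sketch using the open-source Hugging Face transformers library to run a small, locally hosted model. The model choice (gpt2) is a placeholder assumption – in practice a company would self-host a more capable open model – and the point is simply that prompts and outputs never leave the company’s own servers.

```python
# Minimal sketch of on-premises text generation with an open model,
# using the Hugging Face transformers library (pip install transformers).
# "gpt2" is a small placeholder; a company would typically self-host
# a more capable open LLM on its own hardware.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Write a one-line summary of our quarterly report:",
    max_new_tokens=40,   # cap the length of the generated text
    do_sample=True,      # sample for more natural, varied output
)

# The prompt and the output stay on the local machine.
print(result[0]["generated_text"])
```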
Out of curiosity…why the name “Bard”?
The word means “poet” and alludes specifically to William Shakespeare, known as “the Bard of Avon”. The name is meant to highlight the AI chatbot’s linguistic skills.
Additional information:
The European Union (EU) has recently passed the Artificial Intelligence Act, the world’s first comprehensive AI law. It classifies all AI according to different levels of risk:
- Unacceptable risk
AI that violates fundamental rights and will be banned (e.g.: real-time biometric identification in public spaces; manipulation of vulnerable people, like children).
- High risk
AI that negatively impacts people’s safety or their fundamental rights, and that will be carefully assessed before being put on the market (e.g.: AI systems used in areas like the management and operation of critical infrastructure, law enforcement, border control management, among others).
- Limited risk
AI that needs to comply with specific transparency requirements, so users are fully aware that they are interacting with a machine (e.g.: chatbots like ChatGPT and Google Bard).
- Minimal or no risk
Almost all the AI systems used in the EU fall into this category (e.g.: spam filters).