Search and hyperscale computing giant Google said today that it has opened up access to Bard, a generative AI chatbot meant to compete with similar services offered by Microsoft and OpenAI, among others.
Bard, like similar advanced chatbots, is powered by a large language model. LLMs are deep learning models trained on huge amounts of text, giving them a range of abilities that include translation, summarization and more. The LLM behind Bard is a lightweight variant of LaMDA, Google's conversational language model.
"You can think of an LLM as a prediction engine," Google said in a blog post. "When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next."
The company noted that Bard is a little more flexible than that, since always selecting the single most probable word would lead to staid, uncreative responses. But Google also said the model is expected to learn and become more accurate with continued use.
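That prediction-and-sampling behavior can be illustrated in a few lines of code. The sketch below is not Bard's or LaMDA's actual implementation; the vocabulary, scores and temperature parameter are made up for demonstration, and it only shows the general principle Google describes: picking the next word from a set of likely candidates rather than always taking the single most probable one.

```python
import numpy as np

# Hypothetical candidate next words and model scores (illustration only).
vocab = ["blue", "cloudy", "falling", "green"]
logits = np.array([2.5, 1.8, 0.4, -1.0])

def next_word(logits, temperature=1.0):
    """Pick the next word from the model's scores.

    A temperature near zero always takes the single most probable word
    (the "staid" behavior the article mentions); higher values sample
    more freely from the other likely candidates.
    """
    if temperature <= 1e-6:
        return vocab[int(np.argmax(logits))]   # greedy: most probable word
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                        # softmax over candidates
    return np.random.choice(vocab, p=probs)     # sample one word

print(next_word(logits, temperature=0.0))   # always the top word
print(next_word(logits, temperature=0.8))   # usually the top word, sometimes another
```

A full chatbot repeats this step word by word, feeding each choice back into the model to score the next set of candidates.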
In the future, the company said, it will work on extra dimensions of response measurement such as "interestingness" and will continually try to improve the factual accuracy of replies. Accuracy has been a serious issue plaguing the new generation of generative AI assistants, because the underlying data sets these models draw on when deciding what to "say" are so large that they inevitably contain a great deal of incorrect or biased information.
"We're deeply familiar with issues involved with machine learning models, such as unfair bias, as we've been researching and developing these technologies for many years," the blog post said. "That's why we build and open-source resources that researchers can use to analyze models and the data on which they're trained; why we've scrutinized LaMDA at every step of its development; and why we'll continue to do so as we work to incorporate conversational abilities into more of our products."
Bard's struggles with accuracy are not unique, but they have been widely publicized: an early advertisement for the chatbot showed it giving a plainly incorrect answer to a question about observations of exoplanets. It has not yet suffered some of the more bizarre failures other chatbots have produced; in February, a Microsoft model expressed love for a New York Times columnist and told him he should leave his wife.
Sign-ups for Bard access are open, but currently there is a waiting list.