Google has made Bard more widely available to users in the US and the UK today, and I have been spending some time with the company’s chatbot to see how its generative AI compares to ChatGPT and Bing AI. 

As we saw in the screenshots Google provided with today’s announcement, the interface here is very similar to Bing AI’s, with a wide text input at the bottom of the screen and a dialogue-based layout. But there are a few key differences between Google’s and Microsoft’s offerings.

With Bing AI, you’ll have to either hit Chat or scroll up from search results to get to the conversation page, whereas Bard’s website takes you straight to it. Microsoft has a broom icon to the left of the input bar to clear the slate and start a new topic, while Google has a column on the left with options for “Reset chat,” “Bard Activity,” “FAQ” and “Help & Support.”

It’s also worth noting the painstakingly careful language Google uses here. Once I navigated to the website, I was greeted with an alert reminding me that “Bard is an experiment.” It asks users to remember two things: “Bard will not always get it right,” and that “Bard will get better with your feedback.”

Even after you click “Got it” and that reminder goes away, there’s a line of fine print below the input field that states “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” Whenever you reset a chat, you’ll see a message saying “Bard is still in its experimental phase. Chatting with it and rating its responses will help improve the experience.” After the embarrassing blunders Bard has made so far, it’s understandable (and mildly funny) to see all these disclaimers.

[Screenshot: Google’s Bard AI chatbot, with a small alert window on top of the page]

That’s all fairly unobtrusive, though, and doesn’t really affect my experience with Bard. There are, however, a few functional differences compared to Bing AI. For one thing, Bing AI on desktop doesn’t offer speech-to-text in its input bar (though the app does), while Bard can tap your laptop’s microphone so you can dictate your queries. I have a tendency to ramble when speaking, so I prefer to type, but I did use this tool to read out some questions to Bard and can confirm it worked.

Probably the biggest difference between Bard and Bing AI so far is the fact that Google includes an easy way to see alternative responses within the conversation. You can click the dropdown arrow next to “View other drafts” at the top left of each chat bubble and see some other suggestions.

I asked Bard to create a 30-minute workout plan for the core and abs that excludes sit-ups, something I had previously asked Bing AI at Microsoft’s suggestion. I appreciate that Bard gave me a somewhat sound routine that did target those areas and leave out sit-ups, but I was more impressed by the two other options I could look at.

Draft 1 suggested three sets of 10 to 12 reps of plank, side plank, Russian twists and mountain climbers, along with light instructions on how to do each. But its guidance for the planks and side planks was too simplistic. For both of those, it gave tips on form, followed by “Hold this position for as long as you can.” That varies greatly from person to person and can really affect the duration of the supposed 30-minute workout. It’s also unclear whether Bard means you should hold each plank for as long as you can, 10 to 12 times, for three sets.

Unlike Bing, Google’s chatbot doesn’t always cite its sources, so it wasn’t easy for me to get clarification on these suggestions. Draft 2, for example, did list two sources, though it didn’t have inline references for specific parts of the proposed workout. The second option did have some better instructions, though. It came up with five exercises to be performed as a circuit (i.e., one after another, instead of finishing three sets of each exercise before moving on to the next). These were plank, Russian twists, side plank, bicycle crunches and hip raises.

[Screenshot: Google’s Bard AI chatbot responding to a query]

The instructions for the plank were just as simplistic, but at least for the side plank, this time Bard said to hold your body in place for 30 seconds before switching sides. Finally, the third draft took a very different form. Instead of prescribing four or five specific exercises for a few sets of a certain number of reps, this time Bard offered a list of “plank variations,” followed by guidance on how to do “crunches and leg raises” and some cooldown stretches. At the top of this plan, after five minutes of warmup activities, the chatbot simply said to “do 10 minutes of plank variations, 10 minutes of crunches and leg raises, then 5 minutes of cooldown stretches.”

Upon closer inspection, some of the instructions here don’t quite make sense. For “leg raises,” Bard’s description is “Lie on your back with your knees bent and your feet flat on the ground. Place your hands behind your head and raise your legs up towards the ceiling.” That’s confusing at best and inaccurate at worst. Google’s first search result for “leg raises” is a video, and near the top are several diagrams, none of which match what Bard described. The top text-based article for that query is from The New York Times and describes the exercise much more clearly.

I asked Bing to come up with the same workout plan, and honestly I can’t decide whether it performed better. It gave a more comprehensive program with four circuits of four sets of bird dogs, dead bugs, glute bridges and side planks, but offered no explanation as to what each of those names meant. 

It seems that with either chatbot, follow-up questions are necessary. I asked Bing and Bard how to do a leg raise, and both gave instructions for bent-knee leg raises (which is not what the Times, Men’s Health and other articles in Google’s search results described). Both bots also gave a series of steps on where to place your feet and hands, though Bing’s guidance was clearer, with descriptions like “such that the calves are parallel to the ground and thighs are perpendicular.”

[Screenshot: Google’s Bard AI chatbot showing the third draft of a response to a query]

However, in some iterations (or drafts) of its response, Bard offered helpful additional info like “avoid arching your back,” and “as you get stronger, you can straighten your legs and raise them higher.” Both Microsoft’s and Google’s offerings have individual strengths and weaknesses, and for now it’s not clear who might take the lead. The LaMDA language model that Bard is based on comes off about as conversational and natural as Bing AI’s GPT-4, which isn’t surprising given Google’s long history in speech and AI.

Still, Bard clearly isn’t perfect. Or at least it’s not yet smart enough to replace a personal trainer (nor is it meant to). It’s also not the fastest or the most transparent. Because I’ve grown used to Bing AI, which displays animated text explaining what the system is doing and searching for, the absence of this on Bard was surprising. After entering my query, all I saw was the Bard icon sort of twinkling as I waited for a response, which made it seem like Google was taking longer to think. 

It’s still too early to tell if Bard will be truly helpful in my searches in the future, but for now, Google’s approach is clear. It is (very) aware of the potential for mistakes, as well as misuse, of Bard, and has highlighted some guardrails it’s implemented to prevent or reduce hallucinations. It’s still not clear what the limit is on the number of exchanges you can have with Bard before a mandatory topic refresh, and the company has yet to respond to our query on that.

Many people in the US and the UK are already getting access to Bard, so if you’re in those countries, you may be able to try it out yourself very soon. Bear in mind that Google itself is quick to reiterate that this is an experimental chatbot, so you’ll have to fact-check its results and approach the experience with a healthy dose of skepticism. It’s also important not to share personal or sensitive information with Bard, as what’s fed to it will be used to fine-tune its algorithms, so whatever you tell it might end up somewhere else. As Bard (and Bing AI) converse with more of us, the risk of potential biases, discrimination and flawed thinking increases. But they will hopefully also get better and more advanced.


Source: www.engadget.com