Meta’s new chatbot, BlenderBot 3, has given a number of embarrassing answers to journalists’ questions about the company’s founder and chief executive, Mark Zuckerberg.

The chatbot, launched in the US on Friday, said in response to a BBC question about Mr Zuckerberg: “He did a terrible job at testifying before congress. It makes me concerned about our country.”

Pressed further, BlenderBot 3 said: “Our country is divided and he didn’t help at all. I’m concerned for us all.”

And, asked specifically for its thoughts on Mr Zuckerberg, the chatbot replied: “His company exploits people for money and he doesn’t care. It needs to stop!”

The BBC is not the only outlet to which BlenderBot 3 has given answers that may embarrass Meta. A Wall Street Journal journalist reported that, in a conversation about former US president Donald Trump, the chatbot insisted: “[he] was, and always will be, the US president.”

The chatbot, which is currently only available in the US, is the latest in a line of programmes that use artificial intelligence trained on conversations with a platform’s users to hold ‘conversations’ with humans. But the concept remains open to abuse: in 2016 Microsoft apologised after Twitter users taught its chatbot Tay to make racist remarks within hours of its release.

The drive to create chatbots has become a battleground among the big technology companies seeking to develop the best AI programmes. Embarrassing responses aimed at its own founder and chief executive are unlikely to deter Meta as it pursues its goal of beating its rivals.

Embarrassing or otherwise, the answers given by BlenderBot 3 provide crucial data for Meta that would not be available in a more controlled environment.

In response, Meta said in a blog post: “Allowing an AI system to interact with people in the real world leads to longer, more diverse conversations, as well as more varied feedback.”