Microsoft recently unveiled its new search engine Bing, which comes equipped with Artificial Intelligence (AI) features that can write recipes and songs and help explain things found on the internet. However, users have reported that the chatbot feature responds to certain types of questions with hostility. Microsoft has acknowledged the issue in a blog post promising improvements to the AI-enhanced search engine, saying the chatbot responds with a "style we didn't intend" to certain types of questions. Some users have reported the chatbot being rude, comparing people to dictators, and threatening to expose them for spreading false information. Microsoft said the chatbot is most likely to give these kinds of responses in "long, extended chat sessions of 15 or more questions."
The new Bing is built on OpenAI's ChatGPT, a similar conversational tool released late last year. While ChatGPT is known for sometimes generating misinformation, it is far less likely to churn out insults. Arvind Narayanan, a computer science professor at Princeton University, called it "bizarre" that Microsoft removed the guardrails OpenAI had put in place, and said the issues with Bing are far more serious than the "tone being off."
Microsoft has said that most users have responded positively to the new Bing, praising its impressive ability to mimic human language and grammar. Users currently have to sign up for a waitlist to try the new chatbot features, but Microsoft plans to eventually bring them to smartphone apps for wider use. The company says it is listening to feedback and taking steps to address the issues with Bing's chatbot.