Google to Limit Election-Related Questions for AI Chatbot Gemini
Google has stated that it will limit the kinds of questions about the election that people can ask its Gemini AI chatbot.
In Short
- Google’s AI chatbot, Gemini, will not answer questions about upcoming global elections, to prevent potential mistakes.
- Google’s India team has started implementing restrictions on election-related queries to which Gemini will respond.
- Gemini, similar to ChatGPT, can answer questions and create pictures.
- The restrictions aim to prepare for the upcoming global elections in the US, UK, and South Africa in 2024.
- Google’s AI chatbot received negative feedback for its image-generation feature, which produced historically inaccurate pictures.
- Google apologized and plans to change its technology to fix the issue.
- Big AI companies like OpenAI and Google are increasingly limiting their chatbots from answering sensitive questions to avoid a PR backlash.
Alphabet-owned Google (GOOGL.O) said on Tuesday that its AI chatbot Gemini will not answer questions about the upcoming global elections, so that the company can avoid any mistakes the technology might make.
To be careful with such an important subject, Google’s India team said on the company’s website, “We have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses.”
Gemini is similar to ChatGPT, the popular chatbot that went viral; it can answer questions in writing and can also generate images.
When we asked Gemini about those races, it consistently replied, “I’m still learning how to answer this question.”
The company said, “To be safe and get ready for the many elections that will happen around the world in 2024, we’re limiting the types of election-related queries that Gemini will return results for.”
Elections are coming up this year in many places around the world, including the US, the UK, and South Africa.
When Gemini was asked several follow-up questions about Indian politics, it did give more detailed answers about the main parties in the country.
As generative AI has advanced, worries about false information have grown, and governments around the world have taken steps to regulate the technology.
India has recently told tech companies that they need government approval before releasing AI tools that are “unreliable” or are still being tested.
Recently, Gemini drew a lot of negative feedback over its image-generation feature after users noticed that it produced historically inaccurate depictions of people of color, including pictures showing people of color as Catholic popes and as Nazi soldiers in World War II. Google pulled some of Gemini’s features because of the controversy, apologized, and said it would change its technology to fix the problem.
Big AI companies like OpenAI and Google seem increasingly willing to stop their chatbots from answering sensitive questions that might cause a PR backlash.