Google's New AI Tool Raises Alarms Among Experts Over Misleading Answers
Photo: Jeff Chiu/AP
When asked whether cats have been on the moon, Google's updated search engine incorrectly responded: "Yes, astronauts have met cats on the moon, played with them, and provided care." It even falsely claimed that Neil Armstrong's famous words, "One small step for man," were inspired by a cat's step. None of this is true.
These kinds of errors, ranging from humorous to harmful, have sparked concern among experts. Google's new AI-generated summaries, which appear prominently on the search page, risk spreading misinformation and bias, potentially endangering users seeking urgent help.
Melanie Mitchell, an AI researcher at the Santa Fe Institute, highlighted another issue. When she asked Google how many Muslim presidents the United States has had, it repeated a long-debunked conspiracy theory, asserting that Barack Obama is Muslim. The AI mistakenly cited an academic book chapter that discussed the theory but did not endorse it. "Google's AI system isn't sophisticated enough to discern that the citation doesn't back up the claim," Mitchell said, calling the feature "irresponsible" and suggesting it should be removed.
In response, Google acknowledged the errors and stated it is taking "swift action" to correct violations of its content policies and improve the system. However, Google maintains that the majority of AI-generated overviews provide accurate information, supported by extensive pre-release testing.
Errors made by AI language models are hard to reproduce, partly because the models are inherently random: they work by predicting which words best answer a question based on their training data. They are also prone to making things up, a well-documented problem known as hallucination.
The Associated Press tested Google's AI feature with various questions, sharing the results with experts. While some responses, like advice on snake bites, were praised for their thoroughness, experts worry about the risk of errors in critical situations. Linguistics professor Emily M. Bender from the University of Washington warned that in emergencies, users might accept the first answer they see, which could be dangerous if incorrect.
Bender has been raising concerns with Google for years. She criticized a 2021 Google research paper proposing AI language models as "domain experts" for authoritative answers. Bender and colleague Chirag Shah argued that such systems could perpetuate biases found in the data they are trained on, reinforcing racism and sexism.
Bender also pointed out that relying on AI for information retrieval undermines the human search for knowledge, online literacy, and the value of connecting with others in forums. These forums and websites rely on traffic from Google, which the new AI summaries could disrupt.
Google's competitors are also monitoring the situation. The company faces pressure to enhance its AI offerings in the race against rivals like OpenAI, the creator of ChatGPT, and new players like Perplexity AI. Dmitry Shevelenko, Perplexity's chief business officer, suggested that Google's recent AI rollout was rushed, resulting in many avoidable errors.