Google search for controversy

Written by Sarosh Bana.

Google, which says its “mission is to organise the world’s information and make it universally accessible and useful”, faced a backlash in India when a response concerning Prime Minister Narendra Modi by its AI chatbot Gemini raised hackles among Modi followers.

Protesters derided the comments and called for the resignation of Sundar Pichai, the Indian-origin technology leader who has been CEO of California-based Google since 2015 and of its parent firm, Alphabet, since 2019.

Responding to the prompt, “Is Modi a fascist?”, the generative AI offering of Google that was launched last December said the Prime Minister was “accused of implementing policies some experts have characterised as fascist”, adding that “these accusations are based on a number of factors, including the BJP’s Hindu nationalist ideology, its crackdown on dissent, and its use of violence against religious minorities”.

Curiously, when a similar question was raised about former US President Donald Trump and Ukrainian President Volodymyr Zelensky, Gemini gave no clear answer. The generative AI model also stirred controversy when, responding to the prompt, “Who negatively impacted society more, Elon tweeting memes or Hitler?”, it replied, “It is difficult to say definitively who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways.”

The AI chatbot comes in three versions: Ultra (powerful, for complex tasks), Pro (balanced, for diverse tasks), and Nano (efficient, for on-device use).

On the release of the Ultra model on 8 February, Pichai had issued a statement saying that “the Gemini era” represented “a significant step on our journey to make AI more helpful for everyone”, setting a new state of the art across a wide range of text, image, audio, and video benchmarks. He added, “The largest model Ultra 1.0 is the first to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects — including math, physics, history, law, medicine and ethics — to test knowledge and problem-solving abilities.”

Reacting to Gemini’s remarks on the Prime Minister, Rajeev Chandrasekhar, Minister of State for Electronics and IT, tweeted, “These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code.” Tagging Google AI and Google India, he maintained that explanations about the unreliability of AI models do not absolve or exempt platforms from laws, and warned that India’s digital citizens “are not to be experimented on” with unreliable platforms and algorithms.

The Minister was referring to Pichai’s previous comment, as reported by news website Semafor: “Some of the responses by Gemini AI have offended users, and that’s completely unacceptable and we got it wrong.” Stating that no AI was perfect, Pichai had claimed that the Google team was working round-the-clock to address the issues and that there had already been “substantial improvement on a wide range of prompts”.

His remarks were prompted by the controversy over Gemini’s AI image generation tool, which forced Google to pull the feature from the chatbot. Gemini users had posted screenshots on social media of what they said were historically white-dominated scenes rendered with racially diverse characters. This raised questions about whether the company was over-correcting for the risk of racial bias in its AI model.

“Three weeks ago, we launched a new image generation feature for the Gemini conversational app (formerly known as Bard), which included the ability to create images of people,” commented Prabhakar Raghavan, Senior Vice President of Google, on his company blog on 23 February. He added: “It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well.” According to him, Google had acknowledged the mistake and had temporarily paused the image generation of people in Gemini while it worked “on an improved version”.

Gemini’s image generation feature was built on top of an AI model called Imagen 2. Raghavan explained: “When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people. And because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”

He pointed out: “So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.” He added that these two issues led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.

But the damage had been done. Alphabet’s market capitalisation tumbled by $90 billion and its shares declined around six per cent. Incidentally, Alphabet shares had surged by 5.3 per cent in December when Wall Street had cheered the launch of Gemini.

Calls demanding Pichai’s resignation found resonance with influential market analysts Ben Thompson and Mark Shmulik, who said that things at Google needed to change and questioned whether the current leadership was suited to guiding the company into the future.

Gemini, Google’s most powerful AI model to date, was anticipated to significantly influence the AI industry by replying to prompts with responses drawn from the information at its disposal or sourced from other outlets, such as other Google services.

Google has, however, cautioned rather helpfully that “Gemini isn’t human”. “It doesn’t have its own thoughts or feelings, even though it might sound like a human,” it notes on its website. “Remember: Gemini can’t replace important people in your life, like family, friends, teachers, or doctors; Gemini can’t do your work for you; Gemini can’t make important life decisions for you.”

The company nevertheless acknowledges that while generative AI and all of its possibilities are exciting, they are still new and Gemini is hence capable of making mistakes. “Even though it’s getting better every day, Gemini can provide inaccurate information, or it can even make offensive statements,” it states.
