What future for ethical AI after Google scientist firing?

The firing of top Google AI ethics scientist Timnit Gebru last week – which she attributed to her criticism of the company’s diversity efforts – has sent shockwaves through the emerging field of ethical AI.

As of Wednesday, an open letter protesting Gebru’s firing had been signed by more than 2,200 Google employees and roughly 3,000 academics, business executives and activists in the tech world.

“Research integrity can no longer be taken for granted in Google’s corporate research environment, and Dr. Gebru’s firing has overthrown a working understanding of what kind of research Google will permit,” the letter reads.

Google did not respond to a request for comment.

Jeff Dean, head of Google’s AI unit, posted a statement last week saying Gebru’s departure came after she had failed to follow the proper protocols for publishing research and had pushed back on the company’s request that she retract an academic paper.

He characterised her departure as a “resignation”.

Here’s what you need to know about the incident and its fallout:


Gebru co-led Google’s Ethical AI research team and is best known for co-writing a landmark 2018 paper that found higher error rates in facial analysis technology for women with darker skin tones.

She also co-founded the nonprofit Black in AI, and is considered by her peers as an outspoken advocate for Black women in tech, pushing companies like Google to strengthen their commitment to diversity and inclusion initiatives.

Gebru did not respond to a request for comment.

“She is one of the top AI ethics researchers in the entire world … period,” said Meredith Broussard, a professor at NYU who studies AI and ethics.

“We are seeing the silencing of a Black woman who has been working at the top of her game, is widely respected, and is … publishing the truth,” she told the Thomson Reuters Foundation in a phone interview.

More than 50 Black women in the industry also penned a separate open letter, which said Google was engaged in “retaliation after Timnit was doing what she has been hired to do — ensuring AI technologies at Google are ethical”.


On a Medium page maintained by employee activists at Google, Gebru’s colleagues wrote that her firing stemmed from a dispute over an academic paper she had co-authored about “the ethical considerations of large language models”.

Gebru’s paper suggested that tech firms could do more to ensure that AI systems aimed at mimicking human writing and speech do not exacerbate historical gender biases or the use of offensive language.

Dean posted his own letter claiming that Google had asked Gebru to withdraw the paper because it was unsound and “didn’t take into account recent research” – an assessment disputed by some familiar with the paper and review process.

“It is one of the most well-cited papers I’ve seen in that field,” said Meredith Whittaker, the co-director of the U.S.-based AI Now Institute.

Ali Alkhatib, a research fellow at the University of San Francisco’s Center for Applied Data Ethics, serves on the program committee at the academic conference the paper was submitted to.

He said that any gaps in scholarship would have been picked up by the conference’s own academic peer review process: “The whole explanation from Google is baffling.”


The incident has resurfaced longstanding concerns about the tech industry’s commitment to building a diverse workforce and to granting Black workers autonomy.

Google Chief Executive Sundar Pichai has said the company aims to increase the share of its leaders who come from underrepresented groups by 30% by 2025. About 96% of Google’s U.S. leaders are white or Asian, and 73% of its leaders globally are men.

“As a Black woman, these companies know the work you do, and the communities you care about, and they want the validation you bring to their business,” said Aerica Shimizu Banks, a former Google employee who signed the letter supporting Gebru.

“But they don’t actually want you to do that work – they want you to look like you are doing it,” said Banks, who founded the tech consultancy Shizo.

The incident has also raised questions about Google’s commitment to independent ethical AI research, as accusations of bias in algorithms and concerns over the ethics of facial recognition systems increasingly come to the fore.

“This episode has cast a pall over what we can say, or how it will be received,” said Alex Hanna, a researcher on the Google AI Ethics team who studies potential bias in data informing computer vision in technology such as facial recognition and self-driving cars.

According to an October study by Harvard Medical School and the University of Toronto, more than half of tenured AI faculty at four major U.S. universities were receiving funds from just a handful of tech companies, including Facebook, Microsoft, and Google.

“Dr Gebru’s firing is provoking serious questions about to what extent the business side is informing the ethical AI research being done at Google,” Alkhatib said.

Several AI scholars and researchers have voiced similar concerns.

Ameet Rahane, a U.C. Berkeley graduate who studies computational neuroscience, said he’d long hoped to be offered a job at Google to work on AI after he finished a PhD.

“Now … I would need to think long and hard before accepting it,” he said.



About the author

Avi Asher-Schapiro | Thomson Reuters Foundation
