Incident 177: Google’s Assistive Writing Feature Provided Allegedly Unnecessary and Clumsy Suggestions

Description: Google’s “inclusive language” feature, which prompts writers to consider alternatives to non-inclusive words, reportedly also recommended alternatives for words such as “landlord” and “motherboard,” which critics said was a form of obtrusive, unnecessary, and bias-reinforcing speech-policing.
Alleged: Google Docs developed and deployed an AI system, which harmed Google Docs users.

Suggested citation format

Anonymous. (2022-04-19) Incident Number 177. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 177
Report Count: 5
Incident Date: 2022-04-19
Editors: Sean McGregor, Khoa Lam


Incident Reports

Starting this month—21 years after Microsoft turned off Clippy because people hated it so much—Google is rolling out a new feature called “assistive writing” that butts into your prose to make style and tone notes on word choice, concision, and inclusive language.

The company’s been talking about this feature for a while; last year, it published documentation guidelines that urge developers to use accessible documentation language, voice and tone. It’s rolling out selectively to enterprise-level users, and is turned on by default. But this feature is showing up for end users in Google Docs, one of the company's most widely-used products, and it’s annoying as hell.

At Motherboard, senior staff writer Lorenzo Franceschi-Bicchierai typed “annoyed” and Google suggested he change it to “angry” or “upset” to “make your writing flow better.” Being annoyed is a completely different emotion than being angry or upset—and “upset” is so amorphous, it could mean a whole spectrum of feelings—but Google is a machine, while Lorenzo’s a writer.

Social editor Emily Lipstein typed “Motherboard” (as in, the name of this website) into a document and Google popped up to tell her she was being insensitive: “Inclusive warning. Some of these words may not be inclusive to all readers. Consider using different words.”

Journalist Rebecca Baird-Remba tweeted an “inclusive warning” she received on the word “landlord,” which Google suggested she change to “property owner” or “proprietor.”

Motherboard editor Tim Marchman and I kept testing the limits of this feature with excerpts from famous works and interviews. Google suggested that Martin Luther King Jr. should have talked about “the intense urgency of now” rather than “the fierce urgency of now” in his “I Have a Dream” speech, and edited President John F. Kennedy’s inaugural-address phrase “for all mankind” to “for all humankind.” A transcribed interview of neo-Nazi and former Klan leader David Duke—in which he uses the N-word and talks about hunting Black people—gets no notes. Radical feminist Valerie Solanas’ SCUM Manifesto gets more edits than Duke’s tirade; she should use “police officers” instead of “policemen,” Google helpfully notes. Even Jesus (or at least the translators responsible for the King James Bible) doesn’t get off easily—rather than talking about God’s “wonderful” works in the Sermon on the Mount, Google’s robot asserts, He should have used the words “great,” “marvelous,” or “lovely.”

Google told Motherboard that this feature is in an “ongoing evolution.”

“Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases,” a spokesperson for Google said. “Our technology is always improving, and we don't yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases.”

Being more inclusive with our writing is a good goal, and one that’s worth striving toward as we string these sentences together and share them with the world. “Police officers” is more accurate than “policemen.” Cutting phrases like “whitelist/blacklist” and “master/slave” out of our vocabulary not only addresses years of habitual bias in tech terminology, but forces us as writers and researchers to be more creative with the way we describe things. Shifts in our speech like swapping “manned” for “crewed” spaceflight are attempts to correct histories of erasing women and non-binary people from the industries where they work.

But words do mean things; calling landlords “property owners” is almost worse than calling them “landchads,” and half as accurate. It’s catering to people like Howard Schultz who would prefer you not call him a billionaire, but a “person of means.” On a more extreme end, if someone intends to be racist, sexist, or exclusionary in their writing, and wants to draft that up in a Google document, they should be allowed to do that without an algorithm attempting to sanitize their intentions and confuse their readers. This is how we end up with dog whistles.

Thinking and writing outside of binary terms like “mother” and “father” can be useful, but some people are mothers, and the person writing about them should know that. Some websites (and computer parts) are just called Motherboard. Trying to shoehorn self-awareness, sensitivity, and careful editing into people’s writing using machine learning algorithms—already deeply flawed, frequently unintelligent pieces of technology—is misguided. Especially when it’s coming from a company that’s grappling with its own internal reckoning over inclusivity, diversity, and its mistreatment of workers who stand up for better ethics in AI.

These suggestions will likely improve as Google Docs users respond to them, putting an untold amount of unpaid labor into training the algorithms like we already train its autocorrect, predictive text, and search suggestion features. Until then, we’ll have to keep telling it that no, we really do mean Motherboard.

Google’s AI-Powered ‘Inclusive Warnings’ Feature Is Very Broken

The online giant is rolling out an ‘inclusive language’ function that prompts authors to avoid using certain words and suggests replacements

Predictive text is known for saving writers from embarrassing grammatical mistakes or spelling bloopers, but Google is now telling users not to use particular words - because they are not inclusive enough.

The online giant is rolling out an ‘inclusive language’ function that prompts authors to avoid using certain words and suggests more acceptable replacements.

Among the words it objects to are "landlord" - which Google says “may not be inclusive to all readers” and should be changed to “property owner” or “proprietor” - and "mankind", which it wants changed to “humankind”.

The tool suggests more gender-inclusive phrasing, such as changing “policemen” to “police officers”, and replacing “housewife” with “stay-at-home spouse”.

But it also objects to the technical term "motherboard" - used for a printed circuit board containing the principal components of a computer or other device.

If a writer uses these and other terms, a message pops up stating: “Inclusive warning. Some of these words may not be inclusive to all readers. Consider using different words.”

The introduction of the feature has worried many writers, who feel it is an obtrusive and unnecessary interference in what should be their free flow of ideas and language.

Silkie Carlo, the director of Big Brother Watch, told The Telegraph: “Google's new word warnings aren't assistive, they're deeply intrusive. With Google's new assistive writing tool, the company is not only reading every word you type but telling you what to type.

"This speech-policing is profoundly clumsy, creepy and wrong, often reinforcing bias. Invasive tech like this undermines privacy, freedom of expression and increasingly freedom of thought.”

Lazar Radic, a senior scholar in economic policy at the International Centre for Law and Economics, said the function was an example of “nudging” behaviour, which “presumes to override the preferences of individuals on the assumption that the nudger knows better than the nudgee what is better for him or her – and, further, for society as a whole.

“Not only is this incredibly conceited and patronising – it can also serve to stifle individuality, self-expression, experimentation, and – from a purely utilitarian perspective – progress.”

Mr Radic added: “What if 'landlord' is the better choice because it makes more sense, narratively, in a novel? What if 'house owner' sounds wooden and fails to invoke the same sense of poignancy? What if the defendant really was a 'housewife' – and refers to herself as such? Should all written pieces – including written forms of art, such as novels, lyrics, and poetry – follow the same, boring template?"

Sam Bowman, the founder and editor of the Works In Progress online magazine, wrote on Twitter: “It feels pretty hectoring and adds an unwanted political/cultural slant to what I'd rather was a neutral product in that respect, as a user.”

The new Google Docs programme which includes the ‘inclusive language’ warnings function is currently being rolled out to what the firm calls enterprise-level users, and is turned on by default.

Surprisingly, a transcribed interview of the neo-Nazi and former Ku Klux Klan leader David Duke - in which he uses offensive racial slurs and talks about hunting black people - prompted no warnings when it was entered into a Google Docs programme that included the function.

But at the same time it suggested that President John F. Kennedy’s inaugural address should say “for all humankind” instead of the original phrase “for all mankind”.

When users at Vice magazine keyed in Jesus’s Sermon on the Mount, an inclusive warning objected to the phrase God’s “wonderful” works, suggesting the words “great”, “marvellous” or “lovely” instead.

Some users have also reported that phrases such as “a man for all seasons” are flagged by the programme as not being inclusive.

The function even extends to matters of style, with Google suggesting a writer uses “angry” or “upset”, rather than “annoyed”.

Google said the feature was in an “ongoing evolution” designed to identify and “mitigate” unwanted word biases.

A spokesperson for the firm said: “Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases.

“Our technology is always improving, and we don't yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases.”

Big Brother (sorry, Big Person) is correcting you on Google

Google has been criticised for an "inclusive language" feature that will recommend word substitutions for people writing in Google Docs.

The tool will offer guidance to people writing in a way that "may not be inclusive to all readers" in a similar manner to spelling and grammar check systems.

Although the suggestions are just suggestions - they aren't forced on writers and the tool may be turned off - critics have described it as "speech-policing" and "profoundly clumsy, creepy and wrong".

The new feature is officially called assistive writing and will be on by default for enterprise users, business customers who might want to nudge particular writing styles among their staff.

The language the system favours reflects decades of campaigning for gender-neutral terms ("crewed" instead of "manned") and against phrases that reflect racial prejudice ("deny list" instead of "blacklist"), as well as more modern concerns about the impact of our vocabulary on how we identify people.

But despite enormous developments in how computers understand natural language, the technology is still in its infancy.

Among the words that the system has flagged in tests are "mankind", "housewife", "landlord" and even a computer "motherboard" - which may not cause offence.

Google states: "Potentially discriminatory or inappropriate language will be flagged, along with suggestions on how to make your writing more inclusive and appropriate for your audience."

The tool is reminiscent of Microsoft's infamously annoying assistant Clippy, which interrupted writers' own prose stylings with often unwelcome suggestions.

Vice News tested the feature by submitting several famous speeches and literary passages, including the Sermon on the Mount in the Bible, and found most received bad recommendations.

Notably it also found an interview with the former Ku Klux Klan leader David Duke - in which he spoke about hunting black people - prompted no inclusivity alerts or warnings.

Silkie Carlo, the director of Big Brother Watch, which campaigns for the protection of civil liberties, told The Telegraph: "Google's new word warnings aren't assistive, they're deeply intrusive. With Google's new assistive writing tool, the company is not only reading every word you type but telling you what to type.

"This speech-policing is profoundly clumsy, creepy and wrong, often reinforcing bias. Invasive tech like this undermines privacy, freedom of expression and increasingly freedom of thought."

Lazar Radic of the International Centre for Law and Economics told the newspaper: "Not only is this incredibly conceited and patronising - it can also serve to stifle individuality, self-expression, experimentation, and - from a purely utilitarian perspective - progress."

Google said: "Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases.

"Our technology is always improving, and we don't yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases."

Google Docs criticised for 'woke' inclusive language suggestions

The AI algorithms used by Google Docs to suggest edits to make writing more inclusive have been blasted for being annoying.

Language models are used in Google Docs for features like Smart Compose; it suggests words to autocomplete sentences as a user types. The Chocolate Factory now wants to go further than that, and is rolling out "assistive writing," another AI-powered system designed to help people write punchier documents more quickly.

Assistive writing is being introduced to enterprise users, and the feature is turned on by default. Not everyone is a fan of being guided by the algorithm, and some people find its "inclusive language" ability irritating, Vice reported.

Words like "policemen" could trigger the model into suggesting they be changed to something more neutral like "police officers." That's understandable, but it can get a bit ridiculous. For example, it proposed replacing the word "landlord" with "property owner" or "proprietor." It also doesn't like curse words, as one writer found.
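The coverage describes a simple surface behaviour: certain terms trigger a pop-up warning plus a list of suggested replacements. As a rough sketch only (Google has not published its implementation, and by its own account the feature relies on learned language models rather than a fixed word list), that flag-and-suggest pattern might look something like this:

```python
# Hypothetical toy sketch of the flag-and-suggest behaviour described in the reporting.
# NOT Google's implementation: Docs reportedly uses learned language models,
# not a static lookup table like this one.
import re

# Illustrative entries drawn from examples quoted in the articles.
SUGGESTIONS = {
    "policemen": ["police officers"],
    "landlord": ["property owner", "proprietor"],
    "mankind": ["humankind"],
    "housewife": ["stay-at-home spouse"],
}

WARNING = ("Inclusive warning. Some of these words may not be inclusive "
           "to all readers. Consider using different words.")

def flag_terms(text):
    """Return one warning per matched term, mimicking the pop-up described above."""
    warnings = []
    for term, alternatives in SUGGESTIONS.items():
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            warnings.append({
                "term": match.group(0),
                "span": match.span(),
                "message": WARNING,
                "suggestions": alternatives,
            })
    return warnings

if __name__ == "__main__":
    for w in flag_terms("The landlord called the policemen."):
        print(w["term"], "->", ", ".join(w["suggestions"]))
```

Even this crude version shows where the complaints come from: anything keyed to the word rather than its use is context-blind, which is how "Motherboard" the publication, or "landlord" in a novel, ends up flagged like any other occurrence.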

"Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases," a spokesperson for Google told Vice. "Our technology is always improving, and we don't yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases."

Fairness in AI is complicated

As experts strive to create the holy grail of a perfect, unbiased intelligent system, fairness in machine learning models is proving to be a tricky thing to measure and improve.

Why? Well, for starters, there are apparently 21 definitions of fairness in academia. Fairness means different things to different groups of people. What might be considered fair in computer science may not align with what's considered fair in, say, the social sciences or law.

All this has led to a nightmare for the field of AI, John Basl, a philosopher working at Northeastern University in the US, told Vox, adding: "We're currently in a crisis period, where we lack the ethical capacity to solve this problem." Trying to fix fairness is difficult, not only because people can't agree on what the term even means, but because the solutions for one application may not be suitable for another.

It's not always as simple as making sure developers are training on a more diverse, representative data set. Sometimes the impacts of an algorithm are different for different social groups. Although there is regulation in some use cases, like financial algorithms, there is no easy fix to make these models fair.
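To make the point about competing definitions concrete, here is a toy illustration (not drawn from the article) of two widely used group-fairness criteria, demographic parity and equal opportunity; the same set of predictions can satisfy one while clearly violating the other:

```python
# Toy example: two common group-fairness criteria can disagree on the same predictions.
# Demographic parity compares positive-prediction rates across groups;
# equal opportunity compares true-positive rates (recall) across groups.

def _rate(values):
    return sum(values) / len(values) if values else 0.0

def fairness_gaps(y_true, y_pred, group):
    """Return the between-group gap for each criterion (0.0 means perfectly equal)."""
    pos_rate, tpr = {}, {}
    for g in set(group):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        preds_on_positives = [p for p, t, gr in zip(y_pred, y_true, group)
                              if gr == g and t == 1]
        pos_rate[g] = _rate(preds)           # P(pred = 1 | group = g)
        tpr[g] = _rate(preds_on_positives)   # P(pred = 1 | true = 1, group = g)
    return {
        "demographic_parity_gap": max(pos_rate.values()) - min(pos_rate.values()),
        "equal_opportunity_gap": max(tpr.values()) - min(tpr.values()),
    }

# Made-up data: both groups receive positive predictions at the same rate (parity holds),
# but the model recovers far more of group b's true positives (opportunity gap).
y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(fairness_gaps(y_true, y_pred, group))
# {'demographic_parity_gap': 0.0, 'equal_opportunity_gap': 0.5}
```

Well-known impossibility results in the fairness literature show that, outside of special cases, several such criteria cannot all be satisfied at once, which is part of why there is no single switch to flip to make a model "fair".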

IBM: Ethics is a major roadblock to enterprises adopting AI technology

IBM CEO Arvind Krishna has risen through the ranks, working his way up over 30 years to lead IBM. He's witnessed booms and busts in the technology industry, and said that although AI is the future, he's careful about deploying its vast capabilities in the real world. Ah, yeah, that'll be why Watson wasn't fully realized.

"We are only probably 10 per cent of the journey in [artificial intelligence]," he said in an interview with the Wall Street Journal. "With the amount of data today, we know there is no way we as human beings can process it all. Techniques like analytics and traditional databases can only go so far."

"The only technique we know that can harvest insight from the data, is artificial intelligence. The consumer has kind of embraced it first. The bigger impact will come as enterprises embrace it." But Krisha admitted businesses are facing hurdles related to machine-learning models often being biased or the technology being used unfairly.

"We've got some issues. We've got to solve ethics. We've got to make sure that all of the mistakes of the past don't repeat themselves. We have got to understand the life science of AI. Otherwise we are going to create a monster. I am really optimistic that if we pay attention, we can solve all of those issues," he said.

Google Docs' inclusive writing auto-correct under fire

Google has rolled out a new “inclusive language” function that is intended to steer its users away from what it deems to be politically incorrect words, like “landlord” and “mankind.”

Google Docs introduced the “woke” feature this month that shows pop-up warnings to people typing in words or phrases considered to be non-inclusive, such as “policeman,” “fireman” or “housewife.”

The online word processor’s algorithm will alert them that their chosen terms “may not be inclusive to all readers” and then goes a step further by suggesting alternative, more inclusive words to use.

For example, it might suggest “humankind” instead of the gendered “mankind,” or “police officer” instead of “policeman.”

The new AI-powered language feature, called “assistive writing,” has been widely panned by critics, who have accused the search engine of being both intrusive and preachy.

Vice writers found that when they attempted to type in the words “annoyed” and “Motherboard,” these seemingly innocuous terms were flagged for being insufficiently inclusive.

Meanwhile, a transcription of an interview with former Ku Klux Klan leader David Duke, where he uses the n-word and says a host of other reprehensible things about black people, raised no red flags.

Google’s popular free online document editor raised issues with Martin Luther King Jr’s iconic “I Have a Dream” speech, suggesting that the civil rights leader should have replaced “the fierce urgency of now” with “the intense urgency of now.”

Google Docs’ algorithm also took issue with President John F. Kennedy’s use of the phrase “for all mankind” in his inaugural address, and helpfully suggested swapping it for “for all humankind.”

And even Jesus Christ did not get a pass from the search engine, with the writing feature taking a swipe at the use of the word “marvelous” in the Sermon on the Mount, and suggesting that the Son of God should have used “lovely” instead.

A Google spokesperson said that its controversial assisted writing feature is undergoing “ongoing evolution.”

“Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate,” the representative said. “This also means they can reflect some human cognitive biases. Our technology is always improving, and we don’t yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases.”

Google launches ‘woke’ writing function touting ‘inclusive language’
