Online censorship and algospeak: Confusifying the masses with newfangled linguification

Prakrito Nree Haider

If the title of this article made your head hurt, then you have probably experienced that spiralling feeling on social media where, after a few days away from the latest trends and topics, everything sounds and looks…different. It may feel like you just walked into an intensive Sanskrit course, one you will fail if you do not learn Sanskrit right this second. New words, phrases, and memes enter the public arena known as social media every few days, and for a moment, that is all you see. An algorithm with no heart or emotion is coded to pump out the same things to you over and over until everyone is collectively sick of it, then it pumps out the next thing. Heaven forbid you make a reference to something from a few months ago, because we, the Internet™, have collectively moved on from it, and you are the uncool one for not being ‘in the know’. Of course, everyone wants to be ‘in the know’.

About a year ago, Hollywood gave us the first half of the movie adaptation of the Broadway musical Wicked. This brought the script and world of Oz to a whole new digital generation, and with it the Ozian language, which takes words we know in modern English and tweaks them into a new, different way of speaking. Some examples include ‘obsessulated’, ‘degreenify’, ‘congratulotions’, and ‘scandalocious’, to name a few. From the way the words are spelt and spoken, it is easy to understand what they refer to; they are just English words with a few letters added, after all. But this phenomenon reminded me of another form of English: Newspeak. In the novel 1984, by the English novelist and essayist George Orwell, Newspeak is the fictional language of the totalitarian state in which the main character resides. It is a controlled language, based on English but with a limited vocabulary and a simplified grammar, designed to hinder citizens’ critical thinking. Some examples of its words include ‘thoughtcrime’, ‘doublethink’, ‘ungood’, and ‘plusgood’. As with Ozian, we can understand what these mean from context clues, but they are altered, changed, censored.

But are a few memes about Wicked’s odd language really censoring the masses and lulling them into surrendering their power to a totalitarian government? Not necessarily, but what is happening is a steady rise of self-censorship on the Internet. Most social media sites now have word filters as part of their moderation rules, sometimes outlined in their Terms of Service. Sites like TikTok, Instagram, and Twitch use moderation to ensure that graphic content or words are not pushed to the top of the For You page. Examples include replacing words like ‘kill’ or ‘die’ with ‘unalive’, ‘rape’ with ‘grape’, and ‘suicide’ with ‘sewerslide’.

The theatrical roots of modern "linguification": Wicked and the birth of "scandalocious" dialogue. Visual: Collected.

 

Similar to Newspeak, important words, ideas, and concepts are watered down. Bad is now ‘ungood’, and suicide is now ‘sewerslide’. The words do not feel the same, do they? Any violation of the word filters risks the account being banned, or its posts being suppressed rather than promoted like others. With social media and its culture now firmly cemented, trends and viral moments coming and going at a rapid pace, and the digital world taking the passenger seat to reality instead of the backseat, this way of referring to serious or supposedly ‘inappropriate’ things travels from its insulated digital community into the general public.

Why does this matter? The issue is twofold. One, constant posting and rapidly changing trends mean a flood of varied content which, even when it does not actively violate moderation rules, is consumed at a pace that leaves little room to consider the effects of that consumption. Two, the censorship being done by these corporations and tech moguls: is it helping? Does it stop us from seeing things we may not want to see, or does it desensitise us and bury our heads in the sand on important things? Should things like suicide and rape be watered down to emojis or silly rhyming words just to appease an algorithm?

And speaking of censorship and algorithms, one of the most notable censorship measures came in 2019, when China’s censorship policies for social media companies were extended to Douyin, the TikTok-like app established in China. These policies led to aggressive blocking of any content on the app that seemed critical of China and its government, by tracking the writing or saying of keywords flagged in a library and removing videos containing those words.

An algorithm with no heart or emotion is coded to pump out the same things to you over and over until everyone is collectively sick of it, then it pumps out the next thing.

This model of censorship was then applied to TikTok, where violations of TikTok’s community guidelines were met with shadowbanning (quietly reducing the likelihood that a user’s content is seen by audiences), and videos were taken down for the use of certain words. To circumvent these guidelines, Algospeak (algorithm-speak) emerged. Adam Aleksic, an American linguist and content creator, authored “Algospeak: How Social Media Is Transforming the Future of Language” (2025), a book that analyses and explains the phenomenon: users speak in a form of code, with words like ‘unalive’ or ‘sewerslide’, to avoid referencing anything the guidelines deem inappropriate or that violates the Terms of Service.

Aleksic documents how users bypass corporate filters to discuss serious issues through a new, "confusifying" dialect. Visual: Collected.

 

This gives content creators and their consumers the leeway to continue making their content while circumventing censors on triggering topics. The larger effect, as we see in Aleksic’s interviews with teachers and students, is that these euphemisms filter into everyday speech, until they are no longer just words for the digital space. According to Aleksic, “many adolescents use it when they’re uncomfortable talking about concepts of death and suicide, since ‘unalive’ sounds like a less scary word.” These words and codes are spread not just by us, the consumers, but by the aforementioned algorithm, which circulates the quick, snappy words favoured by its user base; Algospeak thereby secures its place in defining and changing the way we speak.

Censorship does not just limit what we see or hear on social media. Slowly, it is reshaping how we see things in reality. Continuing with Aleksic’s research, we see how these coded terms can alter the emotional impact of serious subjects like death: “These words sound like a less intense version of what they represent, making them more palatable to easily offended audiences…they lack the shock value to really upset societal sensibilities.” It is because of this lack of shock value that it becomes so easy to see the world through rose-coloured glasses. Slowly but surely, we allow ourselves to lose the ability to be shocked and horrified by shocking and horrifying things, because these words take the weight out of the situation.

To put it bluntly, issues like suicide, rape, murder, and war are serious, shocking things. Replacing them with words like ‘sewerslide’, ‘grape’, and ‘unalive’ strips them of their emotional and psychological weight. These words are not related to the originals at all; they are commonplace words strung together to get past an algorithm. The implication is that we become desensitised and unempathetic to things that happen in reality, to real people, every day. We become accustomed to making light of a serious situation. We become numb to the sensation of seeing a horrible thing, letting Algospeak act as a cushion, a filter through which to see the world more comfortably.

There are, of course, situations where Algospeak is used to avoid triggering anyone with sensitive topics, but the majority of people are affected in the opposite way: we are slowly conditioned to take the heavy, serious undertones of these issues and make light of them, ignore them, and reinvent them to be palatable to the ordinary consumer. These words take the urgency out of the issues for us, the consumers, further normalising Algospeak by perpetuating it in our real lives.

Ella Steen and colleagues’ study on Algospeak, titled “You Can (Not) Say What You Want: Using Algospeak to Contest and Evade Algorithmic Content Moderation”, examined how users deploy Algospeak to evade the algorithm and argued for improved content moderation that gives value to marginalised content on social media, specifically on TikTok. The authors found that “TikTok’s algorithmic content moderation is not capable of understanding the particular contexts in which participants discussed benign subjects. By default, this poses unjust restrictions to a large number of TikTok users who are talking about benign subjects like sex education, sexualities and gender identities, ethnicity, or social and political activism without violating community guidelines.”

The implication is that, despite changing the landscape of how we speak to each other, the moderation algorithm is, of course, not a human being that understands the difference between discussion of explicit issues and discussion of things like sex education or identity. Entire communities and topics are therefore cut off from us; we grow aloof to their issues and struggles, and anything related to them is categorised as ‘bad’ or ‘inappropriate’.

Returning to the first issue defined a few paragraphs ago: the word desensitisation has come up repeatedly in this article. A few years ago, when I was in middle school, there was much debate about video games desensitising young people to horror and violence, and now here we are, debating how mere words can desensitise us to serious topics. In a world where we are force-fed reels, videos, essays, tweets, and other short zingers built to hold our attention, we are forced to reckon with the change in our language, but very rarely are we forced to reflect on how this may be affecting us when we experience or learn about important issues. When our information is filtered through silly words and emojis, what happens to our psyche? How can forces other than individual content creators capitalise on this?

When language is diluted, so is the thought behind it. Visual: Salman Sakib Sharyar.

 

In her study on transnational repression, Heather Vitale finds that “The poisoning and desensitisation of information environments also enables more sinister outcomes: wider passageways for regimes to commit acts of transnational repression against targeted communities.” When language is diluted, it loses its gravity, allowing targeted or vulnerable communities to suffer. Not only are the voices of minority communities silenced, but these restrictions and censorship rules open the way for a majority voice to target these vulnerable communities. When we lose marginalised voices to serve the grand narrative, we see how a community is silenced and censored.

Conversations about race, identity, nationality, gender, and more are necessary and important discussions in this day and age, but the silencing of these communities and their words, and the resulting rebranding of their language and culture, is a direct effect of this censorship.

On TikTok, for example, words from African-American Vernacular English (AAVE), a dialect of English with its own grammar and syntax, evolved from English and African languages and dialects and spoken primarily by African Americans, are often minimised into ‘TikTok language’. The decades of history and the communities of people behind these words are thereby easily written away; their words are co-opted by the majority, with no credit given, and relegated to the footnotes of history. This is a direct effect of how these communities are targeted: not always in an easily discernible way, but by taking their language and culture and sanitising it for the general public.

I want to bring the discussion back to George Orwell, author of 1984. On the topic of language and politics, his 1946 essay ‘Politics and the English Language’ outlined a few ideas echoed in this article: the decline of clear, direct language, driven by forces beyond our control, and the pressure to water down our words. “Our civilisation is decadent and our language – so the argument runs – must inevitably share in the general collapse. It follows that any struggle against the abuse of language is a sentimental archaism”, he writes. Written decades ago, his words still ring true.

Our civilisation is decadent, a mark of the speed at which we have evolved and innovated to this point, where the world is available at our fingertips; and with that decadence comes the inevitable fall of our language. Now more than ever, we are forced to be okay with our language changing. In a world of censorship, Algospeak, and generative AI telling us how to speak, write, and act, the idea of clinging to our words and books, uncensored, becomes a smaller and smaller dream.

As Orwell states later in the essay, “But if thought corrupts language, language can also corrupt thought. A bad usage can spread by tradition and imitation, even among people who should and do know better.” This is echoed in Aleksic’s account of niche references becoming words used by the general public, especially by younger generations who do not know any better, showing how censorship can reshape vocabulary.

Befuddling and confusifying the general public into self-censorship has profound implications for how we consume information in this new day and age. It is simply not feasible for us to crawl under a rock and live there, or to avoid online trends and fashions; it is almost inevitable that we are bombarded with this meme or that trend until it loses steam in a matter of days.

Even more inevitable is the artificial game of telephone that occurs with Algospeak: a word is used, perhaps as a one-off or a joke, then propelled through online spaces, then trickled down into real life, where people use it in conversation, perhaps not even realising how easily their speech has changed from simply consuming censored content. What begins as a protective measure for continuing to speak about serious topics becomes something that reshapes our language entirely, cutting emotion and real experience out of words and phrases to appeal to a broader audience, and thereby desensitising both creator and consumer to the issue.


References:

  1. Aleksic, Adam. Algospeak: How Social Media Is Transforming the Future of Language. Knopf, 2025.
  2. Orwell, George. “Politics and the English Language.” 1946. The Collected Essays, Journalism and Letters of George Orwell, edited by Sonia Orwell and Ian Angus, vol. 4, Harcourt, Brace & World, 1968, pp. 127-40.
  3. Steen, Ella, et al. “You Can (Not) Say What You Want: Using Algospeak to Contest and Evade Algorithmic Content Moderation on TikTok.” Social Media + Society, vol. 9, no. 3, 2023, https://doi.org/10.1177/20563051231194586.
  4. Vitale, Heather Marie. “Repression Grows in a Desensitized World: Case Studies in Transnational Repression.” Journal of International Affairs, vol. 75, no. 1, 2022, pp. 131-42. JSTOR, www.jstor.org/stable/27203124.

Prakrito Nree Haider is currently studying political science at the University of Massachusetts, Boston. Reach her at prakrito.haider05@gmail.com


Send your articles for Slow Reads to slowreads@thedailystar.net. Check out our submission guidelines for details.