Scrolling through TikTok one evening, I paused at a comment on a viral video: “That outfit is ssa!” It took me longer than I’d like to admit to decode the slang. Ssa? It’s “ass” spelled backward. A cheeky attempt to dodge TikTok’s moderation. I laughed, but the moment stuck with me, and so did the irony: I was on a platform built for uninhibited self-expression, and people were talking in code. It’s not an isolated quirk, either.
Not long ago I noticed creators saying “unalive” instead of “dead” or “kill,” and using the corn emoji (🌽) to imply “porn.” In fact, the hashtag #unalivemeplease has over 9.2 million views on TikTok, a stark reminder that a whole new lexicon called algospeak has emerged right under our thumbs. At first, it felt like stumbling into an inside joke or a weird dialect of Internet-speak. But the more I saw, the more I realized: these linguistic gymnastics aren’t just Gen-Z having fun. They’re symptomatic of something much deeper brewing in our relationship with social media platforms.
Context – Why We’re All Speaking in Code Now
It turns out “ssa” for “ass” and “unalive” for “dead” are part of an increasingly common trend across the internet, as users try to bypass content moderation filters on apps like TikTok, YouTube, and Instagram. Welcome to algospeak, code words and winking euphemisms born from our collective attempt to appease the almighty algorithms. Platforms today use automated systems to detect and down-rank content that might be deemed violent, sexual, controversial, or not “brand-safe.” So, creative users have developed a brand-safe lexicon all their own. For instance, it’s now routine in many videos to say “unalive” rather than “dead,” “S.A.” for “sexual assault,” or “spicy eggplant” instead of “vibrator.” By swapping a few letters or using look-alike emojis, people hope to avoid getting their posts or comments removed or hidden by the algorithmic hall monitors.
Why is this happening now? A big reason is how content is distributed in the modern social media era. Take TikTok: unlike older platforms where what you saw largely came from who you followed, TikTok’s main feed (the For You page) is algorithmically curated and hyper-optimized to keep you watching. You could have a million followers, but whether they see your new video depends on the algorithm’s opaque whims. In this environment, creators tailor their content to please the algorithm first and people second. That means strictly abiding by content rules, spoken or unspoken, is more crucial than ever. And if the rules aren’t clear, people err on the side of caution by censoring themselves.
Another catalyst is the combination of crisis-era automated moderation and advertiser pressure. When the COVID-19 pandemic hit, social platforms scrambled to squash misinformation. TikTok reportedly down-ranked videos mentioning the pandemic by name, leading users to refer to it with winking nicknames like the “Backstreet Boys reunion tour,” “panini,” or “panda express.” Bizarre, yes, but it kept videos alive. Similarly, after YouTube’s infamous “adpocalypse” in 2017, when advertisers pulled out over unsafe content, creators learned certain words could trigger demonetization. Even LGBTQ YouTubers found videos demonetized simply for saying the word “gay,” pushing some to either self-censor or swap in milder terms. On TikTok today, you’ll hear people say they belong to the “leg booty” community (a playful code for LGBTQ) or that something is “cornucopia” (standing in for homophobia). These linguistic workarounds trace back to very real financial and distribution incentives: algorithms favor “clean,” ad-friendly content, so users contort their language to fit that mold.
Combine these two dynamics and you can see that algospeak exists because the platforms practically demand it. Automated content filters can be blunt instruments, often just giant lists of “no-no” words scanned by AI. Post a video discussing sexual health or mental health in straightforward terms, and you risk the algorithm mistaking it for something that violates community guidelines. So creators adapt. As one TikToker quipped, “It’s an unending battle of trying to get the message across without directly saying it.” The pandemic pushed even more of our conversations online, and that only heightened the impact these algorithms have on the very words we choose. To put it simply: moderation algorithms have become a new kind of invisible audience we all subconsciously perform for. And that means our language online is now bending and twisting in real time to avoid setting off the sensors.
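To see just how blunt a keyword-based filter can be, here is a toy sketch of the approach described above. The word list and the example posts are invented for illustration; no platform publishes its actual rules, but the failure mode (a filter with no sense of context) is the same one creators are working around.

```python
# Toy illustration only: a naive keyword blocklist of the kind described above.
# The word list and examples are invented; real platform filters are proprietary
# and more sophisticated, but they share this failure mode: no sense of context.

BLOCKLIST = {"suicide", "kill", "sex", "dead"}

def is_flagged(text: str) -> bool:
    """Flag a post if it contains any blocklisted word, regardless of intent."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    return bool(words & BLOCKLIST)

# A frank, supportive mental-health post trips the filter...
print(is_flagged("If you are struggling with thoughts of suicide, please reach out."))  # True

# ...while the coded version sails right through, which is exactly how algospeak is born.
print(is_flagged("If you are struggling with unalive thoughts, please reach out."))     # False
```

The filter can’t tell a supportive message from a harmful one; it only sees the word. Swap the word, and the message survives.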
Collision & Synthesis – When Engagement Algorithms Meet Authentic Connection
There’s a profound tension at play here. On one side, social platforms are obsessed with optimization: they tune feeds to maximize engagement and enforce broad rules to keep content “safe” for all ages and advertisers. On the other side, human communication is messy and authentic. Real issues aren’t always pretty or PG-rated, and genuine community connection often requires frankness. When these forces collide, we get algospeak: a weird middle ground where people still talk about everything, from sex to suicide, but in a sanitized, wink-nudge kind of way. It’s like we’ve all become characters in an Orwell novel, crafting Newspeak-esque alternatives to forbidden words. In fact, tech commentators have explicitly called terms like “unalive” “literally Orwellian.” It’s doublespeak for the algorithmic age, born not from government oppression but from the opaque policies of Silicon Valley platforms.
The historical and fictional parallels are uncanny. In Orwell’s 1984, the totalitarian regime invents Newspeak to constrain thought: cut out a word like “freedom,” and maybe you snuff out the idea itself. In our reality, nobody from on high decreed that “suicide” or “sex” must not be spoken, but the algorithms effectively did so, in the name of moderation and ad-friendliness. And just as people living under repressive regimes have long resorted to code language to discuss taboo topics, today’s netizens are devising Aesopian language to discuss perfectly legitimate topics that algorithms misinterpret as taboo. Chinese internet users, for instance, famously created phrases like “grass mud horse” (which sounds like a vulgar insult in Chinese) to evade state censors. Now in the West, we’re seeing everyday social media users coin phrases like “le dollar bean” (from “le$bian”) for lesbian, because the algorithm might flag the normal word. When users feel they must hold up a palm emoji instead of saying “White people” to talk about race, or use “unalive” to talk about suicide, it raises a disturbing question: What is this doing to clarity and trust in our communities?
The unintended side effects of algospeak are many. For one, it can undermine clarity and emotional resonance. Kathryn Cross, a young health content creator, admitted it makes her feel “unprofessional” to use weirdly spelled words like “seggs” (sex) or “nip nops” (nipples) when discussing serious medical topics. Important conversations about mental health, sexuality, discrimination start to sound like inside jokes, even when they’re deadly serious. A mental health advocate on TikTok shared discomfort with the term “unalive” because, in trying to soften a heavy topic, it “makes a joke out of such a serious subject.” She worries (and many clinicians agree) that dancing around the word “suicide” could further stigmatize it, reinforcing the idea that it’s unspeakable. Indeed, a study found that people overwhelmingly prefer direct, respectful terms like “took their own life.” Some felt avoiding the word entirely is “dangerous” and “isolating.” In trying not to trigger the algorithm, we risk tiptoeing around the truth and losing the therapeutic benefit of plain acknowledgment.
There’s also an impact on trust and authenticity. When users must self-censor and speak in euphemisms, it sends a signal (conscious or not) that the platform isn’t a fully safe or honest space. Imagine joining a TikTok support group and seeing talk of “unalive ideation” and “S-H” (self-harm). If you’re not fluent in the lingo, you might feel confused or even alienated at first. Even once you decode it, a part of you knows everyone is code-switching for the algorithm’s sake. That adds a layer of performative fakery to what should be raw, real communication. Over time, that could chip away at users’ trust in the platform’s integrity. If the stated community guidelines say “we support open discussion of mental health,” but the unspoken rule is “just don’t say the actual words,” people notice that disconnect. It feels, frankly, disingenuous.
Crucially, algospeak tends to hit marginalized communities the hardest. TikTok creator Sean Szolek-VanValkenburgh noted this phenomenon “disproportionately affects the LGBTQIA and BIPOC community.” They’re often the ones coming up with the code words. That’s in part because their content gets flagged more readily; there’s a history of moderation algorithms mistakenly classifying words about queer identities or racial justice as “adult” or “harmful” content. For example, a German investigation in 2022 found that comments simply containing words like “gay,” “LGBTQ,” or even “Auschwitz” were hidden or blocked on some platforms, despite being educational or benign in context. The result? The people who most need to speak openly about their experiences have to continuously look over their shoulder (or rather, over their keyboard). Some Black and trans users have even become nervous to say “white” or “racist” on camera, resorting to literal hand gestures to indicate White people. When entire communities feel they must walk on eggshells linguistically, the authenticity of the discourse suffers. How do you build genuine connection under those conditions?
What algospeak ultimately reveals is a growing rift between engagement optimization and authentic connection. Platforms like Facebook, Instagram, and TikTok tout connection as their mission, but their design choices sometimes say otherwise. Endless algorithmic tuning has led to what tech critic Cory Doctorow calls “algorithmically distorted” spaces. Every post and comment is quietly filtered, scored, and sorted behind the scenes. This can boost short-term engagement; after all, controversy and sensationalism score high in the attention economy. But it can also backfire by eroding user trust and enjoyment. Many users have come to feel that their feeds are over-curated, even manipulative. In fact, trust in social media is abysmally low: as of early 2024, only about 6% of people globally said they trust platforms like Facebook with their data, and large majorities suspect the content they see is driven by hidden agendas. Users might still scroll out of habit, but the love is gone, replaced by a cynical awareness that “the algorithm” is pulling the strings.
Even usage is showing cracks. Take Facebook: in the U.S., the share of adults using the site has flatlined since 2016, and among teenagers it’s nosedived (from 71% a decade ago to 33% now). While there are many reasons for Facebook’s decline (aging demographics, new platforms siphoning attention), one factor often cited is “algorithmic saturation.” Users grew tired of a News Feed dominated by algorithmic tweaks pushing engagement bait, ads, and suggested content. As one digital wellness report noted, TikTok and YouTube’s relentless feeds can leave users feeling “overwhelmed and distrustful when doom-laden clips surface uninvited,” sometimes prompting them to step away entirely. In other words, over-optimization can undermine the experience. When every platform becomes a slot machine for engagement, people eventually hunger for something more real: a space where they can speak freely without playing cat-and-mouse with an algorithm.
Algospeak, then, is like a warning light on the dashboard: it signals a deeper design problem. If users have to contort their language into leetspeak and innuendo just to have normal conversations about life’s real issues, maybe the platform’s approach isn’t as “safe” or “community-friendly” as it thinks. Sure, the intentions behind heavy moderation are often good: protect people from harassment, trauma, misinformation. But the execution can feel ham-fisted, akin to using a chainsaw for surgery. As digital rights activist Evan Greer points out, trying to stomp out specific words is a fool’s errand: bad actors will always find new tricks, while good actors (regular people with something to say) become collateral damage. In Greer’s words, this is “why aggressive moderation is never going to be a real solution to the harms that we see from big tech companies’ business practices.” The slippery slope is real. Demand platforms “remove more content, more quickly, regardless of the cost,” and you end up with overzealous algorithms that sanitize the internet into meaninglessness. The irony is thick: in chasing engagement and safety, platforms may create feeds so filtered and formulaic that they alienate the very users they’re meant to engage. Sanitized, euphemistic content is emotionally weaker. It doesn’t hit the same way. A heartfelt post about a miscarriage or a rallying cry against racism loses some of its punch when phrased in coy algospeak. Communities lose clarity. Users lose trust that they can honestly be themselves. And a social network that isn’t trusted to allow authenticity is a social network on borrowed time.
Connection Blueprint – Building Platforms That Don’t Make Us Talk Backwards
So, what’s the path forward? How can product leaders, investors, and platform builders preserve the good intentions of content moderation (keeping people safe, brands comfortable) without forcing us all to become linguists in our own communities? Below is a “connection blueprint” – a few actionable ideas drawn from this algospeak saga, aimed at designing social systems where safety and authenticity coexist:
Radical Transparency in Moderation: Shine a light on the rules. One reason algospeak proliferates is that users are guessing what might get them flagged. Tell creators when a post is removed or down-ranked, and which rule it tripped. Transparency builds trust: it shows users that moderation isn’t arbitrary, and it helps honest creators speak freely without accidentally tripping a wire. As a bonus, transparency pressures platforms to re-examine flawed filters. If you had to publicly admit “we shadow-ban the word lesbian,” you’d fix that real quick. In short, be open about the “why” behind content decisions. Empower users with knowledge so they don’t have to play the linguistics lottery. While I’ve shared thoughts on Grok in the past, xAI at least open-sources Grok’s system prompts, which is a step in the right direction.
User Empowerment and Community Control: Let users control their content experience. Offer customizable filters, topic preferences, and community moderation tools. Create designated spaces for sensitive discussions with proper oversight. Empower users as stakeholders, not just consumers, to reduce coded speech and enhance trust without silencing nuanced conversations.
Redesign Incentives for Authenticity: Shift success metrics from pure engagement to user well-being, trust, and long-term retention, while communicating these metrics throughout the ecosystem. Reward diverse, meaningful content over clickbait. Value creator health and honest dialogue. A platform that nurtures authenticity builds loyalty and resists the short-term traps of over-optimization and “enshittification.”
Build in Safety Nets, Not Straitjackets: Moderate with guidance, not just punishment. Use prompts, friction, and context-aware tools to slow harm without silencing users, and let human reviewers handle the nuance (a rough sketch of what this graduated approach could look like follows below). Prioritize education and transparency over blanket suppression to foster open, respectful dialogue and reduce self-censorship.
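To make the graduated approach concrete, here is a minimal sketch of what a tiered moderation decision might look like in code. Everything in it is hypothetical: the harm score, the rule names, and the thresholds stand in for whatever a real trust-and-safety pipeline would produce. The point is the shape of the response (explain, add friction, escalate to a human) rather than a silent, binary delete.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationDecision:
    action: str          # "allow" | "add_context" | "human_review" | "remove"
    rule: Optional[str]  # which policy was implicated, surfaced to the poster
    explanation: str     # plain-language reason, shown instead of a silent takedown

def decide(harm_score: float, rule: str) -> ModerationDecision:
    """Map a (hypothetical) harm score onto a graduated response instead of a binary block."""
    if harm_score < 0.3:
        return ModerationDecision("allow", None, "No action taken.")
    if harm_score < 0.6:
        # Low-confidence match: keep the post up, attach context or a resource banner.
        return ModerationDecision(
            "add_context", rule,
            f"Your post touches on {rule}; a resource banner was added. Nothing was removed.")
    if harm_score < 0.85:
        # Ambiguous cases go to people, not to silent down-ranking.
        return ModerationDecision(
            "human_review", rule,
            f"Your post was queued for human review under the '{rule}' policy.")
    return ModerationDecision(
        "remove", rule,
        f"Your post was removed under the '{rule}' policy. You can appeal this decision.")

# Example: a frank mental-health post lands in the ambiguous band and gets a human
# reviewer plus a clear explanation, rather than disappearing without a word.
print(decide(0.7, "self-harm and suicide"))
```

Notice that every branch produces an explanation the poster can actually read, which is the transparency point from the first item of this blueprint applied in miniature.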
Each of these steps is about rebalancing the relationship between platforms and the people who use them. The big picture is that moderation should be a dialogue, not a monologue. If users feel the system is fair and on their side, they won’t feel compelled to find exploits to say what they mean. The result? Healthier communities, more genuine engagement, and platforms that can still be safe and ad-friendly without unintentionally inventing a whole new dialect of censorship-evasion.
In the end, algospeak is a mirror held up to our digital lives, reflecting a simple truth: when people have to talk in code, it means something in the system is broken. It’s a poetic rebellion, witty, resourceful, at times endearingly absurd, but it shouldn’t be a necessity for honest communication. The takeaway for anyone building or funding the next big platform is clear. Listen to the code words and what they signify. They’re users saying, “We still want to speak, to connect, to tell our stories. We just need you to let us do it in our own words.” The prompt for all of us: Can we create a social community where nobody has to say “ssa” when they really mean ass?