#content moderation

firstoccupier

Chaos Is Not a Growth Strategy

By Cliff Potts, CSO and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — March 5, 2026

Instability Is Not an Accident

After spending time on TikTok, one conclusion becomes unavoidable: the disorder is not incidental. It is structural.

Platforms experience bugs. Algorithms misfire. Moderation systems struggle at scale. Those are normal problems. What is not normal is sustained…

fivesdigital
firstoccupier

YouTube’s Appeals System Is Designed to Fail

By Cliff Potts, CSO and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — February 15, 2026

Reporting

For years, YouTube has told users and regulators that content moderation errors can be corrected through a fair and accessible appeals process. In public statements to European regulators, the company has described this system as a safeguard that protects creators from mistaken…

firstoccupier

When Curiosity Turned Into Scrutiny

By Cliff Potts, CSO and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — February 5, 2026

How I First Encountered TikTok

I didn’t join TikTok to build a following, sell a product, or perform for an algorithm. I joined out of curiosity. People kept talking about it, so I wanted to see what it actually was.

My first video was intentionally dull. I wasn’t testing boundaries or…

themuthaphukkinpooch

Are We Speaking “Newspeak”? Why “Unalive” and “Sewer Slide” Feel Dystopian

If you’ve spent any time on YouTube or Reddit recently, you’ve likely encountered a strange, new vocabulary. People don’t “die” or “commit suicide”; they become “unalive” or “kermit sewer slide.” You won’t find discussions about “Facebook” or “TikTok”; you’ll see references to the “blue app” or the “clock app.”

This phenomenon, known as “algospeak,” is a user-created language designed for one…
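A minimal sketch of why this vocabulary works, assuming the kind of naive keyword blocklist that automated moderation is often accused of running (the blocklist and function here are invented for illustration, not any platform's actual filter):

```python
# Hypothetical keyword blocklist of the sort algospeak is built to route around.
BLOCKLIST = {"suicide", "die", "kill"}

def flags(post: str) -> bool:
    """Return True if any blocklisted keyword appears verbatim in the post."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(flags("he wanted to die"))              # True: literal keyword match
print(flags("he wanted to unalive himself"))  # False: the substitute sails through
```

A filter matching literal strings has no concept of "unalive" or "sewer slide" meaning the same thing, which is exactly the gap the vocabulary exploits — until the filter's blocklist catches up and the vocabulary mutates again.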

justinspoliticalcorner

Jessica Corbett at Common Dreams:

European Union leaders and others around the world this week condemned President Donald Trump’s administration for imposing a travel ban on a former EU commissioner and leaders of nongovernmental groups that fight against disinformation and hate speech—or, as US Secretary of State Marco Rubio called them, “agents of the global censorship-industrial complex.”

Rubio said in a Tuesday statement that his department “is taking decisive action against five individuals who have led organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose. These radical activists and weaponized NGOs have advanced censorship crackdowns by foreign states—in each case targeting American speakers and American companies.”


The five people barred from the United States are Imran Ahmed, the British CEO of the Center for Countering Digital Hate; Clare Melford, another Brit from the Global Disinformation Index; Josephine Ballon and Anna-Lena von Hodenberg of the German group HateAid; and Thierry Breton, a French leader who helped craft the EU’s Digital Services Act (DSA) as a commissioner.
“Is McCarthy’s witch hunt back?” Breton wrote on X—a social media platform that belongs to erstwhile Trump ally Elon Musk and was recently fined €120 million, or $140 million, for violating DSA’s transparency obligations.

“As a reminder: 90% of the European Parliament—our democratically elected body—and all 27 member states unanimously voted the DSA,” Breton noted. “To our American friends: ‘Censorship isn’t where you think it is.’”


The Trump regime’s McCarthyist speech policing has come for European opponents of social media disinformation, denying US visas to five European figures involved in content moderation.



possumcollege

Potentially Mature Content of the Day: A man in different outfits.

Can’t stress enough how many of these flagged posts are just people being.

possumcollege

Potentially Mature Content of the Day: A woman

possumcollege

Potentially Mature Content of the Day: Panties Mentioned

superbbirdofparadise

Hey guys, please tell me when one of my posts is marked as Mature Content. Chances are, it’s actually something totally innocent that either got flagged by tumblr’s glitchy image recognition software, or I accidentally pressed the button somehow. For example, my last “mature” post was just an announcement for the TADC-themed cafe coming to California. While that post may not be everyone’s cup of tea, it did not and does not contain any sexual content, violence, drug usage, or anything else that I can think of that could be considered “adult” or “mature” content. Literally just a few very benign words and pictures of cartoon characters.

It’s interesting how content recognition software works, huh? Thanks to its colorful characters, YouTube considers it to be content for all ages/kids, even though they swear, talk about sex, and literally kill each other sometimes. But here, they get flagged as something reserved for only the most mature adults, even though they weren’t doing anything in the announcement poster. My only other (to my knowledge) flagged post was a screenshot from Mario Kart World. Mario Kart! Y'know, one of the most popular game franchises out there, which is known for being family-friendly? Meanwhile, tons of actual “mature” content is visible on here, even for those who actively try to block it from their dashes.

This turned into a bit of a ramble, but what I’m trying to say is don’t trust automatic moderation. It may have been created with good intentions, but it still makes plenty of mistakes.

akaashmaharaj

A Conversation with Bluesky’s Head of Trust and Safety

How does Bluesky’s decentralised approach to content moderation set it apart from other social media platforms?

I spoke with Aaron Rodericks, Bluesky’s Head of Trust and Safety, about combatting disinformation, election interference, and hate.

This was part of a series of conversations hosted by Reddit’s Mod Council, on how online platforms can uphold freedom of expression, public accountability, and information integrity.

🖥️ https://youtu.be/8Jx0WhWPPd0

validupdates

A Nigerian TikTok live shows a sexual act during a public stream. Viewers demand faster removals and stronger safeguards for minors. Read the full story below #ValidUpdates #TikTok #Nigeria #SocialMedia #OnlineSafety

justinspoliticalcorner

Caroline Orr Bueno at Weaponized Spaces:

In the summer of 2025, as the national debate over social media content moderation flared anew, most of us were watching the public fights — congressional hearings, new bills, platform memos, etc. Few noticed the quiet war being waged inside the moderation queues themselves.

This is the front line of Moderation Sabotage — a deliberate, tactical assault on the capacity and timing of trust-and-safety systems so that disinformation, inflammatory content, and manipulation slip through unchecked. It’s not a glitch. It’s a stealth weapon in Trump’s digital arsenal — and it looks a lot like the weapon Russia deployed in 2016.

The Quiet Storm Inside Platforms

To understand how Moderation Sabotage works, picture the content pipeline of a social platform like Facebook or X. Reports backlog, automated filters flag borderline cases, and human teams process appeals, escalations, and high-risk content. They’re stretched thin even on a normal day.

Now imagine a surge: hundreds or thousands of new, near-duplicate content items hitting just as staff shifts change, or during nights, weekends, or holidays. The filters get overwhelmed. The human teams scramble. Priority queues clog. Many posts stay live longer than they should — or, in many cases, indefinitely.
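A toy queue model makes the arithmetic of that surge concrete (all numbers are invented for illustration, not real platform figures):

```python
# Toy model of the dynamic described above: a moderation team with fixed
# hourly review capacity facing a flood of near-duplicate reports.
def backlog_over_time(arrivals, capacity_per_hour):
    """Given hourly report arrivals and fixed review capacity,
    return the unprocessed backlog at the end of each hour."""
    backlog, history = 0, []
    for arriving in arrivals:
        backlog = max(0, backlog + arriving - capacity_per_hour)
        history.append(backlog)
    return history

normal = [100] * 6                          # steady trickle: capacity keeps up
surge = [100, 100, 2000, 2000, 100, 100]    # near-duplicate flood at shift change
print(backlog_over_time(normal, capacity_per_hour=120))  # [0, 0, 0, 0, 0, 0]
print(backlog_over_time(surge, capacity_per_hour=120))   # [0, 0, 1880, 3760, 3740, 3720]
```

The point of the sketch: once arrivals briefly exceed capacity, the backlog does not clear when arrivals return to normal — it drains at only the small surplus per hour, so flagged posts stay live for many hours after a two-hour flood.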


Russia deployed nearly identical tactics in 2016 as part of their effort to help Trump and hurt Hillary Clinton. I observed a pattern during the summer of 2016 in which Russian accounts would flood social media platforms with content in the early hours of the morning when most Americans were asleep and it was easier to influence trending topics given the lower volume of posts. Then, by the time most Americans woke up and checked social media, the topic would be trending, but the original accounts that pushed out content were buried beneath more recent posts. As a result, the manufactured trend looked like an authentic trend to the average social media user.

The goal of these tactics is to keep content alive long enough to hit critical visibility thresholds: trending, recommended, or surfacing in other people’s feeds. Once the false or manipulative content achieves that endurance, even later removal often comes too late because its momentum has already done its damage.

[…]

Why the Right Leverages It Better (For Now)

Moderation Sabotage isn’t inherently partisan. But historically, Trump’s digital networks have had structural advantages:

  1. High-volume infrastructure. Right-wing media, think tanks, podcast networks, and influencer ecosystems can be synchronized rapidly. That coordination turns multiple bots, pages, and accounts into distributed networks targeting moderation capacity.
  2. Narrative framing. The right has long positioned itself as a victim of “Big Tech censorship.” As a result, when content is removed — or even delayed — it becomes rhetorical fuel. Moderation actions are instantly weaponized as proof, triggering more engagement.
  3. Proximity to platform debates. This network watches internal policy skirmishes closely (e.g. Meta’s rollback of its fact-checking program in January 2025). They time sabotages around changes or turmoil in moderation policy, knowing that internal confusion gives extra room to slip content through.
  4. Playbook discipline. Red teams test variations: which phrases get flagged, and which don’t. They pull what fails. They scale what survives. This iterative mapping of moderation capacity is more resource-intensive than most left-leaning groups are structured to run.

[…]

The Counterpunch: Building Resilience

This isn’t hopeless. But resisting Moderation Sabotage requires strategy.

  • Platforms must invest in redundant capacity, especially during high-risk windows.
  • Transparency is vital — publishing queue latencies, backlogs, and banned content trends would shine light on sabotage as it happens.
  • External auditing teams and “firebreak” protocols can hold dubious content off the critical path while moderation catches up.
  • Regulators should require legibility in moderation workflows, not just takedown counts.
  • And finally, civil society must expose sabotage events as deliberate operations — not accidents.
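The queue-latency transparency suggested above could, in its simplest form, be a published percentile report. A sketch, with hypothetical field names and numbers:

```python
# Sketch of a queue-latency transparency metric: latency percentiles computed
# from (reported_at, resolved_at) timestamp pairs, in epoch seconds.
def latency_percentile(tickets, pct):
    """Return the pct-th percentile of resolution latency across tickets."""
    latencies = sorted(resolved - reported for reported, resolved in tickets)
    index = min(len(latencies) - 1, int(pct / 100 * len(latencies)))
    return latencies[index]

# Three reports resolved within 10-20 minutes; one stuck for a full day.
tickets = [(0, 600), (0, 900), (0, 1200), (0, 86400)]
print(latency_percentile(tickets, 50))  # 1200 — the median looks healthy
print(latency_percentile(tickets, 95))  # 86400 — the tail reveals the stall
```

The design point is that averages and medians hide sabotage: a platform publishing only median takedown time would look fine in this example, while a p95 or p99 figure would expose exactly the "weaponized pauses" the article describes.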

Because once you see moderation delays not as errors but weaponized pauses, the smallest glitches become signals of a much deeper war.

The disturbing trend of weakened content moderation is how lies (often of a far-right nature) get spread before the truth has a chance to get out of bed.

bytetrending

AI, Datasets & Ethical Boundaries

Artificial intelligence is rapidly reshaping how we interact with online content, from personalized recommendations to automated moderation systems. The sheer volume of information generated daily necessitates increasingly sophisticated tools capable of identifying and addressing harmful material, a task often delegated to AI models. These algorithms learn by analyzing massive datasets,…

signalcli

The Kafkaesque Journey, Pt. III: Being Real on Social Media

Sequels are almost never as good as the original. That’s just the way things work. But this one? This one surprised even me. Round three in our social‑media saga. The accusation this time? “Inauthentic account”. According to X.com (Twitter), an account registered under the signalcli.com domain, with a published @signalcli.com email, corporate phone number, and a banner that literally says signalcli.com — is apparently impersonating… SignalCLI. You couldn’t make this up.

Authenticity Theatre

When PR agencies insisted on a social presence, and this blog wasn’t “enough”, we tried Facebook first. That went nowhere. So we turned to X.com. Created an account, paid the $5 fee for the blue mark, started exploring advertising options — only to discover that we require a banking license to advertise. That’s right. A company that never touches user funds, never trades on anyone’s behalf, and functions essentially as a very sophisticated calculator from the ’80s — a crypto futures signals provider with the disclaimer “Informational analysis only — not investment advice, no execution, no custody, no personalized recommendations” — is, in the eyes of X.com, a financial institution, and therefore requires a banking license.

“Fine”, said management, and made an executive decision: advertise elsewhere. We invested in community outreach, asked colleagues, even competitors, actual ad agencies. This worked — our X.com account started gaining traction. Followers increased. Then came the hammer: banned for… inauthentic behavior. Translation: X.com didn’t get their cut of the ad‑spend pie. If a slice is the goal, wouldn’t it be logical to — I don’t know — reduce the number of hoops required to pay you to advertise content? It’s the 21st century; platforms that make a solid piece of their budget from ads and device tracking (say “silicon” ten times to your phone and observe the results) should probably hire people who know what they’re doing. Free hint: a calculator cannot have a banking license. Even advanced, even with custom‑built AI, even with a multi‑server setup — it’s just a calculator, folks.

Then, a bonus surprise: our corporate card was charged $20 by X.com. Accounting flagged it (the office could hear the ranting). Line item: “Advertising services.” Hang on — those services were explicitly refused earlier. Let that sink in: the platform that declined to run ads charged our card for running ads. Make sense to anyone?

Another round of senior‑management meetings: and there you go; our official marketing budget for X.com going forward is set at $5 a month. Just enough to keep the account’s blue mark alive for PR checkboxes. PR team was happy, Sophie finally grabbed some sleep, and we kind of forgot the whole story, until…

Enter MEXC (and the Coincidence)

On September 5, our friends at MEXC decided short‑term traders were making too much money. The fix: doubling effective taker fees from ~15% to ~30%. And yes, by the way — if you weren’t aware — the 0.04% taker fee on MEXC is not exactly 0.04%; the real fee floats around 30% (it hovered around ~15% before: ugly but survivable; 30%, obviously, isn’t). We called it out, posted on social, and — for a mostly dormant account — the post actually moved. Not viral; just visible enough to notice.

Twenty‑four hours later the suspension email arrived. Reason: “inauthentic account”.

(Seriously considering ordering reading glasses for the ex‑Twitter, now‑X.com company.) The profile listed the official website, corporate email, and corporate phone number, and the account banner literally said signalcli.com. Our site and Telegram channel stated that this handle is our only official X.com presence. We named it in multiple PR releases and publications. Two months of being “authentic”, and the moment a straightforward (uncomfortable) question landed — suddenly “inauthentic.” Hmmmmmmmmm…

X.com has an appeal form. So off we went, but got shot down — this time by our legal team; apparently our statement “wasn’t professional,” and we shouldn’t be disrespecting ostriches (fun fact: that’s the only living thing in the world whose eyes are bigger than their brains). Off they went, converted our quite edgy response on “unauthenticity” into something sterile and vanilla, and sent it off with the remark, “sure we’ll get a response”. Well, no. We didn’t. Which tracks with earlier rounds: support on X.com doesn’t exist. You can file tickets; no one reads them. If you think otherwise, you’re adorable.

Fair play question?

Bottom line: there isn’t much concern over social networks that charge corporate cards for undelivered services; suspend accounts for what they label “too‑active growth”; keep accounts live for Russian organizations with widely reported human‑rights issues while banning accounts belonging to Ukrainians; allow questionable imagery to circulate through personal messages; the list could go on. At the end of the day, if those guys built a platform and made it extremely popular, they can do as they please; and if they prefer to be on that side of history — so be it, their choice.

The contradiction begins when those same networks — with that iffy policy footprint, and accounts sold by the thousands on open markets for three cents apiece — start lecturing others on inauthentic behavior and so-called “fake” profiles. The equivalent of being taught the benefits of virginity by the owner of a brothel doesn’t really work, does it?

The world of gray

The online mess is just a mirror. In the analog world: you step off a cliff, slip, get hurt — your fault. In the modern one: you sue the park, the rock, and gravity. A recent Canadian police presser told homeowners, in plain English: if you meet burglars at night, don’t fight — let them take what they want so nobody gets hurt.

NATO and the UN run a similar playbook: minimize friction, avoid stakes. Decision-makers are recruited to “not upset anyone”. Sharp minds with sharp edges seldom reach the top; the gray — do. In 1983 Ronald Reagan called the USSR an “Evil Empire”. Back then, leadership wasn’t “gray”; it was bright — in the best sense. They knew history — the invoice for Chamberlain’s 1938 ‘peace for our time’ was global — and brutal. Today, it’s different. Knowing your past is no longer a requirement, plausible deniability is. Results: two wars in Chechnya in the 1990s, Georgia in 2008, Ukraine in 2014, culminating in 2022 — and the carpets stay red. That’s responsibility-avoidance at scale.

This isn’t a geopolitics lecture; it’s a rhyme. We got banned for pointing at a much bigger outfit (and likely a much bigger X customer) and saying, “That’s not true”. Gray logic is universal: if drama is possible, step back; if impunity is likely, act; and if neither is safe, produce a thousand reasons to do nothing. So yes, banning us was “logical”: better to preempt the noise than host a boring, factual thread about fees.

And it wasn’t even about false advertising — although advertising 0.04% and charging ~30% does require a willing calculator. The request was simple: on small trades, keep the old fee (still excessive, but survivable). But simple invites drama, and drama threatens relationships, so the decision became: don’t amplify the post, don’t answer the question, and maybe nobody notices. Policy by “just in case”. You can almost hear the template being reused elsewhere.

The Outcome & Next Steps?

For us? Nothing changes. We’ll create yet another account, pay another $5 for non‑provided services and lack of support, keep publishing crypto futures signals, and make sure to highlight absurdities as we meet them along the road. Our support — unlike X.com’s — is available when you need it. Our stats are public, our policies straightforward, and there’s no questionable ethics: it’s simple, and it’s fair play. We do have a funny feeling this saga isn’t over; so expect yet another sequel. We’ll keep you posted.
Side note: didn’t think this gig would involve this much social-network drama.
Stay safe and happy trading, everyone!

Making our legal team happy note:
The statements in this article are grounded in contemporaneous records (e.g., screenshots, invoices, support logs). Redacted copies available upon request.

Enjoying the content? Awesome! Please support this project by sending USDT (BSC / BEP20 network) to:
0x7241275b9D37CcF0621480fD408CFf401762c485
Your contribution of $5–$10 helps keep these articles free and accessible to everyone. Thank you for your support!

sarah6rain-blog

POV: when you’re an engineer but you gotta work in a call center as a content moderator because you couldn’t find a job, so you have to try to keep your brain cells alive because content moderation literally shrinks your brain :)

I do brainrot as a job

I also found a cool lighter.. looks psychedelic…

Imma light everything on fire

prententiousjackal

Me: Creators must make reasonable efforts to keep their child audience and adult audience separate.

Also me when I was under 18: *Watching both the Odin Wolf and the Nido Flow videos.* :D

firstoccupier

Facebook and the Misinformation Pandemic

How Digital Lies Fuel Authoritarianism

Facebook has emerged as a central platform for spreading COVID-19 denialism, vaccine misinformation, and dangerous conspiracy theories. This “misinformation pandemic” has had severe consequences, contributing to public health crises and empowering authoritarian movements worldwide.

Studies reveal that Facebook’s algorithms prioritize sensational and…

firstoccupier

Facebook and the Misinformation Pandemic

How Digital Falsehoods Fuel Authoritarianism

Byline: Cliff Potts, WPS News

New York, September 11, 2025 — Facebook has become the primary battleground in the fight against the global misinformation pandemic, serving as a major vector for COVID-19 denialism, vaccine falsehoods, and conspiracy theories that actively undermine democratic institutions. Despite repeated promises to combat…

drag-tween

Yeah, methinks the ‘Gram has gone too far banning “woke” content…