Dear gentle reader,
What a year this month was… and February looked at January and said "hold my data breach."
And I know, it's been MONTHS since my last post (happy new year, I guess). We can talk about why it's been that long some other time, but today we have more pressing matters, the kind that finally forced me to write a piece.
If you've been following the social media landscape lately, you've probably noticed something unsettling: the walls are closing in on digital privacy, and nobody seems to know what they're doing. Or do they…
Let's talk about three stories that should make every human on the internets extremely uncomfortable.
Welcome to Marketing Right Side Up! My thoughts and depressions about marketing and the internet, in written form!
I'm Mark, and I help brands build predictable marketing systems from the ground up.
TikTok's New Owners Have Entered the Chat (And They're Blocking Your Words)
So TikTok finally sealed the deal to avoid that U.S. ban we've been hearing about since 2020. Oracle, Silver Lake, and MGX now control over 80% of TikTok's U.S. operations, with ByteDance hanging onto just under 20%. Oracle (yes, that Oracle, co-founded by Trump ally Larry Ellison) is now managing U.S. user data and overseeing the algorithm in their "secure cloud environment".
Sounds fine, right? Just a normal corporate restructuring. Except users immediately noticed something was iffy.
Within days of the takeover, TikTok users discovered they couldn't send direct messages containing the word "Epstein". Try to type it? Instant notification that your message "might violate TikTok's community guidelines". Meanwhile, content containing that keyword was still being displayed and featured across the platform. The censorship was selective, only in DMs…for now.
The timing? Chef's kiss levels of sus (I've been working on my Gen-Z lingo).
This happened just as the Oracle-led consortium took control, and as scrutiny around Trump's connections to the late convicted sex offender Jeffrey Epstein intensified.
Sidenote: As of writing this post, the US Government is 42 days overdue on releasing the Epstein Files (as dictated by law), and they've supposedly released only about 1% of the whole thing. Imagine how crazily bad the rest must be, if in the 30,000+ files released so far, most of which were shadily redacted, Trump was mentioned HUNDREDS of times…?
But wait, there's more! Creators also reported issues uploading content critical of ICE raids, killings, and immigration enforcement. Videos about deadly shootings in Minneapolis and immigration operations mysteriously failed to upload, weren't viewable to followers, or saw their views tank massively. TikTok insists nothing changed with their moderation policies, but California Governor Gavin Newsom wasn't buying it: he launched an investigation into whether TikTok is violating state law by censoring Trump-critical content.
So let me get this straight: A platform now partially owned by a Trump ally is blocking DMs about Epstein while shadowbanning content critical of federal immigration enforcement? And we're supposed to believe this is just a coincidence?
I don’t mean to put my tinfoil hat on, but I ain’t buying it.
TikTok is one of the, if not THE, most popular apps in the US for younger audiences to consume information. For Gen Z, TikTok supplements the role of a search engine.
If I wanted to control the narrative and have an "almost state-run social media", I wouldn't do it any other way.
Uninstalls skyrocketed by 200%. Let's see if the trend continues.
WhatsApp's "End-to-End Encryption" Might Be More Like "End-to-End-ish"
Speaking of things that make you go "hmmmm," a new lawsuit alleges that WhatsApp's claimed end-to-end encryption is basically theater.
Filed in late January in San Francisco federal court, the class-action lawsuit cites "courageous whistleblowers" who claim that Meta employees can access WhatsApp message contents whenever they want. According to the complaint, the process is disturbingly simple: an employee just sends a "task" through Meta's internal system to a WhatsApp engineer, explains why they need access, and badabing, badaboom, messages appear in a widget alongside unencrypted content, in near real-time.
The lawsuit claims this access has no time limitation, meaning Meta employees could theoretically read every message you've ever sent, even ones you think you deleted.
Just like what happened with your Instagram stories that were used for Meta AI training. Even the deleted ones…
Now, before we burn everything down: the lawsuit provides zero technical evidence for these claims. Meta called the allegations "categorically false and absurd" and threatened to seek sanctions against the plaintiffs' lawyers. WhatsApp has used the Signal protocol for a decade, which is genuinely robust end-to-end encryption. When used by Signal…
But here's the thing: we know WhatsApp collects metadata (who, when, where, to whom). We know from a 2021 ProPublica investigation that WhatsApp support staff can access messages that users report for abuse. And we know Meta's former head of security filed a lawsuit claiming he faced retaliation for trying to address "systemic cybersecurity failures".
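Quick illustration of why that metadata point matters, even under genuinely solid end-to-end encryption. Below is a toy sketch in Python: the cipher is a generic stand-in for the Signal protocol and the fields are made up, not WhatsApp's actual data model. The point is simply that the payload can be unreadable to the server while the envelope still tells the platform everything about who talks to whom.

```python
# Toy sketch of E2EE vs. metadata -- a generic cipher standing in for the
# Signal protocol; the fields are illustrative, not WhatsApp's real schema.
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in real E2EE, only the two endpoints hold keys
cipher = Fernet(key)

message = {
    # What the server CAN'T read (encrypted end-to-end):
    "ciphertext": cipher.encrypt(b"meet me at 8, bring the documents"),
    # What the server CAN read (metadata it needs to route the message):
    "sender": "+1555000001",
    "recipient": "+1555000002",
    "timestamp": time.time(),
    "client_ip": "203.0.113.7",  # roughly reveals location
}

# No key required to learn WHO talks to WHOM, WHEN, and from WHERE:
print({k: v for k, v in message.items() if k != "ciphertext"})
```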
So while this lawsuit is still unproven and should be taken with a grain of salt, the broader question remains: how much should we trust platforms that profit from our data when they tell us that data is "private"? (Spoiler alert: we shouldn't.)
Age Verification Laws: "For the Children" (But Mostly For Your ID)
And now for the piece of legislation that makes every security expert scream into the void: mandatory age verification for social media.
Australia kicked things off by banning under-16s from social media platforms starting December 2025. The EU is following suit with proposals targeting under-16 access, and the UK's House of Lords voted 261-150 in January 2026 to require platforms to implement effective age assurance measures within 12 months. UK Ofcom has been enforcing age verification for adult content sites since July 2025.
Sounds reasonable, right? Protect the children! Except the implementation is a privacy and security nightmare. And the real motivator could be more about control of the internet and de-anonymization, and less about protecting said children…
The ID Collection Problem
Here's the fundamental issue: The only reliable way to verify someone's age is to check government-issued ID documents. And of course, those IDs need to be stored somewhere. Which means we're creating massive honeypots of passport scans, driver's licenses, and national ID cards just waiting to be breached.
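To be fair, warehousing the raw documents isn't a technical necessity: a data-minimizing flow would verify once, keep a flag, and discard the scan. Here's a rough hypothetical sketch of that idea (the names and flow are mine, not any platform's actual implementation). The problem, as we're about to see, is that platforms claim to do exactly this and demonstrably don't.

```python
# Hypothetical data-minimizing age check: verify, keep a boolean, discard
# the document. Not any platform's real implementation -- just the principle.
from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationRecord:
    user_id: str
    over_16: bool      # the only thing worth keeping
    verified_on: date  # for audits / periodic re-checks

def verify_and_discard(user_id: str, id_scan: bytes, birth_date: date) -> VerificationRecord:
    """Assume `birth_date` was extracted from `id_scan` by some checker.

    The scan is never written to disk: once we have a yes/no answer,
    the raw document has no reason to keep existing on anyone's servers.
    """
    age = (date.today() - birth_date).days // 365  # rough, good enough here
    record = VerificationRecord(user_id, over_16=(age >= 16), verified_on=date.today())
    del id_scan  # drop the reference; never persist the raw document
    return record

# The breach surface becomes a table of booleans, not 70,000 passport photos.
```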
We don't have to imagine what could go wrong; it already happened. In October 2025, Discord suffered a data breach that exposed approximately 70,000 Australian users' government ID images (passports and driver's licenses) collected through age verification appeals. Discord confirmed the breach, blaming an unnamed third-party support provider.
The threat actor even provided proof to cybersecurity researchers: images of young people holding papers with their personal details, demonstrating that Discord doesn't actually delete verification submissions after processing, despite claiming they do.
You think Meta would…?
The Selective Enforcement Problem
But here's where it gets really absurd: Australia's law bans under-16s from TikTok, Instagram, and YouTube...but completely exempts Roblox.
Let me repeat that. ROBLOX, the platform that's been documented as a haven for child predators and currently faces over 35 lawsuits in the U.S. (including allegations of enabling child sexual exploitation), THAT Roblox gets a free pass.
Meanwhile, platforms like Discord, Steam, and Pinterest are also exempt. TikTok, Meta, and Snapchat are (rightfully) calling this out as "irrational" and "shortsighted".
This isn't child protection. This is theater.
The VPN Gold Rush (And Security Disaster)
And of course, kids are already bypassing these restrictions using VPNs. After Australia's law passed, there was a massive boom in "free VPN" app downloads, many of which are themselves security and privacy threats. Same goes for the UK, and the same will happen in the EU or any other country that tries to pass half-assed social media age regulations…
The UK even tried to get ahead of this by passing a second amendment requiring VPN providers to implement age verification (voted 207-159). Which is...I don't even know where to start with how unenforceable and dystopian that is.
All I'll say is: try looking into Mullvad, the VPN provider that already proved they don't store any user information, not even an email address. Or my beloved Swiss Proton.
The Real Winners? Big Tech
Let's be clear about who actually benefits from mandatory age verification: platforms like Meta.
Every ID document submitted is another data point for their advertising profiles. They get to know exactly who you are, with government-verified identity documents. No more anonymity. No more privacy. Just pure, verified, monetizable data.
And the platforms that don't want to collect this data, like Wikipedia (which launched a judicial review against the UK's Online Safety Act), face massive fines: up to £18 million or 10% of global revenue for Ofcom violations.
The Bigger Picture: Governments Don't Understand the Internet
All of this (the selective TikTok censorship, the WhatsApp encryption questions, the age verification disasters) points to a fundamental truth: governments have no idea what they're actually doing with internet regulation.
The EU's Chat Control proposal, controversial for years now, is shifting focus from encryption backdoors to age verification. But age verification conflicts directly with data-minimization principles: more identity processing means higher breach risk. And proposals requiring pre-encryption analysis or endpoint checks raise critical questions about whether confidentiality can be maintained at all.
These laws show a vast misunderstanding of how social media actually works. They don't protect children; they just create surveillance infrastructure that endangers everyone, while handing parents a false sense that the problem is "solved" and leaving predators (who already know how to use VPNs) exactly where they were.
What This Means for Marketers
If you're building an audience on platforms that are:
Censoring content based on political connections
Potentially compromising encryption for internal access
Collecting government IDs that will inevitably leak
...you need to diversify your strategy yesterday.
This isn't about doom and gloom. It's about being realistic: The social media landscape is becoming increasingly hostile to privacy, anonymity, and free expression. The walls are closing in from both the corporate and government sides.
Consider investing in:
Owned media (email lists, websites you control)
Emerging federated/decentralized platforms (check my series on this here)
Communities on platforms with better track records (though honestly, the bar is on the floor)
Stay Safe Out There
Look, I know this is heavy. We're watching real-time erosion of digital privacy wrapped in the language of "safety" and "protecting children." It's frustrating because these concerns are valid: child safety is important, encryption should be trustworthy, content moderation shouldn't be politically motivated.
But the solutions we're getting are worse than the problems.
So take care of yourself. Diversify your digital presence. Question the narratives. Use strong security practices. And remember: if a platform or government tells you they're collecting your data "for your safety," ask yourself: whose safety are they actually protecting?
Because spoiler alert: it's probably not yours.
Stay safe,
Mark