This essay uses France and the EU as the lens. The pattern is global. If your country isn’t here yet, it’s next.
Imagine…
You walk into a bar. Like you’ve done a thousand times. You meet some friends, you talk, you share your opinions about the world, politics, your boss, your ex, your doubts. You say what you actually think, because that’s what a bar is. A space where words can come out without consequences. A place of ordinary freedom.
Now imagine the same scene, tomorrow.
Before entering, they ask for your ID. They scan your face. They log the time you arrived. Inside, every word you say is recorded. Every topic discussed, every opinion expressed, every questionable joke, everything is archived, linked to your legal name, stored somewhere. You don’t know who’s listening. You don’t know who’s watching. You don’t know if what you say today will be used against you in three weeks, three years, or three political regimes from now.
And at the exit, maybe someone’s waiting. Someone who didn’t like what you said. Someone you’ve never seen before. Someone who knows exactly where you live.
Would you speak the same way?
Would you still go to the bar?
This isn’t fiction. This is exactly what’s happening right now.
France just passed a bill banning social media for anyone under 15, with mandatory age verification to enforce it. Australia did the same for under-16s in December. Eleven EU member states are lobbying Brussels to make this EU-wide policy. The infrastructure is being built piece by piece, country by country. The official excuse: protecting children. The reality: the end of online anonymity for everyone.
No more creating an account without giving your real name. No more speaking under a pseudonym. No more protection that allowed you to say what you actually think without your employer, your ex, your government, or a malicious stranger being able to find you.
And almost nobody protested.
In fact, few even showed up. Out of 577 deputies in the National Assembly, only 151 bothered to attend the vote. A fundamental shift in the relationship between citizens and the state, decided by a quarter of the representatives while the rest were elsewhere. This is how freedoms disappear: not with dramatic battles, but with empty seats.
The bar just installed a checkpoint. And almost nobody protested.
That’s what drives me insane. Not the measure itself. It was predictable. But the acceptance. The silence. The massive indifference of a population that just lost a fundamental freedom and doesn’t even realize it.
People applauding because “it’s to protect the children.”
People shrugging because “I have nothing to hide.”
People who simply don’t understand what was just taken from them.
This text is for them.
It’s also for you, if you sense something is wrong but can’t articulate it. If you’re tired of sterile debates where you’re treated like a paranoid conspiracy theorist. If you want the arguments, the examples, the definitive answers to give to those who accept being locked up in exchange for an illusion of security.
I’m going to explain why mass surveillance is a catastrophe, not for abstract reasons, but for reasons that will affect your daily life. Concretely. Directly. Maybe tomorrow.
Here’s why you have everything to hide.
THE LIE, “I Have Nothing to Hide”
This is the phrase that kills the debate.
The moment you criticize surveillance, someone pulls it out. “I have nothing to hide.” As if it were an argument. As if it closed the discussion. As if being “honest” protected you from anything.
Let me explain why this is the most dangerous phrase you can say, and why the people who say it understand nothing about how power works.
The Law Changes. Your Data Doesn’t.
Here’s what “I have nothing to hide” assumes: that what is legal today will be legal forever. That the rules won’t change. That the government in power now will always be in power. That the definition of “acceptable” is fixed.
History has a word for people who believe this: victims.
In China, people were arrested years after posting critiques on Weibo, posts that were perfectly tolerated when they wrote them. The rules changed. Their words didn’t disappear. The surveillance infrastructure that recorded everything was still there, waiting.
You think it can’t happen here?
In France, the state of emergency declared after the 2015 terrorist attacks was supposed to be temporary. It was renewed six times over two years. Then, in 2017, they passed a law (the SILT law) that made the “exceptional” measures permanent. Administrative searches without a judge. House arrests based on “behavior” rather than evidence. Surveillance powers that were sold as emergency tools became everyday tools.
The emergency ended. The powers didn’t.
The PATRIOT Act in the United States. The anti-COVID tracing infrastructure that was never dismantled. ID requirements for porn sites that became ID requirements for social media. The pattern is always the same: temporary becomes permanent, targeted becomes universal, exceptional becomes normal.
And through all of it, your data sits there. Everything you searched. Everything you said. Everyone you talked to. Waiting.
You Don’t Decide What’s “Something”
Let me give you an example.
You search “how to buy a kitchen knife” because you’re learning to cook. Normal. Legal. Boring.
Five years from now, a new law passes targeting “domestic terrorism preparation.” The definition is vague, intentionally. Someone in your family gets into legal trouble. Your name comes up in the investigation. They pull your file.
There it is: “how to buy a knife.”
You know the context. You were making dinner. But the algorithm doesn’t know context. The investigator doesn’t know context. The prosecutor looking for patterns doesn’t know context. All they see is a data point that fits a narrative.
You had nothing to hide. Until suddenly you did.
Here’s another one.
You’re going through a divorce. It’s ugly. You join an anonymous support group online to vent, to get advice, to say things you’d never say in public. Things about your ex. Things about your lawyer. Things about how you feel.
Before: anonymous account, private conversations, catharsis.
After ID verification: your real name attached to everything. Your ex’s lawyer subpoenas the records. Every frustrated rant, every moment of weakness, every exaggeration you made in anger, is now evidence in a custody battle.
You had nothing to hide. You were just processing your emotions like a normal human being. But now it’s ammunition.
Or this.
You have a medical condition. Something embarrassing. Something you don’t want your employer to know. You Google symptoms. You visit forums. You search for doctors.
That data exists. It’s linked to your identity. It’s sitting in databases you don’t control. One breach (and we’ll get to the breaches) and suddenly it’s not private anymore.
Your insurance company “somehow” raises your rates. You don’t get the job you applied for, and they don’t tell you why. A stranger on the internet sends you a message referencing your condition.
You had nothing to hide. You were just sick.
The Seat-belt Analogy
People who say “I have nothing to hide” remind me of people who don’t wear seat-belts.
“I’m a good driver. Accidents happen to other people. I’ve never had a problem before.”
They’re not wrong about themselves, maybe they are good drivers. But they’re completely wrong about how risk works. Accidents don’t happen because you’re a bad driver. They happen because someone else runs a red light. Because there’s ice on the road. Because a deer jumps out. Because reality doesn’t care about your driving skills.
Surveillance works the same way.
You might be perfectly law-abiding. You might live a boring life. You might genuinely have nothing interesting in your data. But you don’t control:
- What becomes illegal tomorrow
- Who gets access to the databases
- What context your data is interpreted in
- Who decides you’ve become “interesting”
- What happens when the system gets hacked
The seat-belt isn’t for when you’re a good driver. It’s for when something outside your control goes wrong.
Privacy isn’t for when you’re a good citizen. It’s for when something outside your control goes wrong.
You’re Not Important, Until You Are
“I’m nobody. Why would anyone care about my data?”
This is the illusion of insignificance. And it’s almost more dangerous than “I have nothing to hide.”
You’re right, today, you’re probably not interesting to anyone. No government agency is watching you. No hacker is targeting you specifically. You’re one of billions, lost in the noise.
But importance isn’t permanent. It’s contextual.
You witness something you weren’t supposed to see. You become a whistle-blower by accident. You post something that goes viral. You date someone who turns out to be under investigation. You apply for a job that requires a background check. You run for local office. Your kid becomes famous. You piss off the wrong person.
Suddenly, you matter. Suddenly, someone wants to know everything about you. And everything is already collected, already waiting, already archived.
You didn’t need to do anything wrong. You just needed to become relevant.
The surveillance infrastructure doesn’t care if you’re important. It collects everything, on everyone, all the time. It’s patient. It can wait until you matter.
The Real Question
“I have nothing to hide” isn’t an argument. It’s a surrender.
It says: I trust that the current laws are just. I trust that the current government is benevolent. I trust that the system won’t be hacked. I trust that the definition of “acceptable” won’t change. I trust that I’ll never become interesting to anyone with power. I trust that my data will never be taken out of context. I trust that nothing in my life will ever go wrong in a way that makes my history relevant.
That’s a lot of trust.
Here’s the real question you should ask: If you have nothing to hide, why do you close the bathroom door?
It’s not because you’re doing something illegal. It’s not because you’re ashamed. It’s because some things are private. Because dignity requires boundaries. Because you’re a human being with an inner life that doesn’t belong to anyone else. Because there’s a difference between “not criminal” and “everyone’s business.”
Privacy isn’t about hiding wrongdoing. Privacy is about maintaining the space where you can be a full human being, messy, contradictory, uncertain, evolving, without every moment being recorded, analyzed, and potentially used against you.
When you say “I have nothing to hide,” you’re giving that up.
Not just for yourself.
For everyone.
THE TROJAN HORSE, “It’s For The Children”
There’s one argument that ends all debate. One shield that makes you morally untouchable. One phrase that turns any critic into a suspected monster.
“It’s to protect the children.”
The moment someone invokes children, rational discussion dies. You can’t argue against protecting children. You can’t question measures designed to keep kids safe. If you push back, you’re not a concerned citizen worried about civil liberties, you’re someone who apparently doesn’t care if children get hurt.
This is not an accident. This is a weapon.
And it’s being used to build a surveillance infrastructure that has nothing to do with children and everything to do with controlling adults.
The Pattern
Watch how it works.
2024: France mandates age verification to access porn sites, with full enforcement from April. The justification? Protecting minors from harmful content. Who could argue against that?
2025: Australia bans social media for under-16s. December enforcement.
2026: France passes a bill banning social media for under-15s, with mandatory age verification. Eleven EU countries lobby Brussels to make this EU-wide. The justification? Protecting children from the dangers of social networks. Who could argue against that?
Next: ID verification to access any website with “potentially harmful content.” News sites with violent imagery. Forums discussing drugs or alcohol. Political content deemed “dangerous for young minds.” Who could argue against protecting children from that?
Then: ID verification for all internet access. After all, children can find harmful content anywhere. The only way to truly protect them is to verify everyone, everywhere, all the time.
See how it works?
Each step is small. Each step is “reasonable.” Each step is “for the children.” And each step moves us closer to a world where anonymous internet use is impossible, where everything you do online is linked to your legal identity, recorded, and stored.
The children are the Trojan horse. The surveillance state is what’s hiding inside.
The Emotional Short-Circuit
This manipulation works because it completely sidesteps rational thought.
When someone says “think of the children,” they’re not making a logical argument. They’re triggering an emotional response. Humans are hardwired to protect children. It’s biological. It’s deep. And it makes us stupid.
The moment you feel that protective instinct, your critical thinking shuts down. You stop asking basic questions like:
- Will this measure actually protect children?
- Are there less invasive alternatives?
- Who else benefits from this measure?
- What are the unintended consequences?
- Is this proportionate to the actual risk?
You don’t ask because asking feels wrong. Asking feels like you don’t care about children. Asking makes you the bad guy.
This is exactly what they want.
Questions Nobody Asks
Let’s ask anyway.
Will ID verification actually protect children from social media?
Kids have been bypassing age restrictions since the internet existed. They’ll use their parents’ IDs. They’ll use fake IDs. They’ll use VPNs. They’ll find workarounds that adults haven’t thought of yet because that’s what kids do.
The only people who will be fully tracked are law-abiding adults who comply with the system. The children this is supposedly protecting? They’ll find a way around it within weeks.
Are there less invasive alternatives?
Yes. Many.
You could require parental controls at the device level, giving parents the tools to manage their children’s access without surveilling all adults. You could fund education campaigns teaching parents about digital risks. You could hold platforms accountable for algorithmic recommendation of harmful content to minors. You could do what we do with cigarettes and alcohol: social enforcement, not ID checkpoints for everyone.
But none of these options create a database linking every citizen’s identity to their online activity. So none of these options are being seriously considered.
Who else benefits from this measure?
Governments get a surveillance infrastructure they couldn’t have built openly.
Corporations get verified identity data they can monetize.
Bad actors, hackers, stalkers, authoritarian regimes, get centralized databases to target.
The only people who don’t benefit are ordinary citizens who lose their privacy and children who won’t actually be protected.
What are the unintended consequences?
People who need anonymity for legitimate reasons, abuse survivors, political dissidents, whistle-blowers, anyone discussing sensitive topics, lose their protection.
Data breaches become catastrophic because the databases now contain real identities linked to online behavior.
Self-censorship increases because people know they’re being watched.
Trust in digital systems erodes.
And the next measure becomes easier to pass, because you’ve already accepted the principle.
What happens to the data of children under 15 who try to register?
The system is supposed to block minors under 15 from accessing social media. But how does it block them? By checking their ID and biometrics first.
Think about what this means. A 13-year-old tries to create an account. They submit their ID. They scan their face. The system verifies they’re underage, and rejects them.
But what happened to the data they just submitted?
Was it deleted immediately? Stored temporarily? Logged somewhere for “verification purposes”? Shared with a third-party age verification service?
No one is asking these questions. No one is demanding answers. The most likely outcome: the biometric data of children under 15 will be collected anyway, just flagged as “rejected” rather than “approved.”
A database of children’s faces, built in the process of “protecting” them from social media.
What You Actually Lose
Let me make this concrete.
Before ID verification:
You can create an anonymous account to discuss your mental health struggles without your employer finding out.
You can participate in political discussions without your views being permanently linked to your name.
You can join a support group for abuse survivors without your abuser being able to track you.
You can explore ideas, make mistakes, say stupid things, and grow, without a permanent record following you forever.
You can be a full human being online, with all the messiness that entails.
After ID verification:
Everything you do is linked to your legal identity.
Your employer can find your Reddit posts.
Your ex can find your dating profiles.
Your insurance company can find your health questions.
Your government can find your political opinions.
A hacker who breaches the database can find everything.
You are no longer a free person exploring the digital world. You are a tracked subject, performing for an invisible audience that might be watching at any moment.
And there’s a category of people who lose everything: whistle-blowers and independent journalists.
The person inside a corporation who sees fraud. The civil servant who witnesses corruption. The citizen who films police violence. These people depend on anonymity to speak without being destroyed. Pseudonymous accounts are how they contact journalists. Anonymous platforms are how evidence reaches the public.
After mandatory ID verification, every leak becomes traceable. Every source becomes identifiable. Every act of conscience becomes a career-ending, possibly life-ending risk.
The powerful have always wanted to know who’s talking. Now they will.
The Inversion
Here’s the darkest part.
The measure that’s supposed to “protect children” actually makes children less safe.
When you build a centralized database linking real identities to online accounts, you create a target. Hackers will attack it. Insiders will abuse it. Governments will expand its use.
The same database that’s supposed to keep kids off social media will contain their parents’, and eventually their own, identity data. When it gets breached (and it will get breached, we’ll get to that), children’s information will be exposed along with everyone else’s.
You didn’t protect children. You just created a new way for them to be targeted.
And you gave up everyone’s freedom to do it.
The Predators Already Inside
And here’s where it gets truly dark.
The same governments demanding biometric data “to protect children” have a documented problem with predators in their own ranks. This isn’t conspiracy theory; it’s court records and parliamentary investigations.
In September 2024, Pierre-Alain Cottineau, an LFI candidate, LGBT activist, and certified foster parent, was arrested after Dutch police found videos of him raping a handicapped 4-year-old girl on the darknet. Investigation revealed he ran a pedocriminal network, inviting strangers via encrypted forums to come rape the infants placed in his care by child protective services. He had been flagged for assaulting a child at age 15. No prosecution. He received his foster certification anyway.
In 2023, Matheus Branquinho, former LREM deputy substitute, was convicted for sexually assaulting two girls aged 6 and 8, plus possession of child pornography. These aren’t fringe figures. These are candidates, elected officials, party members. People who passed background checks. People with authority over children.
And then there’s Bétharram, perhaps the largest pedophilia scandal in French educational history: over 200 complaints for rapes and violence spanning 50 years at a Catholic boarding school. The current Prime Minister, François Bayrou, was president of the regional council and Minister of Education during some of these crimes. Multiple witnesses say he was informed. He denies everything.
Now imagine what happens when the EU builds a database containing verified identities, home addresses, and biometric data of every teenager aged 15 to 18.
Notice the age: 15. In France, that’s the age of sexual majority.
The system will collect facial recognition data and legal identities of millions of teenagers, starting precisely at the age they become “legal” for sexual contact.
When this database is breached (not if, when), predators will have exactly what they need: verified photos, real names, confirmed ages, home addresses. A catalog.
And it won’t only be hackers. Every government employee with administrative access, including predators we haven’t yet identified, will have unlimited access to this data.
They told you it was to protect children.
They’re building a database that predators will exploit for decades.
The Selective Protection
And here’s the final proof that “protecting children” is a pretext, not a priority.
If protecting children actually mattered, France would have:
- Automatic lifetime bans from working with minors for convicted pedophiles
- Mandatory prison time for child rape
- Functional oversight of child protective services
- A dedicated minister for child protection
France has none of these.
The numbers tell the story:
Sentencing:
19% of child rapists receive fully suspended sentences. No prison. Walk free. The average sentence for sexually assaulting a child is under two years. With standard reductions, most convicted pedophiles never see the inside of a cell.
Work bans:
A convicted pedophile can legally continue working with children. France has no automatic prohibition. An educator found guilty of possessing child pornography can keep teaching. The Senate tried to fix this in 2015. It still isn’t fixed.
Child protective services (ASE):
A parliamentary commission in 2025 called it a “state scandal.” 400,000 children are theoretically under state protection. They have 20 years less life expectancy than the general population. Nearly half develop psychiatric disorders. An estimated 15,000 are prostituted while in state care. The average age of entry into prostitution for these children is 11 to 14 years old. Parliamentary investigators described the foster system as “recruitment grounds for child prostitution networks.”
I’ll let you look up the countless other scandals and statistics around the ASE yourself…
Meanwhile: no dedicated minister for child protection since 2017. The state funds only 3% of the €10 billion budget. Two-thirds of foster families have never been inspected.
So let’s be clear about priorities…
- Building a biometric database of every teenager: immediate political support, billions in funding, fast-tracked legislation.
- Preventing convicted pedophiles from working with children: still no automatic law.
- Keeping children alive in state care: “state scandal,” no resources, no oversight, no minister.
The surveillance isn’t for the children. The children are for the surveillance. They’re the excuse. Not the beneficiaries.
The Real Purpose
Let’s be clear: children are not the reason for these measures. Children are the excuse.
The real purpose is to eliminate anonymity. To create a world where every online action is traceable to a legal identity. To build the infrastructure for a surveillance system that would have been politically impossible to propose directly.
If a government said “we want to track everything everyone does online,” there would be protests. There would be resistance. It would be recognized as authoritarian.
But if they say “we need to verify identities to protect children from harmful content”?
Silence. Acceptance. Applause.
Same outcome. Different packaging.
The Test
Here’s how you know a “protect the children” measure is actually about surveillance:
Does it solve the stated problem?
ID verification won’t stop kids from accessing social media. They’ll find workarounds. So no, it doesn’t solve the stated problem.
Are there less invasive alternatives?
Yes, device-level parental controls, education, platform accountability. None require surveilling all adults.
Who gains power from this measure?
Governments and corporations gain surveillance capabilities. Citizens lose privacy.
Is the measure proportionate?
Tracking every adult online to theoretically prevent some minors from accessing social media is wildly disproportionate.
Does it create infrastructure that could be abused?
Yes, centralized identity databases linked to online activity are surveillance infrastructure by definition.
When a measure fails all five tests, it’s not about children. It’s about control.
Your Responsibility
Every time you accept a surveillance measure because “it’s for the children,” you make the next measure easier.
You normalize the principle that safety justifies tracking.
You surrender the argument that privacy is a fundamental right.
You teach your government that this manipulation works.
The people who use children as shields to pass authoritarian measures are counting on your emotional response. They’re counting on you not thinking. They’re counting on you being too afraid of looking heartless to ask hard questions.
Don’t give them what they want.
Love children. Protect children. But don’t let your love be weaponized against your freedom.
The best thing you can do for children is preserve the free society they’ll inherit.
THE RATCHET, How It Always Expands
Let me tell you the most important thing about surveillance measures:
They never go away.
Not once. Not ever. In all of recorded history, no government has voluntarily dismantled a surveillance infrastructure after the “emergency” that justified it ended.
This isn’t cynicism. This is pattern recognition. And if you understand the pattern, you can predict exactly what’s coming next.
The Mechanism
It works like a ratchet, the mechanical device that only turns one way.
Step 1: The Crisis
Something bad happens. A terrorist attack. A pandemic. A moral panic about children. It doesn’t matter what, any crisis will do. The key is fear. Fear makes people accept things they would never accept when calm. A crisis can be natural or provoked.
Step 2: The “Temporary” Measure
The government proposes emergency powers. Expanded surveillance. Restricted freedoms. But don’t worry, it’s temporary. Just until the crisis passes. Just until we’re safe again.
Step 3: The Normalization
Months pass. The measure stays in place. People get used to it, forget. The outrage fades. What was shocking becomes normal. What was temporary becomes “how things are.”
Step 4: The Expansion
The crisis evolves, or a new crisis emerges. The “temporary” measure wasn’t enough. We need to extend it. Expand it. Make it permanent. Add new capabilities. Cover more ground.
Step 5: The New Baseline
What was once an emergency power is now ordinary law. The ratchet has clicked forward. It will never click back. And the next crisis will start from this new baseline.
This is not theory. This is history.
France: The Laboratory
France is a perfect case study because the French have documented their own slide into permanent emergency powers with remarkable precision.
Vigipirate: The Eternal Emergency
The Vigipirate plan was created in 1978 as a response to terrorist threats. It was supposed to be activated during specific emergencies and deactivated when the threat passed.
In 1995, following a wave of attacks, Vigipirate was elevated to high alert.
It has never come down.
Thirty years later, France is still at “elevated” or “emergency” alert levels. The “temporary” measures, armed soldiers in the streets, security checkpoints, expanded police powers, have become permanent features of French life. An entire generation has grown up never knowing anything else.
The emergency never ended. It just became normal.
The State of Emergency: From Exception to Rule
After the November 2015 terrorist attacks, France declared a state of emergency. This granted extraordinary powers to the executive: administrative searches without judicial oversight, house arrests based on suspicion rather than evidence, restrictions on movement and assembly.
The state of emergency was supposed to last 12 days.
It was renewed. And renewed. And renewed again. Six times over two years, 719 days total.
And then something remarkable happened. In 2017, instead of ending the state of emergency, the government passed a law (the SILT law) that made the “emergency” powers permanent.
Let me say that again: the exceptional measures that were justified by immediate crisis were written into ordinary law. Administrative searches became “domiciliary visits.” House arrests became “individual administrative control and surveillance measures.” Different names, same powers.
The state of emergency officially ended on November 1, 2017. The powers didn’t.
In 2021, another law strengthened and made these measures truly permanent. No more expiration dates. No more renewals needed. What was sold as a temporary response to terrorism is now simply how France works.
The emergency ended. The surveillance didn’t.
COVID: The Rehearsal
The pandemic gave governments worldwide a trial run for mass surveillance infrastructure. Contact tracing apps. Vaccine passports. QR codes to enter buildings. Digital health certificates. Movement restrictions enforced through technology.
In France, the “passe sanitaire” became the “passe vaccinal.” Each version expanded requirements and tightened enforcement.
When the pandemic officially ended, did all this infrastructure disappear?
The apps are still on phones. The databases still exist. The legal frameworks are still in place. The technology has been tested, normalized, and is ready for the next crisis, health-related or not (it may even be recycled for the social media ID verification).
The ratchet clicked forward. It hasn’t clicked back.
The Global Pattern
France isn’t unique. The pattern repeats everywhere.
The PATRIOT Act (United States)
Passed in the panic after 9/11 with almost no debate. Supposed to be temporary, many provisions had sunset clauses.
Those sunsets were extended. And extended again. And made permanent. The NSA’s mass surveillance programs, revealed by Edward Snowden in 2013, operated under legal authorities created by “temporary” post-9/11 legislation.
Twenty-four years later, the core surveillance powers remain intact.
The Investigatory Powers Act (United Kingdom)
After Snowden revealed mass surveillance, the UK didn’t scale back. They legalized everything retroactively and expanded further. The 2016 law, nicknamed the “Snoopers’ Charter,” gave the government some of the most extensive surveillance powers of any “democracy.”
Exposure didn’t stop it. It accelerated it.
China’s Social Credit System: The Convenient Myth
The West points to China as a warning. But the system Brussels is actually building is more comprehensive than anything Beijing operates.
The unified AI-powered “social credit score” that rates every Chinese citizen’s behavior does not exist. It never did. Researchers from Yale, Stanford, Leiden University, and the Mercator Institute for China Studies have documented this extensively. What China actually operates is a fragmented ecosystem of business compliance tools, court enforcement blacklists for judgment defaulters, and largely abandoned local pilot programs. No universal score. No algorithmic behavior tracking. The most famous pilot in Rongcheng? After criticism from Chinese state media itself, central authorities explicitly forbade using personal scores “to punish citizens.” Survey data shows only 7% of Chinese citizens are even aware they might be in a pilot program.
Meanwhile, the EU is building mandatory identity verification linking biometrics to online activity for every citizen, no opt-out.
We’re not avoiding China’s path. We’re taking a worse one while telling ourselves comforting stories about it.
The British Laboratory
If you think mass surveillance leading to prosecution for opinions is theoretical, look at Britain.
In 2023, UK police made over 12,000 arrests for online speech. That’s 30 arrests per day for social media posts. The number has more than doubled since 2017.
The laws enabling this are deliberately vague. Section 127 of the Communications Act 2003 criminalizes messages of “grossly offensive” character. The Malicious Communications Act 1988 covers content causing “annoyance, inconvenience, or anxiety.” Not threats. Not incitement. Anxiety.
In 2022, a 51-year-old veteran was arrested at his home in handcuffs. His crime? Reposting a meme on Facebook. When he asked why he was being detained, the officer’s response was recorded on video: “Someone has been caused anxiety based on your social media post. That is why you have been arrested.”
Anxiety. From a meme. Handcuffs.
Then came summer 2024. After a stabbing in Southport killed three children, false information spread that the attacker was an asylum seeker. Riots followed. The government’s response wasn’t just to prosecute rioters. It was to prosecute people who posted opinions from their homes.
Jordan Parlour: 20 months in prison for a Facebook post. His crime was writing that people should “smash” a hotel housing migrants. The post received 6 likes.
Julie Sweeney: 15 months for a single Facebook comment. She’s a 53-year-old carer for her disabled husband. She wrote something vile. But 15 months in prison for words typed in anger on a screen?
The Director of Public Prosecutions stated plainly: “We do have dedicated police officers who are scouring social media. Their job is to look for this material, and then follow up with identification, arrests, and so forth.”
Scouring. Their full-time job is reading your posts.
According to Policy Exchange, investigating online speech has consumed 666,000 hours of police time. Meanwhile, 90% of all crime in the UK went unsolved in 2023. 89% of violent and sexual offenses went unsolved in 2024. But they have the resources to arrest people for memes.
And it goes further than posts. In October 2024, a British army veteran was convicted for praying silently near an abortion clinic. He didn’t speak. He didn’t hold a sign. He didn’t interact with anyone. He stood with his head slightly bowed for a few minutes. The judge ruled that his posture “would have been perceptible to an observer” as prayer. Guilty. Ordered to pay £9,000.
The crime: thinking the wrong thoughts in public.
Freedom House, the international organization that monitors democracy worldwide, downgraded the UK’s freedom score in 2025 specifically because of “the proliferation of criminal charges, arrests, and convictions concerning online speech, including speech protected under international human rights standards.”
This is Britain. Common law. Magna Carta. The country that invented modern liberty.
When the UK Prime Minister was asked about the social media prosecutions, he said: “Whether you’re directly involved or whether you’re remotely involved, you’re culpable.”
Remotely involved. Meaning: you posted something.
Britain isn’t a warning about what surveillance might become.
Britain is a demonstration of what surveillance already is.
This is still happening. It began two or three years ago and continues today, in 2026, while you're reading this.
The Progression You’re Living Through
Let me map it for you:
2020s:
- ID required to access porn sites (France 2024, UK pending)
- Social media bans for minors with age verification (Australia 2025, France 2026)
- EU coalition lobbying for continent-wide implementation
- Real-time content scanning "for child safety" (EU ChatControl proposals)
2030s (if current trajectories continue without resistance):
- ID required for all internet access
- Digital identity linked to financial services
- Social media behavior affects credit scores
- AI monitoring of private messages “for safety”
Each step will seem small. Each step will be justified by a crisis. Each step will be “temporary” until it isn’t.
Does this progression seem paranoid? The people in 1995 would have found the 2025 situation paranoid. The people in 2015 couldn’t have imagined mandatory facial recognition for social media or COVID hysteria.
The future arrives gradually, then suddenly.
Why Does It Never Reverse?
You might ask: why doesn’t the ratchet ever click backward? When emergencies end, why don’t the powers get rescinded?
Institutional inertia.
Once a capability exists, the people who operate it have jobs that depend on it continuing. Bureaucracies don’t voluntarily shrink themselves.
Political cowardice.
No politician wants to be the one who “weakened security” before the next attack. Rolling back surveillance powers is politically risky; expanding them is politically safe.
Normalized expectations.
Once people get used to something, removing it feels like loss. Soldiers in train stations become part of the landscape. ID checks become routine. Objecting starts to feel weird.
Sunk cost.
Billions have been spent on surveillance infrastructure. Admitting it was unnecessary or excessive means admitting waste. Easier to keep using it.
The next crisis.
There’s always another crisis coming. Why dismantle powers you might need again soon?
The ratchet is designed to only turn one way. Every mechanism reinforces the direction of travel.
The Numbers
In France alone, since 2015:
- 14 laws related to terrorism have been adopted
- Emergency measures have been renewed or made permanent 7 times
- Surveillance capabilities have been expanded in every single law
- Zero significant surveillance powers have been rescinded
The score is 14-0. The direction is one-way.
Globally, the pattern holds. Name one major democracy that has significantly rolled back surveillance powers in the last 25 years.
You can’t.
The Naive Objection
At this point, someone will say: “You’re being naive. Terrorism is real. Pedophiles are real. Criminals are real. We need tools to fight them. What’s your alternative? Doing nothing?”
This is a fair challenge. It deserves a serious answer.
Security without mass surveillance isn’t hypothetical. It exists. It works. And it’s more effective than the dragnet approach.
Targeted vs. mass surveillance
There are two models of security intelligence.
Model one: Collect everything on everyone. Store it forever. Search it when needed. Hope the needle appears in the haystack.
Model two: Identify specific threats through traditional investigation. Get warrants. Surveil specific targets. Focus resources on actual suspects.
Model two is what worked before the mass surveillance era. It’s what still works when agencies actually use it.
The 2015 Paris attackers were known to intelligence services. They were on watch-lists. They had been flagged by multiple countries. The information existed. It wasn't a collection failure. It was an analysis failure: too much noise drowning out the signal.
Study after study has shown the same pattern. The NSA’s mass surveillance programs, revealed by Snowden, did not prevent a single terrorist attack that couldn’t have been prevented through traditional targeted methods. The Privacy and Civil Liberties Oversight Board reviewed the evidence. The conclusion: mass collection wasn’t the decisive factor in any case.
More data isn’t better intelligence. It’s often worse intelligence, more noise, more false positives, more resources wasted chasing ghosts while real threats slip through.
What privacy-respecting security looks like
Warrant-based targeted surveillance: When there’s specific suspicion of a specific person, get a warrant from a judge, surveil that person. This is how democracies have always handled serious threats. It works.
Metadata analysis without content access: You can identify suspicious patterns (unusual communication networks, travel patterns, financial flows) without reading everyone’s messages. This respects privacy while enabling detection.
Human intelligence: Informants, undercover operations, community relationships. Less glamorous than technology. More effective against organized threats.
International cooperation: Sharing specific threat information between agencies. Doesn’t require every country to build domestic mass surveillance.
Platform cooperation with warrants: When law enforcement has specific evidence of specific crimes, platforms can be legally compelled to provide specific data. This already exists. It already works.
Encryption with lawful access for specific targets: End-to-end encryption protects everyone. When specific criminal evidence exists, targeted device access (with warrant) can retrieve information. No backdoors that weaken security for everyone.
The honest trade-off
I’m not arguing that security and privacy never conflict. Sometimes they do. Sometimes targeted surveillance of genuine threats is necessary and justified.
The argument is about proportionality and effectiveness.
Mass surveillance of entire populations is disproportionate. The intrusion on millions of innocent people cannot be justified by marginal gains in security.
Mass surveillance is also ineffective. Resources spent monitoring everyone are resources not spent investigating actual threats. Haystacks don’t help you find needles.
And mass surveillance is dangerous. The infrastructure built for “security” will be used for other purposes. It always is. The emergency powers become permanent powers become everyday powers.
The choice isn’t between surveillance and chaos. The choice is between effective, targeted, rights-respecting security and ineffective, mass, rights-destroying theater.
We know which one works.
We’re being sold the other one.
What This Means For You
Every measure you accept today becomes the baseline for tomorrow.
When you accept ID verification for porn sites, you make ID verification for social media possible.
When you accept ID verification for social media, you make ID verification for all websites possible.
When you accept facial recognition for platforms, you make facial recognition for physical spaces possible.
When you accept surveillance “for emergencies,” you make surveillance permanent.
This is not a slippery slope fallacy. This is documented history. The slope has been slipped. We’re watching it happen in real time.
The only way to stop the ratchet is to refuse the first click. Once it’s clicked, it doesn’t go back.
The Question
The people who built the surveillance infrastructure that existed in 1990 never imagined what we have in 2025.
The people building today’s infrastructure are not imagining what will exist in 2060.
But someone will inherit it. Someone will use it. And by then, there will be no memory of a time before the checkpoints, before the ID verification, before the facial recognition, before the permanent state of emergency.
Your children will grow up thinking this is normal.
Is that the world you want to leave them?
THE ASYMMETRY, Who Watches The Watchers?
There’s a question that surveillance advocates never answer.
If transparency is so important for safety, if “nothing to hide means nothing to fear,” if tracking everyone’s activity is necessary for the public good, then why doesn’t it apply to them?
Why do the people who build surveillance systems exempt themselves from surveillance?
The answer tells you everything you need to know about what this is really about.
The Rules Are For You
In 2021, a scandal emerged that should have ended careers and triggered criminal investigations.
Ursula von der Leyen, President of the European Commission, personally negotiated a €35 billion vaccine contract with Pfizer’s CEO via text messages. When journalists and the European Ombudsman requested these messages, as required by transparency laws, the Commission refused.
Then they claimed the messages had been deleted.
The President of the European Commission, the person overseeing digital regulation for 450 million people, destroyed evidence of public negotiations worth tens of billions of euros. No consequences. No prosecution. No accountability.
This is the same European Union whose member states are now building the infrastructure to require you to verify your identity to post on social media.
They won’t show you their text messages. But they want to see yours.
This isn’t an isolated case. It’s the pattern.
Politicians who vote for surveillance laws use encrypted phones and private servers. Intelligence officials who build mass surveillance programs communicate through channels they know aren’t monitored. Corporate executives who harvest your data pay premium prices for privacy services that shield their own families.
The people who tell you transparency is essential practice opacity.
The people who say “if you have nothing to hide, you have nothing to fear” hide everything.
The rules are for you. Not for them.
Two Justice Systems
Let me show you how this works in practice.
When you hide something:
You don’t declare €50 from selling old clothes on Vinted. The tax authority’s algorithm flags the discrepancy. You receive a letter demanding payment plus penalties. If you don’t pay, they garnish your wages. If you resist, you face prosecution.
The system works perfectly.
When they hide something:
A multinational corporation shifts €2 billion in profits to a shell company in Luxembourg. Their army of lawyers structures it as “tax optimization” rather than evasion. The scheme is technically legal because they helped write the laws. When caught, they negotiate a settlement for pennies on the euro. No executive faces charges. The company issues a press release about their “commitment to compliance.”
The system works perfectly, for them.
When your data is exposed:
Your medical records, browsing history, and private messages are stored in databases you don’t control. When those databases are breached, your information spreads across the internet. You have no recourse. You can’t sue effectively. You can’t get the data back. You just have to live with the violation.
The system shrugs.
When their data might be exposed:
A journalist requests a politician’s communications under freedom of information laws. The request is denied for “national security.” Appeals take years. Documents that are eventually released are so heavily redacted they’re meaningless. The politician retires comfortably before any accountability arrives.
The system protects its own.
The Privacy Premium
Privacy has become a luxury good.
If you’re wealthy, you can:
- Hire lawyers to structure your affairs through opaque corporate entities
- Use private banking that doesn’t share information with standard databases
- Employ security consultants who scrub your digital footprint
- Send your children to schools that prohibit phones and social media
- Live in gated communities with security that keeps the surveilled world at a distance
- Access healthcare through private clinics that don’t feed data into national systems
If you’re not wealthy, you get:
- Default privacy settings that expose everything
- “Free” services that monetize your data
- Public systems that require full transparency to access benefits
- Schools that push children onto monitored platforms
- Neighborhoods saturated with cameras and sensors
- Healthcare that conditions treatment on data sharing
The rich buy their way out of the surveillance society. Everyone else is the product.
The Double Bind
This asymmetry creates a vicious trap.
When ordinary people demand privacy, they’re told it’s suspicious. “Why do you need privacy? What are you hiding? Only criminals want to hide things.”
When elites maintain privacy, it’s called “security.” Executive privilege. Trade secrets. National interest. Personal safety concerns.
The same behavior, wanting to control who sees your information, is criminal when you do it and prudent when they do it.
This isn’t hypocrisy. It’s hierarchy.
Privacy is a marker of power. The more power you have, the more privacy you’re entitled to. The less power you have, the more transparent you’re required to be.
Surveillance flows downward. Opacity flows upward.
What They Know About You
Let me make the asymmetry concrete.
What the system knows about you:
- Every purchase you make with a card or phone
- Every website you visit (and how long you stay)
- Every search you perform
- Every message you send on monitored platforms
- Every place you go (via phone location, cameras, license plate readers)
- Every person you communicate with (and how often)
- Your medical history, prescriptions, and health queries
- Your financial situation in granular detail
- Your political opinions (inferred from behavior patterns)
- Your psychological profile (built by algorithms)
- Your face, linked to all of the above
What you know about them:
- What they choose to tell you
- What journalists manage to uncover despite obstruction
- What leakers risk their freedom to reveal
- What courts occasionally force into the open, years later, heavily redacted
The information asymmetry is total. They know everything about you. You know almost nothing about them.
This is not a relationship between a government and its citizens. This is a relationship between a warden and inmates.
And make no mistake: the government is supposed to serve you. You pay them. It is THEIR duty to act in your interest and to be FULLY transparent. But they don't, and more than a few are criminals, credibly suspected of crimes, or connected to those who are.
The Accountability Fantasy
Surveillance advocates claim that oversight prevents abuse. There are laws. There are courts. There are watchdogs.
Let’s examine this fantasy.
The CNIL (France’s data protection authority):
In 2024, over 5,600 data breaches were reported in France. The CNIL issued fines totaling a fraction of the damage caused. No executive went to prison. No major company was shut down. The breaches continued.
The watchdog barks but doesn’t bite.
Judicial oversight:
The SILT law that made emergency surveillance permanent includes a provision for “judicial oversight” of certain measures. In practice, judges approve the vast majority of requests. They don’t have the resources or technical expertise to meaningfully evaluate surveillance programs. Oversight becomes rubber-stamping.
The judge signs but doesn’t stop.
Parliamentary control:
Legislators receive classified briefings about surveillance programs. They’re not allowed to discuss what they learn. They can’t consult outside experts. They can’t warn the public. Even when they’re disturbed by what they see, they’re legally prohibited from acting on it.
The representatives know but can’t speak.
International oversight:
The European Court of Human Rights occasionally rules against mass surveillance. Governments ignore the rulings or make cosmetic changes. Enforcement mechanisms don’t exist. Years pass. Nothing changes.
The court rules but can’t enforce.
Oversight without enforcement is theater. It creates the appearance of accountability while ensuring its absence.
The Question of Trust
Surveillance systems require you to trust:
- That current laws will be applied fairly
- That future laws won’t be worse
- That everyone with access will behave ethically
- That the data won’t be hacked or leaked
- That political changes won’t lead to abuse
- That you’ll never become a target
- That the system won’t make mistakes about you
This is an enormous amount of trust to place in institutions that have repeatedly demonstrated they don’t deserve it.
Ursula von der Leyen deleted her text messages. Intelligence agencies lied to oversight committees. Police officers abused databases to stalk ex-partners. Corporations sold data they promised to protect. Governments used surveillance against journalists and activists. And those are just the cases we know about.
Why would you trust a system that doesn’t trust you?
Why would you accept transparency obligations from people who refuse transparency themselves?
The Real Relationship
Here’s what the asymmetry reveals about the true nature of modern governance:
You are not a citizen to be represented.
You are a subject to be managed.
Citizens have rights that the state must respect. Subjects have permissions that the state grants and revokes.
Citizens delegate power upward and demand accountability. Subjects have power imposed downward and provide accountability.
Citizens and their government have a relationship of mutual obligation. Subjects and their rulers have a relationship of control.
The surveillance asymmetry shows you which relationship you actually have.
What Would Symmetry Look Like?
Imagine if the rules applied equally.
Every government communication archived and publicly searchable. Every politician’s calendar, meeting, and message available for citizen review. Every police officer’s bodycam footage automatically released. Every surveillance request logged and auditable. Every database accessible to the people it contains data about.
If surveillance is necessary for safety, let it be universal. If transparency is required for trust, let it flow in both directions. If “nothing to hide means nothing to fear,” let the powerful prove it first.
They will never accept this. They will call it “dangerous” and “unworkable” and “a threat to national security.”
Which tells you everything about what surveillance is actually for.
THE SILENT VIOLENCE, How It Changes You
The worst thing about mass surveillance isn’t what they do to you.
It’s what you do to yourself.
The Invisible Prison
You don’t need to be punished for surveillance to control you. You just need to know you might be watched.
This is the genius of the panopticon, the prison design where a central tower can see into every cell, but prisoners can’t see into the tower. They never know if they’re being observed at any given moment. So they behave as if they’re always being observed.
The guard doesn’t need to watch everyone. The prisoners watch themselves.
This is what mass surveillance does to a society. It doesn’t need to actively monitor every person every moment. It just needs to create the possibility of being monitored. The uncertainty does the rest.
You become your own guard.
The Calculations You Don’t Notice
Here’s what surveillance does to your mind, so gradually you don’t notice it happening.
Before you search:
You want to understand something. A medical symptom. A political movement. A controversial idea. A dark curiosity. Your fingers hover over the keyboard.
And you hesitate.
“What if this shows up somewhere? What if someone sees this? What if I need to explain this later?”
So you don’t search. Or you search something safer. Or you phrase it carefully to seem less suspicious to an algorithm you’ve never seen.
You’ve just censored yourself. No one told you to. No law required it. You did it automatically.
Before you post:
You have an opinion. Something true. Something that might help others. Something you believe matters.
And you hesitate.
“What if my employer sees this? What if it’s screenshot and shared? What if this comes up in ten years when the context has changed?”
So you don’t post. Or you water it down. Or you post anonymously, except soon that won’t be possible anymore.
You’ve just silenced yourself. The surveillance didn’t need to act. Its existence was enough.
Before you associate:
Someone interesting wants to connect. A group exploring ideas you’re curious about. A community discussing things you care about.
And you hesitate.
“What if being associated with these people flags me somehow? What if the group is monitored? What if membership in this community becomes a problem later?”
So you don’t join. Or you join but don’t participate. Or you participate but hold back.
You’ve just isolated yourself. The network of potential connections, potential growth, potential solidarity, you’ve cut yourself off from it. Not because you were forced to. Because you were afraid.
The Thoughts You Stop Thinking
This is the deepest level of control: when surveillance shapes not just what you say but what you think.
When you know that any idea you explore might be recorded, you stop exploring certain ideas. When you know that any question you ask might be logged, you stop asking certain questions. When you know that any connection you make might be mapped, you stop making certain connections.
A researcher wants to study radicalization. She needs to understand how extremist forums work, what arguments they use, how they recruit. But she can’t Google extremist content without worrying her search history will flag her as a suspect. So she doesn’t. The paper doesn’t get written. The insight doesn’t happen. The prevention strategy never develops.
The range of your thought narrows. The boundaries of your mind contract. You become smaller, not because anyone explicitly told you to shrink, but because you’re constantly aware of the invisible walls.
This is the silent violence of surveillance. It doesn’t hit you. It doesn’t imprison you. It doesn’t even threaten you directly. It just creates an atmosphere where certain ways of being human become too risky.
Over time, you forget you ever thought differently. The cage becomes invisible because you’ve internalized its dimensions.
The Social Cooling
Researchers have a term for this: social cooling.
Just as global warming describes the climate effects of carbon emissions, social cooling describes the social effects of data collection. When people know they’re being watched and measured, they become more conformist, more risk-averse, more afraid to stand out.
The symptoms of social cooling:
Increased conformity.
People gravitate toward the mainstream, avoiding positions that might be flagged as unusual or extreme. Eccentricity becomes dangerous. Originality becomes risky.
Reduced experimentation.
People stop trying new things because failure might be recorded permanently. The ability to make mistakes and learn from them, essential for growth, is compromised.
Self-commodification.
People start managing themselves like brands, curating their profiles for algorithmic approval. Authenticity gives way to performance.
Anticipatory compliance.
People start following rules that don’t exist yet, avoiding anything that might become problematic in an uncertain future. They don’t just follow the law, they try to guess what the law might become and comply in advance.
Erosion of trust.
When everyone knows everyone else is being watched, the basis for genuine connection weakens. Every interaction carries the possibility of being recorded and used.
A society experiencing social cooling becomes less creative, less dynamic, less free, without any explicit repression. The control is invisible because it operates through the choices people make about themselves.
What You Stop Doing
Let me be concrete about what disappears.
You stop discussing sensitive medical issues online because your health becomes a liability. Insurance companies, employers, and algorithms might use it against you.
You stop exploring unconventional political ideas because having the “wrong” opinions on record could affect your career, your relationships, your freedom.
You stop seeking help for mental health struggles because the stigma is bad enough without a permanent record that follows you everywhere.
You stop being honest in private messages because “private” doesn’t exist anymore. Every conversation might be scanned, flagged, or leaked.
You stop creating art that challenges because provocation that might be celebrated in one era could be prosecuted in another, and your work is permanently attributed to you.
You stop asking questions that make you vulnerable, about sexuality, about faith, about doubts, because vulnerability requires privacy, and privacy no longer exists.
You stop being young and stupid online, making mistakes, saying regrettable things, learning through error, because every mistake is forever and there are no second chances.
In short: you stop being fully human.
The Generation That Never Knew
For people who grew up before mass surveillance, there’s still a memory of what freedom felt like. A sense that something has been lost. An ability to recognize the cage, even while inside it.
But what about those who never knew anything else?
Children born today will grow up with facial recognition as normal. ID verification for every service as expected. Permanent records of everything they do as simply how the world works.
They won’t experience surveillance as an intrusion because they’ll have no experience of its absence. The cage won’t feel like a cage because they’ll never have lived outside it.
This is perhaps the greatest violence: not what surveillance does to us, but what it will do to them. Not the freedoms we lose, but the freedoms they’ll never know they’re missing.
We can remember privacy. They won’t even have the vocabulary for it.
The Acceptable Thoughts
As surveillance narrows the range of acceptable expression, something sinister happens to the range of acceptable thought.
Ideas that can’t be safely expressed start to feel dangerous to even consider. The boundary between “I can’t say this” and “I shouldn’t think this” blurs. Self-censorship becomes self-control.
This is how you get a society where everyone believes roughly the same things, not because the ideas are correct, but because deviation is too risky. Conformity becomes safety. Orthodoxy becomes survival.
And here’s the darkest part: in such a society, the truth becomes unspeakable. Because truth is often uncomfortable, often challenging, often at odds with the official narrative. A society optimized for surveillance is a society where truth-telling is personally dangerous.
When speaking the truth can destroy your life, people stop speaking it.
When even thinking the truth can be detected through behavioral patterns, people stop thinking it.
This is the end state of mass surveillance: not a world where dissent is punished, but a world where dissent becomes literally unthinkable. The range of permitted thought becomes the range of possible thought.
The Violence You Can’t See
When someone puts a gun to your head, you know you’re being coerced. You might comply, but you retain your inner freedom. You know the truth of the situation.
Mass surveillance is more insidious because the coercion is invisible. There’s no gun. There’s no explicit threat. There’s just an atmosphere, a possibility, a vague awareness that shapes every choice you make.
You can’t rebel against what you can’t see. You can’t resist what you’ve internalized. You can’t fight for freedoms you’ve forgotten you ever had.
This is violence that leaves no bruises. Control that requires no commands. Oppression that feels like personal choice.
You censored yourself. You chose to stay silent. You decided to conform.
They didn’t make you do anything.
They just made sure you would make yourself do it.
THE BREACH, When (Not If) It All Leaks
Everything I’ve written so far assumes the surveillance system works as intended. That the data is collected, stored, and used by the authorities who are supposed to have it.
But here’s the thing: it never works as intended.
Every database gets breached. Every system gets hacked. Every centralized collection of sensitive information eventually spills into the hands of people it was never meant for.
This isn’t pessimism. This is the historical record. And what’s happening right now in France proves it.
The French Data Apocalypse
2024 was the worst year for data breaches in French history. 2025 was worse still, and 2026 is on track to top them both.
Let me give you the numbers, not abstract statistics, but the actual list of what happened:
February 2024: Viamedis and Almerys
The two companies that handle health insurance reimbursements for virtually all of France were breached simultaneously. Result: the personal and medical data of 33 million people exposed. Social security numbers. Health coverage details. The foundation for identity theft at a national scale.
March 2024: France Travail (formerly Pôle Emploi)
The national employment agency was breached. 43 million people affected, essentially the entire working population of France, current and past. Names, addresses, social security numbers, employment history. Everything you’d need to steal someone’s identity or destroy their financial life.
The State penalizes the State, using the State’s money (which is to say, the people’s money), for failing to properly protect the data that the State forces citizens to entrust to it, with no personal accountability and no individual sanctions whatsoever. I’ll stop here…
Late 2024: The Cascade
Then it started coming in waves. Telecom providers: Free and SFR, millions of customer records exposed. Retail chains: Boulanger, Cultura, Auchan, Picard, purchase histories and personal details leaked. Financial services: the Harvest software breach cascaded to its clients, including the insurer MAIF and the banking group BPCE.
2025: The Sensitive Targets
Multiple hospitals via Mediboard: 750,000 complete medical records extracted and put up for sale. The Fédération Française de Football: member data exposed. UNSS (school sports): children’s data compromised. And the Fédération Française de Tir, France’s shooting federation, with its list of every legal firearm owner in the sport shooting community.
The Numbers:
According to the CNIL, France’s data protection authority: 5,629 data breaches were reported in 2024. That’s a 20% increase from the previous year. The number of breaches affecting more than 1 million people doubled. According to security researchers: 48 major French organizations were breached in a single year. And that’s just the ones we know about.
What “Data Breach” Actually Means
Let me show you what “data breach” actually means with one specific example.
The Fédération Française de Tir, France’s shooting federation, was hacked. Member data was extracted and is now circulating in criminal networks.
Think about what that database contains, and what it reveals once crossed with other leaked data:
- Names and addresses of legal gun owners
- What weapons they own
- What type, what caliber
- Where they’re stored
- When they practice (and therefore when they’re not home)
This isn’t credit card numbers that can be cancelled. This is a map of every legal firearm owner in France’s sport shooting community.
Criminals now know exactly where to steal weapons. They could know which houses have what guns. They could know the schedules of the owners.
When those stolen weapons are used in crimes, who gets investigated first? The legal owners whose guns were taken. Their data was leaked, their property was stolen, and now they’re suspects.
The breach didn’t just expose information. It created a supply chain for armed crime and a list of convenient scapegoats.
This is what centralized databases produce. Not security. Infrastructure for criminals.
What This Means For You
Let’s translate these statistics into reality.
If you’re French, your data has almost certainly been compromised. Not “might have been.” Has been. Multiple times. From multiple sources.
Right now, somewhere on the internet, there exists:
- Your name, address, phone number, and email
- Your social security number
- Your health insurance information
- Your employment history
- Your purchasing habits
- Possibly your medical records
- Possibly your banking details
- Possibly your family composition
This information is being bought and sold. It’s being used for targeted phishing attacks. It’s being combined with other leaked datasets to build comprehensive profiles. It’s sitting in the hands of criminals, foreign intelligence services, and anyone else willing to pay.
You had nothing to hide. But now everything is exposed anyway.
The Breach Is Guaranteed
Here’s what people who design surveillance systems never admit: security at scale is impossible.
Every centralized database is a target. The more valuable the data, the more attractive the target. The more attractive the target, the more resources attackers will devote to breaching it.
And attackers have fundamental advantages:
They only need to succeed once.
Defenders need to succeed every time, forever. One mistake, one vulnerability, one insider threat, and everything spills.
They have time.
A database might be secure today. But vulnerabilities are discovered constantly. What’s secure in January might be breachable by March. Attackers can wait.
They have motivation.
Nation-states want intelligence. Criminals want money. Activists want exposure. The incentives to breach are powerful and diverse.
They have the numbers.
Thousands of hackers are probing every major system every day. Automated tools scan for vulnerabilities constantly. The attack surface is infinite; the defense resources are finite.
This is why every major surveillance database eventually leaks. Not because the designers are stupid. Not because security isn’t taken seriously. But because the mathematics of the situation guarantee failure over time.
It’s not a question of if. It’s a question of when.
The ID Verification Database
Now apply this to the new ID verification requirement for social media.
France, with the EU close behind, is building a system that will link:
- Your legal identity (government ID)
- Your biometric data (facial recognition)
- Your social media accounts
- Your online activity and posting history
All in one centralized system. Or multiple connected systems that can be cross-referenced.
When, not if, this system is breached, attackers will have:
- Your real name linked to every account you’ve ever had
- Your face linked to everything you’ve ever posted
- Your activity patterns revealing your political views, sexual preferences, health concerns, financial situation, social connections
- Everything needed for perfect identity theft
- Everything needed for targeted blackmail
- Everything needed to destroy your life
And unlike a password, you can’t change your face. Unlike a credit card, you can’t cancel your biometric data. Once it’s leaked, it’s leaked forever.
The ID verification system isn’t protecting you from anything. It’s creating a master target that, when breached, will cause damage beyond anything we’ve seen before.
The Insider Threat
Breaches don’t only come from outside. They come from inside too.
Every surveillance database is accessed by humans. Administrators. Analysts. Contractors. Support staff. Each person with access is a potential point of failure.
Some will be bribed. Some will be blackmailed. Some will be ideologically motivated. Some will just be curious, looking up ex-partners, celebrities, neighbors.
In the United States, NSA employees were caught using surveillance systems to spy on love interests, so often it had its own name: LOVEINT. Police officers regularly abuse database access for personal reasons. Every system with human operators has this problem.
The surveillance apparatus being built isn’t just vulnerable to hackers. It’s vulnerable to every single person who can access it. And the more powerful the system, the more people need access, and the more potential points of failure exist.
What Happens After
When your data leaks, here’s what you can expect:
Phishing attacks that are terrifyingly personalized.
Scammers who know your name, your bank, your recent purchases, your family members. Fake emails that are nearly impossible to distinguish from real ones because they contain information only legitimate sources should have.
Identity theft at a level that’s almost impossible to recover from.
Someone opens credit cards in your name. Files tax returns in your name. Gets medical treatment in your name. Commits crimes in your name. Years of your life spent trying to prove you’re you.
Targeted harassment from anyone who wants to hurt you.
Your home address available to stalkers. Your medical conditions available to employers who won’t tell you why you didn’t get the job. Your private struggles available to anyone who wants leverage.
Blackmail and extortion.
Especially for anything embarrassing in your history. Private messages you thought were deleted. Searches you thought no one would see. Connections you thought were private.
Discrimination that you can’t prove.
Insurance rates that go up for mysterious reasons. Job applications that get rejected without explanation. Loans that are denied based on factors you’re never told about.
And the worst part: there’s almost nothing you can do about it. The data is out. It doesn’t come back. The damage continues indefinitely.
Physical violence that follows you into the real world.
This isn’t hypothetical, it’s happening now.
The cryptocurrency community has documented dozens of “wrench attacks”, home invasions where victims are tortured until they transfer their holdings. In France, Belgium, and across Europe, people have been kidnapped, beaten, and threatened at gunpoint.
The attackers know exactly who to target. How? Data breaches. Exchange hacks. Leaked customer lists. Anyone who ever bought Bitcoin through a platform that got breached is now on a list somewhere, a list that criminals buy and sell.
Until now, pseudonymous accounts provided some protection. You could discuss crypto, share strategies, build a following, without revealing your legal identity.
After mandatory ID verification? That protection vanishes.
Everyone who ever talked about Bitcoin, gold, silver, or any investment under a pseudonym faces a choice: submit your real identity and become a target, or disappear from social media entirely.
The people who comply will have their financial interests linked to their home addresses. The people who refuse will lose their communities, their audiences, their connections.
Either way, the surveillance state wins. Either you’re exposed, or you’re silenced.
And then there’s weaponized exposure.
You don’t need to break the law to be destroyed. You just need your name attached to something embarrassing, something out of context, something that can be spun.
A “leak” to a journalist. Your real identity connected to an old forum post. Your political opinions surfaced right before a job interview. Your medical searches shared with someone who wants leverage.
The pattern is already established: leak, then presumption of guilt by media, then public humiliation, then social death. No trial. No defense. No due process. Just exposure.
Right now, pseudonymity provides some protection. You can be doxxed, but it takes effort. After mandatory ID verification, the database does the work. One breach, one insider, one ‘accident,’ and the ammunition is ready.
Character assassination becomes industrialized.
The Deepfake Catastrophe
And then there’s what they’ll create with your face.
Deepfake technology has reached the point where anyone’s likeness can be inserted into pornographic content with minimal effort. The tools are free. The results are convincing. The victims have no recourse.
Right now, this mostly affects public figures and people who’ve shared photos online. It’s already devastating, revenge porn has destroyed lives, ended careers, driven people to suicide.
Now imagine what happens when facial recognition databases are breached.
Not just photos. Biometric data. The precise measurements of your face. The exact data needed to generate perfect deepfakes.
Millions of people, including teenagers between 15 and 18 whose biometrics are now in the system, will have their faces used to create pornographic content they never consented to. Content that will spread across the internet forever. Content that no take-down request will ever fully remove.
You can’t change your face. You can’t unsee what’s been made. You can’t explain to every future employer, partner, or family member that the video isn’t real.
The system that was supposed to “protect” people will provide the raw material for their violation.
The Promise They Can’t Keep
Every surveillance system is sold with promises of security. “Your data will be protected.” “We take privacy seriously.” “State-of-the-art encryption.” “Strict access controls.”
These promises are worthless. Not because the people making them are lying, but because the promises are impossible to keep.
No one can guarantee that a system will never be breached. No one can guarantee that an insider will never abuse access. No one can guarantee that today’s encryption won’t be broken by tomorrow’s technology. No one can guarantee that a future government won’t change the rules about how the data is used.
When they tell you the ID verification database will be secure, ask them: more secure than France Travail? More secure than your health insurance companies? More secure than all 48 organizations breached in a single year?
They can’t answer because they know the truth: it’s not a matter of if, but when.
The Irreversible Damage
Here’s the final cruelty: biometric data can’t be reset.
If your password leaks, you change it. If your credit card number leaks, you get a new one. If your address leaks, you can move.
But if your face leaks? If your fingerprints leak? If your iris scan leaks?
You can’t change your face. You can’t get new fingerprints. You can’t replace your biometrics.
When the ID verification database is breached, and it will be breached, the damage will be permanent. Your biometric identity, linked to your complete online history, will be in criminal hands forever.
No password reset will fix it. No fraud alert will contain it. No amount of “credit monitoring” will undo it.
This is what they’re building. This is what they’re demanding you submit to. A system that, when it fails, will fail catastrophically and irreversibly.
And they’re selling it to you as “protection.”
THE CHOICE, What You Can Still Do
I’m not going to lie to you.
The situation is bad. The infrastructure is being built. The laws are being passed. The population is largely asleep. The ratchet keeps clicking forward.
But it’s not over. Not yet.
You still have choices. They’re harder than they used to be, and they’ll be harder still tomorrow. But they exist. And making them matters, not just for you, but for the possibility of a different future.
Here’s what you can actually do.
First: Stop Complying In Advance
The surveillance state runs on voluntary compliance. Most people hand over their data willingly. They accept every default. They click “agree” without reading. They use their real names everywhere because it’s easier. They give up before they’re even asked.
Stop doing this.
Use the privacy options that exist.
Most services have settings that reduce data collection. They’re buried in menus because companies don’t want you to find them. Find them anyway. Turn off location tracking. Disable ad personalization. Opt out of data sharing. It takes an hour. Do it.
Stop using your real name where it’s not required.
Not every account needs to be linked to your identity. Use pseudonyms. Use separate emails for separate purposes. Compartmentalize your digital life. Make yourself harder to profile.
Refuse unnecessary data collection.
When a store asks for your phone number, say no. When an app asks for permissions it doesn’t need, deny them. When a service requires more information than necessary, find an alternative. Every piece of data you don’t give is a piece of data that can’t be leaked, sold, or used against you.
These aren’t revolutionary acts. They’re basic hygiene. But most people don’t do them, which means doing them already puts you ahead.
Second: Use Tools That Respect You
The technology exists to protect your privacy. It’s not as convenient as the surveillance alternatives, but it works.
For messaging:
Signal. End-to-end encrypted. Open source. Doesn’t store your messages. Used by journalists, activists, and anyone who takes communication security seriously. It works much like WhatsApp, but without Meta harvesting your metadata and contacts.
For email:
Proton Mail or Tuta (formerly Tutanota). Encrypted email services based in jurisdictions with stronger privacy laws. They can’t read your emails even if they wanted to. Free tiers are available.
For browsing:
Firefox with privacy extensions, or Brave. Brave blocks trackers by default; Firefox can be hardened to do the same. Neither sells your browsing history to advertisers.
For search:
DuckDuckGo, Startpage, or Brave Search. They don’t log your searches. They don’t build a profile of your interests. They just answer your questions.
For your phone:
Minimize apps, maximize settings. Every app is a potential leak. Remove what you don’t need. Disable what you don’t use. Check permissions regularly.
For the advanced:
Linux, VPNs, Tor. If you’re willing to learn, there are tools that provide much stronger protection. Linux is a free operating system that doesn’t spy on you. VPNs hide your internet activity from your ISP, though you’re shifting that trust to the VPN provider, so choose one carefully. Tor provides anonymity for sensitive browsing. None of these are necessary for everyone, but they’re available for those who need them.
The tools exist. Using them is a choice you can make today.
Third: Make It Expensive For Them
Surveillance systems assume compliance. They’re designed for a population that does what it’s told. When people resist, even partially, even symbolically, it creates friction. Enough friction and the systems become unworkable.
Use cash when possible.
Every card transaction is logged. Cash isn’t. You don’t need to go fully cash-only, but using it for sensitive purchases reduces your trackable footprint.
Pollute your data.
Click on ads you’d never buy. Search for things you don’t care about. Visit websites outside your normal pattern. The more noise in your signal, the less useful your profile becomes.
Support alternatives.
Use and promote services that respect privacy. Pay for them if you can, because if privacy tools can’t survive economically, they won’t exist. Every euro you spend on privacy-respecting services is a euro not spent on surveillance capitalism.
Make them work for it.
When companies demand unnecessary data, push back. File complaints. Ask why they need it. Request data deletion under GDPR. Most companies won’t fight: the bureaucratic cost of resisting your request usually exceeds the value of your data.
None of this will stop the surveillance state alone. But all of it increases the cost of surveillance, and systems that cost too much eventually fail.
Fourth: Say No Out Loud!
Individual action matters, but collective action matters more. And collective action starts with breaking the silence.
Talk about this.
Most people haven’t thought seriously about surveillance because no one in their life talks about it. Be the person who does. Not lecturing. Not preaching. Just sharing what you’ve learned, when it’s relevant, with people who might listen.
Normalize refusal.
When ID verification for social media becomes mandatory, some people will refuse. If you’re one of them, say so. If enough people publicly refuse, it becomes a political issue rather than individual deviance.
Support organizations fighting this.
Electronic Frontier Foundation. La Quadrature du Net. Access Now. Privacy International. These organizations fight legal and political battles that individuals can’t. They need money. They need attention. They need people to know they exist.
Vote on this issue.
Politicians who support surveillance need to know it costs them votes. Politicians who oppose it need to know they’ll be supported. Make it known that this matters to your vote. Write to your representatives. Make noise.
Refuse to be ashamed.
When you’re called paranoid for caring about privacy, don’t back down. When you’re told only criminals need anonymity, correct them. When you’re made to feel like a freak for resisting, remember: you’re not the freak. They’re the ones sleepwalking into a cage.
The people who benefit from surveillance want you isolated and silent. Every conversation you have about this reduces their power.
Fifth: Prepare For Worse
I wish I could tell you that doing all this will stop what’s coming. I can’t.
The trajectory is clear. The infrastructure is being built. The legal frameworks are expanding. The population is largely compliant. Even if we do everything right, things will probably get worse before they get better.
So prepare.
Reduce your dependence on systems that can be used against you.
The more you need digital services to live, the more leverage surveillance systems have over you. Build relationships that don’t depend on platforms. Maintain skills that don’t require connectivity. Have plans that work when the systems don’t.
Know your jurisdiction.
Privacy laws vary wildly between countries. Know what protections you have, and don’t have, where you live. If you have the ability to move, know which jurisdictions are better.
Build community.
The hardest part of resisting surveillance isn’t technical, it’s social. You need people who understand, who support, who won’t think you’re crazy. Find them. Whether online or offline, build connections with people who see what’s happening.
Maintain your inner freedom.
Even in the worst-case scenario, total surveillance, complete identification, no anonymity anywhere, you can still maintain some autonomy over your own mind. You can still know the difference between what you’re forced to say and what you actually think. You can still preserve an inner space that they don’t control.
This isn’t defeatism. It’s realism. The best outcomes require working for change while preparing for the alternative.
The Singapore Objection
There’s one counterargument I haven’t addressed. The sophisticated one.
“What about Singapore? Dubai? Qatar? China? These places have extensive surveillance, and they’re among the best places on Earth to live. Clean streets, low crime, functional government, economic dynamism. Maybe surveillance isn’t the problem. Maybe it’s who’s doing it and what they deliver in return.”
This argument deserves a real answer. Not dismissal. An answer.
First: selection bias.
For every Singapore, there are dozens of surveilled states that are hellholes. North Korea is surveilled. Turkmenistan is surveilled. Most of the world’s dictatorships have extensive monitoring systems, and most of them are places you’d never want to live.
Surveillance correlates with authoritarianism far more often than it correlates with prosperity. The exceptions are exceptions precisely because they’re rare.
Second: the trade-off matters.
Singapore offers a deal. An explicit social contract. You accept restrictions on speech, on political organization, on certain freedoms. In exchange, you get: some of the best public housing on Earth. World-class healthcare. Personal safety that Europeans can only dream of. Property rights that are actually enforced. A government that demonstrably works. Economic opportunity in a country that went from swamp to first-world status in one generation.
Dubai offers a similar deal. No income tax. Business freedom that makes European entrepreneurship look medieval. Infrastructure that functions. Security you can feel walking the streets at 3 AM. A future you can build.
These places said: “Give us some of your freedom. We’ll give you prosperity, security, and competence.”
And they delivered.
Now look at France.
What’s the deal being offered?
You give up your privacy. Your anonymity. Your ability to speak freely online. Your data, your face, your identity linked to everything you do.
And in exchange you get… what exactly?
Declining purchasing power. Crumbling infrastructure. A healthcare system bleeding doctors. Schools that fail children. Pensions that might not exist when you need them. Streets you’re afraid to walk at night. A bureaucracy that treats you as a suspect. Taxes that approach confiscation. A government that can’t keep its own databases secure for six months.
France isn’t offering Singapore’s trade. France is demanding Singapore’s surveillance while delivering Venezuela’s competence.
That’s not a social contract. That’s theft.
Third: you cannot import a piece of a system.
Singapore and the other places cited work because of a specific culture, a specific history, a specific scale, a specific quality of governance developed over decades. You cannot extract “surveillance” from that package, paste it onto France, and expect Singapore’s results.
What works in a city-state of 6 million with Confucian cultural foundations, genuine elite competence, and a population that broadly trusts its government… does not transfer to a nation of 68 million with adversarial government-citizen relations, endemic institutional corruption, and a bureaucratic class that views citizens as resources to be extracted.
The surveillance isn’t what makes Singapore work. The competence is. The surveillance is what Singapore can afford because it delivers everything else.
France has neither the competence nor the trust nor the results. It just wants the control.
Fourth: intention matters.
When Singapore or China surveils, the stated intention is social order and national survival. You can debate whether that’s justified, but the intention is at least coherent with the outcome. They want a functioning society and they’re building one.
When France surveils, what’s the intention?
Not child protection, we’ve established that. Not security, the police solve 10% of crimes. Not prosperity, the economy stagnates. Not trust, they delete their own text messages while demanding yours.
The intention is control for its own sake. Power without purpose. Surveillance without the corresponding responsibility to deliver results.
Singapore’s surveillance might be a devil’s bargain, but at least the devil keeps his promises. France’s surveillance is just the devil.
The real lesson of Singapore:
If you’re going to argue that surveillance is acceptable, you need to demonstrate that you’ve earned the right to demand it. That you deliver what you promise. That the trade is fair.
Show me French streets as safe as Singapore’s. Show me French schools as effective as Singapore’s. Show me French bureaucracy as competent as Singapore’s. Show me French government as trustworthy as Singapore’s.
Then we can discuss whether the surveillance trade-off is worth it.
Until then, the comparison isn’t an argument for French surveillance.
It’s an indictment of it.
Nothing Is Permanent
None of this is guaranteed to work. The forces building the surveillance state have more money, more power, and more momentum than those of us who oppose it.
But that’s not the only reason to resist.
You resist because compliance makes you complicit. Because surrendering without a fight means you chose your cage. Because the person you are, or want to be, doesn’t just accept having their freedom taken.
You resist because even if you lose, you lose as someone who tried. And you might not lose. History is full of systems that seemed unstoppable until they stopped. The Soviet Union collapsed. Apartheid ended. Walls came down.
No one can promise that resistance will succeed. But everyone can guarantee that surrender will fail.
Make your choice.
THE COST OF SILENCE, Closing.
I started this essay angry. I’m finishing it as something else.
Not defeated. Not hopeless. But aware of what we’re losing in a way that settles into you and doesn’t leave.
There’s a world I remember that my children might never know.
A world where you could walk down a street without cameras tracking your face. Where you could read a book without anyone knowing which page you stopped on. Where you could have a conversation that simply ended when the conversation ended, no recording, no transcript, no permanent archive.
A world where you could be alone. Actually alone. Unobserved. Unmonitored. Free to think whatever you wanted because no one was measuring your brainwaves for deviant patterns.
A world where you could be young and stupid, say foolish things, believe wrong ideas, make embarrassing mistakes, and then grow past them. Because the past was the past. Because people could change. Because there was no permanent record following you from adolescence to death.
A world where the default was privacy and surveillance required justification, not the reverse.
That world is dying. Maybe already dead. And most people don’t even realize it because they never knew it existed.
This haunts me.
Not that we’re losing our freedom. History is full of lost freedoms. People adapt. They survive. They find ways to live even in the tightest spaces.
What haunts me is that we’re giving it away.
No one is conquering us. No army is occupying our streets. No dictator seized power in a coup. We’re doing this to ourselves. Voting for it. Paying for it. Installing it on our phones and inviting it into our homes. Applauding each new measure because someone told us it would make us safer, protect our children, stop the bad people.
We’re building our own prison and calling it progress.
And the worst part, the part that keeps me awake, is that most people don’t even see the bars. They’re so used to being watched that they’ve forgotten what it felt like to be unseen. They’ve internalized the surveillance so completely that they do the watching themselves.
They’ve become their own jailers. And they think they’re free.
I think about the children.
Not as a rhetorical device, I’m sick of children being used as political weapons. I mean actually think about them. The ones being born now. The ones who will grow up in whatever world we leave behind.
They’ll never know what it felt like to have a thought that wasn’t potentially observed. They’ll never know the freedom of being anonymous in a crowd. They’ll never know privacy as anything other than a vaguely historical concept, like “telegram” or “rotary phone.”
And because they never knew it, they won’t miss it. They won’t fight for it. They won’t even understand why someone would want it.
The cage won’t feel like a cage to them. It will just feel like the world.
This is the true cost of our silence. Not what we lose, but what they’ll never have. Not the freedoms we surrendered, but the freedoms they’ll never know existed.
We’re not just giving up our own privacy. We’re giving up theirs. Making choices for people who haven’t been born yet. Building a world they’ll be trapped in.
And we’re doing it because we were too comfortable to resist. Too distracted to notice. Too afraid to speak up.
Because we “had nothing to hide.”
So here’s my answer.
No.
I refuse to go quietly into the checkpoint society. I refuse to pretend that surveillance is safety. I refuse to accept that my face is a password and my thoughts are data and my life is a feed to be monitored.
I refuse to believe that freedom is outdated, that privacy is suspicious, that the only choice is between control and chaos.
I refuse to hand my children a world where they have to perform for invisible audiences every moment of their lives.
I refuse to be silent.
This essay is my refusal. If you’ve read this far, maybe it’s yours too.
Here’s what I want you to do.
Not everything. Not perfectly. Not all at once.
Just something.
Change one setting. Install one privacy tool. Have one conversation. Support one organization. Refuse one unnecessary data collection.
Make one choice that says: I see what’s happening, and I don’t accept it.
Then make another.
That’s all any of us can do. One choice at a time. One refusal at a time. One conversation at a time.
It’s not enough. But it’s not nothing.
And nothing is what they’re counting on.
The bar is installing a checkpoint.
You can walk through it, hand over your ID, and accept that this is just how things are now.
Or you can stop at the door.
Look at what’s being asked of you.
And say NO.