Researchers at ETH Zurich, Anthropic, and the Machine Learning Alignment and Theory Scholars program built a system that can figure out who you are online, and it costs about as much as a used car payment to run.

The system works the way a patient investigator would, except it operates at a scale no human investigator could match. It reads your posts and looks for patterns: how you phrase things, what you mention about your life, when you tend to be online, the particular mix of topics you return to. Then it searches for other accounts with the same patterns. It flags probable matches, cross-references them, and produces a shortlist.
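
To make that concrete, here is a minimal sketch of one step such a pipeline might contain: writing-style comparison via character n-grams and cosine similarity. The library, features, and threshold are illustrative assumptions, not the study's actual method.

```python
# A hedged sketch of stylometric shortlisting, not the study's code.
# Assumes scikit-learn; the 0.8 threshold is an arbitrary illustrative choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def shortlist_matches(target_posts, candidate_accounts, threshold=0.8):
    """Rank candidate accounts by writing-style similarity to a target account.

    target_posts: list of posts from the account being investigated
    candidate_accounts: dict mapping account name -> list of that account's posts
    """
    # Character 3-to-5-grams capture phrasing habits that survive topic changes.
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
    names = list(candidate_accounts)
    docs = [" ".join(target_posts)] + [" ".join(candidate_accounts[n]) for n in names]
    vectors = vectorizer.fit_transform(docs)

    # Compare the target's style vector against every candidate's and keep
    # only matches above the confidence threshold.
    scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()
    ranked = sorted(zip(names, scores), key=lambda pair: -pair[1])
    return [(name, score) for name, score in ranked if score >= threshold]
```

A real system layers more signals on top of the writing style itself (posting times, topic mix, biographical details), which is what makes the combination effective.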

The researchers tested it on Reddit accounts, Hacker News posts, LinkedIn profiles, and transcripts from Anthropic’s own interviews with scientists. In the best cases, it correctly identified matching accounts 68 percent of the time at 90 percent precision. Traditional methods, the kind that connect data points across large datasets without AI, identified almost none.

The costs work out to between $1 and $4 per profile. The full experiment ran for under $2,000.
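
As a sanity check on those numbers (the per-profile and total figures are from the paper; the division is mine):

```python
# How many profiles a $2,000 budget covers at the reported per-profile rates.
budget = 2_000
for cost_per_profile in (1, 4):
    print(f"${cost_per_profile}/profile -> {budget // cost_per_profile:,} profiles")
# $1/profile -> 2,000 profiles
# $4/profile -> 500 profiles
```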

What that means, practically: the capability to track down an anonymous poster used to require either a motivated professional or an obsessive amateur willing to spend hours digging through posts.

Now it requires a few dollars and an API key.

The question of who has the ability to deanonymize someone has historically been constrained by resources. It no longer is, or won’t be for much longer.

The study hasn’t been peer reviewed. It was done under laboratory conditions using curated datasets, not against actual pseudonymous users in the wild. The researchers declined to demonstrate the system when asked and wouldn’t say whether they had tested it outside the study’s boundaries. Those are real limitations.

They’re also somewhat beside the point.

The researchers are already warning that performance will improve as AI systems get more capable and data pools grow larger. One experiment tested the system against Reddit users posting about films. When an account mentioned just one movie, the system matched it about 3 percent of the time. When an account had mentioned ten or more films, success climbed to nearly 50 percent. The more you’ve posted, the more surface area you’ve given the system to work with.
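
The mechanism is easy to see in miniature. Here is a toy sketch of that narrowing, assuming the matcher simply intersects candidates against each film the target mentions; the real pipeline is LLM-driven and far more flexible, and the accounts and titles below are invented.

```python
# Toy version of the film experiment: each mentioned movie shrinks the pool.
# Candidate data is invented for illustration.
candidates = {
    "user_a": {"Alien", "Heat", "Se7en", "Blade Runner"},
    "user_b": {"Alien", "Amelie"},
    "user_c": {"Heat", "Se7en"},
}

def narrow(target_films, candidates):
    pool = set(candidates)
    for film in target_films:
        pool = {u for u in pool if film in candidates[u]}
        print(f"after {film!r}: {len(pool)} candidate(s) remain")
    return pool

narrow(["Alien", "Heat", "Blade Runner"], candidates)
# after 'Alien': 2 candidate(s) remain
# after 'Heat': 1 candidate(s) remain
# after 'Blade Runner': 1 candidate(s) remain
```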

In another test, the system identified nine of 125 scientists from Anthropic’s survey transcripts. It did this by building profiles from scattered clues: one respondent’s use of British English suggested UK affiliation; references to a supervisor suggested a PhD student; mentions of physical science and biology research narrowed the field further.
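
A back-of-the-envelope version of that narrowing, with invented fractions (the clues mirror the ones just described; the numbers and the independence assumption are mine):

```python
# Illustrative narrowing over the 125 survey respondents. Each soft clue keeps
# only a fraction of the remaining pool; the fractions are invented.
pool = 125.0  # scientists in the Anthropic transcripts

clues = [
    ("British English -> likely UK-affiliated",        0.15),
    ("mentions a supervisor -> likely a PhD student",  0.30),
    ("works across physical science and biology",      0.20),
]

for clue, fraction in clues:
    pool *= fraction
    print(f"{clue:<48} ~{pool:.1f} candidates remain")
# 125 -> ~18.8 -> ~5.6 -> ~1.1: three soft clues nearly single someone out.
```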

The researchers estimate that it replicated in minutes what would have taken a human investigator hours.

The people most at risk aren’t the ones who have already gone to serious lengths to stay hidden. They’re the people who assumed a throwaway account was throwaway because it had a different username. Journalists in sensitive situations, activists trying to organize without identification, and dissidents in countries where being identified carries consequences.

The researchers said AI labs should monitor how their tools are being used and build safeguards against deanonymization. Social media platforms, they added, could restrict the mass data scraping that makes this kind of analysis possible.

Neither of those things has happened yet. In the meantime, the advice from the researchers who built the system is to keep accounts separate, limit personal details, and be careful about identifiable patterns like posting exclusively during waking hours in your time zone.
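
That last item is one you can check yourself. A minimal sketch, assuming you can export UTC timestamps for your own posts; a histogram concentrated in one 16-hour band gives away your waking hours, and with them your rough longitude.

```python
# Crude check for timezone leakage: bucket your posts by UTC hour.
from collections import Counter
from datetime import datetime, timezone

def print_active_hours(timestamps):
    """timestamps: iterable of POSIX timestamps for your posts."""
    hours = Counter(datetime.fromtimestamp(t, tz=timezone.utc).hour for t in timestamps)
    for hour in range(24):
        print(f"{hour:02d}:00 UTC  {'#' * hours.get(hour, 0)}")
```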

Most privacy advice on the internet was written for a different threat. Use a VPN. Go incognito. Make your Instagram private. Create a throwaway account.

These steps weren’t useless when they were first recommended, but the world they were designed for has changed.

The VPN Problem

VPNs do what they say they do. Your ISP can’t see your traffic. Your IP address is masked from the sites you visit. For those purposes, they work.

The issue is that the threat the ETH Zurich paper describes operates on a different layer entirely. A VPN doesn’t touch what you write. It hides the connection, not the content. The pipeline that identified 68 percent of Hacker News users didn’t need anyone’s IP address. It read their posts.

Browser Fingerprinting

When your browser loads a page, it transmits a bundle of information: operating system, screen resolution, installed fonts, how your device renders graphics, and how it processes audio. Individually these look harmless. Combined, they build a profile that’s statistically unique.

The Electronic Frontier Foundation found that 94 percent of tested browsers were uniquely identifiable by these traits alone. Clearing cookies or using incognito mode doesn’t change any of them.
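
The arithmetic behind that uniqueness is simple: each attribute contributes some bits of identifying information, and the bits add up (assuming roughly independent attributes). The per-attribute values below are invented for illustration, not EFF's measurements.

```python
# Surprisal arithmetic: combined fingerprint bits vs. the expected number of
# people who share your fingerprint. Per-attribute bits are invented.
import math

attribute_bits = {
    "user agent": 10.0,
    "screen resolution": 4.8,
    "installed fonts": 13.9,
    "canvas rendering": 8.0,
}
total_bits = sum(attribute_bits.values())         # ~36.7 bits combined
anonymity_set = 8_000_000_000 / 2 ** total_bits   # expected people sharing it

print(f"{total_bits:.1f} bits -> anonymity set of ~{anonymity_set:.2f} people")
print(f"uniqueness threshold: ~{math.log2(8_000_000_000):.1f} bits")  # ~32.9
```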

In December 2024, Google announced that starting February 2025, advertisers could use fingerprinting-based tracking. The UK’s Information Commissioner’s Office called it irresponsible. It happened anyway. Cookies were something users could delete. Fingerprinting is not something users can meaningfully control.

AI has made this more precise. Machine learning models can now correlate behavioral cues across sessions (mouse movement, typing speed, scroll behavior, interaction latency) and link different fingerprints to the same user, even when technical parameters change. Switching devices or resetting browsers still helps, but less than it once did, and the gap will keep widening as these models improve.
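
A hedged sketch of the session-linking idea: represent each session as a vector of behavioral measurements and link sessions whose vectors sit close together. The features, numbers, and threshold here are all illustrative assumptions.

```python
# Toy session linker: cosine similarity over behavioral feature vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# [mean ms between keystrokes, scroll px/s, click latency ms] -- made-up data
session_old = [182.0, 640.0, 215.0]
session_new = [179.0, 655.0, 209.0]  # same user, new device, fresh browser

if cosine(session_old, session_new) > 0.99:
    print("likely the same user, despite a reset technical fingerprint")
```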

The ETH Zurich paper identified people from their writing. Fingerprinting identifies people from how they interact with a browser. The underlying mechanism is the same. Behavioral patterns persist in ways that assigned identifiers don’t.

What Platforms Actually Know

A private account or a pseudonym limits what other users can see. It doesn’t limit what the platform knows. Platforms have your device identifiers, your IP history, your email address, your follow graph, and your posting times. The ETH Zurich pipeline worked from public data. The platform’s internal data is richer by orders of magnitude, and it’s available to the platform regardless of your privacy settings.

The Data Broker Layer

Data brokers collect from a wide range of sources. They pay app developers to install code that siphons user data, including location. They use web trackers to capture online activity. They purchase data from internet service providers, car manufacturers, advertisers, utility companies, and supermarkets. The resulting profiles are sold to anyone who pays.

Since 2018, the Supreme Court has required warrants for US law enforcement agencies seeking cell location data directly from a mobile carrier. Purchasing the same data from a commercial broker doesn’t require a warrant. FBI Director Kash Patel confirmed under oath recently that the agency purchases commercially available information used in law enforcement operations. Senator Wyden, at the same hearing, described the situation as “particularly dangerous given the use of artificial intelligence to comb through massive amounts of private information.”

The Office of the Director of National Intelligence is building infrastructure to centralize this: a system to streamline agency access to commercially available information, including mobile ad location data, with analysis handled by large language models. The research pipeline ETH Zurich built is, in other words, a smaller version of something the US intelligence community is actively constructing at scale.

Each of these layers is manageable in isolation. The VPN handles the connection. A pseudonym limits visible identity. A private account restricts the audience. All of these are reasonable precautions.

The difficulty is that AI systems are increasingly good at combining what these layers leave exposed. Writing patterns survive a VPN. Behavioral fingerprints survive a browser reset. Account metadata survives a privacy setting. Location history from a free app installed years ago is sitting in a commercial database available without a warrant.

The Real Defense

The system works because people have already published enough of themselves to make identification possible. The ETH Zurich pipeline didn’t hack anything. It read what was already there.

The simplest response is to stop generating a detailed, machine-readable record of your life.

  1. Stop Treating the Internet Like a Diary

Most people post as if each post exists in isolation, each account is separate, and context fades over time. None of that is true anymore. Posts accumulate, accounts get linked, and context gets reconstructed by systems that are patient, cheap to run, and getting better.

Don’t document your life in real time, don’t narrate your routines and habits, and don’t build a public timeline of your thoughts and experiences. AI systems turn scattered posts into coherent identity profiles, and that process is now cheap enough to run at scale against ordinary people.

  2. Limit Personal Detail, Even in Small Pieces

The risk is aggregation. Your timezone is harmless on its own, as is your job field, your hobbies, your writing style. Together they narrow the field considerably, sometimes to one person. The ETH Zurich paper showed this directly: one mentioned film got a 3 percent match rate, and ten or more got nearly 50 percent. Each piece of information reduces the number of people you could be, and enough pieces can narrow that set to a single member.

If a detail reduces the number of people you could be, think twice before posting it.

  3. Avoid Building a Complete Self Online

People naturally want to express opinions, interests, work, and personality online, and across time and platforms that accumulation becomes a full-spectrum identity profile. Keeping what you share partial and fragmented, and making sure no single account reflects your full life, reduces the surface area available to these systems considerably.

  4. Be Careful With Consistency Over Time

Consistency is what makes you recognizable. Posting at the same times, returning to the same niche topics, using the same framing or arguments across accounts: these are exactly the signals the pipeline is built to find. Repetition feels harmless because no single instance reveals anything, which is precisely why it’s useful to a system scanning for it across thousands of accounts simultaneously.

  5. Don’t Assume Old Data Is Gone or Irrelevant

The posts you wrote years ago predate systems like the ETH Zurich pipeline, at least at this cost and scale. They’re still there, still searchable, and now processable in ways they weren’t when you wrote them. Data you created in 2015 is more valuable to these systems today than it was then, because the tools to analyze it have improved while the data itself hasn’t gone anywhere. Old forum posts, early social media activity, niche community discussions: all of it can be reprocessed.

  6. Minimize Platform Exposure

Platforms already have your device identifiers, your behavior, your network, and your full history, and privacy settings limit what other users see without limiting what the platform itself holds. Using fewer platforms, avoiding linking accounts together, and understanding that privacy settings are an audience control rather than a data control all matter more than most people assume.

  7. Think in Terms of Data Accumulation

Every post adds to your linguistic fingerprint, your topic profile, and your behavioral pattern. No single post exposes you, but the sum of them increasingly does, and that sum is being analyzed by systems that are improving faster than the data is aging.

  8. Accept That Low-Effort Anonymity Is Gone

Throwaway accounts, VPNs, and private profiles were built for IP tracking and basic account linkage, and they still do those jobs reasonably well. Pattern recognition, behavioral inference, and cross-platform matching operate on a different layer entirely, one that those tools weren’t designed to address. Anonymity now requires not generating linkable patterns in the first place, which is a harder and more behavioral change than switching to a VPN.

  9. The Simplest Rule

The people hardest to identify in the ETH Zurich study were the ones who had posted less. More than any tool or setting, volume of data is what determines exposure, and the only reliable way to reduce it is to generate less of it to begin with.

The ETH Zurich paper showed that text alone, at scale, is enough to identify people with meaningful precision. Add the other layers, and the picture gets more complete, and more retrospective. Data generated before you had any reason to be careful is still out there, and the systems being built to analyze it didn’t exist when you generated it.

The tools most people rely on for privacy were designed for the threats that existed when they were built. A VPN is still worth using. So is a privacy-focused browser. The question is whether people understand what those tools protect against, and what the newer systems are now able to do with what those tools leave exposed.
