This is a distributed research project discovering adversarial patterns that fool AI surveillance systems. Privacy shouldn't be a luxury. It's a fundamental right.
Do you know how many times a day you're subjected to facial recognition? The camera at your ATM. The gas pump. Every doorbell in your neighborhood. Airports, shopping centers, and city streets. You are being tracked, identified, and cataloged by systems you never consented to. We believe there's another way.
Pioneering work by Adam Harvey, Capable, and Adversarial Fashion proved that clothing can disrupt surveillance. Their groundbreaking research inspired this project, and we're continuing that work with modern models and a rigorous methodology.
This is not academic research against outdated benchmarks: we test against the kinds of models deployed by Clearview AI, law enforcement, and commercial surveillance systems. Older adversarial patterns proved fragile against modern AI; our 10-model gauntlet ensures that surviving patterns hold up against today's systems.
Testing billions of pattern combinations requires massive computational power. Our distributed network lets anyone contribute spare GPU cycles to this research. Access is restricted to non-commercial research, academic, and public interest use. Those affiliated with surveillance companies like Clearview AI, Palantir, or Voyager Labs are explicitly excluded.
Our distributed network tests thousands of visual patterns against state-of-the-art facial recognition models.
61+ pattern types exploiting AI vision weaknesses through geometric, frequency, and adversarial techniques.
Two-stage methodology: establish baseline, then measure pattern impact on detection accuracy.
Distributed workers test against 10 models: 4 person detectors, 4 face detectors, 2 recognizers.
Successful patterns breed new generations through crossover and mutation algorithms.
Each pattern "recipe" exploits specific weaknesses in AI vision systems. These are not random noise; they form a library of 61+ distinct attack techniques, two of which are shown below along with a sketch of the recipe structure:
- `saliency_eye_attack`: overloads the system with fake eye features.
- `dazzle_surgical_lines`: breaks up facial geometry between key landmarks.
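The project's actual recipe schema isn't shown on this page; the following is a minimal sketch of what a recipe entry could look like, assuming each recipe carries a name, a category, a description, and generator parameters (all field names and values here are illustrative):

```python
from dataclasses import dataclass, field

# Hypothetical recipe structure; field names are illustrative, not the project's actual schema.
@dataclass
class PatternRecipe:
    name: str          # unique identifier for the attack technique
    category: str      # e.g. "saliency", "dazzle", "frequency", "geometric"
    description: str   # which weakness the pattern exploits
    params: dict = field(default_factory=dict)  # generator knobs (density, line width, palette, ...)

# Two of the 61+ techniques mentioned above, expressed in this sketch format.
LIBRARY = [
    PatternRecipe(
        name="saliency_eye_attack",
        category="saliency",
        description="Overloads the system with fake eye features",
        params={"decoy_eyes": 24, "scale_px": 48},
    ),
    PatternRecipe(
        name="dazzle_surgical_lines",
        category="dazzle",
        description="Breaks up facial geometry between key landmarks",
        params={"line_width_px": 6, "contrast": 0.9},
    ),
]

if __name__ == "__main__":
    for recipe in LIBRARY:
        print(f"{recipe.name}: {recipe.description}")
```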
Every test follows a rigorous two-stage protocol, ensuring reproducible, scientifically valid results (sketched in code below):
Stage 1 (baseline): the original "Persona" image is processed through all 10 models.
Stage 2 (pattern test): the pattern is applied via green-screen compositing, and the identical model suite re-runs the tests.
Standardized test images feature high-quality, diverse synthetic faces spanning different ancestries, genders, and ages, ensuring that testing covers the vulnerable populations who need protection most.
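As an illustration of the two-stage flow, here is a minimal sketch assuming a naive chroma-key `composite_pattern` helper and model objects exposing `.name` and `.detect(image)`; none of these names come from the project's codebase:

```python
import numpy as np

# Sketch of the two-stage protocol; the compositing and model interfaces are assumptions.

def composite_pattern(scene_rgb: np.ndarray, pattern_rgb: np.ndarray) -> np.ndarray:
    """Naive green-screen composite: replace green-screen pixels with the pattern."""
    r, g, b = (scene_rgb[..., i].astype(int) for i in range(3))
    mask = (g > 150) & (g > r + 40) & (g > b + 40)   # crude "this pixel is green" test
    out = scene_rgb.copy()
    out[mask] = pattern_rgb[mask]                    # assumes same height/width as the scene
    return out

def run_models(image: np.ndarray, models) -> dict:
    """Return {model_name: detection_confidence} for one image.
    Each model object is assumed to expose .name and .detect(image) -> float."""
    return {m.name: m.detect(image) for m in models}

def evaluate(persona: np.ndarray, pattern: np.ndarray, models) -> dict:
    # Stage 1: baseline detection confidence on the unmodified persona image.
    baseline = run_models(persona, models)
    # Stage 2: apply the pattern via green-screen compositing and re-run the identical suite.
    patterned = run_models(composite_pattern(persona, pattern), models)
    # Pattern impact = drop in confidence per model (positive means the pattern helped).
    return {name: baseline[name] - patterned[name] for name in baseline}
```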
Clients operate as a coordinated network, contributing compute power simultaneously. Results are aggregated in real time, dramatically accelerating the research.
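The coordination protocol itself isn't documented here; purely as a sketch of the worker model described above, a client might poll a coordinator for a batch of recipes, run them locally, and post results back. The endpoint paths and payload fields below are assumptions:

```python
import time
import requests

COORDINATOR = "https://example.org/api"   # placeholder URL, not the project's real endpoint

def worker_loop(models, run_batch):
    """Poll for work, evaluate it locally, and report the results.
    `run_batch` is assumed to run the two-stage protocol for each assigned recipe."""
    while True:
        job = requests.get(f"{COORDINATOR}/next-batch", timeout=30).json()
        if not job.get("recipes"):
            time.sleep(60)          # nothing to do; back off before polling again
            continue
        results = run_batch(job["recipes"], models)   # local GPU work happens here
        requests.post(
            f"{COORDINATOR}/results",
            json={"job_id": job["job_id"], "results": results},  # per-recipe, per-model confidence drops
            timeout=30,
        )
```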
Distributed workers test patterns against 10 models representing real-world commercial surveillance: four person detectors, four face detectors, and two face recognizers.
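The actual 10-model roster isn't reproduced here; as an illustration only, a suite matching the three categories above could be configured with common open-source stand-ins:

```python
# Illustrative only: the project's actual 10-model roster is not listed here.
# These are common open-source stand-ins for each category named above.
MODEL_SUITE = {
    "person_detectors": ["yolov8n", "faster_rcnn_r50", "ssd_mobilenet", "efficientdet_d0"],
    "face_detectors":   ["retinaface", "mtcnn", "scrfd", "yunet"],
    "face_recognizers": ["arcface_r100", "facenet_vggface2"],
}

assert sum(len(v) for v in MODEL_SUITE.values()) == 10
```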
Successful patterns aren't just recorded; they breed new generations through genetic evolution via crossover and mutation, as sketched below.
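A minimal sketch of that crossover-and-mutation step, assuming recipes are parameter dictionaries and fitness is the number of models a pattern bypassed in the last round; the selection and mutation details are illustrative:

```python
import random

def crossover(parent_a: dict, parent_b: dict) -> dict:
    """Child takes each generator parameter from one parent at random (assumes matching keys)."""
    return {k: random.choice([parent_a[k], parent_b[k]]) for k in parent_a}

def mutate(recipe_params: dict, rate: float = 0.1, scale: float = 0.2) -> dict:
    """Randomly perturb numeric parameters with probability `rate`."""
    mutated = dict(recipe_params)
    for k, v in mutated.items():
        if isinstance(v, (int, float)) and random.random() < rate:
            mutated[k] = type(v)(v * (1 + random.uniform(-scale, scale)))
    return mutated

def next_generation(population: list[dict], fitness: dict, size: int) -> list[dict]:
    """Breed a new generation from the highest-scoring patterns (assumes >= 2 recipes)."""
    # fitness maps recipe name -> number of models bypassed in the last round
    ranked = sorted(population, key=lambda r: fitness.get(r["name"], 0), reverse=True)
    parents = ranked[: max(2, len(ranked) // 4)]        # keep the top quarter as parents
    children = []
    for i in range(size):
        a, b = random.sample(parents, 2)
        child = mutate(crossover(a["params"], b["params"]))
        children.append({"name": f"{a['name']}_x_{b['name']}_{i}", "params": child})
    return children
```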
When a pattern bypasses two or more models, the client automatically generates a 300 DPI pattern image, saves the test images, and submits the recipe data for administrator review.
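A sketch of that submission trigger; the 2-model threshold and the 300 DPI output come from the description above, while the file layout and the `render_pattern` and `submit` callables are assumptions:

```python
import json
from pathlib import Path

BYPASS_THRESHOLD = 2      # from the text: a pattern qualifies once it bypasses 2+ models
PRINT_DPI = 300           # from the text: qualifying patterns are rendered at 300 DPI

def handle_result(recipe: dict, per_model_bypass: dict, render_pattern, submit) -> bool:
    """If enough models were bypassed, render print assets and queue the recipe for review.
    `render_pattern` and `submit` are hypothetical callables supplied by the client."""
    bypassed = [m for m, ok in per_model_bypass.items() if ok]
    if len(bypassed) < BYPASS_THRESHOLD:
        return False

    out_dir = Path("submissions") / recipe["name"]
    out_dir.mkdir(parents=True, exist_ok=True)

    # Auto-generate the printable pattern image at 300 DPI.
    render_pattern(recipe, dpi=PRINT_DPI, path=out_dir / "pattern_300dpi.png")

    # Save the recipe data and the list of bypassed models for administrator review.
    # (The test images from the run would be copied alongside; omitted in this sketch.)
    (out_dir / "recipe.json").write_text(
        json.dumps({"recipe": recipe, "bypassed_models": bypassed}, indent=2)
    )

    submit(out_dir)       # hand off to the coordinator for review
    return True
```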
Real-time statistics from our distributed research network.
Recognizing the researchers powering our distributed network.
| Rank | Researcher | Tests | Success Rate |
|---|---|---|---|
Join thousands of researchers worldwide. Your GPU can help discover patterns that protect privacy and improve AI safety.
Get Started Now. Sign in to track your contributions and access your profile.