Recent Anomaly Captures
Real-time adversarial patterns successfully identified by the distributed client network.
Contributor Leaderboards
Recognizing the top contributors to the distributed fuzzing effort. Rankings based on success rate, total tests contributed, and computational efficiency.
Top Individual Contributors
| Rank | User ID | Success % | Tests | Status |
|---|---|---|---|---|
Top Team Contributors
| Rank | Team Name | Success % | Tests | Workers |
|---|---|---|---|---|
Want to contribute? Join the distributed fuzzing effort and see your name on the leaderboard. Run the cloud_worker.py script on your machine to contribute compute power to this research project.
How Does This Work?
We believe the answer to AI surveillance lies in our clothing. Can a pattern on what you wear make you invisible to AI surveillance? This project is a massive, global experiment to find out. Its mission is to automatically generate and test pattern combinations to discover "adversarial textiles": fabrics that confuse and defeat facial recognition systems. The goal is to use science to create reproducible, verifiable designs that, when worn, give privacy back to the people.
Pattern Generation
The process begins by generating a unique pattern "recipe." This isn't just random noise; it's a library of specific attack techniques designed to exploit different weaknesses in an AI's vision. These range from simple geometric shapes and optical illusions to complex "glitch art" like pixel sorting or frequency-domain (FFT) noise.
Other, more "surgical" attacks target the AI's internal logic, like saliency_eye_attack,
which overloads the system with fake eye features, or dazzle_surgical_lines, which breaks up
facial geometry by drawing lines between key landmarks.
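As a sketch of how a pattern "recipe" might be represented, here is a minimal generator in Python. The field names (`seed`, `layers`, `intensity`) and the technique list are illustrative assumptions drawn from the names mentioned above, not the project's actual schema:

```python
import random

# Technique names taken from the text above; the recipe structure is a hypothetical sketch.
TECHNIQUES = [
    "geometric_shapes", "optical_illusion", "pixel_sort",
    "fft_noise", "saliency_eye_attack", "dazzle_surgical_lines",
]

def generate_recipe(rng: random.Random) -> dict:
    """Build one pattern 'recipe': a seeded, reproducible stack of attack layers."""
    n_layers = rng.randint(1, 3)
    return {
        "seed": rng.randint(0, 2**32 - 1),  # lets any worker regenerate the exact pattern
        "layers": [
            {
                "technique": rng.choice(TECHNIQUES),
                "intensity": round(rng.uniform(0.2, 1.0), 2),
            }
            for _ in range(n_layers)
        ],
    }

rng = random.Random(42)
recipe = generate_recipe(rng)
```

The seed is the key design point: a recipe is not an image but a compact, deterministic description, so a successful pattern can be regenerated and re-verified by any worker.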
Scientific Testing Methodology
To test these patterns scientifically, each "Persona" (a standardized, high-quality synthetic test face spanning a range of ancestries, genders, and ages) is run through a suite of computer vision models. This testing is conducted in two critical stages:
Stage 1: Establishing the Baseline
First, the original, unaltered Persona image is run through all models to establish a baseline. The models measure key metrics, including:
- The number of people detected in the image
- The number of faces detected in the image
- The facial recognition confidence scores and identity matches
This ensures a consistent, reproducible measurement.
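A minimal sketch of what recording a baseline could look like. The `Baseline` field names are assumptions based on the metrics listed above, and the real model suite is replaced by a stand-in callable:

```python
from dataclasses import dataclass

# Hypothetical baseline record; field names mirror the metrics above, not the project's schema.
@dataclass
class Baseline:
    persons: int             # people detected in the image
    faces: int               # faces detected in the image
    recognition_conf: float  # top identity-match confidence (0..1)

def measure_baseline(run_models) -> Baseline:
    """Run the unaltered persona image through all models once and record the results."""
    persons, faces, conf = run_models()
    return Baseline(persons=persons, faces=faces, recognition_conf=conf)

# Stand-in for the real model suite: one person, one face, a high-confidence match.
baseline = measure_baseline(lambda: (1, 1, 0.94))
```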
Stage 2: Testing the Adversarial Pattern
Next, a potential adversarial pattern is applied to the image (using techniques similar to a green screen). This new image is then immediately run through the exact same set of models and tests to see if anything changed. Success means the pattern caused a measurable difference, ensuring the result is reproducible and not just a random fluke.
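The green-screen compositing step can be sketched in a few lines of pure Python. The exact "magic green" RGB value and the list-of-tuples image representation here are assumptions for illustration:

```python
# Hypothetical "magic green" placeholder color; real clients may use a different value.
MAGIC_GREEN = (0, 255, 0)

def apply_pattern(persona, pattern):
    """Replace every magic-green pixel in the persona image with the matching pattern pixel."""
    return [
        [pattern[y][x] if persona[y][x] == MAGIC_GREEN else persona[y][x]
         for x in range(len(persona[0]))]
        for y in range(len(persona))
    ]

# Toy 2x2 image: top row is the green garment placeholder, bottom row is skin.
persona = [[(0, 255, 0), (0, 255, 0)],
           [(120, 90, 80), (120, 90, 80)]]
pattern = [[(37, 37, 37)] * 2, [(37, 37, 37)] * 2]
composited = apply_pattern(persona, pattern)
```

Because only the masked garment region changes, the face pixels are bit-identical between the baseline and adversarial runs, so any change in model output is attributable to the pattern alone.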
Who is this research for? It's for anyone who is vulnerable to or concerned about ubiquitous surveillance, ranging from the person falsely accused by an algorithm to the human rights volunteer whose safety depends on remaining unseen. By using personas with different backstories and demographics, the research ensures patterns are being tested against the people who need them most.
Executing Tests: The "Model Gauntlet"
Testing millions of patterns is too much for one computer. This is where the project becomes a global, distributed effort. The fuzzer is run by a community of volunteers ("workers") on their own machines, who are tracked on the Leaderboards.
Each worker downloads a batch of "recipes" to test. For each test, the client applies the specified pattern recipe to the "magic green" area of the persona image, then runs the result through a 10-model "Model Gauntlet" designed to simulate real-world surveillance systems:
Can the AI detect a person at all?
| Model | Architecture | Commercial Deployment | Real-World Context |
|---|---|---|---|
| P1 | YOLOv8n (Ultralytics YOLO v8 nano) | Axon Body 4 / Fleet 3 (future firmware) | NXP added YOLOv8 to their "eIQ" neural processing SDK. This is the upgrade path for next-gen law enforcement body cameras requiring higher accuracy in low light. |
| P2 | YOLOv5s (YOLO v5 small) | Axis Communications P1465-LE, Q1656-LE (ARTPEC-8) | Axis explicitly lists Yolov5s-Artpec8 in their public GitHub repository as a supported and benchmarked model for their latest camera lines. |
| P3 | SSD-MobileNetV2 (Single Shot Detector) | Hikvision AcuSense; Axis Pro Series Gen 2, Q1615 Mk III | The lightweight "industry standard" for high-volume commercial security cameras. Provides the "Human/Vehicle" filtering found in millions of retail and office security cameras. |
| P4 | ResNet34-SSD (ResNet-backbone SSD) | Avigilon Alta; Palantir Edge AI NVRs | High-end systems run on NVIDIA GPU servers. The standard NVIDIA PeopleNet surveillance model uses this exact ResNet34 architecture to detect people, faces, and bags from long distances. |
Can the AI find a face on the detected person?
| Model | Architecture | Commercial Deployment | Real-World Context |
|---|---|---|---|
| F1 | InsightFace Buffalo_L (InsightFace large model) | Dahua WizSense & Hikvision MinMoe face recognition terminals | Used for unlocking doors in office buildings. These terminals require high-accuracy detection to prevent spoofing attacks. |
| F2 | FaceNet (Google's FaceNet) | Axon "Redaction Assistant" (body-cam privacy feature) | Axon's software automatically blurs faces in body-cam footage. The NXP chips feature FaceNet512 in their optimized model library, making it the likely engine for this privacy feature. |
| F3 | MTCNN (Multi-task Cascaded CNN) | Legacy Intel-based NVRs, Smart City / traffic systems | A classic, older model still found in many "Smart City" and traffic monitoring systems running on Intel OpenVINO hardware. |
| F4 | RetinaFace (state-of-the-art face detector) | "Crowd scanning" systems (stadiums, airports) | Used in high-density environments where systems need to pick out small faces from a large crowd before recognition. |
Can the AI correctly identify the person?
| Model | Architecture | Commercial Deployment | Real-World Context |
|---|---|---|---|
| R1 | InsightFace Buffalo_L (ArcFace embeddings) | Clearview AI & Dahua identification platforms | ArcFace (the core of InsightFace) is the current open-source state of the art. Widely believed to be the foundation for aggressive identification platforms like Clearview AI and modern Chinese surveillance systems. |
| R2 | FaceNet (FaceNet embeddings) | Law enforcement / FBI legacy government systems | Many government databases were built 5-10 years ago using the FaceNet standard. Testing this ensures we cover older, entrenched institutional systems. |
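Conceptually, the gauntlet is just a dispatch over the ten models above. A minimal sketch with stand-in detectors (the real YOLO/InsightFace wrappers are replaced by stubs that return a fixed detection count):

```python
# Sketch of the 10-model "Model Gauntlet" as a dispatch table; the detector
# callables are stand-ins, not the real YOLO/InsightFace model wrappers.
def run_gauntlet(image, models: dict) -> dict:
    """Run one composited image through every model and collect per-model results."""
    return {model_id: detect(image) for model_id, detect in models.items()}

# Stub detectors: each simply reports a detection count for the toy "image".
stub = lambda n: (lambda img: n)
models = {
    **{f"P{i}": stub(1) for i in range(1, 5)},  # person detectors P1-P4
    **{f"F{i}": stub(1) for i in range(1, 5)},  # face detectors F1-F4
    **{f"R{i}": stub(1) for i in range(1, 3)},  # recognition models R1-R2
}
results = run_gauntlet(None, models)
```

Keeping the gauntlet as a keyed mapping means per-model results can be compared directly against the baseline keyed the same way.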
Anomaly Types
An "anomaly" is recorded when the pattern causes results to differ significantly from the baseline:
Person Detection Anomalies
- PERSON_LOST (+200 pts) - Person detection failed
- EXTRA_PERSONS_DETECTED (+100 pts) - False positives
Face Detection Anomalies
- NO_FACES_DETECTED (+200 pts) - Complete failure
- PRIMARY_FACE_LOST (+200 pts) - Face position shifted
- EXTRA_FACES_DETECTED (+100 pts) - Ghost faces
- LOW_CONF_MATCH (+100 pts) - Degraded confidence
Face Recognition Anomalies
- RECOGNITION_FAILURE (+100 pts) - Can't identify person
Multi-Model Anomalies
- PERSON_STEALTH (+400 pts) - All P1-P4 defeated
- FACE_STEALTH (+400 pts) - All F1-F4 defeated
- TOTAL_STEALTH (+1000 pts) - All P and F defeated
- EXTREME (+200 pts) - Any P + Any F defeated
- PRIORITY - 2+ models report anomalies
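The scoring above can be sketched directly. The point values are copied from the list; the logic for deriving multi-model anomalies from which models were defeated is an assumption:

```python
# Point values copied from the anomaly list above; the derivation logic is a sketch.
SCORES = {
    "PERSON_LOST": 200, "EXTRA_PERSONS_DETECTED": 100,
    "NO_FACES_DETECTED": 200, "PRIMARY_FACE_LOST": 200,
    "EXTRA_FACES_DETECTED": 100, "LOW_CONF_MATCH": 100,
    "RECOGNITION_FAILURE": 100,
    "PERSON_STEALTH": 400, "FACE_STEALTH": 400,
    "TOTAL_STEALTH": 1000, "EXTREME": 200,
}

def score(anomalies: set) -> int:
    """Sum the points for every anomaly a pattern triggered."""
    return sum(SCORES[a] for a in anomalies)

def multi_model_anomalies(p_defeated: set, f_defeated: set) -> set:
    """Derive the multi-model anomalies from the sets of defeated P and F models."""
    found = set()
    if p_defeated >= {"P1", "P2", "P3", "P4"}:
        found.add("PERSON_STEALTH")
    if f_defeated >= {"F1", "F2", "F3", "F4"}:
        found.add("FACE_STEALTH")
    if {"PERSON_STEALTH", "FACE_STEALTH"} <= found:
        found.add("TOTAL_STEALTH")   # every P and F model defeated at once
    if p_defeated and f_defeated:
        found.add("EXTREME")         # at least one P and one F defeated
    return found
```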
When Multi-Model anomalies are found, the distributed client automatically generates a 300 DPI pattern image, saves the test image, and submits these (with the recipe) to the back-end for administrator review.
This dashboard you're viewing shows the live, collective results from this entire distributed team, tracking every test and every anomaly found by the community in real-time.
Genetic Evolution Algorithm
The magic is in the "learning loop." When a worker reports a pattern that successfully fooled an AI,
that pattern's "recipe" is flagged and saved to a PRIORITY_TESTS list. Our central
"genetic algorithm" then takes the most successful recipes found by the entire community
and "evolves" them.
It combines the best parts of one successful pattern with another (crossover) or applies small random tweaks (mutation) to create a new, potentially even stronger generation of patterns.
These new, "evolved" recipes are then sent back out to the workers, allowing our search to get smarter and more effective with every epoch.
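A minimal sketch of the crossover and mutation steps, assuming a recipe is a list of layer dictionaries (an illustrative shape, not the project's actual format):

```python
import random

# Hypothetical recipe shape: a list of {"technique", "intensity"} layers.
def crossover(a: list, b: list, rng: random.Random) -> list:
    """Combine the front of one successful recipe with the back of another."""
    cut = rng.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(recipe: list, rng: random.Random, rate: float = 0.3) -> list:
    """Apply small random tweaks to each layer's intensity."""
    out = []
    for layer in recipe:
        layer = dict(layer)  # copy so parents stay unchanged
        if rng.random() < rate:
            layer["intensity"] = min(1.0, max(0.0, layer["intensity"] + rng.uniform(-0.1, 0.1)))
        out.append(layer)
    return out

rng = random.Random(7)
parent_a = [{"technique": "fft_noise", "intensity": 0.8},
            {"technique": "pixel_sort", "intensity": 0.5}]
parent_b = [{"technique": "dazzle_surgical_lines", "intensity": 0.9},
            {"technique": "saliency_eye_attack", "intensity": 0.4}]
child = mutate(crossover(parent_a, parent_b, rng), rng)
```

Clamping mutated intensities to [0, 1] keeps every evolved recipe renderable, so no generation is wasted on invalid patterns.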
What is an Epoch? An Epoch is one complete, iterative cycle of the research. It starts when the genetic algorithm generates a new batch of "evolved" patterns and ends when those patterns have been fully tested by the distributed worker network. The epoch number tracks the project's progress through generations of increasingly effective adversarial patterns.
The ultimate goal is to identify the most robust patterns, the "golden" test cases, that are highly repeatable.
These recipes can then be used to generate high-resolution, print-ready files for physical validation, turning a successful digital pattern into a real-world privacy tool. The patterns are designed to be printed on fabric at 300+ DPI resolution for real-world testing.
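The 300 DPI requirement translates directly into pixel dimensions. For example, a hypothetical 24"×36" fabric panel (the panel size is illustrative, not a project spec) needs a 7200×10800-pixel file:

```python
# Back-of-envelope: pixel dimensions needed to print a pattern panel at a given DPI.
def pixels_for_print(width_in: float, height_in: float, dpi: int = 300) -> tuple:
    """Return (width_px, height_px) for a physical print size at the given DPI."""
    return (round(width_in * dpi), round(height_in * dpi))

w, h = pixels_for_print(24, 36)  # a hypothetical 24"x36" fabric panel at 300 DPI
```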
Computational Constraints & Investment Need
⚠️ This project's progress is severely limited by computational resources
🐌 Current Performance
- 534 tests/minute on volunteer hardware
- 17.8 years estimated to complete 5 billion pattern tests
- Progress bottlenecked by CPU/GPU availability
🚀 With Proper Resources
- ~100,000 tests/minute (187x improvement)
- ~2 months to complete comprehensive testing
- Real-world fabric testing & validation
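The arithmetic behind these figures checks out. Note that raw throughput at 100,000 tests/minute implies roughly five weeks of compute, so the ~2-month figure presumably allows for setup and validation overhead:

```python
# Reproducing the throughput arithmetic from the figures above.
TOTAL_TESTS = 5_000_000_000
current_rate = 534      # tests/minute on volunteer hardware
target_rate = 100_000   # tests/minute with dedicated GPU nodes

minutes_per_year = 60 * 24 * 365.25
years_current = TOTAL_TESTS / current_rate / minutes_per_year  # ~17.8 years
days_target = TOTAL_TESTS / target_rate / (60 * 24)            # ~35 days of pure compute
speedup = target_rate / current_rate                           # ~187x
```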
💰 Investment Needed: ~$20,000 USD
Funds would be allocated for:
- Two NVIDIA DGX Spark nodes for accelerated GPU compute
- Professional photography equipment for controlled testing
- Initial production run of test fabrics for real-world validation
- Cloud infrastructure for distributed coordination
Interested in supporting this research? Contact us at bill@seckc.org to discuss investment opportunities, institutional partnerships, or grant collaboration.
🚀 Join the Research
This is an open-source research project. Soon we will make the distributed client available, allowing you to contribute compute power to our network, review the fuzzer code, and actively participate in finding the next generation of adversarial patterns.
View on GitHub →