AI deepfakes are the new weapon of gender-based violence. An estimated 90% of non-consensual AI content targets women, forcing them into digital silence. GLITCH-SHIELD fights back proactively, by making the underlying data unusable for abuse.
GLITCH-SHIELD is a proactive engineering framework designed to restore digital sovereignty to women and girls. It moves beyond awareness and provides a functional, hard-tech defense mechanism through three pillars:
Adversarial Perturbation Engine: At its core, the project uses a specialized algorithm that identifies the facial landmarks AI models use to map a person’s identity. It then injects noise into those specific pixels at an amplitude imperceptible to the human eye. If a malicious user feeds the armored photo into a deepfake generator, the noise disrupts the generator’s neural network, producing a heavily distorted output that is useless for impersonation.
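The idea above can be illustrated with a minimal sketch. This is not the project's actual algorithm: the landmark box is hard-coded rather than detected, the noise is plain random perturbation rather than a targeted adversarial attack, and the `armor_image` name is hypothetical. It only shows the core property: a bounded, imperceptible change confined to the facial-landmark region.

```python
# Illustrative sketch only, assuming a NumPy image array; the real
# engine would target model-specific gradients, not random noise.
import numpy as np

def armor_image(pixels: np.ndarray, landmark_box: tuple,
                epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add low-amplitude noise inside a facial-landmark bounding box.

    epsilon bounds each channel change to +/-2 out of 255 levels,
    below what a human viewer can normally perceive.
    """
    rng = np.random.default_rng(seed)
    y0, y1, x0, x1 = landmark_box
    armored = pixels.astype(np.float64).copy()
    region = armored[y0:y1, x0:x1]
    region += rng.uniform(-epsilon, epsilon, size=region.shape)
    return np.clip(armored, 0, 255).astype(np.uint8)

# Demo on a synthetic 64x64 RGB "photo" with a hypothetical face box.
photo = np.full((64, 64, 3), 128, dtype=np.uint8)
armored = armor_image(photo, landmark_box=(20, 44, 16, 48))

# The change is bounded and confined to the landmark region.
diff = np.abs(armored.astype(int) - photo.astype(int))
print(diff.max() <= 2, diff[:20].sum() == 0)  # True True
```

A real system would compute the perturbation against the gradients of known face-encoder models (as in published "image cloaking" work) so the failure is targeted rather than accidental, but the invariant is the same: small, localized, invisible changes.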
The Proof of Origin Ledger: To combat the gaslighting that often accompanies digital abuse, every image processed through GLITCH-SHIELD is tagged with a unique cryptographic hash. This allows women to instantly prove the provenance of their original content in legal or corporate settings, cutting through the months of bureaucracy usually required to remove fake content.
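A minimal sketch of such a ledger, using Python's standard library and SHA-256 as an assumed hash function (the document does not specify the real ledger's format, chaining, or signing scheme; `register_image` and `verify_image` are hypothetical names):

```python
# Hedged sketch: an append-only list of hash records standing in for
# the ledger. A production system would add signatures and storage.
import hashlib
import time

def register_image(image_bytes: bytes, owner: str, ledger: list) -> dict:
    """Append a tamper-evident record: image hash plus metadata."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "owner": owner,
        "timestamp": int(time.time()),
        # Link to the previous record so reordering/edits are detectable.
        "prev": ledger[-1]["sha256"] if ledger else None,
    }
    ledger.append(record)
    return record

def verify_image(image_bytes: bytes, ledger: list) -> bool:
    """True if this exact byte sequence was previously registered."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return any(r["sha256"] == digest for r in ledger)

ledger: list = []
register_image(b"original photo bytes", "user@example.org", ledger)
print(verify_image(b"original photo bytes", ledger))   # True
print(verify_image(b"tampered photo bytes", ledger))   # False
```

Because any change to the image bytes changes the hash, a registered record dated before a fake appeared is concrete evidence of which version is the original.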
Universal Integration Design: For this to work, it must be accessible. The project is designed as a bridge tool. A user uploads her photo to the GLITCH-SHIELD vault, it is armored, and then exported for use on social media or professional sites. It requires no technical knowledge from the user, democratizing high-level cybersecurity for every girl with a smartphone.
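The bridge flow described above (upload, armor, export with proof) can be sketched end to end. Everything here is hypothetical glue: `armor` is a stand-in for the perturbation engine, and the single-function `shield_pipeline` stands in for the vault.

```python
# Hedged sketch of the upload -> armor -> export bridge; the user only
# ever sees the input photo and the exported, armored copy.
import hashlib

def armor(image_bytes: bytes) -> bytes:
    # Placeholder for the perturbation engine: flip the low bit of each
    # byte, an imperceptible change standing in for adversarial noise.
    return bytes(b ^ 0x01 for b in image_bytes)

def shield_pipeline(image_bytes: bytes) -> tuple:
    """One call per photo: armor it, record its proof, return both."""
    armored = armor(image_bytes)
    proof = hashlib.sha256(armored).hexdigest()  # proof-of-origin tag
    return armored, proof  # ready for export to social/professional sites

armored, proof = shield_pipeline(b"raw upload")
print(armored != b"raw upload", len(proof) == 64)  # True True
```

The point of the design is exactly this surface area: one upload in, one armored file and one proof out, with no parameters the user has to understand.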
For 30 years, we have fought for women to have a seat at the table. Now, AI is being used to kick them out of the digital room entirely. GLITCH-SHIELD is the technical realization of the Beijing Platform for Action in the age of AI—ensuring that a woman’s digital presence is hers, and hers alone.



