“Harm reduction instead of war,” Leo read aloud.
She turned her laptop toward him. On the screen: a proposal for a new project, a voluntary digital etiquette layer for video calls. Not a weapon. Just a gentle nudge when someone talked too long, or a reminder to mute when eating chips.
“I won’t,” Mia whispered. “I’ll become the counter-villain.” Over the next two weeks, Mia turned their cramped apartment into a cyber-war room. She learned about Zoom’s meeting ID generation, unsecured join links posted publicly on social media, and the simple Python scripts that could automate chat bombs and soundboard clips. She built her own bot, named Patches, designed not to spam, but to detect spammers.
“You saved the poetry reading,” he said. “And the knitting circle. And probably a dozen disaster calls no one will ever know about.”
The professor froze. Students laughed. Mia laughed too—until the bot crashed the session five minutes before her presentation.
Patches could join a meeting, scan for rapid-fire messages or repeated audio loops, and then fight back with a single command: a quiet, forced removal of the spammer, followed by a polite “Sorry, wrong room” posted in the chat.
“Yeah,” Mia said quietly. “But I also built the first bot. Even Patches started as a spam tool before I rewired it.”
The first real test came during a public poetry reading Leo was hosting. Midway through a haiku about forgotten leftovers, a spam bot crashed in, blasting airhorn sounds and a looped message: “Subscribe to cheese_facts daily!”
For the first time, Mia felt real fear. Not of the spam—but of what it meant. A single defender couldn’t stop a coordinated attack. She realized: fighting bots required people. The next morning, she posted in a dozen forums: “Former bot builder turned protector. Need your help. Let’s build a community watch.”