A Doritos Bag, an Algorithm, and Eight Police Cars: When School Safety Tech Misfires

It took an empty Doritos bag, a school surveillance camera, and a fast-moving chain of alerts to turn an ordinary afternoon into a scene with eight police cars and guns drawn on a 16-year-old.

When a snack became a threat

After football practice in Baltimore County, a teenager tucked an empty bag of chips into his pocket and headed outside. Minutes later, a police convoy rolled in. “Police showed up, like eight cop cars, and then they all came out with guns pointed at me,” he told local TV, describing officers ordering him to the ground.

He was handcuffed and searched. No gun was found—because there wasn’t one. The Baltimore County Police Department later said officers “responded appropriately and proportionally based on the information provided at the time,” adding, “The incident was safely resolved after it was determined there was no threat.”

The alert that shouldn’t have escalated

The trigger was not a witness call; it was an artificial intelligence gun-detection alert tied to school cameras. According to the school’s principal, the safety team “quickly reviewed and cancelled the initial alert after confirming there was no weapon.” Yet the situation didn’t end there.

In her letter to parents, the principal said she informed the school resource officer (SRO), who then contacted the local precinct for additional support. Police arrived and detained the student. It’s a case study in how safety technology, even when reviewed by humans, can spiral if communication and protocols aren’t airtight.

Inside the AI pipeline

The vendor behind the alert, Omnilert, said its system detected what appeared to be a firearm and sent the image to its review team, which verified and forwarded it to the district safety staff “within seconds.” The company said its involvement ended once the case was closed in its system and maintained that the process “operated as designed.”

“Real-world gun detection is messy,” the company says in its materials, arguing that rapid human verification is meant to prioritize safety and awareness.

That messiness is the heart of the debate. A crinkled, pocketed chip bag can resemble a silhouette a model has learned to flag, particularly in low resolution or partial view. The safeguards—human review, local assessment, and clear handoffs—exist precisely because such systems are probabilistic, not certain.

Where the system broke down

By the school’s account, staff canceled the alert after confirming there was no weapon. But the next steps still set law enforcement in motion. That gap—between a cleared alert and a full police response—appears to be the procedural fault line.

Local officials have called for an inquiry into how the alert was handled. “I am calling on Baltimore County Public Schools to review procedures around its AI-powered weapon detection system,” a county councilman wrote on Facebook. The goal is not just to audit a vendor but to examine the human loop wrapped around it.

The wider record on AI weapon detection

Accuracy claims in this field have faced increasing scrutiny. Last year, a high-profile U.S. weapons screening firm was barred from making unsupported claims about its system’s ability to detect all weapons, following investigations that found the marketing outpaced the technology. The lesson was blunt: these tools can help, but they cannot promise certainty.

Experts have long warned that the cost of false positives in high-stakes settings is measured not only in wasted time, but in potential harm. Every alert that escalates to an armed response imposes risks on students, staff, and officers. That’s why clear thresholds, precise communication, and unambiguous authority for cancellation are as important as the algorithm itself.

The teenager left more cautious than comforted

No one was injured, and police emphasized the incident ended safely. But safety is felt as much as it is declared. The student told local news that he now waits inside after practice because he doesn’t think it’s “safe enough to go outside, especially eating a bag of chips or drinking something.”

That sentiment reveals the human cost of a false alarm: the quiet erosion of normalcy. A routine walk home becomes a calculus of what to carry, how to move, where to stand—choices shaped by the fear of being misread by a lens or a model.

Designing for worst days without creating new ones

Schools adopt AI detection with good intentions: to spot danger early and compress response times on the worst day. But the technology functions within a sociotechnical system—software, people, training, and policy—that must be designed for both accuracy and restraint. When the alert is wrong, the system’s ability to stand down quickly is as crucial as its ability to sound the alarm.

  • Codify who has authority to cancel an alert and stop escalation, and ensure that decision is honored across all partners.
  • Mandate joint drills with school staff, SROs, and local police that simulate false positives and practice de-escalation.
  • Audit and publish performance metrics—false positives, response times, and outcomes—to build accountability and improve models.
  • Train operators on context cues and camera limitations, and calibrate thresholds to local environments to reduce noise.

Trust, transparency, and the next alert

Trust in school safety is not just about having more tools; it’s about proving those tools and policies work together when seconds matter. Police said they “responded appropriately and proportionally” based on the information they received. That caveat—what they received—underscores how vital it is to send the right signal at the right time, or to send none at all.

For the student at the center of this story, the takeaway is painfully simple: “I don’t think no chip bag should be mistaken for a gun at all.” For everyone else, the work is more complex—closing the gaps between detection and decision, and ensuring that protecting kids doesn’t mean pointing guns at them by mistake.

The Doritos bag will pass into local lore, but the questions it raises will not. As districts weigh safety investments this year, the measure of progress will be whether they build systems that can stand down as swiftly as they stand up. Readers can follow our ongoing coverage of AI in public spaces and share their perspectives on where the line between vigilance and overreach should be drawn.
