AI Mishears Police Dispatch Audio, Triggering False Campus Shooting Alarm

Recently, a primary school in Missouri was thrown into panic by a false campus shooting alert mistakenly issued by an artificial intelligence (AI) system. Parents, students, and staff initially believed that a shooting had occurred on school grounds. The incident has once again raised concerns about the accuracy and reliability of AI in public safety.

According to a statement released on April 16 by the Lawrence County Sheriff’s Office in Missouri, the incident occurred on April 10, when a real-time crime alert application called CrimeRadar sent users an alert claiming that a shooting was taking place at Mt. Vernon Elementary School.

The school has approximately 320 students. The unexpected alert caused widespread panic among parents and students, prompting the school district to immediately activate its campus shooting emergency plan. The local sheriff’s office said the district “performed admirably in ensuring that relevant emergency plans were implemented correctly.”

Subsequent investigation revealed that the alert was in fact a false alarm caused by an AI error.

CrimeRadar explained that its automated system mis-transcribed audio from an emergency dispatch call, interpreting the phrase “show me out at” (radio shorthand indicating that police have arrived on scene) as “shooting at.”
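The two phrases are close enough in sound and spelling that the confusion is easy to reproduce. As a purely illustrative sketch (not CrimeRadar’s actual pipeline), a simple string-similarity check shows how near the misheard phrase is to the original:

```python
# Hypothetical illustration: "show me out at" and "shooting at" score
# well above zero on textual similarity, hinting at how an automated
# transcription system could confuse the two acoustically close phrases.
from difflib import SequenceMatcher

heard = "show me out at"    # what the dispatcher actually said
transcribed = "shooting at" # what the AI system produced

ratio = SequenceMatcher(None, heard, transcribed).ratio()
print(f"similarity: {ratio:.2f}")
```

Real speech-recognition systems compare acoustic features rather than spellings, but the underlying problem is the same: without surrounding context, short phrases with overlapping sounds are hard to distinguish.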

CrimeRadar apologized, saying, “We deeply apologize for the inconvenience caused to families, teachers, students, law enforcement officers, and the entire community.”

The company said that after users flagged the alert as incorrect, the system promptly corrected it and stopped further distribution.

According to the statement, the company has since updated its audio processing and contextual recognition mechanisms and strengthened manual verification for alerts involving schools or firearms, to prevent similar incidents from happening again.
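One common design for this kind of safeguard is to hold back any auto-generated alert whose transcript mentions sensitive terms until a human confirms it. The sketch below is hypothetical (the term list and function are illustrative, not CrimeRadar’s actual code):

```python
# Hypothetical sketch of keyword-gated human review: alerts whose
# transcripts mention schools or firearms are held for a human
# reviewer instead of being pushed automatically.
SENSITIVE_TERMS = {"school", "shooting", "shooter", "gun", "firearm", "campus"}

def needs_human_review(transcript: str) -> bool:
    """Return True if the transcript contains any sensitive term."""
    words = transcript.lower().split()
    return any(term in words for term in SENSITIVE_TERMS)

print(needs_human_review("units show me out at the elementary school"))  # True
print(needs_human_review("traffic stop on Main Street"))                 # False
```

The trade-off is latency: gating on sensitive keywords slows down exactly the alerts users most want quickly, which is why such systems typically pair the gate with an expedited review queue.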

Described on the Apple App Store as an “AI-powered” real-time crime update platform, CrimeRadar automatically monitors local police activity and pushes rapid alerts to users.

The incident once again exposed the limits of AI in speech recognition and in interpreting critical information, particularly in highly sensitive areas such as public safety and school incidents, where errors carry serious consequences.

In fact, similar AI error incidents have occurred recently.

English-language media reported that the prominent US law firm Sullivan & Cromwell recently issued a formal apology to a federal judge for using AI-generated content in a court filing.

In a document submitted to the Manhattan bankruptcy court, the firm had cited multiple nonexistent, AI-generated legal cases, a phenomenon known in the industry as “AI hallucination.”

The errors were eventually discovered by opposing counsel. Sullivan & Cromwell partner Andrew Dietderich sent a letter of apology to Judge Martin Glenn on April 18.

Sullivan & Cromwell is one of the oldest and most prestigious law firms in the US, currently handling multiple appeals cases on behalf of former President Trump.

From a falsely reported campus shooting to fabricated case citations in court filings at a prestigious law firm, these incidents show that for all of AI’s efficiency gains, it can pose serious risks at critical moments when sufficient human review is lacking.