I study school shootings. Here’s what AI can and can’t do to stop them

Editor’s Note: David Riedman is the founder of the K-12 School Shooting Database, an open source research project documenting school shootings dating back to 1966. He conducts research on gun violence in schools and has authored multiple peer-reviewed articles on homeland security policy, critical infrastructure protection, and emergency management. Previously, he served 18 years as a firefighter and emergency medical technician in Maryland, where he reached the rank of captain. The views expressed in this commentary are his own. Read more opinion at CNN.

Since the start of the 2023-2024 school year in August, a gun has been fired at least 300 times on a K-12 campus. Over the past decade, the number of school shootings has increased tenfold, from 34 in 2013 to 348 in 2023.

This rapidly escalating pattern of gun violence on campus has left parents, teachers and school officials desperate for a solution.

Many schools have acquired new artificial intelligence and technology products that are being marketed to districts seeking help in tracking down a potential shooter on campus. This intense pressure on school officials to do something to protect students has transformed school safety from a niche field into a multibillion-dollar industry.

Public schools often lack funding, equipment and staff, and AI offers incredible potential to automatically detect threats faster than any human. There is not enough time, money or staff to monitor every security camera and look in every pocket of every student’s backpack. If humans can’t do this job, AI technology is a powerful proposition.

I’ve collected data on more than 2,700 school shootings since 1966, plus safety issues like swatting, online threats, averted plots, near misses, stabbings, and students caught with guns.

Based on my research, there is no simple solution to this set of threats because school security is extremely complex. Unlike airport terminals and government buildings, schools are large public campuses that serve as hubs for community activities outside of traditional school hours.

A weekday evening at a high school might include basketball, a drama club, English language classes for adults, and a church group renting the cafeteria — with potential security gaps amid this flurry of activity.

Two common applications of AI right now are computer vision and pattern analysis with large language models. These provide the ability to monitor a campus in ways that humans cannot.

In this image from surveillance video, law enforcement officers respond in a hallway after the gunman entered Robb Elementary School in Uvalde, Texas, on May 24, 2022. - Texas House Investigative Committee/Reuters

AI is used in schools to interpret metal detector signals, classify objects visible on CCTV, identify the sound of gunshots, monitor doors and gates, search social media for threats, look for red flags in student records, recognize students’ faces and identify intruders.

This AI software functions best when it addresses well-understood and clearly defined problems, such as identifying a weapon or an intruder. If these systems work properly, when a security camera sees a stranger holding a gun, facial recognition will flag the face of an unauthorized adult and object classification will identify the gun as a weapon. These two autonomous processes can then activate another set of AI systems to lock the doors, call 911 and send text message alerts.
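To make that chain of events concrete, here is a minimal sketch in Python of how such a detection-to-response pipeline might be wired together. Every function, field and threshold here is a hypothetical assumption for illustration, not any vendor’s actual product.

```python
# Hypothetical sketch of an automated detection-to-response pipeline.
# None of these names correspond to a real vendor API; they stand in for
# the kinds of systems described above.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "weapon" or "umbrella"
    confidence: float  # classifier's probability estimate, 0.0 to 1.0
    face_known: bool   # True if face recognition matched an authorized person

def respond(detection: Detection, weapon_threshold: float = 0.8) -> list[str]:
    """Decide which automated responses to trigger for one camera frame."""
    actions = []
    if detection.label == "weapon" and detection.confidence >= weapon_threshold:
        if not detection.face_known:
            actions.append("lock_doors")
        actions.append("call_911")
        actions.append("send_text_alerts")
    return actions

# A stranger holding an object classified as a weapon with 92% confidence:
print(respond(Detection("weapon", 0.92, face_known=False)))
# -> ['lock_doors', 'call_911', 'send_text_alerts']
```

Note that everything downstream (locked doors, a 911 call, mass alerts) hinges on a single probability crossing a threshold, which is exactly where the trouble begins.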

What AI can and cannot do

With school security, we want certainty. Is the person in the camera footage holding a weapon? We expect a “yes” or “no” answer. The problem is that AI models provide “maybe” answers, because AI models are based on probability.

For AI that classifies images as weapons, an algorithm compares each new image to the patterns of weapons in its training data. AI doesn’t know what a weapon is, because a computer program doesn’t know what anything is. When an AI model is shown millions of images of weapons, the model will try to find those shapes and patterns in future images. It is up to the software provider to determine the probability threshold that separates “weapon” from “no weapon.”

This is a messy process. An umbrella can score 90%, while a gun partially obscured by clothing scores only 60%. Do you set the threshold high to avoid a false alarm for every umbrella, or low enough to receive a warning for every gun?
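To see why no threshold resolves this, here is a short sketch using the hypothetical scores above; the objects and numbers are illustrative, not real classifier output.

```python
# Illustrative threshold trade-off using the hypothetical scores above:
# an umbrella that scores 0.90 and an obscured gun that scores 0.60.

detections = [
    ("umbrella (harmless)", 0.90),
    ("gun obscured by clothing", 0.60),
]

for threshold in (0.5, 0.8, 0.95):
    print(f"threshold = {threshold}")
    for obj, score in detections:
        verdict = "ALARM" if score >= threshold else "ignored"
        print(f"  {obj}: score {score:.2f} -> {verdict}")

# threshold 0.5:  both objects trigger alarms (false alarm on the umbrella)
# threshold 0.8:  only the umbrella alarms (the real gun is missed)
# threshold 0.95: nothing alarms at all
```

Because the umbrella scores higher than the obscured gun, no threshold can suppress the false alarm and still catch the real weapon; the vendor can only choose which kind of error to tolerate.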

AI software interpreted this CCTV image as a gun at Brazoswood High School in Clute, Texas, causing the school to go into lockdown and police to rush to the campus. The dark spot is a shadow on a drainage ditch that a person walking by is standing in line with.

Cameras generate poor-quality images in low light, bright light, rain, snow and fog. Should a school use AI to make life-or-death decisions based on a dark, grainy image that an algorithm can’t accurately process? A major transportation system in Pennsylvania canceled its contract with the vendor Brazoswood used, saying the software could not reliably detect weapons.

Schools need to understand the limits of what an AI system can – and cannot – do.

With cameras or other hardware, AI is not magic. Adding AI software to a magnetometer doesn’t change the physics that make a gun and a metal water bottle produce the same signal. This is why one AI screening vendor is being investigated by the FTC and SEC over allegedly inaccurate marketing claims made to schools across the country.

A costly undertaking

The largest costs in school security are the physical equipment (cameras, doors, scanners) and the staff who operate it. Running AI software on an old security camera generates revenue for the security company without the vendor or school having to spend money on equipment. Saving money is great until a shadow triggers a police response to what AI believes is an active shooter.

Instead of schools choosing to test or purchase the best solutions based on merit, vendors lobby to structure funding from local, state and federal governments to create a shortlist of specific products that schools are required to purchase. During a period of rapid AI innovation, schools should be able to select the best available product rather than being forced to contract with one company.

Schools are unique environments and require security solutions, both hardware and software, that are designed for schools from the start. This requires companies to analyze and understand the characteristics of gun violence on campus before developing an AI product. For example, a scanner made for sports venues that only allows fans to bring a limited number of items will not function well in a school where children each carry backpacks, binders, pens, tablets, cell phones and metal water bottles every day.

For AI technology to be useful and successful in schools, companies must address the biggest security challenges on campuses. In my research of thousands of shootings, the most common situation I see is a teenager who routinely carries a gun in a backpack and fires shots during a fight. Manually searching every student and bag is not a viable solution because students would spend hours in security lines instead of in classrooms. Searching bags is not easy, and shootings still occur at schools with metal detectors.

Neither CCTV image classification nor retrofitted metal detectors addresses the systemic problem of teenagers freely carrying guns at school every day. Solving this challenge will require better sensors with more advanced AI than any product currently available.

Schools cannot be fortresses

Unfortunately, school security is currently drawing on the past instead of imagining a better future. Medieval fortresses were a failed experiment that ultimately concentrated rather than reduced risk. We fortify school buildings without realizing why European empires stopped building castles centuries ago.

The next wave of AI security technology has the potential to make schools safer through open campuses with invisible layers of frictionless security. If something does go wrong, open spaces offer the most opportunities to take cover. Children should never again be trapped in a classroom with a gunman, as they were in Uvalde, Texas, where 19 children and two teachers were killed in 2022.

Schools walk the line between a troubled past and a more secure future. AI can hinder or enable how we get there. The choice is ours.
