
If we are to trust the purveyors of school security systems, K-12 schools will soon operate like some amalgam of Minority Report, Person of Interest, and RoboCop. “Military-grade” systems would hoover up student data, picking up on the mere hint of harmful ideation, and dispatch officers before would-be perpetrators can carry out their vile deeds. Should someone manage to evade the predictive system, they would inevitably be stopped by next-generation weapons-detection systems and biometric sensors that read a person’s gait or tone of voice to warn authorities of impending danger. The final layer might be the most technologically advanced of all: some form of drone, or even a robotic dog, capable of disarming, distracting, or incapacitating a dangerous person before any real harm is done. The thinking goes that if we invest in these systems, our children will at last be safe.
Not only is this not our present, it will never be our future – no matter how large and complex surveillance systems become.
Over the past few years, many companies have sprung up promising various technological interventions to reduce or even eliminate the risk of school shootings. The proposed “solutions” range from tools that use machine learning and human monitoring to predict violent behavior, to artificial intelligence paired with cameras that purport to read an individual’s intent from body language, to microphones that claim to gauge the likelihood of violence from tone of voice. Many of these companies use the specter of dead children to sell their technology. Surveillance firm AnyVision, for example, has used images of the Parkland and Sandy Hook shootings in presentations pitching its facial- and gun-recognition technology. Immediately after the Uvalde shooting last month, Axon announced plans for Taser-equipped drones as a means of dealing with school shooters. (The company later suspended the program after members of its ethics committee resigned.) The list goes on, and each company would have us believe that it alone can solve the problem.
The failure here lies not just in the systems themselves (Uvalde, for example, reportedly had at least one of these “safety measures” in place), but in the way people conceive of them. Much like policing itself, every failure of a surveillance or security system tends to produce calls for more expansive surveillance. When a system does not predict and prevent harm, companies often cite the need for more data to close the gaps, and governments and schools typically go along with it. In New York, the mayor has decided to double down on surveillance technology even though the city’s existing surveillance apparatus failed to prevent (or even catch the perpetrator of) the recent subway shooting. Meanwhile, schools in the city are reportedly flouting a moratorium on facial recognition technology. The New York Times reports that U.S. schools spent $3.1 billion on security products and services in 2021 alone. The recent gun legislation in Congress includes another $300 million to increase school security.
But fundamentally, what many of these prediction systems promise is a measure of certainty in situations about which there can be none. Tech companies consistently pitch the notion of complete data, and therefore perfect systems, as something just over the next ridge: an environment so totally surveilled that any and all antisocial behavior can be predicted, and violence thereby prevented. But a comprehensive dataset of ongoing human behavior is like the horizon: it can be conceptualized, but never actually reached.
Currently, companies use a variety of bizarre techniques to train these systems: some stage mock attacks; others use action movies such as John Wick, hardly a good proxy for real life. At some point, as grim as it sounds, it is conceivable that these companies will train their systems on data from real-world shootings. Yet even if footage of real incidents became available (and these systems require a great deal of footage), the models still could not reliably predict the next tragedy based on the previous one. Uvalde is different from Parkland, and Parkland is different from Sandy Hook, which is different from Columbine.
Technologies that offer predictions about intent or motivation are making a statistical bet on the probability of a given future based on data that will always be incomplete and stripped of context, no matter its source. The basic assumption behind a machine learning model is that there is a pattern to be identified; in this case, that there is some “normal” behavior that shooters exhibit before or at the scene of the crime. But finding such a pattern is unlikely. This is especially true given the near-constant shifts in teenagers’ vocabulary and practices. Arguably more than many other segments of the population, young people change the way they speak, dress, write, and present themselves, often precisely to evade the watchful eye of adults. Developing a consistently accurate model of that behavior is nearly impossible.
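To make the statistical problem concrete, consider a purely illustrative back-of-the-envelope sketch in Python. The population size, base rate, and accuracy figures below are invented assumptions, not data from any vendor or study: the point is only that when the event being predicted is vanishingly rare, even an implausibly accurate classifier buries any correct prediction under thousands of false alarms.

```python
# Purely illustrative sketch of how a rare-event classifier behaves.
# All numbers are hypothetical assumptions made for the sake of argument.

def flag_counts(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screening model."""
    actual_positives = population * prevalence
    actual_negatives = population - actual_positives
    true_positives = actual_positives * sensitivity          # real threats flagged
    false_positives = actual_negatives * (1 - specificity)   # innocent people flagged
    return true_positives, false_positives

# Assume 1,000,000 monitored students, a one-in-a-million base rate,
# and a model that is (implausibly) 99% sensitive and 99% specific.
tp, fp = flag_counts(population=1_000_000, prevalence=1e-6,
                     sensitivity=0.99, specificity=0.99)

print(f"true positives:  {tp:.2f}")                            # ~1
print(f"false positives: {fp:,.0f}")                           # ~10,000
print(f"share of flags that are real: {tp / (tp + fp):.4%}")   # ~0.01%
```

Under these assumed numbers, roughly ten thousand students would be flagged for every genuine threat, and the arithmetic only gets worse as the monitored population grows or the model’s real-world accuracy falls.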