School surveillance will never protect children from shootings

If we are to believe the purveyors of school surveillance systems, K-12 schools will soon operate in a manner similar to an agglomeration of Minority Report, Person of Interest, and RoboCop. “Military-grade systems” would sift through student data, intercepting the merest hint of malicious intent and dispatching officers before the would-be perpetrators could carry out their heinous acts. In the unlikely event that someone managed to evade the predictive systems, they would inevitably be stopped by next-generation weapon-detection systems and biometric sensors that interpret a person’s gait or tone and warn authorities of impending danger. The last layer may be the most technologically advanced – some kind of drone, or perhaps even a robot dog, able to disarm, distract, or deactivate the dangerous individual before any real damage is done. If we invest in these systems, the thinking goes, our children will finally be safe.

Not only is this not our present, it will never be our future – no matter how comprehensive and intricate surveillance systems become.

Over the past several years, a host of companies have sprung up, all promising a range of technological interventions that will limit or even eliminate the risk of school shootings. The proposed “solutions” range from tools that use machine learning and human surveillance to predict violent behavior, to artificial intelligence paired with cameras that determine individuals’ intent via their body language, to microphones that identify the potential for violence based on tone of voice. Many of them use the specter of dead children to pitch their technology. Surveillance company AnyVision, for example, uses images from the Parkland and Sandy Hook shootings in presentations demonstrating its face- and firearm-recognition technology. Immediately after the Uvalde shooting last month, the company Axon announced plans for a taser-equipped drone as a means of dealing with school shooters. (The company later paused the plan after members of its ethics board resigned.) The list goes on, and every company wants us to believe that it alone has the solution to this problem.

The failure here lies not only in the systems themselves (Uvalde, for one, seemed to have at least one of these “precautions” in place), but in the way people perceive them. As with policing itself, any failure of a surveillance or security system typically results in calls for more extensive surveillance. If a danger is not predicted and prevented, companies often cite the need for more data to fix the gaps in their systems – and governments and schools often buy into it. In New York, despite the many failures of surveillance mechanisms to prevent (or even catch) the recent subway shooter, the city’s mayor has decided to double down on the need for even more surveillance technology. Meanwhile, the city’s schools are reportedly ignoring the moratorium on facial recognition technology. The New York Times reports that U.S. schools spent $3.1 billion on security products and services in 2021 alone. And recent congressional gun legislation includes an additional $300 million to increase school security.

But at their core, what many of these prediction systems promise is a measure of certainty in situations about which there can be none. Technology companies consistently pitch the notion of complete data, and therefore perfect systems, as something just over the next ridge – an environment where we are so completely monitored that any and all antisocial behavior can be predicted and thus violence can be prevented. But a comprehensive data set of ongoing human behavior is like the horizon: it can be conceptualized, but never actually reached.

Currently, companies are engaging in a number of bizarre techniques to train these systems: Some stage mock attacks; others use action movies like John Wick – hardly good indicators of real life. As macabre as it may sound, it is conceivable that these companies would train their systems on footage from real-world shootings. But even if footage from real events became available (and in the large quantities these systems require), the models would still fail to accurately predict the next tragedy based on previous ones. Uvalde was different from Parkland, which was different from Sandy Hook, which was different from Columbine.

Technologies that offer predictions about intent or motivation are making a statistical bet on the probability of a given future based on what will always be incomplete and contextless data, regardless of the source. The basic assumption when using a machine learning model is that there is a pattern to be identified; in this case, that there is some “normal” behavior that shooters exhibit at the scene of the crime. But finding such a pattern is unlikely. This is especially true given the almost constant shifts in teens’ lexicon and practices. Arguably more than many other segments of the population, young people change the way they speak, dress, write, and present themselves – often explicitly to evade and avoid the watchful eye of adults. Developing a consistently accurate model of that behavior is nearly impossible.
