Armed police swarm student after AI mistakes bag of Doritos for a weapon
October 23, 2025 · neilv
omnipresent12
https://www2.ljworld.com/news/schools/2025/aug/07/lawrence-s...
Another false positive by one of these leading content filters schools use - the kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.
These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid add-on where humans review all the alerts before they're forwarded to the school or authorities.
reaperducer
> The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time.
All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.
random3
It’s actually “AI swarmed,” since no human reasoning was exerted, only execution - basically an AI directing resources.
trhway
Delegating the decision to AI - excluding the human from the "human in the loop" - is kind of unexpected as a first step, as it was generally expected that the exclusion would start from the other end. As an aside, I wonder how that is going to happen on the battlefield.
For this civilian use case, the next step is AR goggles worn by police, with the AI projecting onto the goggles where the teenager supposedly has his gun (Black Mirror style), and the step after that is obviously excluding humans from the execution step as well.
janalsncm
In any system, there are false positives and false negatives. In some situations (like high-recall disease screening) false negatives are much worse than false positives, because the cost of a false positive is just a more rigorous follow-up screening.
But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
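To make that concrete, here is a minimal sketch (all numbers made up for illustration, not from any vendor) of how you might weigh the two failure modes. A human-in-the-loop step effectively shrinks the false-positive cost term:

    # Hypothetical per-image expected cost of a detector operating point.
    def expected_cost(p_gun, tpr, fpr, cost_fn, cost_fp):
        miss = p_gun * (1 - tpr) * cost_fn         # real gun, no alert
        false_alarm = (1 - p_gun) * fpr * cost_fp  # no gun, armed response
        return miss + false_alarm

    # Same detector, but routing alerts through a human reviewer first
    # cuts the cost of each false alarm by orders of magnitude.
    print(expected_cost(1e-7, 0.95, 1e-4, cost_fn=1e6, cost_fp=1e3))  # direct dispatch
    print(expected_cost(1e-7, 0.95, 1e-4, cost_fn=1e6, cost_fp=1e0))  # human review first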
nkrisc
In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we refuse to address the problem of guns and choose to deal with the downstream effects instead.
lelandfe
I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.
We answered the screams at the door to guns pointed at our faces, and countless cops.
It was explained to us that this was the restrained version. We got a knock.
Unfortunately, I understand why these responses can't be neutered too much. You just never know.
collingreen
In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half-finished product in a way that prioritizes your profit over public safety.
MisterTea
Happened to a friend of mine. An ex-GF said he was on psych meds (true, though he is nonviolent with no history) and that he was threatening to kill his parents. NYPD SWAT did a no-knock, kicked down the door of his apartment, and terrorized his elderly parents as they pointed guns at their son ("machine guns," in his words). And because he has psych issues and is on meds, he was forced into a cop car in front of the whole neighborhood to get a psych evaluation. All he got was an apology from the cops, who said they have no choice but to follow procedure.
Edit: should add, sorry to hear that.
SoftTalker
Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?
bilbo0s
>And some teen may be traumatized.
Um. That's not really the danger here.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
neilv
Did you want to emphasize or clarify the first danger I mentioned?
My read of the "Um" and the quoting was that you thought I missed that first danger, and so were disagreeing in a dismissive way.
When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.
I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.
wat10000
I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.
Zigurd
If the US wasn't psychotic, not all police would have to be armed, and not every police response would be an armed response.
krapp
Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.
Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.
[0]Even though no other free society has to pay that price but whatever.
akoboldfrying
> The danger is that it's as clear as day that in the future someone is gonna be killed.
This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.
So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
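Concretely, the base rate dominates any such calculation. A quick sketch with made-up numbers (not the vendor's actual figures) shows why even a seemingly accurate detector produces almost entirely false alarms:

    # P(gun | alert) via Bayes' rule, with illustrative numbers.
    def precision(prior, tpr, fpr):
        p_alert = prior * tpr + (1 - prior) * fpr
        return prior * tpr / p_alert

    # Suppose 1 in 10 million frames shows a real gun, the detector
    # catches 95% of them, and false-alarms on 1 in 10,000 frames:
    print(precision(prior=1e-7, tpr=0.95, fpr=1e-4))  # ~0.00095: ~0.1% of alerts are real

So "data on the rates" mostly means pinning down the false-positive rate and the prior, because those two numbers decide how often armed police get sent after a bag of chips.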
GuinansEyebrows
> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
huh, i can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.
froobius
Stuff like this feels like some company has managed to monetize an open source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
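For reference, the "cobbled together" baseline really is only a few lines - a minimal sketch using the open source ultralytics package with stock pretrained weights (school_cam.jpg is a hypothetical input; note the generic COCO classes don't even include "gun", which is exactly why the fine-tuning dataset matters):

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # generic COCO-pretrained weights
    results = model("school_cam.jpg", conf=0.5)  # confidence threshold

    for r in results:
        for box in r.boxes:
            label = model.names[int(box.cls)]
            print(f"{label}: {float(box.conf):.2f}")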
EdwardDiego
And it feels like they missed the "human in the loop" bit. One day this company is likely to find itself on the receiving end of a wrongful death lawsuit.
dfxm12
[flagged]
tartoran
"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"
Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.
xbar
Charge the superintendent with swatting.
Decision-maker accountability is the only thing that halts bad decision-making.
dekken_
> Make them pay money
It already costs money: the police time and resources that were misappropriated.
There needs to be resignations, or jail time.
SAI_Peregrinus
The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).
akoboldfrying
> Make them pay money for false positives instead of direct support and counselling.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).
neuralRiot
I think I’ve said this too many times already, but the core problem here, and with the “AI craze” in general, is that nobody really wants to solve problems; what they want is a marketable product. AI seems to be the magic wrench that fits all the nuts, and since most people don’t really know how it works or what its limitations are, they happily buy the “magic dust”.
Zigurd
In the US, cops kill more people than terrorists do. Make sure that when you're quantifying values, you take that into account.
jawns
I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
cyanydeez
Someday there'll be a lawyer in court telling us how strong the AI evidence was because companies are spending billions of dollars on it.
mothballed
Or they'll tell us police have started shooting because an acorn fell, so AI shouldn't be expected to be held to a higher standard and is possibly an improvement.
hinkley
Is use of force without justification automatically excessive force or is there a gray area?
mentalgear
Ah, the coming age of Palantir's all seeing platform; and Peter Thiel becoming the shadow Emperor. Too bad non-deterministic ml systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along folks. Yes, surveillance and authoritarianism go hand in hand, ask China. It's important to protest these methods and push lawmakers to act against them; now, before it's too late.
MiiMe19
I might be missing something but I don't think this article isn't about palantir or any of their products.
joomla199
This comment has a double negative, which makes it a false positive.
yifanl
You're absolutely right, Palantir just needs a different name and then they'd have no issues.
seanhunter
The article is about omnialert, not palantir, but don’t let the facts get in the way of your soapbox rant.
mzajc
Same fallible systems, same end goal of mass surveillance.
jason-phillips
[flagged]
mentalgear
Pre-emptive compliance out of fear - then my boy, the war is already lost.
polotics
"competition is for losers"... right? ;-)
jason-phillips
No, it just doesn't impact me, especially such that I feel the need to pen paranoid screeds about it.
rolph
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.
the AI "swatted" someone.
etothet
The corporate version of "It's a feature, not a bug."
bilbo0s
Calling it today. This company is going to get innocent kids killed.
How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?
First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.
This is a really bad idea right now. The technology is just not there yet.
mothballed
And then there are plenty of bullies who might put a sticker with a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.
withinboredom
When I was a kid, we made rubber-band guns all the time. I’m sure that would set it off too.
mrguyorama
>First time it happens, there will be an explosion of protests.
Why do you believe this? In the US, cops will cower outside a school while an armed gunman actively murders children, forcibly detain parents who wish to go in when the cops won't, and then everyone involved gets re-elected.
In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.
Over 50% of the country blamed the protesting students at Kent State for daring to be murdered by the National Guard.
Cops can shoot people in broad daylight, in the back, with no justification, or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes, and as long as the people who die are mostly black, half the country will spout crap like "They died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong" while the cops take selfies reenacting the murder for laughs and talk about how terrified they are by BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still like heart disease of course.
Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.
nyeah
Clearly it did not prioritize human safety.
tencentshill
"rapid human verification." at gunpoint. The Torment Nexus has nothing on these AI startups.
palmotea
Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.
drak0n1c
The dispatcher relaying the call and the responding officers should at least have ready access to a screen where they can see the raw video/image that triggered the AI alert. If it's a false alarm, they'll be better able to see that and react accordingly, and if it's a real threat, they'll better understand the initial context and who may have been involved.
ggreer
According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:
> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.
What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?
1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...
wat10000
Sounds like a "better safe than sorry" approach. If you ignore the alert on the basis that it's a false positive, then it turns out it really was a gun and the person shoots somebody, you're going to get sued into the ground, fired, name plastered all over the media, etc. On the other hand, if you call in the cops and there wasn't a gun, you're fine.
Etheryte
Next up, a captcha that verifies you're not a robot by swatting you and checking at gunpoint.
proee
Walking through TSA scanners, I always get that unnerving feeling I'll get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - there's nothing in them, but the scanner doesn't like them.
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
How does this not spiral out of control?
mpeg
To be fair, at least you can choose not to wear the cargo pants.
A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side so we could walk to the gates together, and the agent didn't like that he was "loitering" – guess his ethnicity...
stavros
How is it fair to say that? That's some "why did you make me hurt you"-level justification.
mpeg
No, it's not.
I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.
Yes, in an ideal world we'd all travel without all the security theatre, but that's not the world we live in. I can't change the way airports work, but I can wear clothes that make it faster, I can put my liquids in a clear bag, I can use bags that make it easy to take out electronics, etc. Those things I can control.
But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.
franktankbank
> guess his ethnicity...
Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.
walkabout
I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.
(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)
hinkley
I was getting pulled out of line in the 90’s for having long hair. I didn't dress in shitty clothes or fancy ones, and I didn't look funny - it was just the hair, which got regular compliments from women.
I started looking at people, trying to decide who looked juicy to the security folks, and getting in line behind them. They couldn't harass two people in rapid succession. Or at least not back then.
The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don't know why I thought they would tag her, but they did. I don't fly well and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn't rude, I'm going to do. But her day would probably have been improved more by not getting searched than mine was.
JustExAWS
Getting pulled aside by TSA for secondary screening is nowhere near the same ballpark as being rushed at gunpoint as a teenager and told to lie down on the ground, where one false move will get you shot by a trigger-happy cop who probably won't face any consequences - especially if the innocent victim is a Black male.
In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.
proee
I wasn't implying TSA-cargo-pant-groping is comparable. My point is to show the escalation in public-facing systems. We've been dealing with TSA. Now we get AI scanners. What's next?
Also, no need to escalate this into a race issue.
hinkley
But it was a black man they harassed.
JustExAWS
Yes, because I’m sure if a White female had been detected by AI as carrying a gun, it would have been treated the same way.
malux85
Speak up citizens!
Email your state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.
Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective approach is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease its effectiveness, but even past that point it's better than where we are now, which is silent public apathy.
anigbrowl
If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.
actionfromafar
Yeah, Republicans hide from townhalls. Most of them have one constituent, Trump.
jason-phillips
I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.
oceanplexian
If the system used any kind of logic whatsoever, a CCW permit would not only allow you to bypass airport security but also to carry in the airport (speaking as both a pilot and a permit holder).
That would probably eliminate the need for the TSA security theater, so it will probably never happen.
mothballed
You can carry in the airport in AZ without a permit, in the unsecured areas. I think there was only one brouhaha, because some particularly bold guy did it openly with a rifle (can't remember if there's more to the story).
some_random
The point of the security theater is to assuage the 95th percentile scared-of-everything crowd, they're the same people who want no guns signs in public parks.
more_corn
Why don’t you pay the bribe and skip the security theater scanner? It’s cheap. Most travel cards reimburse for it too.
proee
I'm sure CLEAR is already having giddy discussions about how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR-certified dog tags so the cops can hassle the non-dog-tagged people.
dheera
The TSA scanners also trigger easily on crotch sweat.
hsbauauvhabzb
I enjoy a good grope, so I’ll keep that in mind the next time I’m heading into the us.
Havoc
>the system “functioned as intended,”
Behold - a real-life example of a "Not a hotdog" system, except this one is gun / not-a-gun.
And the fictional one from the show was more accurate...
crazygringo
It sounds like the police mistook it as well:
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that as the number of images scanned goes up, the number of false positives goes up with it. Could this be fixed by including multiple images, e.g. capturing the object in motion, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
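Something like that seems plausible. Here's a minimal sketch (my own illustration, not anything Omnilert has described) of requiring a detection to persist across several consecutive frames before escalating, which would filter out one-off shadows and glints:

    from collections import deque

    class MultiFrameConfirmer:
        """Escalate only if a weapon detection persists across frames."""
        def __init__(self, window=10, required=7):
            self.hits = deque(maxlen=window)  # rolling window of per-frame results
            self.required = required

        def update(self, detected_this_frame):
            self.hits.append(detected_this_frame)
            return sum(self.hits) >= self.required

    confirmer = MultiFrameConfirmer()
    # A single spurious frame (a shadow, a glint) never reaches the threshold;
    # a sustained detection does.
    for frame_has_gun in [False, True, True, False, True, True, True, True, True]:
        if confirmer.update(frame_has_gun):
            print("escalate to human review with the full frame sequence")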
axus
Computer says it looks like a gun.
shaky-carrousel
He could easily have been murdered. It's far from the first time that a bunch of overzealous cops have murdered a kid. I would never, ever in my life set foot in a place that sends armed cops after me so easily. That school is extremely dangerous.
ggreer
I think the reason the school bought this silly software is that it's a dangerous school, and they're grasping at straws to try to fix the problem. The day after this false positive, a student was robbed.[1] Last month, a softball coach was charged with rape and possession of child pornography.[2] Last summer, one student was stabbed while getting off the bus.[3] Last year, there were two incidents where classmates stabbed each other.[4][5]
1. https://www.nottinghammd.com/2025/10/22/student-robbed-outsi...
2. https://www.si.com/high-school/maryland/baltimore-county-hig...
3. https://www.wbaltv.com/article/knife-assault-rossville-juven...
4. https://www.wbal.com/stabbing-incident-near-kenwood-high-sch...
5. https://www.cbsnews.com/baltimore/news/teen-injured-after-re...
evanelias
That certainly sounds bad, but it's all relative; keep in mind this school is in Baltimore County, which is distinct from the City of Baltimore and has a much different crime profile. This school is in the exact same town as Eastern Tech, literally the top high school in Maryland.
kayge
I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.
neilv
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.