Snapchat is harming children at an industrial scale
140 comments
April 16, 2025
aylmao
exceptione
Bingo. If more people carefully analyzed language, they could spot earlier when people are on the slippery slope of, let's call it, anti-human beliefs, and then help them correct course.
If we don't, these narratives get normalized. A society is on a curve of collective behavior; there is no stable point, only direction.
SamuelAdams
GitHub does the same thing with commits, displaying them on your profile. Is that remarkably different than what Snapchat is doing?
matsuui
I'd say so. Some obsess over their commit history, but it is mostly out of the way and only a representation of how active you are. Snapchat streaks are a key feature, designed to keep you coming back every day; you can even pay a dollar to restore a streak if you miss a day.
adityapuranik
Back when I was graduating from uni, one day I just decided that the pressure of Snap streaks was too much. I had a streak of 700+ days with a person I barely talked to. But most of my streaks were with my best friends, people I talked to every day.
It was like a daily ritual, and I couldn't escape it for a while. I decided to go cold turkey, since that felt like the only option. All my friends moaned and complained for a while. They even tried to revive the 'streak', but I persisted. It feels really silly when I look back, but 700 days means I was sending snaps every day for almost two years straight.
I still have the app, and there are still a few friends of mine who send me snaps about their whereabouts, but I have stopped using it. Blocking the notifications was one of the best decisions I could have made, since they were the single biggest factor in opening the app at all.
taraindara
> Blocking the notifications was one of the best decisions I could have made
I’ve done this for all social media, and more recently deleted all social apps. I’ll go on Facebook sometimes through the web browser, mainly for Marketplace.
Facebook was the first app I tested disabling notifications on. This had to be about 10 years ago; I noticed they would send me a new notification every 5-10 minutes. I was addicted to checking what the notification was. Usually garbage, and the less I used Facebook, the more garbage the notifications got. Since I stopped using Facebook for anything but Marketplace, my entire feed is now garbage. The algorithm doesn't know what to do with me now, given its outdated history.
Having no social apps has been a hard change to get used to. But I feel so much better not feeling like I need to scroll.
I only scroll on Hacker News now… which is easy, because the front page doesn't get that many updates in a day, and after several minutes of browsing “new” I'm satiated, having seen all I might want to see.
ensignavenger
So after doing this for 2 years, what were the negative effects other than a few seconds spent each day?
zonkerdonker
Anyone remember YikYak? I was in university at the time, the explosive growth was wild. After the inevitable bullying, racism, threats, doxxing, that came with the anonymous platform, YikYak enabled geofencing to disable the app on middle and high school grounds.
I think every social media platform with an "age limit" should be required to do this as well. And open it up, so that anyone can create their own disabling geofence on their property. How great would it be to have a Snapchat-free home zone? Or FB, or TikTok.
nancyminusone
At my college, someone got kicked out for yikyacking "gonna shoot all black people a smile tomorrow" and everyone quickly realized exactly how anonymous it really was after the guy was found a few hours later.
Thing is, there was a comma between "people" and "a smile" which made his poorly thought out joke read a lot differently. Dumb way to throw away your education.
wilsonjholmes
Crazy Smart (;
Edit for clarity: /s - I went to the same university which had the above slogan.
pmarreck
So basically, if he hadn't added the comma, he'd still be at college.
So he got kicked out because of an extra comma, which he added to make it even more edgy, at the cost of reducing plausible deniability to nearly zero.
alwa
I’m not sure which college was involved here, but if I were the person adjudicating this, I imagine the outcome would not have hinged on the comma.
lotsofpulp
I don’t understand. The “joke” would be if there was no comma. Putting a comma seems like they wanted to cause panic, and feign ignorance later.
nancyminusone
Yes, that's what he tried to argue (it was a joke bro) in the lawsuit that followed, to try to get back in. He lost.
Personally, I think he just flubbed it. At the time, memes like "I'm gonna cut you <line break> up some vegetables" were popular. Can't expect a dumbass edgelord to have good grammar.
Either way, it was a stupid thing to do and he paid for it.
myko
I'm being obtuse but I don't see the comma thing making the "joke" come off differently, what am I missing?
LeifCarrotson
The phrase "shooting a smile at someone" means to briefly or quickly glance at someone while smiling. Perhaps "shot a glare in his direction" is more familiar?
Depending on the location of the comma, the speaker is either planning to make happy gestures at people, or planning to kill people with a firearm, which would make him happy.
koolba
To shoot a smile means to smile at someone. So the pun is that he is going to smile at every black person he sees.
jmathai
We block a number of online properties including Snapchat and YouTube using NextDNS.
We have different profiles for different devices to allow, for example, YouTube on the television but not on kids tablets or phones.
soperj
that's only good for the devices using your internet though no? not if they have data.
jmathai
I install a configuration profile on their devices which forces NextDNS regardless of whether they're on my Wi-Fi, LTE, or their friend's Wi-Fi.
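For anyone curious what DNS-level blocking like this does underneath, here is a minimal sketch of the suffix matching a DNS filter performs when deciding whether to refuse a lookup. This is purely illustrative: the blocklist contents and function name are mine, not NextDNS's actual configuration or API.

```python
# Illustrative sketch of suffix-based domain blocking, the core idea behind
# DNS-level filters. A query is blocked if the name or any parent domain
# appears on the blocklist.

BLOCKLIST = {"snapchat.com", "youtube.com"}  # example entries, not a real config

def is_blocked(domain: str, blocklist=BLOCKLIST) -> bool:
    """Return True if `domain` or any parent domain is on the blocklist."""
    labels = domain.lower().rstrip(".").split(".")
    # Check every suffix: "app.snapchat.com" matches "snapchat.com".
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

print(is_blocked("app.snapchat.com"))      # blocked via parent domain
print(is_blocked("news.ycombinator.com"))  # not on the list
```

Because the check happens at the resolver, it applies to every app on the device, which is why forcing the resolver via a configuration profile works across networks.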
pmarreck
why would you give a kid data? (as in cell data, presumably) I guess, to be able to helicopter them from anywhere...
Apple devices would still have parental controls in that case, though, I think?
Cellphone companies should really step up, here.
dang
One past thread: Thank You, Yakkers - https://news.ycombinator.com/item?id=14223199 - April 2017 (108 comments)
Lots of comments: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
btown
Ah, a world where this is taken to an extreme might even bring back the mythical https://en.wikipedia.org/wiki/Third_place rapidly disappearing in the American suburb and city alike... because it becomes the only place in the community where property owners don't geofence to forbid social media use!
https://theweek.com/culture-life/third-places-disappearing
But of course, social media companies will pour incredible amounts of money into political campaigns long before they let anything close to this happen.
rollcat
Technological solutions to societal problems just don't work.
Some $EVIL technology being fashioned to harm individuals isn't to blame - the companies behind that technology are. You can pile up your geofencing rules, the real solution lies somewhere between you deleting the app and your government introducing better regulation.
Swenrekcah
By this logic, technological “progress” cannot cause societal problems?
Which of course it can, so why can't part of the solution be technological?
vacuity
It can be, but I think practically it can't be. Maybe that doesn't fit into a nice logical statement, but there you have it. Or: when you build yourself a constantly-accelerating, never-stopping racecar and get on it, it's hard to build a steering wheel or brake pedal for it. Or or: it's a lot easier to get into a deep hole than to get out of one.
kennywinker
Geofencing around schools is the kind of thing you might see if government attempted to regulate this
ClumsyPilot
Don’t we geofence sale of alcohol and tobacco around schools?
palmotea
> Technological solutions to societal problems just don't work.
Ehhh, that's just a poorly thought out slogan whose "truth" comes from endless repetition. Societal problems can have technical origins or technical enablers. In which case a technical solution might work to make things better.
So no, there's no technical solution to "people being mean to each other," but there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."
rollcat
> Societal problems can have [...] technical enablers.
That was my point.
> [...] there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."
I've never used (or even heard of) YikYak before, but what solution are you suggesting exactly? De-anonymisation? How would you achieve that? Suppose you have a magical^W technological de-anonymising wand, how would that not cut both ways?
So YikYak enabled geofencing, to alleviate the problem they've caused in the first place? But let's suppose they didn't do that.
How could I, as an average parent trying to protect my child, employ such a solution on my own? Could my tech-savvy neighbor help me somehow? Is there a single person outside of YikYak who can build a solution that any parent could use?
kgwxd
Geofencing requires constantly sharing location data.
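For concreteness, the heart of a geofence check is just a distance test against a known position, which is exactly why the location has to be shared. A minimal sketch follows; the coordinates, names, and circular fence shape are my illustrative assumptions (real deployments typically use polygons and evaluate server-side).

```python
# Sketch of a circular geofence test: is the device within `radius_m`
# meters of the fence center?
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(lat, lon, fence_lat, fence_lon, radius_m):
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# A device roughly 100 m from the fence center, with a 500 m radius fence:
print(inside_fence(40.7130, -74.0060, 40.7139, -74.0060, 500))
```

The point stands: to evaluate this even once, something has to know both the fence location and the device's current coordinates.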
neilv
(Since the TikTok post was swapped out with this one, I'll repost my late comment here, since it applies to a lot of companies.)
> As one internal report put it: [...damning effects...]
I recall hearing of related embarrassing internal reports from Facebook.
And, earlier, the internal reports from big tobacco and big oil, showing they knew the harms, but chose to publicly lie instead, for greater profit.
My question is... Why are employees, who presumably have plush jobs they want to keep, still writing reports that management doesn't want to hear?
* Do they not realize when management doesn't want to hear this?
* Does management actually want to hear it, but with overwhelming intent bias? (For example, hearing that it's "compulsive" is good, and the itemized effects of that are only interpreted as emphasizing how valuable a property they own?)
* Do they think the information will be acted upon constructively, not for evil?
* Are they simply trying to be honest researchers, knowing they might get fired or career stalled?
* Is it job security, to make themselves harder to fire?
* Are they setting up CYA paper trail for themselves, for if the scandal becomes public?
* Are they helping their immediate manager to set up CYA paper trails?
kridsdale1
My team at Facebook in the 2010s made many such reports.
We did that work because our mandate was to understand the users and how to serve them.
We did that with full good natured ethical intent.
We turned the findings into project proposals and MVPs.
The ones that were revenue negative were killed by leadership after all that work, repeat cycle.
neilv
Interesting. Any sense whether that system was consciously constructed? (Like, task a group to generate product changes appealing to users, then cherry-pick the ones that are profitable, to get/maintain a profitable, good product.)
Or was it not as conscious, more an accident of following industry conventions for corporate roles, and corporate inefficiency&miscommunication?
kridsdale1
It was extremely scientifically methodical. Everything is designed from UX and other sources of holistic research. Then validated with the most built-out AB test system you can imagine. Only winners are kept.
Meta is doing this thousands of times per month, all the time.
lupusreal
> Why are employees, who presumably have plush jobs they want to keep, still writing reports that management doesn't want to hear?
They hire people on the autism spectrum who are inclined to say things out loud without much regard/respect for whether they are "supposed to" say it. *cough* James Damore.
neilv
I didn't guess that autism was involved in that case, and I'm a little uncomfortable with something that might sound like suggesting that autistic people might be less corporate-compatible.
There are plenty of autistic people who wouldn't say what Damore did, and there are non-autistic people who would.
I also know autistic people who are very highly-valued in key roles, including technical expert roles interfacing directly with customer senior execs in high-profile enterprise deals.
People are individuals, and we tend to end up treating individuals unfairly because of labels and biases, so we should try to correct for that when we can.
lupusreal
On the contrary, autistic people who don't hesitate to speak uncomfortable truths are vital to the health of organizations, and society as a whole. You would all be lost without us.
(Note my indifference to your discomfort with my comment.)
WaitWaitWha
I do want to note a tangential topic on social media harming children and young adults.
In my personal experience, kids and young adults, particularly those who grew up immersed in social media (born after ~1995–2000), seem to struggle with recognizing appropriate, undistorted social cues and understanding the real-world consequences of their actions.
To Snapchat harming kids, I think it is more than just evil people doing "five key clusters of harms".
Even adults often expect the same instant reactions and flexible social dynamics found online, which blinds them to the more permanent, harsher outcomes that exist outside of digital spaces.
Anecdotally, the utter shock that shows on some people's faces when they realize this is sad, and very disconcerting. (At the extreme, think of "pranksters" who get shot or punched in the face and are confused why that happened, when "everyone loves it online".)
How to fix this? The suggested solutions will not solve this problem, as it does not fit the "clusters of harms".
Nevermark
The social media business model is predicated on scaling up obvious and huge conflicts of interest. To scales unfathomable a couple decades ago.
Basic ethics, and more importantly the law, need to catch up.
Surveilling, analyzing, then manipulating people psychologically to mine them for advertisers is just as real a poison as fentanyl.
And when it scales, that means billions of dollars in revenue and actual trillions of dollars in market value unrelentingly demanding growth; playing whack-a-mole with the devastating consequences isn't going to work.
Conflicts of interest are illegal in many forms. Business models incorporating highly scalable conflicts of interest need to be illegal.
We could still have social media in healthier forms. They wouldn’t be “monetizing” viewers, they would be serving customers.
Facebook's army of servers isn't required to run a shared scrapbook. All those servers, and most of Facebook's algorithms and now AI, are there to manipulate people to the maximum extent possible.
rattlesnakedave
This all seems like obvious byproducts of an ephemeral photo based platform. Beyond these, there's also the shitty "explore" feature that pushes sexually explicit content that can't be disabled. Surprised that's not mentioned here.
burningChrome
With both of these articles, are we finally getting to a tipping point with social media and its negative effects on people?
zonkerdonker
People knew smoking killed for decades. Do you think that with no policy change and no regulation, that Marlboro and Philip Morris would have let their market tank?
Advertising - banned, smoking indoors - banned, and most importantly, taxing the hell out of them (every 10% increase in cigarette prices results in a 4% decrease in adult consumption and a 7% decrease in youth consumption).
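To put rough numbers on those elasticity figures: assuming (my assumption, purely for illustration) that the quoted per-10% effects compound multiplicatively across successive price hikes, the cumulative impact can be sketched as:

```python
# Rough arithmetic on the elasticities quoted above: each 10% price increase
# cuts adult consumption ~4% and youth consumption ~7%. Assumed here: the
# effect compounds multiplicatively per 10% step.

def remaining_consumption(steps: int, drop_per_step: float) -> float:
    """Fraction of original consumption left after `steps` 10% price hikes."""
    return (1 - drop_per_step) ** steps

# After three successive 10% price hikes (~33% total price increase):
adult = remaining_consumption(3, 0.04)  # ~0.885, i.e. ~11.5% less smoking
youth = remaining_consumption(3, 0.07)  # ~0.804, i.e. ~19.6% less smoking
print(f"adult: {adult:.3f}, youth: {youth:.3f}")
```

Even under this crude model, the youth effect outpaces the adult one, which is the policy point: price is a disproportionately effective lever on the youngest consumers.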
There isn't really a directly comparable policy for taxing these free social media platforms, however, and the whole thing is a bit stickier. Before any policies can stick, the public needs to be aware of the issues. That is tough when most people's 'awareness of issues' comes directly from social media.
isk517
I think part of it is that social media has now been around long enough that it is becoming possible to study the long term effects on our monkey brains from being constantly exposed to the lives and opinions of millions of strangers on a global level.
fazeirony
for sure. but if ANY of that kind of thing gets in the way of profits, well then that's not OK. in capitalism, profit is the only thing that matters. CSAM? drugs? underage use? pfft.
until this country gets serious about this stuff - and don't hold your breath on that - this is the absolute acceptable norm.
cbruns
Some readers here presumably work at Snap. How do you feel about this and your work? Do you sleep soundly at night?
stickfigure
I don't work for Snap, but they do use some software I wrote, so I guess that's close enough.
I find all of these "social media is bad" articles (for kids or adults) basically boil down to: Let humans communicate freely, some of them will do bad things.
This presents a choice: Monitor everyone Orwell-style, or accept that the medium isn't going to be able to solve the problem. Even though we tolerate a lot more monitoring for kids than adults, I'm still pretty uncomfortable with the idea that technology platforms should be policing everyone's messages.
So I sleep just fine knowing that some kids (and adults) are going to have bad experiences. I send my kid to the playground knowing he could be hurt. I take him skiing. He just got his first motorcycle. We should not strive for a risk-free world, and I think efforts to make it risk-free are toxic.
cbruns
Pouring the resources of a company the size of Snap into addicting as many kids to their app as deeply as possible is not the same as letting them communicate freely. Besides that, I don't know of any parent who would want ephemeral and private communication between their child and a predatory adult. Snap is also doing nothing to shield them from the pedophiles, drug dealers, and arms dealers using the same app as a marketplace.
The damning part is that these companies know the harm they are doing, and choose to lean into it for more $$$.
Thanks for your response. Your open source contributions are perhaps less damned than those of an actual Snap employee ;)
some_random
Are you not willing to even entertain the notion that communication platforms could influence the way their users communicate with each other? That totally ephemeral, private, image-based social media could promote a different type of communication compared to something like, say, HN, which is public and text-based? Sure, you take your kid skiing, but presumably you make them wear a helmet and have them start off on the bunny hill. I agree that a risk-free world is an insane demand that justifies infinite authoritarian power, but there is a line for everyone.
stickfigure
Yes, I make my kid wear a helmet. I make sure his bindings are set properly. I make sure he's dressed warmly. I make sure he's fed and hydrated.
I am the parent. The ski resort provides the mountain, the snow, and the lifts.
He's a bit too young to be interested in taking pictures of his wang but I'd like to think this is a topic I can handle. Teaching him to navigate a dangerous world is sort of my job. I'm not losing sleep over it.
ClumsyPilot
> Let humans communicate freely, some of them will do bad things.
That’s just normal phone calls - no one is complaining about those.
But social networks have algorithms that promote one kind of content over another.
I keep getting recommended YouTube videos of gross and mostly fake pimple removal, AI-generated fake videos on Facebook of random crap like barnacle removal, and Google ads for an automated IoT chicken coop.
I have never searched for these things and no living person has ever suggested such things to me. The algorithm lives its own life and none of it is good.
stickfigure
You have a very different experience than I do! My Youtube algorithm suggestions are wonderful, full of science and engineering and history and food and travel and comedy and all kinds of weird esoteric things that would never have been viable in the broadcast TV I grew up with. I am literally delighted.
Maybe you're starving the algorithm and it's trying random things? Look up how to reset the YT algo, I'm sure it's possible. Then try subscribing/liking a few things that you actually like.
If you're within a standard deviation or two of the typical HNer, look up "Practical Engineering" and like a few of his videos. That should get you started.
braza
I’ve worked there, so maybe my 2 cents: at the end of the day I have mouths to feed. I used to be idealistic about an employer's moral compass and so on, but coming from the bottom in socio-economic terms, I will exercise my right to be cynical about it.
I gave some support to the Trust & Safety team during the whole debate about Section 230, and from what I can tell Snap has quite good flagging mechanisms for people selling firearms, drugs, and especially puberty blockers.
What I can say is that a lot of parents are asleep at the wheel, not following what is going on with their teenagers.
Frieren
Each generation of parents fails on something.
This generation is failing at recognizing the dangers of social media.
Teenagers and even children are being radicalized online, sold dangerous diets, manipulated by state-sponsored creators, lied to by companies, taught anti-science, and the list goes on and on.
How is all this not heavily regulated? Even adults need protection from scammers, fake products, misleading ads, hidden product promotions that look like personal opinions...
We have gone back 100 years when it comes to consumer rights, and children are the ones paying the highest price.
cmrdporcupine
As a parent, I never failed to recognize it.
I just failed to be able to do anything about it.
You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time. Or at least saw that in your friends.
This is a society wide thing. Parents are pretty much powerless.
So yes, regulation. But you'll see how discussion of any proposal for this goes down in this forum. Just imagine across the whole polis.
glitchc
We can always take the phone away. As a parent of a teenager, sometimes I have to make hard choices. This is one of them.
Kids don't need cellphones. We want them to have one often because of our own insecurities.
ceejayoz
> We can always take the phone away.
Kids are… resourceful.
https://www.cbsnews.com/news/teen-goes-viral-for-tweeting-fr...
Last week, a 15-year-old girl named Dorothy looked at the smart fridge in her kitchen and decided to try and talk to it: "I do not know if this is going to tweet I am talking to my fridge what the heck my Mom confiscated all of my electronics again." Sure enough, it worked. The message Dorothy said out loud to her fridge was tweeted out by her Twitter account.
(And before that, she used her DS, her Wii, and a cousin's old iPod. There's always a friend's house, too.)
itishappy
> You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time.
Actually, I remember the opposite. I had problems with screen time so my parents put a password on the computer. It wasn't 100% effective, of course, but it was closer to 90% than 0%.
aylmao
> You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time.
There might be bias here if one remembers one's own teenage years, because I'm sure many teenagers _think_ their parents don't have influence over them. If you ask the parents, though, I'm sure many would agree they aren't fully in control, but they do notice they still have a lot of influence.
Personally, the older I grow, the more I realize how much influence in general my parents actually had over me.
yamazakiwi
I want to add that it is important to show you are against those things as well; too many people react by shifting blame when they stand to gain more by saying, "Yeah, I don't like that either."
radicaldreamer
Regulation of social media probably polls pretty well, I think polls have even found that most high schoolers want to reduce or end their usage of it
cmrdporcupine
Phone use during class time is banned in my kid's high schools.
Makes no difference -- it's completely unenforced by the teachers. They're practically physically adults, teachers don't want to risk the confrontation, etc. And the kids suffer for it.
And my youngest uses no social media but their mind is still eaten by constant phone usage.
More than social media, the problem is the device. The form factor.
The "smartphone" is a malevolent technology.
basisword
Genuinely asking: is it impossible to just enforce a no-phones-until-16+ rule with your kids? The arguments against it I see are either "it's too hard for the parents" or hypothetical ("they would have no social life"). There were tonnes of things I wanted to do as a teenager that my parents prevented me from doing, including things my friends were allowed to do by their less strict parents. There were of course things I did despite them, but phones seem like a simple one for parents to control, given teenagers can't afford them until they start working at 16+. Allowing instant messaging via a computer seems like a nice middle ground.
cardanome
I would have strongly agreed with you if we were talking ten years ago, but with everything using two-factor authentication these days, it's pretty much a requirement to have a phone, even for children doing schoolwork.
There are parental-control systems and all that you could set up, but that requires you to be pretty tech-savvy as a parent. I think you are already doing great if you keep your child away from phones and tablets until they are of school age, but keeping teenagers away from smartphones seems very unrealistic unless you live in a remote commune or something.
I really, really wish it weren't the case.
ceejayoz
Only if you're willing to ban them from ever going to friends' houses, where they'll use their friends' devices to do it.
basisword
>> How is all this not heavily regulated?
It isn’t properly regulated because the CEOs and founders just moan that it isn’t possible to moderate so much user-generated content. I’m of the opinion that, in that case, their sites shouldn’t exist, but people seem to have convinced themselves that Facebook et al. provide too much value to stand up to.
Frieren
> I’m of the opinion that, in that case, their sites shouldn’t exist
I totally agree with this.
If, for example, hydrogen cars exploded all the time, that would not be a reason to leave them unregulated but a reason for a complete ban.
ClumsyPilot
> CEO’s and founders just moan that it isn’t possible to regulate
Surely if a CEO with a billion dollar budget can’t regulate it, neither can a parent?
olyjohn
A parent only needs to regulate their child, not 25 billion daily posts.
BlueTemplar
I'm curious as to when was the generation that failed to recognize the dangers of invitingly large (treasure) chests. :3
(I finished watching the last episode just as you posted this comment, still giddy about it. :D
I tried to spread out watching the season for the first time over more than a week, and failed miserably...)
ThrowawayTestr
What exactly do you want regulated? What powers do you want Trump to have to control the speech of Americans?
peterbecich
There is a statistic that the average teenager gets 240 smartphone notifications a day: https://www.michiganmedicine.org/health-lab/study-average-te...
Young people have more time ahead of them than anyone. Consequently, in my opinion, young people should be receiving information with a long time period of usefulness. Smartphone notifications have a very short half-life.
dang
There was a related thread on the front page: TikTok is harming children at an industrial scale - https://news.ycombinator.com/item?id=43716665
Since that article is several months old and this one is new, we swapped it out. I assume it makes more sense to discuss the new one. Also, there were lots of criticisms of the other article for supposedly focusing only on TikTok, and those criticisms seem supplanted by this piece. (I'm not arguing whether it's right or wrong, nor have I read it.)
pelagicAustral
You can essentially just wildcard the social network name and everything still applies. That's the status quo
graemep
Except FB, which mostly harms the middle aged.
morkalork
It was harming kids on an industrial scale back when it was new, before Instagram et al. cannibalized its audience
WhereIsTheTruth
and countries, at a secret service scale
https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...
burningChrome
The same outlet did the TikTok story:
Following the format of our previous post about the “industrial scale harms” attributed to TikTok, this piece presents dozens of quotations from internal reports, studies, memos, conversations, and public statements in which Snap executives, employees, and consultants acknowledge and discuss the harms that Snapchat causes to many minors who use their platform.
> “Think it would be interesting to investigate how healthy Snapstreak sessions are for users… If I open Snapchat, take a photo of the ceiling to keep my streak going and don’t engage with the rest of the app, is that the type of behavior we want to encourage? Alternatively, if we find that streaks are addictive or a gateway to already deep engagement with other parts of Snapchat, then it would be something positive for “healthy” long term retention and engagement with the product.”
For a second I thought this employee was talking about what's healthy for the user. Certainly not, though; they mean what's healthy for the "user base". I find it very interesting how this sort of language shapes employee behaviour. Using the concept of "health" to mean retention and engagement can crowd out thinking about health from the user's perspective; it's similar terminology, but the goals are very different, sometimes even opposite.