The Candid Naivety of Geeks
87 comments
·March 29, 2025
andrei_says_
The other important fact is that there are multiple entities keeping an ever-growing dossier on you which you cannot see, delete, or correct.
Every step you take, every word you write, every picture or video you look at, every single thing you pay for, every place you go, every person you call/message, or are physically near to. And the times when you left your phone behind. For everyone, combined, analyzed, searchable, with machine-learning-everything thrown at it to find and predict patterns and of course to stalk exes, political activists, anybody.
Imagine what the Stasi could do with all of this, and know that your current administration is doing it.
It’s a bit depressing, even if put to the music of “Every Breath You Take” by, hmmm… the Police.
danaris
> Everything is tracked, everything is logged, it's been this way for a long time, and there's nothing you can do about it. You have zero privacy on the internet and you're an idiot if you think otherwise.
But this nihilistic, all-or-nothing attitude is another kind of naïvety.
There are absolutely still ways for people to keep large chunks of their online presence and activity private. Using E2EE services like Matrix and iMessage, for instance. De-Googling sharply reduces the amount of information Google has about you. Etc, etc.
It may not be accessible—or even understandable—for everyone, but the idea that absolutely everything we say, do, want, and think will be collected and tracked and there's nothing we can do about it is just not true.
disambiguation
I said there's nothing you can do about the logging. Of course you can stop using large parts of the internet (or stop using it altogether), but that kind of defeats the purpose if there's no good alternative, regardless of accessibility. The point is that perfect privacy on the web is impossible, so why should OP worry about Amazon storing voice data when tons of it is already logged in Google Meet, Microsoft Teams, Zoom, Webex, Discord, etc.?
Lio
I’ve always been puzzled by the proposition of E2EE when you have no control of the ends.
If Meta wants to read your WhatsApp messages they’d just do it on your device. How would you know?
like_any_other
Plus, privacy leaks are plugged one at a time - we'll never get to private computing if we immediately give up because there's more than one thing spying on us, so fixing any one thing won't solve everything.
And it's a lot harder to spy en masse if for each act of spying you risk exposing your chip-level backdoor, instead of just asking Facebook for the data.
1vuio0pswjnm7
"You have zero privacy in the internet and you're an idiot if you think otherwise."
This type of statement usually comes from the "schizo" who is "living on the internet", not the "normie" who has a life away from the internet and only uses it occasionally. It is a common "all-or-nothing" perspective that has been shared on HN for at least a decade. Meanwhile so-called "tech" companies spend millions on lobbying against privacy regulation and pay millions in fines and settlements for violations.
Perhaps whether one has "privacy" is not as important as whether one believes they might be able to get it. As long as the possibility of "getting it" exists in people's minds, and people take action toward that end, then so-called "tech" companies face a potentially existential threat. It interferes with the progression of their only "business model". The "all-or-nothing" view of "privacy" seen in HN comments is particularly suspect when one considers that those invested in so-called "tech" companies have a financial interest in erasing the _possibility_ of "privacy", i.e., the motivation to take action, however small and seemingly insignificant, from people's minds.
There are ways to use the internet that send minimal useful data to so-called "tech" companies and there are ways to use the internet that send maximum amounts of data to so-called "tech" companies. Neither is "internet privacy" in the absolute sense. But each has a different effect on the "business model" of so-called "tech" companies.
More importantly, less use of the internet may result in less data being shared with so-called "tech" companies. Good luck getting the "schizo" to reduce their internet use. It is not surprising that the "schizo" would suggest an absolutist standard of "internet privacy" under which achieving it is impossible. For the "schizo" who is wedded to the computer, this is probably true.
margalabargala
Despite your assertion that thinking one lacks privacy online is indicative of a specific hallucinatory mental illness, it may surprise you to learn that plenty of people without that mental illness think similarly.
It's less a function of mental illness or time spent online, and more simply pointing out the old-as-writing state of affairs where companies will do whatever they can do to you to make more money. See the "Complaint tablet to Ea-nāṣir" for a timeline of this.
It is not hallucinatory to observe that companies take advantage of people given the opportunity to do so, and it is not indicative of mental illness to be unhappy about being on the receiving end.
1vuio0pswjnm7
NB. The term "schizo" and "normie" are the parent commenter's choice of words, not mine. Hence each is in quotes. No reference to actual, medically-diagnosed mental illness is made; I presumed the parent commenter's terms are simply another attempt to divide internet users into groups. This "us" and "them" perspective, e.g., "technical" and "non-technical", "schizo" and "normie", etc., as silly as it is, pervades a majority of HN comments.
disambiguation
Au contraire, the vast majority of normies are married to the internet via TikTok and Instagram; the difference is that the schizo knows it. The divisive hyperbole is my own dramatic flair, born of my own frustration with the issue, not a conspiracy to divide.
boppo1
> NSA dragnet

Can you tell me more? What, specifically, are they collecting? The porn I've looked at on /s/?

> always-on backdoors embedded in consumer routers and CPUs

Are we talking about IME? That's more of a "theoretically it could execute code, but they'd need a crazy amount of software engineering to really use it to monitor you" situation. Besides, you can MITM your own network traffic and see if it's phoning home. I'm sure people do, and I've read of no cases of it actually being used that way.
Not that I'm okay with this or happy about it. It's just less dire than it could be since, although the infra is there, it's not being used.
disambiguation
Yeah, I'm memeing; I personally don't care, but I think "privacy enthusiasm" is a cope. Even Signal I wouldn't trust, which is OP's whole sales pitch. Exchanging an OTP in person is probably your best bet, but I personally have no use case. So yeah, hardware telemetry might be an extreme example, but it's certainly happening at every other layer.
keybored
Not everyone is naive at some layer; that would imply sticking your neck out and proposing solutions. There are also the fatalists. They are immune to being wrong because they think it is hopeless. That means never being able to be called an idiot, a very important protection for some.[1] They also literally don't care, so there's that too.
[1] There’s so much surveillance that it is very easy to fall into the trap of being a naive idiot. Not to worry. We can be fatalist and not have to worry about that charge.
poincaredisk
GP basically called himself a schizo, so they know, at least subconsciously, that they went too far down that rabbit hole. Even though they just called us idiots.
My particular hot take, speaking partially from experience, is that indeed, there are insurmountable amounts of data collected - but it's collected by hundreds of disconnected, inhomogeneous and incompetent organizations with conflicting goals. There's no global all-knowing conspiracy; even governmental organizations in a single country have poor data-sharing capabilities. And that should be the trivial case - it gets much harder when you take into account commercial organizations that store user data but don't make it their main focus, commercial organizations that want to sell user data, but not for free, commercial organizations that really want to pretend they care about user privacy, foreign organizations, foreign organizations from hostile countries, and the overreaching bureaucracy involved in getting data from basically any of those.
disambiguation
Not a hot take at all; this is pretty realistic. But notice that at no point is pure privacy a serious option; instead we count on hopium. Sure, they DO have a lot of our data and they COULD be up to no good, but they won't, because... reasons. I also never said anything about a conspiracy, just simple facts, like how VPNs and social media sites are legally required to keep usage logs in case of illegal activity.
cavisne
Google has been around since 1998, with people putting every dark secret and worry into search queries. Not once have someone's searches been leaked by employees, despite highly political people working at these companies with a lot to gain by leaking to a journalist or similar.
Likewise with private messages on Facebook, order history on Amazon.
Big Tech has way more to lose than the small "privacy focused" alternatives, and for them to go this long with this many employees is clearly through design, not luck.
thakoppno
> In the case of one 15-year-old boy Barksdale met through a technology group in Seattle, Washington, he allegedly tapped into the boy's Google Voice call logs after the boy refused to tell him the name of his new girlfriend. Barksdale then reportedly taunted the boy with threats to call the girl.
This doesn’t quite refute your assertion that no searches have been leaked, but it is just as egregious. You are right about incentives, too.
afc
The gcreep incident was a huge wake-up call for Google when it happened back in ~2010. I can tell you that Google took very drastic action internally as a direct result (my estimate is that it spent about 100 engineer-years on this, including some very senior SWEs, though that's just a guess) to make it very unlikely for this (an SRE abusing production access) to ever happen again. I'd still say that one incident in 20+ years is a huge success given the scope.
thakoppno
> Google revealed to TechCrunch that a second Google employee had also been caught violating user privacy and was also dismissed. Google didn't reveal any details about what or when this occurred, other than to say the other case didn't involve minors.
I largely agree that the efforts have been successful, but there's more than one incident. I'll even speculate wildly, without any evidence, that there have been more than two.
philo_sophia
Upvote. Employees are the actual enforcement mechanism of tech ethics. See the Google Project Maven pushback.
prophesi
It's a moral race to the bottom when the salary's good and competition is fierce. It's great that Google pushed back against Project Maven, but nothing stopped the other 21+ companies from participating.
9dev
Oh wow. Do you really want to die on that hill? Big tech is responsible for a HUGE MOUNTAIN of absolutely hideous shit, destroyed lives and dead people—and you’re telling me that the engineers with a freely adjustable conscience who built this machinery uphold some sort of ethics? The same guys that created algorithms optimised to glue children to screens for most of the day, driving young people into depression and suicide, and allowing the emergence of "alternative facts"?
And yes, Amazon employee #18447272. You are just as responsible for empowering Bezos to fuck over the WaPo as the rest of them. Employees of big tech corporations are the silent accomplices of the tech oligarchy we’re headed into.
zitterbewegung
There is a lot of discussion here about the motivations of companies and what they will share or keep private in relation to their objectives and making a profit. But I think a better question is: what are their actual motivations to build privacy into these systems in the first place?
Apple and Amazon will, at minimum, compromise your privacy to improve their products. And since Siri and Alexa are loss leaders, they make no more or less money either way, so they have no extra incentive to consider privacy.
Comparing Signal to Protonmail is a much more interesting problem, and you can look at what has been subpoenaed from Signal and Protonmail. Since one subpoena was actually disclosed, we can see the information (or really, the lack of it) that was handed over by Signal [1]. We have a statement by Proton Mail on what can be subpoenaed [2], but there have been arguments against it [3].
[1] https://signal.org/bigbrother/cd-california-grand-jury/ [2] https://proton.me/legal/law-enforcement [3] https://protos.com/protonmail-hands-info-to-government-but-s...
troupo
Apple used to have privacy as the differentiating and profit-driving factor.
This will immediately get thrown out of the window when it hurts profit (and may already have been thrown out of the window; see the OpenAI partnership).
On top of that, at this point all they have to do is be ever so slightly better than the rest when it comes to privacy. The bar is so low as to be non-existent.
kbenson
> Apple used to have privacy as the differentiating and profit-driving factor.
> This will immediately get thrown out of the window when it hurts profit
This is the important thing I'm always trying to note to people that think incentives are enough (as I used to). You can never know what the incentives of the company will be 5, 10, 15 years from now, or whether that company or division will exist or have been sold to some other company.
Incentives based on current conditions only matter for outcomes that don't have ramifications far into the future. That's definitely not data collection and privacy, where you could find that 10 years' worth of collected information about you has been sold at some future date.
And lest anyone think they can predict the stance a company will have on a topic a decade or two later, all I can say is that for any example someone can point to of a company that has stayed the course, we can easily look at a point in history where a series of events could have gone the other way and they would have been close to being bought out, if not defunct. Even Apple had a period where they were bailed out by investment from Microsoft, and many other large names of that period were gobbled up.
Always keep in mind, Sun was an amazing company with amazing products and engineers that embraced open source and competed with Microsoft in the enterprise market, and eventually after declining they got bought by Oracle.
int_19h
OpenAI integration in current Apple products needs to be enabled to begin with, and then it still prompts you before sending anything to OpenAI servers, so I'd say it's in line with their practices so far.
The reason why I trust Apple a little bit more than, say, Google on something like this is that Apple is pitching their products as luxury goods - a way to be different from the hoi polloi - so they need features to plausibly differentiate along these lines. And privacy is one thing that they know Google will never be able to offer, given its business model, so it makes perfect sense for Apple to double down on that.
(Ironically, this means that Apple users benefit from Android being the dominant platform.)
9dev
That is kind of proving OPs point, though: the differentiating factor isn’t actual privacy, it’s an impression of privacy that you’re sold; the warm, fuzzy feeling that you’re using a superior product because you’re special and this phone is for special people that have important data that needs to be protected, and as a manufacturer of special people devices, Apple obviously takes care of this—because you’re important, duh!
If they can get away with appearing to care about privacy instead of actually doing so, they will. That’s all it takes to look better than Google.
kleton
> The quokka, like the rationalist, is a creature marked by profound innocence. The quokka can't imagine you might eat it, and the rationalist can't imagine you might deceive him. As long as they stay on their islands, they survive, but both species have problems if a human shows up.
> In theory, rationalists like game theory, in practice, they need to adjust their priors. Real-life exchanges can be modeled as a prisoner's dilemma. In the classic version, the prisoners can't communicate, so they have to guess whether the other player will defect or cooperate.
> The game changes when we realize that life is not a single dilemma, but a series of them, and that we can remember the behavior of other agents. Now we need to cooperate, and the best strategy is "tit for two tats", wherein we cooperate until our opponent defects twice.
> The problem is, this is where rationalists hit a mental stop sign. Because in the real world, there is one more strategy that the game doesn't model: lying. See, the real best strategy is "be good at lying so that you always convince your opponent to cooperate, then defect"
> And rationalists, bless their hearts, are REALLY easy to lie to. It's not like taking candy from a baby; babies actually try to hang onto their candy. The rationalists just limply let go and mutter, "I notice I am confused". This is also why they are poly.
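The exploit the quote describes can be put in concrete terms with a toy simulation: "tit for two tats" only retaliates after two consecutive defections, so a player who simply alternates defect/cooperate is never punished and wins handily. (A minimal sketch using the standard Axelrod payoff values; the strategy names are illustrative, not from any library.)

```python
# Iterated prisoner's dilemma. PAYOFF[(my_move, their_move)] -> my score:
# mutual cooperation 3, mutual defection 1, successful defection 5, sucker 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_two_tats(opp_history):
    # Cooperate unless the opponent defected in BOTH of the last two rounds.
    return "D" if opp_history[-2:] == ["D", "D"] else "C"

def alternating_exploiter(opp_history):
    # Defect every other round: never two defections in a row,
    # so tit-for-two-tats never retaliates.
    return "D" if len(opp_history) % 2 == 0 else "C"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(alternating_exploiter, tit_for_two_tats))  # (400, 150)
```

Over 100 rounds the exploiter collects 50 successful defections and 50 mutual cooperations (400 points) while tit-for-two-tats never defects once and ends with 150.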
namaria
Repeated games are studied in game theory, and the winning strategy is to "trust, retaliate, and trust again". Logic isn't limited to first-order logic.
verisimi
12+ years of instilling faith in schooling and science does work. So when people realise that this is a corporate world, that money directs everything, even what they have been trained to think, it comes as a bit of a shock. But mostly people carry on anyway, out of habit.
skybrian
Having better priors can be useful, but they're no substitute for evidence. These assertions about how naive you supposedly should have been about some company are less useful than sharing evidence about what the company actually did.
ConspiracyFact
I came here to say this. Does the author not think that dozens of these “geeks” have enough technical acumen to figure out what information their Echo is sending to Amazon’s servers? It would have been a noisy scandal.
poincaredisk
I agree so far, but:
> But what is happening in your inbox, really?

> Most spam is not "black hat spam". It is what I call "white-collar spam": a perfectly legitimate company, sending you emails from a legitimate address. You slept in a hotel during a business trip?

This is pure survivorship bias. It is true for your Gmail account because all the "black hat spam" has already been filtered! I own two unfiltered email accounts that were sadly scraped from the internet, and their spam is, by far, almost entirely malspam, romance scams, cryptocurrency spam, scam attempts, spoofs, and phishing.
buyucu
I have a protonmail account with close to 0 spam. In fact, I get more spam on my legacy gmail account.
Email spam is not the huge problem people make it out to be. Common sense goes a long way.
saagarjha
There’s actually a meta-naivety in geeks where they write stuff like this and then get on the soapbox to shill some random project that is somehow better, claiming we can actually trust this random service for reasons that fall apart if you apply the exact same scrutiny to it. Really, they’re just smug about being smarter.
Let’s take the two advertisements that the author has. I call them advertisements because, despite being sure neither Signal nor Protonmail paid this guy, he fell into the obvious trap of “xyz sucks, here’s what to use instead”.
Amazon is bad, Apple is bad, Kagi is bad because they all take your money. But Protonmail is good because they…take your money? They take your money and if they did something bad you wouldn’t pay them. Ok? And this obviously has happened to all the secure apps that people continue to use despite them being hacked, or the companies that rebrand after it’s found out they were leaking your information? If Protonmail was found out tomorrow to be a front for the CIA, who suffers? What’s stopping the people running it from just making Electronmail tomorrow and claiming that they aren’t a front for the CIA?
Meanwhile Signal is an open source project and that means everyone has reviewed the code and trusts it. What happened to giving companies money so that they suffer when they violate your privacy? If Signal makes it so that you don’t want to use it, what harm do they suffer? If they add new code that backdoors the app or sells ads or harvests your contacts, what are you going to do about it? Will you publish a blog post explaining how you were the geek who got conned this time?
I actually have nothing against these specific projects, just as I generally don’t have specific vendettas against the other dozen things these blog posts tend to shill (DuckDuckGo, Brave, Qubes, GrapheneOS, Firefox, whatever). My point is that the geek is perpetually vulnerable to thinking his choices are good because of some technical reasons, when in reality we choose what we associate with based on trust and human factors. You probably choose your software because your buddy from IRC told you it is good. A lot of people choose their software because they saw an advertisement showing that this company actually cares about their privacy. Neither of you is dumber than the other, and making people feel bad for not keeping up with the evolving landscape of privacy is generally not productive.
luqtas
so what's your point? A rant about the rant?
if Proton or whoever betrays their mission, people will move on to the next company they can 'humanely trust'... choosing the best option among all the bad options is still progress
robwwilliams
The guy is on a good soapbox, but heavy on assertions and high on emotion. Those damn trade-offs!
itsanaccount
Except the trade-offs only go one way. It's a ratchet: you never get more free, open systems. Over time your life and the lives of your children only get more bound. It's frog boiling.
You're displaying the exact naivety he's trying to point out.
robwwilliams
Maybe, but these types of complaints have a very long history. Photographs were once called an invasion of privacy, as were many other forms of technology. We are embedded in complex societies. The idea of being a sacrosanct private silo is a Western mode of thinking, and not an old one either.
delusional
The "naivety" of geeks has less to do with "trusting the marketing" and more to do with having to navigate a society increasingly indifferent to the issues being brought up.
A decade ago I ranted about Facebook to my technical friends. They all agreed that it was a terrible privacy nightmare, that eventually it would start selling that data to generate a profit, and that we really ought to use something else, but in the end I had no alternative. As one of them said, "If you don't have anything to hide, you have nothing to fear." I was ready with the counter, but before I could even get to the counterpoint he retorted, "Yeah, that's obviously not true, but it is the argument." At the time I didn't understand, but now I do. Fighting against these systems is meaningless for the individual. I can't stop Facebook from gobbling up all my data any more than I can dictate that the petrol in my car must be ethically sourced from Sweden.
You can't distrust your way out of Google, Amazon, and Apple storing your voice.
It was a lot easier to be a counterculture rebel when what you were counter cultural about was the driver for the printer at your research institution. When I want to pay my taxes (which I can do electronically, imagine that) I need my phone and browser and weird authentication app to work. I need them to be the ones that everybody else uses, because if I'm using some niche application, nobody is going to help me when it breaks. When an important email doesn't arrive in my mailbox, the sender isn't going to be understanding that I want my mail on protonmail that for some reason has a technical problem that day. He's going to ask me why I'm being difficult.
kortilla
The cognitive dissonance in the article when it comes to Signal and Protonmail is the same thing the author derides in people who trust Apple et al.
Apple has far more to lose monetarily than Protonmail if it comes out that Apple sells off iMessage contents or similar.
I agree with the ideal of the article and the plight, but the shilling of Signal and Protonmail absolutely destroys the message, because it goes right back to whom you decide to trust to run a closed-source service for you.
A corporation betraying a relationship with a customer is not a magic property of a corporation. It can happen just as easily with non-profits, coops, and any other org structure.
They are all groups of people in the end who you don’t know and fundamentally cannot trust to be acting as an agent of your interests.
100% we need more of Stallman or someone pushing actual open source.
Signal and Protonmail are not that. They are just other SaaS providers that you have to trust the marketing of.
rini17
Even if you self-host on your own domain using libre software, you have to trust the registrar and the certificate authorities. Thinking you can cut out all trusted third parties is naive, too.
kortilla
No, you don’t. The CAs are only necessary if you want random public devices to be able to validate your domain.
And you don’t have to trust the registrar, because of what I just said: you don’t need to depend on PKI.
philo_sophia
As a tech employee who has worked on software privacy controls for consumer devices at Amazon, I have a couple of thoughts. First, let me clarify that I am still highly skeptical of any tech company's privacy promises. That said, the privacy control I worked on for one of Amazon's devices was a PITA: it was a hardware switch that completely powered down all sensors, and modifying code related to it required extensive testing to preserve customer privacy. Amazon at least emphasizes earning and retaining customer trust to its employees. The real reason I actually semi-trust tech companies' privacy policies is the ethics of individual employees. Maybe I'm projecting my disgust at privacy infringements onto my coworkers, but I generally believe these large corps can't hire enough devs to build privacy-compromising systems without at least one person whistleblowing.
My $.02
saagarjha
Sure they can. Just build something that allows for abuse, set up controls that prevent individual employees from abusing it, then have an exec secretly do whatever they want without your involvement.
egometry
The ethics of individual employees only lasts until the next firing, unfortunately.
ValentineC
> But most importantly, Signal's sole existence is to protect the privacy of its users. It’s not even a corporation and, yes, this is important.
"Not a corporation" means little if there's no transparency to how the nonprofit's board members are appointed or elected.
See the controversy that is the WordPress Foundation, which is also a 501(c)(3): https://www.pluginvulnerabilities.com/2024/09/24/who-is-on-t...
I guess geeks should be sceptical of legal structures that might get passed off as "feel good" marketing too. :)
disambiguation
Naivety has layers. You either die a normie who doesn't care about privacy, or you live long enough to become a schizo who knows how bad things really are. Try bringing up the NSA dragnet, or the always-on backdoors embedded in consumer routers and CPUs, at the dinner table.
Everything is tracked, everything is logged, it's been this way for a long time, and there's nothing you can do about it. You have zero privacy on the internet and you're an idiot if you think otherwise.