
Using computers more freely and safely (2023)

layer8

Original discussion (67 comments) https://news.ycombinator.com/item?id=36113115

rpdillon

This approach has been my central philosophy for years, and is why I dislike central app stores so much: they put significant downward pressure on hobby coders releasing their work, which leaves the store with a bunch of primarily commercial software, which aligns incentives around extracting data or money from users. F-Droid is a great counter-example that highlights this.

I think my decisions have panned out pretty well, in a /r/stallmanwasright sort of way. I caught a lot of side glances for using Linux back in the late 90s and early 00s (I wanted to load it onto some computers on LHD-4 during my tour aboard, but the command said it was a "hacker" operating system), but these days, looking at what Apple and Microsoft are doing, I'm thrilled to be using System76 machines for me and my kids.

Emacs has been a consistent friend over the years, and I still go back to it for anything text-centric. It's made the transition to the LLM-era quite gracefully. Tiddlywiki has also been a reliable source of value over the years.

I tend not to install apps for sites on my phones. Apps offer less control than a browser where I can add uBlock and just visit the site. Not always (I use the Amazon app, for example), but mostly.

In general, I've cultivated an attitude of reverse-entitlement: sometimes I really want things, but I have to stay real with myself that I don't need them. Some examples that folks will probably argue with, but are good illustrations of the idea:

I'm a huge fan of VR, and have had amazing times in Beat Saber and a few other games. I bought Quest and Quest 2, but when Meta locked me out due to a SNAFU with the Oculus/FB account mess up, and I was unable to file a ticket to get the account unlocked (because I couldn't log in), I lost $1000 in hardware and a couple thousand in VR software, but I just walked away. I realize the relationship was abusive, and that I didn't need Meta in my life. That was 2 years ago, and I still miss Beat Saber, but it was a good decision.

I had a LinkedIn account, and gave my name and email when I signed up. When my phone fried and I didn't have backup MFA, they demanded my state-issued ID to let me back in (rather than, say, verifying by email). I don't trust MS with my ID - they said they would delete it, but I didn't believe them (prior data breaches at ID vendors motivated me). But more importantly, it was an escalation: they didn't verify my identity when I signed up. So they should be trying to confirm that I'm the person who signed up. But they instead wanted me to verify I'm rpdillon, which is moving the goalposts. They're doing it as a transparent data grab. So I walked away. That was a few years ago; turns out I don't need LinkedIn!

There are probably dozens of examples like this, but I'll stop here, since this is already too long.

My core point here is: it turns out I don't need most of the stuff these companies offer, and they do seem to be getting increasingly abusive. I read about the WebRTC backdoor in Meta's apps last night, but I quit Facebook in 2009, because the writing was on the wall. I think the article offers a good perspective. This is quite at odds with opinions I read here all the time ("Libreoffice is a useless replacement for Excel", "It's literally impossible to program unless I have my liquid retina display", "Unless I'm rendering at 144Hz, it's like a slideshow", etc.), so it might be a _highly_ individual thing, but I thought it was worth mentioning, since it might be a fun discussion about how folks think of these tradeoffs.

fifticon

Interestingly, for apps you mention '..they offer less control..'

Maybe for the user, but for the corporation, they offer more control..

tempodox

> they do seem to be getting increasingly abusive.

I can positively confirm. Cory Doctorow's lecture on enshittification describes it perfectly:

https://doctorow.medium.com/my-mcluhan-lecture-on-enshittifi...

GMoromisato

I'm sure this article resonates with many people; it doesn't resonate with me.

I get value out of (and even enjoy) lots of software, commercial and otherwise (except for Microsoft Teams--that's an abomination).

Ultimately, everything (not just software) is a trade-off. It has benefits and hazards. As long as the benefits outweigh the hazards, I use it. [The one frustration is, of course, when an employer forces a negative-value trade-off on you--that sucks.]

I'm suspicious of articles that talk about drawbacks in isolation, without weighing the benefits: "vaccines have side-effects", "police arrest the wrong people", "electric cars harm the environment".

Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs. The future everyone wants is, I think, one where users can ask the computer to do anything, and the computer immediately complies. Will that bring about a software paradise free from the buggy, one-size-fits-none, extractive software of today? I don't know. I guess we'll see.

We live in interesting times.

maegul

> Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs.

Not sure exactly what irony you mean here, but I’ll bite on the anti-LLM bait …

Surely it matters where the LLM sits against these values, no? Even if you’ve got your own program from the LLM that’s yours, so long as you may need alterations, maintenance, debugging or even understanding its nuances, the nature of the originating LLM, as a program, matters too … right?

And in that sense, are we at all likely to get to a place where LLMs aren’t simply the new mega-platforms (while we await the year of the local-only/open-weights AI)?

GMoromisato

> Surely it matters where the LLM sits against these values, no?

Yes, I agree, but it's all trade-offs. The core problem is this:

1. Software is very expensive to write

2. So, you need to sell to as many people as possible

3. So, you need to add lots of features to attract as many people as possible

4. And you need to monetize it with ads, data-selling, and SaaS subscriptions.

5. But that makes software complicated, brittle, and frustrating.

LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.

But aren't we trading one master for another? Instead of bowing down to Microsoft/Meta/Google, we bow down to OpenAI/Anthropic/Meta/Google? Maybe, but when an LLM writes code for you, you own the code. The code runs outside of the LLM (usually) on an open platform.

But what if you have to modify the code? Then you ask an LLM (maybe not the original LLM) to modify the code. That's far easier than asking Google to modify Gmail.

If you believe in the suggestions of the author, then I don't think there is a better answer than LLMs. We don't live in a world where everyone can solve their software problems by forking some code, much less modifying it themselves.

And the reason I think it's ironic is because I suspect the author hates LLMs.

akkartik

> 1. Software is very expensive to write

I disagree with this, right at the start. I think software is cheap to write but expensive to maintain when you try to sell to as many people as possible. It's the OpEx that kills you, not the CapEx. I go into this more in the current state of https://akkartik.name/about

So I wrote OP to encourage more exploration of the alternative path. If you build something and don't keep adding features in a futile attempt at land-grabbing "users" -- users who will mostly fail to pay you back for the over-investment that the current VC-based milieu makes you think is the only way to feel a sense of meaning from creating software -- and if you build on a substrate that's similarly not adding features and putting you on a perpetual treadmill of autoupdates, then software can be much less expensive.

I plan to just put small durable things out into the world, and to take a small measure of satisfaction in their accumulation over the course of my life. The ideal is a rock: it just sits inert until you pick it up, and it remains true to its nature when you do pick it up.

> LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.

That's the critical question, isn't it? Will LLMs yield custom software that does exactly what you need and stabilizes? Or will they addict people to endlessly tweaking their output so AI companies can juice their revenue streams?

What skills does it take to nudge an LLM to create something durable for you? How much do people need to know, what skills do they need to develop? I don't know, but I feel certain that we will need new skills most people don't currently have.

Another way to rephrase the critical question: do you trust the real AIs here, the tech companies selling LLMs to you? Will the LLMs they peddle continue to work in 10 years' time as well as they do today? If they enshittify, will you be prepared? Me, I'm deeply cynical about these companies even as LLMs themselves feel like a radical advance. I hope the world will not suffer from the value capture of AI companies the way it has suffered from the value capture of internet companies.

bevr1337

> As long as the benefits outweigh the hazards, I use it.

You and the author may be in agreement but with differing risk tolerance.

> Ironically, the best answer to many of the article's suggestions... is to write your own software with LLMs.

I don't think it's ironic but I do think it's false. How do LLMs satisfy a single requirement from the author's punchline list?

nothrabannosir

LLMs don’t, but the programs you write for yourself using LLMs do.

GMoromisato

> You and the author may be in agreement but with differing risk tolerance.

I agree. And I'm not saying the author is wrong--they have their preferences. I'm just saying that for me the benefits outweigh the risks, and I'm betting most people are like me (at least outside HN).

> I don't think it's ironic but I do think it's false. How do LLMs satisfy a single requirement from the author's punchline list?

The main suggestion from the author is to write your own custom software tuned to your needs instead of relying on a mass-market, one-size-fits-all piece of complex, expensive software that has to be monetized by dark patterns.

I guess in a world where everyone knows how to program, and has the time and desire to do so, that would work. But in the real world, the only way to get the bulk of humanity to write their own software is with LLMs.

I think it's ironic because I bet the author does not like LLMs.

akkartik

Don't bet too much, because I'm still undecided on LLMs.

Interestingly, I have no memory of ever thinking about LLMs as I wrote this. Part of it is I slaved over this talk a lot more than my usual blog posts, for about six months, after starting the initial draft in Dec 2022 (https://akkartik.name/post/roundup22). ChatGPT came out in Nov 2022. So I was following (and starting to get annoyed by) the AI conversations in parallel with working on this talk, but perhaps they felt like separate threads in my head and I hadn't yet noticed that they can impact one another.

These days I've noticed the connections, and I feel the pressure to try to rationalize one in terms of the other. But I still don't feel confident enough to do so. And my training has always emphasized living with ambiguity until one comes up with a satisfying resolution.

It took us 200 years from the discovery of telescopes[1] to attain some measure of closure on all the questions they raised. There's no reason to think the discovery of LLMs will take any less time to work through. They'll still be remembered and debated in a hundred years. Your hot takes or mine about LLMs will be long forgotten. In the meantime, it seems a good idea for me to focus on what I seem uniquely qualified to talk about.

[1] https://web.archive.org/web/20140310031503/http://tofspot.bl... is a fantastic resource.

akkartik

> I'm betting most people are like me (at least outside HN).

Oh yes. If you want to be like most people, you should stay right where you are.

The question I'm interested in is: where should most people be? Are they where they should be? Is the current world the best we can do?

I have no illusions about converting a large following. I just have different priorities than you, it seems.

safety1st

The fundamental tradeoff a lot of consumer software seems to be based on these days is that it offers you stuff you want (i.e. it's feature rich) in exchange for stuff you don't want (i.e. it steals your data and shows you ads). Whereas a FOSS author is a lot more likely to take the "do one thing well" approach.

What I settled on was an approach where I try to minimize the use of commercial software in my personal life, but in my business, if we need what the commercial software does, we'll just license it and get on with things. For the most part in my life I don't really NEED some feature or another; it might be nice to have, but with any type of commercial software or service there's always going to be the risk that they'll push some update that shoves ads down my throat or introduces microtransactions or something, so I'm OK to just go without and use the FOSS alternative.

In business though, we'll be at a competitive disadvantage if I force everyone to use only FOSS. There are many times where I've looked at the open source equivalent of some big SaaS and it was just going to be more work to set up and maintain a less featureful open source equivalent. So, I'm more inclined to do a deal with the devil because at the end of the day our time and resources need to be focused elsewhere.

gsf_emergency_2

That's because (on paper) B2Bs get much more out of their clients (sorry!)

gsf_emergency

>I get value out of

That's the rub. Does it benefit the developers much more than the users, in aggregate?

It's not an easy question to answer, even for vaccines..

But I'd wager it's a no, because, e.g. Pfizer(plus or minus BioNTech) probably could not have learnt enough* from their deployment..

*I.e. gain nonfungible knowhow

jay_kyburz

Upvote for Love and Lua!