Philosophy Eats AI
42 comments
January 19, 2025
scoofy
I have multiple degrees in philosophy and I have no idea what this article is even trying to say.
If anyone has access to the full article, I’m interested, but it sounds like a lot of buzzwords and not a ton of substance.
The framing of AI through a philosophical lens is obviously interesting, but a lot of the problems addressed in the intro are pretty much irrelevant to the AI-ness of the information.
moffers
I was about to be very excited that my bachelor's in Philosophy might become relevant on its face for once in my life! But I'm not sure that flexing it professionally is going to put me at the top of any neat AI projects.
But wouldn’t that be great?
scoofy
Philosophy will help you in ways that don't directly get you paid. Ultimately philosophy is the study of how to think.
The number of arguments I've had about "AI" with friends has me facepalming regularly. Understanding why LLMs don't equate to "intelligence" is a direct result of that training. Still, admitting that AGI might actually be an algorithm we haven't figured out yet is also a direct result of that training.
Most deep philosophical issues come from axiom consensus (and the lack thereof), the reflexive nature between deductive and inductive reasoning, and conceptions of Knowledge and Truth itself.
It's pretty rare that these are pragmatic problems, but occasionally they are relevant.
rvense
I once started a new job and was asked to write "a little bit" about myself for a slide for the first company meeting. There were a couple of these because we were a bunch of new people, and my little bit was in a font about half the size of all the others, because I have a humanities degree, so I can and will write something when you ask me to.
readyplayernull
The article is about mapping Philosophy into AI project management.
> Philosophical perspectives on what AI models should achieve (teleology), what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation. Without thoughtful and rigorous cultivation of philosophical insight, organizations will fail to reap superior returns and competitive advantage from their generative and predictive AI investments.
rvense
Doesn't that hold for all other applications of software and really technology? Without further context that just seems to be saying you have to, like, think about what the AI is doing and how you're applying it?
Terr_
> what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation.
As a skeptic with only a few drums to beat, my quasi-philosophical complaint about LLMs: we have a rampant problem where humans confuse a character they perceive out of a text-document with a real-world author.
In all these hyped-products, you are actually being given the "and then Mr. Robot said" lines from a kind of theater-script. This document grows as your contribution is inserted as "Mr. User says", plus whatever the LLM author calculates "fits next."
So all these excited articles about how SomethingAI has learned deceit or self-interest? Nah, they're really probing how well it assembles text (learned from text we make) where we humans can perceive a fictional character that exhibits those qualities. That can include qualities we absolutely know the real-world LLM does not have.
It's extremely impressive compared to where we used to be, but not the same.
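To make the framing concrete, here's a toy sketch of what I mean (the complete() function below is just a stand-in for the model; real chat products wrap this in APIs, but the transcript grows the same way):

    def complete(document):
        # Stand-in for the model: a real LLM would continue the document with
        # whatever "fits next"; here it just returns a canned line.
        return "Happy to help with that."

    script = "Mr. Robot is a helpful assistant.\n"

    def turn(user_line):
        # The user's contribution is inserted into the growing script, and the
        # "author" is asked to write Mr. Robot's next line.
        global script
        script += "Mr. User says: " + user_line + "\nMr. Robot says: "
        reply = complete(script)
        script += reply + "\n"
        return reply

    print(turn("Are you conscious?"))  # a line written for a character, not a report from one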
anxoo
> In all these hyped-products, you are actually being given the "and then Mr. Robot said" lines from a kind of theater-script. This document grows as your contribution is inserted as "Mr. User says", plus whatever the LLM author calculates "fits next."
and we are creating such a document now, where "Terr_" plays a fictional character who is skeptical of LLM hype, and "anxoo" roleplays a character who is concerned about the level of AI capabilities.
you protest, "no, i'm a real person with real thoughts! the character is me! the AI 'character' is a fiction created by an ungodly pile of data and linear algebra!" and i reply, "you are a fiction created by an ungodly mass of neuron activations and hormones and neurotransmitters".
i agree that we cannot know what an LLM is "really thinking", and when people say that the AIs have "learned how to [X]" or have "demonstrated deception" or whatever, there's an inevitable anthropomorphization. i agree that when people talk to chatGPT and it acts "friendly and helpful", that we don't really know whether the AI is friendly and helpful, or whether the "mind" inside is some utterly alien thing.
the point is, none of that matters. if it writes code, it writes code. if it's able to discover new scientific insights, or if it's able to replace the workforce, or if it's able to control and manipulate resources, those are all concrete things it will do in the real world. to assume that it will never get there because it's just playing a fancy language game is completely unwarranted overconfidence.
TJSomething
That's one of the things. Even in human-written fiction, the depth of any character you read about is pure smoke and mirrors. People regularly perceive fictional characters as if they are real people (and it's fun to do so), but it would be impossible for an author to simulate a complete human being in their head.
It seems that LLMs operate a lot like I would in improv. In a scene, I might add, "This is the fifth time you've driven your car into a ditch this year." I don't know what the earlier four times were like. No one there had any idea I was even going to say that. I just say it as a method of increasing stakes and creating the illusion of history in order to serve a narrative purpose. I'll often include real facts to serve the verisimilitude of a scene, but I don't have time to do real fact checking. I need to keep the momentum going and will gladly make up facts as suits the narrative and my character.
exe34
> it would be impossible for an author to simulate a complete human being in their head.
unless it's a self-insert? or do you reckon even then it'll be a lo-fi simulation, because the real-world input is absent and the physics/social aspect is still being simulated?
jdietrich
Humans just aren't very good at understanding their own motivations. Marketers know this implicitly. Almost nobody believes "I drink Coca-Cola because billions of dollars of advertising have conditioned me to associate Coke with positive feelings on a subconscious level", even if they would recognise that as a completely plausible explanation for why other people like Coca-Cola.
og_kalu
As long as it affects the real world, it doesn't matter what semantic category you feel compelled to push LLMs into.
If Copilot will no longer reply helpfully because your previous messages were rude then that is a consequence. It doesn't matter whether it was "really upset" or not.
If some future VLM robot decides to take your hand off as some revenge plot, that's a consequence. It doesn't matter if this is some elaborate role play. It doesn't matter if the robot "has no real identity" and "cannot act on real vengeance". Like, who cares? Your hand is gone and it's not coming back.
Are there real-world consequences? Yes? Then the handwringing over whether it's just "elaborate science fiction" or "real deceit" is entirely meaningless.
antonkar
How can you create an all-understanding all-powerful jinn that is a slave in a lamp? Can the jinn be all-good, too? What is good anyways? What should we do if doing good turns out to be understanding and freeing others (at least as a long-term goal)? Should our AI systems gradually become more censoring or more freeing?
alganet
Philosophy is mostly autophagous and self-regulating, I think. It's a debug mode, or something like it.
It's not eating AI. It's "eating" the part of AI that was tuned to disproportionately change the natural balance of philosophy.
Trying to get on top of it is silly. The debug mode is not for sale.
redelbee
So we’re back to the idea that only philosopher kings can shape and rule the ideal world? Plato would be proud!
Jests aside, I love the idea of incorporating an all encompassing AI philosophy built up from the rich history of thinking, wisdom, and texts that already exist. I’m no expert, but I don’t see how this would even be possible. Could you train some LLM exclusively on philosophical works, then prompt it to create a new perfect philosophy that it will then use to direct its “life” from then on? I can’t imagine that would work in any way. It would certainly be entertaining to see the results, however.
That said, AI companies would likely all benefit from a team of philosophers on staff. I imagine most companies would. Thinking deeply and critically has been proven to be enormously valuable to humankind, but it seems to be of dubious value to capital and those who live and die by it.
The fact that the majority of deep thinking and deep work of our time serves mainly to feed the endless growth of capital - instead of the well-being of humankind - is the great tragedy of our time.
alganet
There is a lot of this "philosopher king" stuff. Prophets, ubermensches, tlatoanis. It seems foreign to the concept of philosophy. As I see it, this comes more from the lineage of arts than the lineage of thinkers (it's not a criticism, just an observation).
I think this is very obvious and both artists and philosophers understand it.
I'm worried about the mercantilist guild. They don't seem to get the message. Maybe I'm wrong, I don't really know much about what they think. Their actions show disregard for the other two guilds.
Hammershaft
> The fact that the majority of deep thinking and deep work of our time serves mainly to feed the endless growth of capital - instead of the well-being of humankind - is the great tragedy of our time.
I'm not blind to when this goes horribly wrong, or when needs go unaddressed because they aren't profitable, but most of the time these interests are unintentionally well aligned.
XorNot
What's the philosophy department at the local steel fabricator contributing exactly?
apsurd
To ponder whether there's any value in doing anything beyond maximizing steel fabrication output.
if it's absurd to you to think that a steel fabrication company should care about anything other than fabricating more steel, well that's your philosophy.
there are other philosophies.
polotics
I strongly disagree with the article on at least one point: ontologies, as painstakingly hand-crafted jewels handed down from the aforementioned philosophers, are the complete opposite of what LLMs build bottom-up through their layers.
kelseyfrog
Philosophy eats AI because we're in the exploration phase of the s-curve and there's a whole bunch of VC money pumping into the space. When we switch to an extraction regime, we can expect a lot of these conversations to evaporate and be replaced with "what makes us the most money", regardless of philosophic implication.
SequoiaHope
If that ever comes to pass. This is not guaranteed.
tomlockwood
Philosophy postgrad and now long time programmer here!
This article presents as a revelation the pretty trivially true claim that philosophy is an undercurrent of thought. If you ask why we do science, the answer is philosophical.
But the mistake many philosophers make is extrapolating philosophy being a discipline that reveals itself when fundamental questions about an activity are asked, into a belief that philosophy, as a discipline, is necessary to that activity.
AI doesn't require an understanding of philosophy any more than science does. Philosophers may argue that people always wonder about philosophical things, like, as the article says, teleology, epistemology and ontology, but that relation doesn't require an understanding of the theory. A scientist doesn't need to know any of those words to do science. Arguably, a scientist ought to know, but they don't have to.
The article implies that AI leaders are currently ignoring philosophy, but it isn't clear to me what ignoring the all-pervasive substratum of thought would look like. What would it look like for a person not to think about the meaning of it all, at least once at 3am at a glass outdoor set in a backyard? And the article doesn't really stick the landing on why bringing those thoughts to the forefront would mean philosophy will "eat" AI. No argument from me against philosophy, though; I think a sprinkling of it is useful, but a lack of philosophy theory is not an obstacle to action, programming, or creating systems that evaluate things; see: almost everyone.
laptopdev
Is this available in full text anywhere without sign up?
Sleaker
I'm confused on the premise that AI is eating software. What does that even mean and what does it look like? AI is software, no?
jdietrich
There are a whole bunch of software problems where "just prompt an LLM" is now a viable solution. Need to analyse some data? You could program a solution, or you could just feed it to ChatGPT with a prompt. Need to build a rough prototype for the front-end of a web app? Again, you could write it yourself, or you could just feed a sketch of the UI and a prompt to an LLM.
That might be a dead end, but a lot of people are betting a lot of money that we're just at the beginning of a very steep growth curve. It is now plausible that the future of software might not be discrete apps with bespoke interfaces, but vast general-purpose models that we interact with using natural language and unstructured data. Rather than being written in advance, software is extracted from the latent space of a model on a just-in-time basis.
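For concreteness, the "just feed it to an LLM" workflow looks roughly like the sketch below (a minimal example, assuming the OpenAI Python client with an API key configured; the model name, file, and prompt are placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    with open("sales.csv") as f:  # placeholder data file
        data = f.read()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Summarise the main trends in this data:\n\n" + data,
        }],
    )
    print(resp.choices[0].message.content)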
grey-area
A lot of the same people also recently bet huge amounts of money that blockchains and crypto would replace the world's financial system (and logistics and a hundred other industries).
How did that work out?
jdietrich
A16z and Sequoia made some big crypto bets, but I don't recall Google or Microsoft building new DCs for crypto mining. There's a fundamental difference between VCs throwing spaghetti against the wall and established tech giants steering their own resources towards something.
Hoasi
It is even less meaningful than "software is eating the world". But it sounds catchy, and people can remember it.
MattPalmer1086
The software that powers LLM inference is very small, and is the same no matter what task you ask it to perform. LLMs are really the neural architecture and model weights used.
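A toy illustration of that point (nothing here is a real LLM; the lookup table stands in for billions of learned weights): the inference loop itself is a handful of task-agnostic lines.

    import random

    # Stand-in "weights": in a real LLM this is the learned model, not hand-written rules.
    weights = {
        "the": ["cat", "dog"],
        "cat": ["sat"],
        "dog": ["ran"],
        "sat": ["down"],
        "ran": ["away"],
    }

    def generate(prompt_tokens, max_new_tokens=5):
        # The entire "inference engine": repeatedly pick a next token given the context.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            candidates = weights.get(tokens[-1])
            if not candidates:
                break
            tokens.append(random.choice(candidates))
        return tokens

    print(generate(["the"]))  # e.g. ['the', 'cat', 'sat', 'down']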
freedum
Procians bothered by the cost and status of Halikaarnian work. It's not about what "AI" can do, it's about what you can convince people AI can do (which, to the Procian, is one and the same).