Antiqua et Nova: Note on the relationship between AI and human intelligence
368 comments · January 30, 2025 · kittikitti
Animats
It is well thought out. The "AI Magna Carta" is a stretch, though.
Some good insights:
60. Anthropomorphizing AI also poses specific challenges for the development of children, potentially encouraging them to develop patterns of interaction that treat human relationships in a transactional manner, as one would relate to a chatbot. Such habits could lead young people to see teachers as mere dispensers of information rather than as mentors who guide and nurture their intellectual and moral growth. Genuine relationships, rooted in empathy and a steadfast commitment to the good of the other, are essential and irreplaceable in fostering the full development of the human person.
That's a good one. Teacher time is a scarce resource, but the chatbot is always there, and undemanding if not asked anything.
Kids who grow up talking mostly to AIs may have that kind of relationship with the world. Historically, kids who grew up with servants around sometimes defaulted to that kind of transactional relationship. Now that can scale up. Amusingly, asking Google's AI about "bringing up children with servants" produced an excellent summary of the topic.
Years ago, the French Catholic author Georges Bernanos warned that “the danger is not in the multiplication of machines, but in the ever-increasing number of men accustomed from their childhood to desire only what machines can give.”
That's an argument against too much screen time for kids.
TZubiri
Reminds me of Inter Mirifica.
https://www.vatican.va/archive/hist_councils/ii_vatican_coun...
These are notable because they are not tweets or op-eds, one of thousands produced daily to keep you hooked to a source of information.
Rather, these are published once by the church as part of its core mission, in response to the events themselves. There is not necessarily a huge conversation here; of course there may be conversations that led to the letter and conversations that arise from it, but the core of the church's message is very clear and static. It is long, yes, but you only need to read it once and you'll be up to date with the church for years. You don't need to turn the news on every night or keep your twitter feed clean and stay hooked every 20 minutes.
UomoNeroNero
One may or may not appreciate the religious aspect, but the Vatican has always been a hub for “refined thinkers.” And when it comes to establishing an (initial) point of discussion on such an ethically significant topic, I believe that the amount of thought distilled into this page has been considerable.
sangnoir
The Jesuit order - to which Pope Francis belonged (belongs?) - has a long and notable history of contributing to science and scientific discovery. So they are not just thinkers, but doers.
UomoNeroNero
Mamma mia. Yes, I totally agree with what you wrote: this is a landmark, profound, historic (and very courageous) document.
joe_the_user
I generally agree that the particular "rationalist" fears of AGI autonomy are silly, but your statement here, "the disastrous letter that effectively knee-capped American AI all while...", seems quite implausible. The same thing that makes the letter shallow means its signers aren't going to hesitate for a second when they see an opportunity for profit.
Eisenstein
It is a rehashing of the same stalled, tired philosophical debates. They didn't present any scientific evidence that intelligence requires biology, nor did they assert their religious authority. It is completely pointless.
rotexo
I liked the simple observation in point 35: 'as Pope Francis observes, “the very use of the word ‘intelligence’” in connection with AI “can prove misleading”[69] and risks overlooking what is most precious in the human person.' I was texting my buddy that the proper acronym could be ABNECUI (Almost, But Not Entirely, Completely Unlike Intelligence, to rip something from Douglas Adams).
At a more profound level, I really appreciated point 18 under "Relationality": 'human intelligence is not an isolated faculty but is exercised in relationships, finding its fullest expression in dialogue, collaboration, and solidarity. We learn with others, and we learn through others.'
I was raised Protestant, but taught to be fundamentally skeptical of the political and historical baggage of any religious institution. Though I recognize that writings like this are a result of deeply held faith, it always feels paradoxical when leaders wax poetic about the mystery of God and then say 'so here is what God thinks you should do.' How could they know? That probably sounds basic, but it is my reaction. What draws me back in is the emphasis on our relationships with other human beings. Those relationships are the things that are actually in front of us, and can make a meaningful difference in our day-to-day lives. Something very useful to keep in mind when developing AI (or ABNECUI).
throw0101c
> it always feels paradoxical when leaders wax poetic about the mystery of God and then say 'so here is what God thinks you should do.' How could they know?
Perhaps we were told it:
> "Teacher, which commandment in the law is the greatest?" He [Jesus] said to him, "'You shall love the Lord your God with all your heart, and with all your soul, and with all your mind.' This is the greatest and first commandment. And a second is like it: ‘You shall love your neighbor as yourself.’ On these two commandments hang all the law and the prophets."
* https://en.wikipedia.org/wiki/Great_Commandment
Which is taken from the Torah. See also:
* https://en.wikipedia.org/wiki/Sermon_on_the_Mount
* https://en.wikipedia.org/wiki/The_Sheep_and_the_Goats
* https://en.wikipedia.org/wiki/Parable_of_the_Good_Samaritan
The leaders are probably just reiterating/reminding people.
rotexo
Yes, I recognize that these are articles of deeply held faith. I am open to the idea of God, I am open to the idea that God is fundamentally mysterious and beyond our mortal understanding. I simply feel that I always have to exercise skepticism regarding the words of religious institutions, though, because it seems to me that power-hungry individuals could use legitimate teachings as camouflage for their immoral, selfish impulses. Though maybe some institutions can effectively guard themselves against this, selecting people truly committed to God for leadership (I find myself inclined to believe, for instance, that Pope Francis in particular is truly committed to God via the humans around him).
I guess all of the doubts are a reminder for me to focus on other humans with love. That is the part of the Bible's teachings (or the teachings of other religions) that are accessible to my experience.
macrocosmos
I too am wary of "power-hungry individuals" who could use legitimate teachings for illegitimate ends.
I think the types of people you speak of are all too real. But I have recently decided I will not let a fear of them keep me from those legitimate teachings or from anything else good in this world. At least I will not anymore. I did for a long time.
Anon84
As someone (I forget who) said, "God is not something you believe in. God is something you experience". In my view, any given religion is just the accumulated ways a specific group of people found to handle the aftermath of that experience.
Of course, the problem is that you get indoctrinated into a religion before you have a chance to experience It in the first place, and end up mistaking the aftermath of the experience for the experience itself.
kittikitti
> Which is taken from the Torah.
Proceeding to link to Wikipedia while claiming the Vatican took its opinions from the Torah is very reductive, especially since the document's references form an actual bibliography.
throw0101c
> while claiming the Vatican took their opinions from the Torah
https://en.wikipedia.org/wiki/Great_Commandment
It is Jesus' statement, which the Vatican, as followers of Jesus, would be interested in.
But Jesus himself is quoting the Torah:
> “Hear, O Israel: The Lord our God, the Lord is one! You shall love the Lord your God with all your heart, with all your soul, and with all your strength.”
* https://www.biblegateway.com/passage/?search=Deuteronomy%206...
> “‘Do not seek revenge or bear a grudge against anyone among your people, but love your neighbor as yourself. I am the Lord.
* https://www.biblegateway.com/passage/?search=Leviticus%2019%...
wizzwizz4
Of course the Vatican took many of their opinions from the Torah! The Pentateuch is holy to Christians as well as Jews. (Although the comment you replied to says they took this opinion via Jesus, and was quoting a book of the New Testament often called Matthew.)
nyokodo
> How could they know?
I can’t speak for any religious leader, but in terms of Catholic leadership: because in many matters God spoke through the Prophets and then He came down and told us directly, which is preserved in Holy Scripture and Sacred Tradition (2 Thessalonians 2:15-17), and the Holy Spirit guides the Church (John 14:26), doing so through the prime ministerial office of the Pope, the successor of Peter (Matthew 16:13-19), and through the Bishops, the successors of the Apostles (Acts 1:12-26) (Acts 15).
grahamj
Books are written by people. It’s humans all the way down.
lolinder
OP is giving the correct answer for the Catholic worldview.
You and the Catholic Church are operating under completely different axioms, so there's no point in responding to someone's explanation of Catholic axioms by just repeating your own axioms more forcefully.
zoogeny
I think this is a bad direction to argue from. Science is humans all the way down and we want to have confidence in the scientific process. That is, it is fundamental to our understanding of science that we can trust the collective output of numerous humans working together to uncover "Truth".
You wouldn't accept the counter argument: "Science is wrong because it is the work of humans; religion is right because it is the word of God".
We have to assume, no matter what side of the argument we take, that humans are at least in principle capable of discerning "Truth". We should focus on how humans discern truth rather than on whether or not they can.
jajko
The AI term is fine, no need to muddy the waters even more. There is the first word, Artificial, which in the past and current world means subpar, fake, an imitation that often falls apart when you get closer, and you should never expect it to match the original in quality or experience.
Artificial plants, artificial meat, artificial light, and so on. Nothing great there, just cheaper, tolerable, often low quality, don't expect that much etc.
mrguyorama
[flagged]
dang
If you could please make your substantive points without fulminating, we (and the site guidelines - https://news.ycombinator.com/newsguidelines.html) would appreciate it.
aeneasmackenzie
Papal infallibility is not invoked that often. Here’s an example, in section 4 (wherefore…) [0]
In particular papal infallibility was not involved in the Protestants’ complaints, and the response to their complaints (Trent) was a council and again has nothing to do with papal infallibility.
The pope was also an absolute monarch at the time, but protestants didn’t care about that aspect.
0: https://www.vatican.va/content/john-paul-ii/en/apost_letters...
b450
This is a great demonstration of the fact that people coming from very different perspectives can, through good faith inquiry, find much to agree on. I think there are a lot of thoughtful arguments and conclusions in here even though I generally find the catholic church's metaphysical pyrotechnics to be fairly ridiculous. It goes to show that E.O. Wilson's concept of "consilience" can apply even outside of sciences - just as different lines of scientific inquiry converge on a common reality, so can very disparate forms of moral inquiry converge because they both proceed from a shared human experience of what's good and bad in life.
glenstein
Yeah! Perhaps a bit naively, as a Highly Opinionated Person (HOP) on this topic I was ready for this to have something controversial to say about the nature of intelligence.
It's not out of the ordinary for even Anglosphere philosophers to fall into a kind of essentialism about intelligence, but I think the treatment of it here is extremely careful and thoughtful, at least on first glance.
I suppose I would challenge the following, which I've also sometimes heard from philosophers:
> However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally? Or that it isn't already equivalent to something out there in the vector space that machines already utilize? People constantly rotate through essentialist concepts that supposedly reflect an intangible "human element" and shift the conversation onto non-computational grounds, only to reproduce the errors of every previous variation of intelligence essentialism.
My favorite familiar example is baseball, where people say human umpires create a "human element" by changing the strike zone situationally (e.g. tighten the strike zone if it's 0-2 in a big situation, widen the strike zone if it's a 3-0 count), completely forgetting that you could have machines call those more accurately too, if you really wanted to.
Anyway, I have my usual bones to pick but overall I think a very thoughtful treatment that I wouldn't say is borne of layperson confusions that frequently dog these convos.
zoogeny
As an aside, and more out of curiosity, I want to mention a tiny niche corner of CogSci I once came across on YouTube. There was a conference on a fringe branch of consciousness studies where a group of philosophers hold a claim that there is a qualitative difference of experience based on material substrate.
That is to say, one view of consciousness suggests that if you froze a snapshot of a human brain in the process of experiencing and then transferred every single observable physical quantity into a simulation running on completely different material (e.g. from carbon to silicon) then the re-produced consciousness would be unaware of the swap and would continue completely unaffected. This would be a consequence of substrate independence, which is the predominant view as far as I can tell in both science and philosophy of mind.
I was fascinated that there was an entire conference dedicated to the opposite view. They contend that there would be a discernable and qualitative difference to the experience of the consciousness. That is, the new mind running in the simulation might "feel" the difference.
Of course, there is no experiment we can perform as of now so it is all conjecture. And this opposing view is a fringe of a fringe. It's just something I wanted to share. It's nice to realize that there are many ways to challenge our assumptions about consciousness. Consider how strongly you may feel about substrate independence and then realize: we don't actually have any proof and reasonable people hold conferences challenging this assumption.
b450
Yep I think that is an interesting point! I definitely think there are important ways in which human intelligence is embodied, but yeah - if we are modeling intelligence as a function, there's no obvious reason to think that whatever influence embodiment has on the output can't be "compressed" in the same way – after all, it doesn't matter generally how ANY of the reasoning that AI is learning to reproduce is _actually_ done. I suppose, though, that that gets at the later emphasis:
> Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform
One might concede that AI can produce a good enough simulation of an embodied intelligence, while emphasizing that the value of human intelligence per se is not reducible to its effectiveness as an input-output function. But I agree the vatican's statement seems to go beyond that.
moralestapia
>people coming from very different perspectives
Care to elaborate? Which people and which perspectives? It's a bit unclear to me.
simonw
I enjoyed this bit - great use of the word "idolatry":
----
104. Technology offers remarkable tools to oversee and develop the world's resources. However, in some cases, humanity is increasingly ceding control of these resources to machines. Within some circles of scientists and futurists, there is optimism about the potential of artificial general intelligence (AGI), a hypothetical form of AI that would match or surpass human intelligence and bring about unimaginable advancements. Some even speculate that AGI could achieve superhuman capabilities. At the same time, as society drifts away from a connection with the transcendent, some are tempted to turn to AI in search of meaning or fulfillment---longings that can only be truly satisfied in communion with God. [194]*
105. However, the presumption of substituting God for an artifact of human making is idolatry, a practice Scripture explicitly warns against (e.g., Ex. 20:4; 32:1-5; 34:17). Moreover, AI may prove even more seductive than traditional idols for, unlike idols that "have mouths but do not speak; eyes, but do not see; ears, but do not hear" (Ps. 115:5-6), AI can "speak," or at least gives the illusion of doing so (cf. Rev. 13:15). Yet, it is vital to remember that AI is but a pale reflection of humanity---it is crafted by human minds, trained on human-generated material, responsive to human input, and sustained through human labor. AI cannot possess many of the capabilities specific to human life, and it is also fallible. By turning to AI as a perceived "Other" greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God. However, it is not AI that is ultimately deified and worshipped, but humanity itself---which, in this way, becomes enslaved to its own work. [195]*
breuleux
> However, it is not AI that is ultimately deified and worshipped, but humanity itself---which, in this way, becomes enslaved to its own work.
Doesn't that describe all religion? I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings? The hubris! It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them. In other words, we have always been worshipping ourselves. At least there is reason to think that AI could, at least in theory, be what we expect God to be.
macrocosmos
You seem to have many misconceptions about what Catholics actually believe. And then you seem to take exception to these misconceptions. So your exceptions are only with beliefs that exist in your own mind.
Barrin92
It's not really a misconception; this was Feuerbach's, and also Nietzsche's and Stirner's, criticism of Christianity. It projects human attributes onto an ostensibly divine subject, "othering" and worshipping them, in reality just attempting to sanctify humanity (in Stirner's words, creating "Mensch" (human/mankind) with a capital M). This is incredibly obvious in the psychology underpinning a lot of Christian beliefs: the Manichaean good-and-evil worldview, the meek inheriting the earth, the day of judgement, equality, immortality (i.e. trying to escape death), and so on.
JackFr
> I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?
Yes.
> The hubris! It seems obvious to me
I would turn that around and claim hubris on your part. You seem to think that your mind and the mind of God are similar, and limitations you perceive are limitations for God.
breuleux
> You seem to think that your mind and the mind of God are similar,
How come? You think I'm saying that the infinite creator of the universe is unlikely to care about the fate or well-being of humans because... I wouldn't if I was him? I mean, I would. Because I have a human mind. But if there are indeed no similarities between God's mind and my own, well, anything goes, doesn't it? Him caring is just one small possibility out of trillions of alternatives.
> and limitations you perceive are limitations for God.
What limitations? I haven't listed any limitations. Neither a God who cares nor a God who doesn't care is limited. I just don't see why I would assign a particularly significant probability to the former case. It sure would be convenient, but I feel like God being moral in any way that I can relate to would inevitably be projection on my part.
nprateem
> I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?
Do you care about the functioning of every cell in your body? Ask any cancer patient if they do.
GreenWatermelon
> It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them
Nah, that's just what atheists convince themselves of. There's nothing obvious or truthful about this conclusion or the line of reasoning behind it.
All arguments for and against the existence of God are inherently unfalsifiable, but that doesn't mean atheism is inherently more logical than theism.
In fact, from my point of view, the existence of God is way more logically sound than the alternative, and atheists are the ones following delusions and worshipping their own egos
tialaramex
There's no need for us to argue against the existence of God or other ludicrous hypotheticals, that's the whole point of Russell's Teapot.
As to the particulars of the imagined God, we actually do have some evidence for the parameters. The Princess Alice experiments in particular illustrate one desirable property: God (in the experiment, "Princess Alice") should provide behavioural oversight. An imaginary being can deliver effective oversight which would otherwise require advanced technology, but to do so the being must also believe in these arbitrary moral rules.
And that matches what we observe. People do buy Sithrak T-shirts, but, more or less without exception they don't actually worship Sithrak, whereas loads of people have worshipped various deities with locally reasonable seeming moral codes and do to this day.
breuleux
I wasn't making an atheistic argument. I'm saying that if God exists and is the infinite creator of everything, it's suspiciously convenient that he also happens to be interested in human affairs. Why does theism have to go hand-in-hand with the belief that God loves us? The former may have philosophical merit. The latter, which makes up the bulk of religious belief, is what I am saying is made up. We can certainly assign moral value to our own lives, but to assert that God just so happens to assign equivalent moral value to us is what I view as hubris.
snozolli
> All arguments for and against the existence of God are inherently unfalsifiable, but that doesn't mean atheism is inherently more logical than theism.
I'm guessing you're one of those people who thinks atheism means a belief in the absence of a god, rather than its actual meaning, which is an absence of a belief in a god.
belter
This just gave me an idea for a sci-fi short story, where an industrial society worships a just and fair god that is nothing more than a lost AI-driven probe from a more advanced civilization a few parsecs away...
superturkey650
Children of Time by Adrian Tchaikovsky explores exactly this, though the probe is less lost and more an accidental exalter.
wil421
Sounds like Star Trek the motion picture.
Voyager 6 is lost in a black hole, is upgraded by an alien race of machines, and obtains sentience. Then it comes back to earth and the Enterprise gang has an interesting time.
soulofmischief
You might like 17776 if you haven't already read it!
belter
Thanks to you and @superturkey650 for the suggestions. I will check them out. The Rocinante is rebuilding the quantum cores, and it is a long process. I have a few hours to kill...
zehaeva
Well, this sounds like it could(will) be in the Orange Catholic Bible!
I can't wait to find out when the Butlerian Jihad starts.
giraffe_lady
I expect Judith Butler to declare holy war any day now, I can't understand why she has waited even this long.
lokimedes
My own reflection on this idolatry has been along the lines of how readily some people negate their own fundamental agency, and humanity's in general. Having AGI, SAI, etc. is completely meaningless if we as our own agents are not there to value it. In a sense, people preaching the coming dominance of AI are suicidal or homicidal, since they are pursuing their own demise by technical means.
achierius
Pope Francis talks exactly about this in the letter:
> 38. ... The Church is particularly opposed to those applications that threaten the sanctity of life or the dignity of the human person.[78] Like any human endeavor, technological development must be directed to serve the human person and contribute to the pursuit of “greater justice, more extensive fraternity, and a more humane order of social relations,” which are “more valuable than advances in the technical field.” ...
> 39. To address these challenges, it is essential to emphasize the importance of moral responsibility grounded in the dignity and vocation of the human person. This guiding principle also applies to questions concerning AI. In this context, the ethical dimension takes on primary importance because it is people who design systems and determine the purposes for which they are used.[80] Between a machine and a human being, only the latter is truly a moral agent—a subject of moral responsibility who exercises freedom in his or her decisions and accepts their consequences.[81] It is not the machine but the human who is in relationship with truth and goodness, guided by a moral conscience that calls the person “to love and to do what is good and to avoid evil,”[82] bearing witness to “the authority of truth in reference to the supreme Good to which the human person is drawn.”[83] Likewise, between a machine and a human, only the human can be sufficiently self-aware to the point of listening and following the voice of conscience, discerning with prudence, and seeking the good that is possible in every situation.[84] In fact, all of this also belongs to the person’s exercise of intelligence.
He even brings up x-risk at one point, which gives me some hope in this message reaching those members of the faith who have influence on the new administration.
philipov
The existential risk that AI poses is first and foremost the threat that it be centralized and controlled by a closed company like OpenAI, or a small oligopoly of such companies.
p2detar
> In a sense, people preaching the coming dominance of AI are suicidal or homicidal, since they are pursuing their own demise by technical means.
Nope, that is an unsubstantiated argument. Geoffrey Hinton, the „Godfather of AI“, is neither suicidal nor homicidal.
haswell
They are suicidal/homicidal in the way the passengers on the Titan submersible were suicidal/homicidal. Which is to say that they weren’t.
But while their goal was not to die, their lack of concern about the risks killed them anyway.
This belongs in the “If they fully comprehended the risks, their behavior could only be described as suicidal” category.
lokimedes
He is also not cheering its “coming” but worried about the misuse of its power. You can say the same thing about other powerful inventions and their inventors.
linguistbreaker
While I agree with the thrust against deification and idolatry - these characterizations border on naive and myopic:
"remember that AI is but a pale reflection of humanity" and "AI cannot possess many of the capabilities specific to human life"
We just don't know yet. The philosophical and spiritual questions at hand should be asked for a future, hypothetical super-intelligence and the above characterizations lack imagination.
hnthrow90348765
Probably makes sense not to comment too much on hypotheticals, to avoid "Vatican predicts AI will be sentient" interpretations. I don't see them as inaccurate given what we have currently.
istrice
On the contrary, I appreciate how this passage is grounded in reality rather than falling into the typical tropes around AI.
There is no reason to believe AI will ever be more than a compressed and queryable form of the Internet and this passage seems to imply this rational and scientifically informed view. Imagination means nothing in the context of scientific debate.
GreenWatermelon
"Pale reflection of humanity" is another way to say "blurry jpeg of the web"
XCSme
I agree with most of it, but saying that holism doesn't exist is weird.
Also, humans have definitely created things that are better, at least in some aspects, than humans.
Cars are faster than humans.
Even AI-specific: AI chess engines are a lot stronger than any human alive, even than all humans combined.
GreenWatermelon
And a calculator is faster than all humans combined at doing arithmetic, but I don't consider it more intelligent than an ant hive.
Everyone now uses "intelligence" to mean whatever ChatGPT can do, but all those language models combined don't even show 1/10th of my cat's intelligence.
mistrial9
"cars are better than people because they are faster" ? at what cost? with what side-effects? what is missing?
XCSme
That's true, the implications are not necessarily positive.
I was just criticizing the idea that it's impossible for something to make something better than itself. Maybe not in all aspects, but at least in some, it's definitely possible.
computerthings
> Idolatry is always the worship of something into which man has put his own creative powers, and to which he now submits, instead of experiencing himself in his creative act.
-- Erich Fromm, https://www.marxists.org/archive/fromm/works/1961/man/ch05.h...
thrance
Singularitarianism [1] is a very real phenomenon, if a bit niche. I have seen some people online put genuine faith in AGI existing soon and solving essentially everything that is wrong on Earth and in their lives. I don't think this is harmful because it may be "idolatry", but rather because, like real religion, it is often a substitute for actually improving one's situation or fighting for a better world.
The idea of building a God is enticing [2], but I am not religious and prefer not to put faith in such things.
exe34
I have faith that AI will wield unimaginable powers, but I also know that there will be rich people behind them making the decisions on how best to crush the rest of us.
svieira
"Before the gods that made the gods Had seen their sunrise pass The White Horse of the White Horse Vale Was cut out of the grass"
The Ballad of the White Horse by G. K. Chesterton - https://www.gutenberg.org/files/1719/1719-h/1719-h.htm
kouru225
If we have AGI then I doubt that the rich people will be able to control it at all
carlosjobim
> it is often a substitute for actually improving one's situation or fighting for a better world.
You just defined idolatry and why it is harmful. Idolatry is the worship of man-made things or other things that do not deserve worship. Including worshipping the government, which is the religion of most people. It is a false path.
thrance
Idolatry, as used by christians, naturally excludes their God from its definition. To me who doesn't believe in their God, there isn't much difference in finding solace in the christian God or in the coming of AGI. This is why I don't think Singularitarianism is bad because it is christian idolatry, but because it is a religious belief.
StefanBatory
thrance
That's interesting, thanks for sharing. Believing an ideology is scientific or natural and that its principles are "discovered rather than invented" is a very dangerous thing indeed. So-called "scientific socialism" is an obvious example of that. You can see some of this kind of thinking on the opposite side as well, with people claiming that humans are naturally greedy and selfish to justify objectivism or free-market absolutism.
antognini
Incidentally, the body that wrote this text, the Dicastery for the Doctrine of the Faith, is the oldest and arguably most powerful department in the Roman Curia. (Joseph Ratzinger was its head prior to becoming pope.) To the laity it might be better known by its older name, the Inquisition. The purpose of the body is, in its own words, to "spread sound Catholic doctrine and defend those points of Christian tradition which seem in danger because of new and unacceptable doctrines."
mistrial9
This writing posted today by the Vatican shows modern scholarship and, it appears, humility with respect to past Church attitudes about tech. Since just about everyone agrees that terrible mistakes were made in the distant past, and this writing shows active learning about how to approach new situations, the parent comment seems like immature and illogical mud-slinging, bringing up six-hundred-year-old failures that are news to almost no one.
antognini
I posted this comment more in the spirit of showing how the institution has evolved over the centuries. A lot of people think that the "Inquisition" was just something that happened once in the distant past, but it is still right here with us and is a very important part of the Curia.
falcor84
It's great that they're tackling this, but I'm concerned that this take on AI will be quickly superseded by coming advances. As a particular point, they are treating embodiment and learning from direct experience as a significant distinction between AI and humans:
> 31. However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
But there's nothing about AI in general that limits it to learning only from prior data, and we're already seeing robots such as Boston Dynamics's Spot learning to navigate and act in novel environments. We're probably still far from passing Steve Wozniak's Coffee Test, but we're advancing towards it, and for a take that's supposed to be based on philosophy/theology, I would have hoped that they go a bit beyond the current state of the art.
johnmaguire
> But there's nothing about AI in general that limits it to learning only from prior data
Maybe not, but I don't think this is exactly what the piece said here: "AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge."
Do you think AI will soon get a physical body, and experience "sensory input, emotional responses, social interactions, and the unique context of each moment"?
kouru225
All these words, “sensory input, emotional responses, social interactions, and the unique context of each moment” are all words that we’ve developed and yet have no full understanding of. In any philosophy paper they’d be challenged in a second.
johnmaguire
> Sensory input refers to the information received by the body's senses, like sight, hearing, touch, taste, and smell, through sensory organs like the eyes, ears, skin, tongue, and nose, which is then transmitted to the brain as electrical signals for processing and interpretation; essentially, it's anything you perceive using your senses.
Even if we are talking about the best cameras in the world, they pale in comparison to our eyes. To say nothing of touch, taste, and smell. Advances here look to be far-off.
At the end of the day, a brain also processes information completely differently from LLMs. Anyone who says otherwise is both medically uneducated and thinks laughably little of themselves.
Let's say we have an AI which, through peripheral devices, can attain human-level sensory processing. Is it human yet? Can it understand mortality? How about morality? Does it experience pain? Is that something we want to build?
goatlover
Philosophy of Mind papers use that kind of language all the time. It's agreed that humans have sensory input and social interaction; those are facts of biology, psychology, and sociology. It's also agreed that human bodies and brains are different in significant ways from modern computers and robots.
TZubiri
When the time comes when there are robots with independent batteries that learn and think without an internet connection, we will worry about it then.
But the current cycle is not about that type of intelligence or life at all; it's strictly about mathematical simulation of intelligence on multitenant systems with shared information and thought processes alternated through time (training, reinforcement, labelling, inference, 3rd-party microservices).
I understand it is possible we will see a jump to a completely different type of AI, but we need to be very clear that this is not what's going on right now.
sdwr
It was reaching for a great point about how intelligence requires comparison and scaffolding, and how we are nurturing the future, but then it fell into the Chinese Room trap.
throw310822
> We're probably still far from passing Steve Wozniak's Coffee Test
Do you think? At this point I have the impression it's just a problem of dexterity and speed. Understanding, planning and navigation seem basically solved.
jvanderbot
Is this spoken from experience? My experience in robotics tells me otherwise. It's not so much an issue of repeatability (though that is significant) as it is an issue of handling novelty. I do not believe there exists a system which, given sufficient time (to negate speed) and a perfect inverse/forward kinematics solver, could walk into my house and make coffee.
There are too many challenges of the kind "Seek information about ..." and "adapt a multi-step process to overcome ...".
One industrial process that is only now being automated is connecting trailers to trucks. They have to connect a hose and a few lines. Two companies, outrider and isee, are struggling with this even now. Both are well funded and staffed by intelligent folks, and they have to coax a robotic arm into connecting a hose that we all know is there, but not where, to a port that we all know is there, but might be different than expected.
throw310822
I was not thinking (much) about robotics but rather about a ChatGPT-style LLM processing video or frames and asked to navigate a random environment and find a way to make coffee. I didn't try but it doesn't sound far from their current capabilities.
Then of course the manipulation of objects is still tricky and needs improvements, but the "general intelligence" needed to adapt to a novel environment is already there.
mistrial9
> nothing about AI in general that limits it to learning only from prior data
This has to be refined to make a reasonable statement... as stated, I cannot agree with the expansive word "nothing".
keiferski
This might seem unique or unusual, but technology has been intertwined with religion since well, forever, especially if we consider the book to be a form of technology. Personally one of my favorite historical topics is how the printing press had a huge impact on the Reformation. With the Internet more broadly I think we are in the midst of a second “Reformation” in terms of information sources, the media, etc.
Another cool example is Lewis Mumford’s argument that the industrial age actually started with monks creating rudimentary clocks and organizing life according to specific times in order to achieve their monkish ends.
fakedang
Technology works well for decentralized religions, like Protestantism and Judaism, where there is no overarching authority on scripture, or where there are multiple entities competing for believers' attention, so that the majority choose to focus on scientific dogma instead.
Once there's a central authority at risk of technology eroding its base, it will be undermined. Like Catholicism, Islam, the Orthodox Church, etc., all of which were practically sidelined when print and media became more prevalent.
Interestingly all 3 of the above examples maintain strict conditions that their respective holy books must not be translated into the local languages.
svieira
> all 3 of the above examples maintain strict conditions that their respective holy books must not be translated into the local languages
Congratulations, you're one of today's lucky 10,000!
https://en.wikipedia.org/wiki/Glagolitic_script
> The Glagolitic script (/ˌɡlæɡəˈlɪtɪk/ GLAG-ə-LIT-ik,[2] ⰳⰾⰰⰳⱁⰾⰻⱌⰰ, glagolitsa) is the oldest known Slavic alphabet. It is generally agreed that it was created in the 9th century for the purpose of _translating_ liturgical texts into Old Church Slavonic by Saint Cyril, a monk from Thessalonica. He and his brother Saint Methodius were sent by the Byzantine Emperor Michael III in 863 to Great Moravia to spread Christianity there.
The Catholic and Orthodox churches have _always_ striven to make the Scriptures available to the people in languages they could understand.
throw0101c
> […] Like Catholicism […] Interestingly all 3 of the above examples maintain strict conditions that their respective holy books must not be translated into the local languages.
Strange then that the Pope asked someone to translate the Christian Bible—originally written in Koine Greek—into Latin, the lingua franca of the Western Mediterranean:
bb86754
Pretty much everything you said here doesn't align with history. And if anything, Catholics are more inclined to agree that the Bible is a product of human writing and translation because they don't agree with the Protestant doctrine of Sola scriptura. Also, Catholics consider the Orthodox church to be in communion with Rome - they don't consider it a different religion and aren't opposed to the Bible being translated into vernacular languages. No idea where that came from.
martin1975
Reminds me of one of my favorite ST: TNG episodes, "The Measure of a Man" - I urge anyone who read this note to watch this episode.
Ultimately it comes down to the question of whether machines, regardless of how smart they can be made to appear, even if they pass the Turing test with flying colors, are imbued with a soul.
In the episode, the Enterprise JAG officer questions whether we humans "have souls."
C.S. Lewis felt that our souls transcend time/are immortal, whereas our bodies are temporal (https://checkyourfact.com/2019/09/18/fact-check-cs-lewis-sou...).
What we call "AI", is created in our image, e.g. training the model defines its range/category of responses.
We humans, if you'll believe it, are created in our Creator's image. By Creator, I do not mean our parents here.
FFT - what do you believe?
rhaps0dy
Humans create AIs but humans also create other humans, through conception. And yet we do not say that humans were created by humans. Could we not say that AIs were created by God, by making the patterns of intelligence evident in Nature and letting other intelligences unravel them?
As you say, AIs also reflect the image of the Creator (through having intelligence). Maybe AIs can also reflect on the nature of Truth and connect with God, and thus have a soul.
DiogenesKynikos
That's a great episode, but the question of whether anything has a soul is ill defined.
Until someone can come up with a rigorous definition of what a soul is, the question itself has no meaning.
amai
"Moreover, human beings are called to develop their abilities in science and technology, for through them, God is glorified (cf. Sir. 38:6)"
This should be much more emphasized. Many people (atheists and religious extremists alike) still believe that science and religion must exclude each other.
thomassmith65
In the future - perhaps even the near future - we may have AIs with richer inner lives than humans. Hopefully the situations we put them in, as they do our work, don't cause them pain or anguish. It's clear already they will have an uphill battle gaining any recognition of personhood from us.
mattgreenrocks
I cannot not respond to this comment. Forgive me if I seem short in tone.
Maybe we should be helping people develop richer inner lives instead of pouring billions of dollars into something that may or may not pan out in the future. I assert it'd be vastly cheaper, would raise GDP overall, and improve well-being. Where we're at collectively right now is...not great, and I think people correctly perceive that their problems, which are real and significant, are theirs alone to suffer with, while the world chases the current white whale.
thomassmith65
My comment actually doesn't say anything positive about the billions of dollars pouring into AI, etc.
It's not that humanity has some obligation to create AI. It's that humanity has an obligation not to create an AI whose wellbeing we ignore because it's "not a real person"
blacksmith_tb
I hope so - a moral obligation. It seems interesting that the Vatican didn't really touch on the implications of a truly sentient AI (not that I think we're close to creating one, but it seems possible). I suspect it would pose awkward questions about whether it would then have a soul, could convert, etc. Anyhow it struck me as a surprising omission in what is otherwise a thorough and mostly sensible discussion of the pitfalls of the path we're taking.
JadoJodo
A good read. For a more in-depth look at this subject, I would heartily recommend Tony Reinke's 'God, Technology, and the Christian Life'[0].
I just want to appreciate how well written and thought out this was. I have spent countless hours reading over ethics on AI, especially from Big Tech sources, but this note is leaps beyond them. I compare this to the disastrous letter that effectively knee-capped American AI all while proposing flimsy AI ethics within about 500 words (https://futureoflife.org/open-letter/pause-giant-ai-experime...). This should be another red flag when America's $500 billion Stargate project is being led by people including Sam Altman and Larry Ellison, who are singing doomsday prophecies while the Vatican is making sincere efforts to understand AI.
I'm really caught up admiring this and think it may very well be the AI Magna Carta. There are so many gems, and while many of the sources are based on Catholicism, there is also an incredible depth of research, even going into "On the foundational role of language in shaping understanding, cf. M. Heidegger." The note also builds upon numerous other discussions from the Vatican, including this supplemental one: https://www.vatican.va/content/francesco/en/speeches/2024/ju...