
A new proposal for how mind emerges from matter

skissane

Articles like this annoy me: it seems to want to comment on philosophy of mind, but shows zero awareness of the classic debates in that discipline - materialism vs idealism vs dualism vs neutral monism, and the competing versions of each of those, e.g. substance dualism vs hylemorphic dualism, eliminativist vs reductionist/emergentist materialism, property dualism, epiphenomenalism, panpsychism, Chalmers’ distinctions between different idealisms, such as realist vs anti-realist and micro-idealism vs macro-idealism…

Add to that the typical journalistic fault of forcing one to read through paragraph after paragraph of narrative before actually explaining what thesis they are presenting. I'd much prefer to read a journal article where they state their central thesis upfront.

mcswell

As a linguist, articles like this also annoy me with claims that "X [whales, dolphins, parrots, crows...] uses language." We have known since 1957 that there is a hierarchy of "grammars", with finite state "languages" being near the bottom, and transformational grammars at the top. Human languages are certainly, at a minimum, at the context-free phrase structure grammar level. My point is that by using the word "language" loosely, almost anything (DNA codons, for example) can be considered to be a language. But few if any other animals can get past the finite state level--and perhaps none gets even that far.

And an article or book that uses the word "communicate" is even more annoying, since "communicate" seems to mean virtually anything.

End of my rant...

teekert

As a scientist, articles like this also annoy me. Because it, right off the bat, assumes that there are no degrees in consciousness. Just because we don't experience those degrees, and animals can't convey what they feel through language, we assume it is "emergent", or "suddenly there". I think we are too caught up in believing that consciousness must be something really special or some magic discontinuity of spacetime.

I don't believe it is. Somewhere inside our brain there is some perception of self related to the outside world. So we can project self into the future using information from the now and make better choices and survive better ("Information is that which allows you [who is in possession of that information] to make predictions with accuracy better than chance"- Chris Adami). Why do we need all these difficult words?

I bet animals also have some image of self inside there somewhere, and make decisions based on simulated scenarios. Perhaps to a lesser degree, perhaps because of a lack of language they experience it in a different way? Not being able to label any of the steps in the process... Who knows?

Perhaps when we get to simulate a whole brain we can get some idea. But then there is the ethics. We do attribute great value to organisms that have this image of self.

vidarh

Add to this that a lot of people presume that people's experience is the same.

E.g. I have aphantasia - I don't picture things in my "inner eye". Through discussions with people about that, a lot of people described their own differences from the perceived norm, and it is clear a not insignificant number also have other differences, such as not thinking consciously in words.

A lot of people then tend to express disbelief, and question the whole thing on the basis of a belief that if people's inner life does not match theirs, people couldn't possibly be conscious or reason.

People make far too many assumptions about the universality of their own experience of consciousness.

IsTom

> assumes that there are no degrees in consciousness

And I don't get why; it seems to me quite self-evident that you yourself can experience reduced-consciousness states, whether half-asleep or quite drunk.

readyplayeremma

The article’s real contribution is in highlighting evidence of complex behavior in living systems that often get excluded from definitions of "intelligence". In doing so, it invites deeper philosophical reflection, even if it doesn’t mount that reflection itself.

whymeogod

> We do attribute great value to organisms that have this image of self.

My impression is we attribute great value to organisms that can effectively push back against us.

respect based on force.

Not saying I want things to be that way.

wordpad25

Does that mean LLMs can already be considered conscious at some level since they are able to reason and self reflect?

garden_hermit

tbf, many materialists dislike the "degrees of consciousness" idea because a theory that posits "consciousness is on a spectrum" is one that starts to resemble panpsychism, which they consider magical woo.

ggm

This. It nicely encapsulates why AI aficionados use words like "hallucinate", which become secret clues to belief around the G part of AGI. If it's just a coding mistake, how can the machine be "alive"? But if I can re-purpose "hallucinate" as a term of art, I can also make you, dear reader, imbue the AI with more and more traits of meaning which go to "it's alive".

It's language Jim, but more as Chomsky said. Or maybe Chimpsky.

I 100% agree with your rant. This time fly likes your arrow.

crooked-v

The correct term isn't "hallucinate", it's "bullshit". I mean that in the casual sense of "a bullshitter" - every LLM is a moderately knowledgeable bullshitter that never stops talking (in a literal sense - even the end of a response is just a magic string from the LLM that cues the containing system to stop it, and if not stopped like that it would just keep going after the "end"). The remarkable thing is that we ever get correct responses out of them.

taneq

> This time fly likes your arrow.

And fruit flies like bananas. :)

gizajob

Yes, it's such a waffle. Instead of the unnecessary title "A Radical New Proposal For How Mind Emerges From Matter" – a more appropriate one would be "On plant intelligence (and possible consciousness)" given the entirety of the article is devoted to plant intelligence. We don't have anything radical, nor is it very deeply related to the mind/matter problem. If an author can't get something simple like that correct, then they don't deserve our time. Shame one has to get paragraphs deep into the article to find out we have a spiel about plants, not about mind.

Xmd5a

Here lies the promised land: the possibility of a precise and concise nomenklatura that assigns each thing a unique name, perfectly matching its unique position in the world, derived from the complete determination of the laws governing what it is and how it interacts with others. The laws of what is shall dictate how things ought to be named. What a motivating carrot—let’s keep following these prescriptions, for surely, in the end, the harmony of their totality will prove they were objective descriptions all along. Above all, let’s not trust our own linguistic ability to distinguish between the subtle nuances hidden within the same word, or at least, let’s distrust the presence of this ability in our fellow speakers. That should be enough to justify our intervention in the name of universality itself.

Imagine this: language is an innate ability that all speakers have mastered, yet none are experts in—unless they are also linguists. And what, according to experts, is the source of such mastery? A rigid set of rules capturing the state of a language (langue) at a given time, in a specific geographical area, social class, etc., from which all valid sentences (syntactically and beyond) can supposedly be derived. Yet this framework never truly explains—or at best relegates to the background—our ability to understand (or recognize that we have not understood) and to correct (or request clarification) when ill-formed sentences are thrown at us like wrenches into a machine. Parsers work this way: they reject errors, bringing communication to an end. They do not ask for clarification, they do not correct their interlocutors, let alone engage in arguments about usage, which, under the effect of rational justification, hardens into "rules."

Giving in to the temptation of an objective description of language as an external reality—especially when aided by formal languages—makes us lose sight of this fundamental origin. In the end, we construct yet another norm, one that obscures our ability to account for normativity itself, starting with our own.

Perhaps this initial concealment is its very origin.

null

[deleted]

umanwizard

I’m a linguistics layman, but can’t you make an even stronger claim about human language? Apparently there are certain constructs in Swiss German that are not context-free.

skissane

From another viewpoint, all human language is only at the finite-state level - a finite state automaton can recognise a language from any level in the Chomsky hierarchy provided you constrain all sentences to a finite maximum length, which of course you can - no human utterance will ever be longer than a googolplex symbols, since there aren’t enough atoms in the observable universe to encode such an utterance

Really the way people use the Chomsky hierarchy in practice (both in linguistics and computer science) is “let’s agree to ignore the finitude of language” - a move borrowed from classical mathematics
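
A minimal sketch of that move (the names and the bound below are mine, purely illustrative): the textbook non-regular language a^n b^n becomes a finite, and therefore trivially regular, set of strings the moment you cap n, so a lookup table - i.e. a finite state machine - recognises it.

    # Sketch: once length is bounded, the context-free language a^n b^n
    # collapses into a finite set, and any finite set of strings is regular.
    MAX_N = 50  # stand-in for "no utterance exceeds a googolplex symbols"
    BOUNDED_ANBN = {"a" * n + "b" * n for n in range(MAX_N + 1)}  # finite language

    def accepts(s: str) -> bool:
        """Membership test; equivalent to a (large but finite) state machine."""
        return s in BOUNDED_ANBN

    print(accepts("aaabbb"))              # True
    print(accepts("aaabb"))               # False: unbalanced
    print(accepts("a" * 51 + "b" * 51))   # False only because it exceeds the bound

The table is absurdly large for realistic bounds, which is exactly why the usual move is to agree to ignore the finitude.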

scotty79

It's philosophy. It doesn't concern itself with facts or knowledge.

NoGravitas

So, I think you are wrong about this: the article is discussing a change in perspective that would largely make the classic debates in philosophy of mind irrelevant, in the same way that heliocentrism made classic debates within the geocentric paradigm (about epicycles and such) irrelevant. Highly worthwhile, if insufficiently in-depth.

photonthug

It seems like this might be missing, or just talking past, the parent's point. Sure, new paradigms might always make old lines of research irrelevant. But otoh, something like idealism vs materialism vs dualism can't just be dismissed, because the categories are exhaustive! So new paradigms might shed light on the question but can't just cancel it out. So yes, to the parent's point, some stuff is fundamental, and if you're not acquainted with the fundamentals then it's hard to discuss the deep issues coherently. It's possible, I guess, that a plant biologist or a journalist or an ML engineer for that matter is going to crack the problem wide open with new lines of inquiry while being completely ignorant of historical debates in theory of mind, but in that case it will still probably take someone else to explain what/how that actually happened.

SilasX

I agree in spirit, that it's possible for a field to be so lost that a new paradigm fundamentally obviates it and frees you from having to recapitulate the entire thicket. But still, a minimal test for whether you've actually obviated them would be whether you can double back and show how the new paradigm resolves/makes them look naive and confused. And so I'd expect an author to do that for at least one such school of thought as part of their exposition.

protocolture

I bailed 10 words in and came to the comments to see if the article was worth reading. Thanks for confirming it's a skip.

felizuno

So many adjectives... and "radical" in the title is a doozy considering this article is essentially a stoner summary of LeDoux's "Deep History of Ourselves" with the science replaced with thesaurus suggestions.

metalmangler

I held out a bit longer, and then skimmed, but started to become offended by the whole thing, though I couldn't be bothered to be that annoyed by so many different things. That said, I am very much into being outside in the company of many animals, working with them, and just being in wild environments, but language fails when it comes to describing how species interact, and in the end we can't describe our own mental processes in a clear way, intelligible by all. There is yet to be a genius of the mind, where the definition of genius is: an idea that, once explained, makes everyone else go "of course". My main issue is that these sorts of philosophical projects reek of money and hidden agendas, and shade very quickly into policy decisions and quasi-religious bureaucratic powers.

rramadass

The gp is wrong. They are just being supercilious with a word salad of their own which is best ignored.

The article is pretty good at making you rethink all your preexisting concepts of Mind/Intelligence based on what we know from Biology today. It is not an article on various theories of Mind but on how scientific research (conveyed through pointers from various researchers) is advancing rapidly on so many fronts that we are forced to confront our very fundamental beliefs.

Absolutely worth reading at least a couple of times.

dsjoerg

Yes.

The article is really about conceptual framing — how clinging to outdated or vague definitions prevents progress in understanding biological and cognitive processes.

People keep forcing everything into the vague, overloaded concept of "intelligence" instead of just using the right terms for the phenomena they're studying. If we simply describe behaviors in terms of adaptation, computation, or decision-making, the whole debate evaporates.

https://chatgpt.com/share/67c07325-3140-8007-8177-c56a89b257...

globnomulous

Is there a philosopher or philosophical school that identifies intelligence as a being's capacity to deploy a capability towards some end with intention? If so, what is this called? Or who is associated with it?

Edit: I'd expect this thinker/school also to argue that the being needs to be able to experience its intention as an intention (as opposed to a dumb, inarticulate urge); in other words, intelligence would require an agent to be aware of itself as an intelligent agent.

Edit 2: I strongly recommend Peter Watts' Blindsight to anybody who's in the market for sci-fi that deals with these issues.

jcgl

This is _not_ about defining intelligence, but about supposing intentionality as a tool for explaining the behavior of a thing.

https://en.wikipedia.org/wiki/Intentional_stance

FrustratedMonky

Agree. This last couple years of AI has been a wellspring of people/articles re-inventing philosophy, like all these subjects haven't been debated and studied for a few hundred years. At least acknowledge them, even if we aren't going to try and build on what has come before.

kordlessagain

Wait. If words fail to capture the truth, why do we keep making more words about it?

geuis

[flagged]

skissane

Your position is self-refuting: you are dismissing philosophy, yet simultaneously making philosophical claims in doing so - claiming that all knowledge is empirical is itself a philosophical claim (empiricism).

geuis

This is a philosophical argument, therefore dismissible by science.

Unless you can rephrase your argument as something testable, it's philosophy and thereby not relevant.

scotty79

There's no contradiction. Philosophy is something everybody does after a beer. No point in pretending that it's still a relevant profession; it hasn't been for hundreds of years already.

antihipocrat

A lot of philosophy is testable and the inspiration for scientific enquiry. Philosophy contains logic as a major area of study, and the results of this work are core tenets of mathematics, which in turn enables rigorous science.

Science itself is the product of philosophical enquiry.

kazinator

Science is the product of philosophical inquiry if you ask philosophers.

Just like the Internet is the work of Al Gore, if you ask Al Gore.

mmooss

> I don't care what your ideas are. If they aren't testable, they're your opinion.

Most of life and the world involves beliefs that aren't testable. The lack of testability doesn't mean a belief is arbitrary; testing is just one tool (the most effective one, I believe). But if we restricted ourselves to that, if we had no other form of qualification or judgment, we couldn't achieve anything.

tweaqslug

Philosophy is precisely the domain of things that cannot be objectively testable because they are grounded in experience. You cannot prove that you are conscious (especially in a world of LLMs) but you know it to be true. Should I assume you are an automaton simply because I can't prove you are conscious?

geuis

Consciousness is not philosophy. In the past, it was. People didn't have the tools or theoretical background to even approach it in an empirical way.

However we now have at least the basics to hypothesize and perform experiments. So it's no longer philosophy and is in the realm of understanding what actually are the mechanisms of consciousness.

DrFalkyn

So you’re an empiricist.

Is science “reality” or is it just a series of models/ conceptual frameworks ?

geuis

Keep walking into a glass door a few dozen times. Is the door a reality even though you can't see it, or just a conceptual framework you keep bouncing off of?

At some point you have to stop thinking in your head why you can't walk through the window and start testing and figuring out what keeps breaking your nose.

perching_aix

You're describing the scientific method. Philosophy is not for "understanding the universe". Surely it cannot be blamed for not fulfilling a goal you only imagine it to have.

scotty79

It can be blamed for being pretentious useless ramblings.

null

[deleted]

ARandomerDude

> Philosophy has little to do with reality.

Translation: I’ve never read Aristotle.

xg15

Maybe the definition of what "intelligence" is could be sharpened by having a look at LLMs and "traditional" computer programs and asking what exactly the difference between the two is.

Almost all the traditional criteria of intelligence - reasoning, planning, decision-making, memory etc - are exhibited pretty trivially by standard computer programs. Nevertheless no one would think of them as "intelligent" in the sense that humans or animals are.

On the other hand, we now have LLMs, that sent the entire tech world into a multi-year frenzy, precisely because they appear to possess that human-like intelligence.

And that is even though they perform worse than classical programs in some of the "intelligence" measures: For the first time, we have to worry that a computer program is "bad at math". They cannot reflect on past decisions and are physically unable to store long-term memories. And yet, we're much more likely to believe that an LLM is "intelligent" than a classical program.

This makes me think that our formal definitions of "intelligence" (the ones that would also qualify fungal networks, swarms, cells, societies, etc) and what we intuitively look out for, are really two different things.

thelamest

>This makes me think that our formal definitions of "intelligence" […] and what we intuitively look out for, are really two different things.

Just two? You can name so many more terms in this concept cloud, e.g.: personhood, moral agency, consciousness, self-awareness, processing power, wit, autonomy, feeling-and-experiencing capacity, and so on… We don’t seem to agree on what’s separate from what, and yes, it would be useful.

AndrewKemendo

The only proof of intelligence that humans accept is being utterly dominated by the more intelligent "thing."

In my experience it comes down to speed of processing and response

Nobody views trees as intelligent because despite having extremely complex interdependencies with their ecology, including fungus and all kinds of other organisms, they don’t move quickly or appear to respond to input (even though they do, just slowly).

Meanwhile the mantis shrimp, because of its speed, superhuman vision and flexibility, is considered extremely intelligent.

The only thing that humans will accept as more intelligent than them is (something) that they cannot control and that can dominate them.

This is why AI is “the things we haven’t done yet” - once a technology is pervasive and integrated it is “just computing.”

bloomingkales

> Nobody views trees as intelligent

Because we take freely from trees. They must be stupid or something because they give things freely. It's kind of why many employers view labor as stupid. They give their labor for very little. Humans dabble in arrogance.

empath75

We need to break the concept of "intelligence" down into more well defined components, probably.

energy123

But the discussion is not about intelligence, human or otherwise.

naasking

The article mentions intelligence a lot actually. It's also conjecture that consciousness and intelligence are unrelated. Qualia could be functional, and emerge naturally from the development of ever more sophisticated intelligence.

energy123

> It's also conjecture that consciousness and intelligence are unrelated.

That's true. We shouldn't casually conflate the two but also shouldn't make the assumption that they're independent.

raindeer2

The difference between traditional software and LLMs is the generality of their intelligence, and there are formal definitions of general intelligence such as https://en.m.wikipedia.org/wiki/AIXI

AIXI is a definition of the optimal agent and is hence uncomputable, but LLMs are approximations which are approaching AIXI. I recommend Fridman's interview with Hutter.
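
For reference, a rough sketch of Hutter's definition (paraphrasing the linked article, so treat the notation loosely): at step k, with horizon m, actions a_i, and observation/reward pairs o_i r_i, the AIXI agent picks

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
           [r_k + \cdots + r_m] \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

i.e. it maximizes expected total reward under a Solomonoff-style prior that weights every program q (for a universal Turing machine U) consistent with the interaction history by 2^{-\ell(q)}, where \ell(q) is the program's length. The sum over all programs is what makes it uncomputable, which is why anything practical can only be an approximation.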

bryanrasmussen

Intelligence is a property of the species and a property of the individual; even unintelligent individuals (except for very pronounced extremes) still have the species property of intelligence.

The species property of intelligence encompasses stupidity.

andoando

I wouldn't say that's true at all for traditional computer programs. They're doing explicitly what they are designed to do; there is no adaptation/learning.

ben_w

Code vs. data.

The code needed to create, train, and perform inference on a Transformer is quite short. How short depends on how you count the `import` statements in https://github.com/openai/gpt-2/blob/master/src/model.py and https://github.com/openai/gpt-2/blob/master/src/sample.py etc.

Spreadsheets performing linear regression, etc. - do they learn? Sure!

If you accept that Transformers adapt and learn then you must accept that a spreadsheet also does, because someone implemented GPT-2 in Excel: https://github.com/ianand/spreadsheets-are-all-you-need

Do polymorphic computer viruses adapt? Border Gateway Protocol? Exponential backoff? Autocomplete? And that's aside from any algorithmic search results or "social" feeds, which are nothing but that.

andoando

I am confused; a spreadsheet running the code for what a neural network does, sure. But a traditional computer program isn't just Excel.

szvsw

How do you define adaptation and learning? What about, say, an autoscaler which is programmed to just track the load for every hour over the last week, and use the average of the last 7 days at 8am to pre-emptively auto-scale? Is that learning and adapting?

Alternatively, neural networks are also just doing explicitly what they are designed to do… sure there is a larger computational graph with lots of operations, but it’s all deterministic… backprop is not really much different on a procedural level than the simple fitting algorithm that I outlined above, in as much as it is just a specific well-defined algorithm or sequence of steps designed to compute some parameters from data.
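
To make that concrete, here's a minimal sketch of the hypothetical autoscaler described above (the names, capacity figure and hour granularity are mine, purely illustrative). All the "learning" it does is fitting the most trivial model imaginable: the mean load seen at this hour over the previous seven days.

    import math
    from collections import defaultdict, deque

    class NaiveAutoscaler:
        """Pre-emptive scaler: predict this hour's load from the last 7 days."""

        def __init__(self, capacity_per_instance: float, history_days: int = 7):
            self.capacity = capacity_per_instance
            # hour of day -> most recent `history_days` load samples for that hour
            self.history = defaultdict(lambda: deque(maxlen=history_days))

        def record(self, hour: int, load: float) -> None:
            """Store one hourly load observation (e.g. mean requests/sec)."""
            self.history[hour].append(load)

        def instances_for(self, hour: int, minimum: int = 1) -> int:
            """Pick an instance count before the load actually arrives."""
            samples = self.history[hour]
            if not samples:
                return minimum
            expected = sum(samples) / len(samples)
            return max(minimum, math.ceil(expected / self.capacity))

    scaler = NaiveAutoscaler(capacity_per_instance=100.0)
    for day in range(7):
        scaler.record(hour=8, load=420 + 10 * day)  # a week of 8am samples
    print(scaler.instances_for(hour=8))             # 5 == ceil(450 / 100)

Whether that counts as "learning" is exactly the question: it updates internal state from observations and changes future behavior, just as backprop does, only with a far simpler model.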

bloomingkales

What if we define adaptation and learning as the ability to concentrate? Our single-cell ancestors would have had to concentrate and deliberately store the first memory. Otherwise they would have just taken in the world with their sensors but never done anything with it.

Adapting and learning means it chose to concentrate on packing the world into retrievable storage.

When do we not adapt and learn? When we ignore our inputs and do nothing with it (don’t store it, don’t retrieve it).

In the example you gave, those classical programs cannot concentrate, it’s a one and done.

empath75

You can construct a "computer" that learns and adapts with some matchboxes and buttons.

https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_...

I wouldn't say that's more "intelligent" than, I don't know, a workplace scheduling system.

Nevermark

A structured program analyzing data as a graph, and optimizing access, is interacting with a phenomenon, updating its working knowledge of that phenomenon, and can produce results that are very non-intuitive.

Likewise, any symbolic mathematical system that accumulates theorems that speed up future work as it solves current tasks, seems like a high intelligence type of activity.

Deep learning is “just” structured arithmetic.

I think different kinds of intelligence can look quite different, and they will all be “structured” or “tropic” at their implementation levels.

Stepping away from the means, I see at least four “intelligence” dimensions:

1. Span: The span of novel situations for which it can create successful responses.

2. Efficiency: The efficiency of problem solving.

I.e. When vast lookup tables, exhaustive combination searches, and indiscriminate logging of past experience can be matched instead by more efficient Boolean logic, arithmetic, pattern recognition and logging, we consider the latter more intelligent.

3. Abstraction: The degree to which solving previous different novel situations improves the success or efficiency in solving new problems. I.e. generalizable, composable learning.

4. Social education: Ability to communicate and absorb learned information from other entities.

Plants, and I expect all surviving life forms, are very high in intelligence types 1 and 2.

Adaptive nervous systems and especially brains excel at 3.

Many animals, but most profoundly humans (whose languages for communicating are themselves actively adapted for compounding effects), excel at 4.

Today’s humans are effectively more intelligent than humans of 10,000 years ago, not because of 1-3, but because of 4. Learning as a child to read/write, do arithmetic, understand zero and negative numbers, and countless other information processing activities and patterns, from others, profoundly impacts our intellectual abilities.

Deep learning, as with the human species, non-trivially spans, and continues to improve, on all 4 types of intelligence.

giorgioz

I've been thinking as well that from some perspective a human being isn't actually a single life but rather a multitude of separate tiny life forms that cooperate to survive (the cells). The voice in our head is the emerging consciousness that acts as a captain; it's useful for the captain to think of itself as one being. Now this said, I feel the article is jumping a bit too much on the Animism hippie bandwagon: https://en.wikipedia.org/wiki/Animism

Of course, there is some intelligence in any life form's behaviour, but if you want to say that a tomato plant is intelligent then you need to use another word or set of words for more advanced life forms. Putting a tomato plant and a dolphin in the same bag clearly makes the word intelligence so vague it loses almost every practical meaning.

To clarify also the part where it talks about earth being an organism: I've thought of this as well, that the whole universe could be a life form, each planet and star in it a cell in its body. It's a possibility. Or maybe even our whole universe is just a cell in someone's body.

It's possible, but I fear there is little science in the people of that article and just old-fashioned Animism and "protect mother earth" nature-spiritual thinking that has existed for thousands of years. Those people see the world as if they are druids in a fantasy novel. I see myself more as a wizard. We might have different opinions. I will take them seriously when they can LITERALLY speak with animals daily in useful manners and tell vines to move. Until then it's just (a) Fantasy. I can summon electricity and fireballs (with technology); if they want to say they are druids they'd better step up their enchantments. Telling "sit" to a dog and then writing long articles on how dogs are so intelligent doesn't cut it for me.

dimal

The term I’ve seen used the most is “basal cognition”. It’s used to describe agentic problem solving behavior seen at lower scales and in different problem spaces where we normally have trouble imagining intelligence. Michael Levin’s paper, “Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds”[0] is a good, readable explanation of the concept. He does a ton of YouTube talks explaining it as well.[1] Very watchable, and pretty mind-blowing. It’s not wishy washy animism. He and his lab are doing rigorous experiments and finding some very unusual things.

[0] https://www.frontiersin.org/journals/systems-neuroscience/ar...

[1] https://youtu.be/StqX-LH0IN8?si=NhwMdWxBLZrghUom

null

[deleted]

truculent

While it shares some similarities with animism, it’s a fundamentally materialist viewpoint, which puts it at odds with animism. Materialism is so rooted within our culture that I think it can be hard to fully grok alternative viewpoints.

bloomingkales

Trees giving you the very thing you need to live every minute wasn't enough of a compromise for you? They're quite negotiable with us.

Gotta watch The Happening again.

giorgioz

To be clear, trees did not "give" us oxygen. Trees evolved from bushes, which evolved from algae, which evolved from some mono-cell life forms that are a common ancestor to us as well. They were shaped by natural selection; it was indeed awesome for them to get out of the water and start consuming carbon dioxide in the atmosphere. They did not give us oxygen though; there was no high-level consciousness of gifting life to others, as your sentence is implying. This said, trees are awesome and we should definitely not cut them or burn them for no reason or pure sadism.

We should, though, also consider them as a part of the environment where we live. If we need them and there is plenty and there aren't other constraints, we can use them to build things. Having too much carbon dioxide built up in the atmosphere is a very important constraint to value; we don't want to literally destroy our home. I can keep these thoughts rationally in my brain without having to come up with metaphors where the giant rock we happened to be on is a "mother" nurturing us. The earth is not our "mother"; we evolved here simply because all the other rocks did not have the right conditions to sustain life. The universe is mostly a harsh and cold place, and we should save/improve/increase the places that can be home. I feel, though, that the people who start to lean into Animism are just not very scientific; they mean well, but they won't learn all the technology tools that could help preserve our home planet more effectively.

Ending on a positive note, many scientists both love nature and are rationally scientific (rather than romantic). All the awesome work being done using LLMs to speak with whales and to better recognize dogs' emotions and pain levels is very inspiring.

bloomingkales

It's not that we're unscientific; it's just that we're putting science aside unless it corroborates things. It's totally a terrible process; there's no evidence.

Some people, over time, cannot put down the fact that there are just way too many coincidences. If you just pile up what it took for you to be here (let's include your one-in-a-million chance of out-swimming the other sperm, along with a perfect planet, possibly perfect parents, the list goes on), I think you'd at least agree it's one miracle after another.

I can sit here and measure the anger on your face based on the metric of how red your face is, but I'll never know why you are angry. You can tell me, but even then, is it true? That's the issue with science: measuring and collecting observations won't reveal the true nature. So it must be put aside, at least to investigate the other possibility.

I don’t see how people aren’t open minded about there being more than a clinical understanding of this experience.

From my other comment:

https://www.sciencedirect.com/science/article/abs/pii/S01676...

Is it so hard to believe there is a global frequency?

timewizard

You perceive one voice and give it a position of authority.

What if it isn't?

What if it's many voices and they compete for control?

phito

This is 100% my experience. There are different captains, but since they share the same memory, there's an illusion that they are one. When you look into who is in the driver seat now at different times, it is clear that there are multiple drivers. Which one is driving now depends on the context and state of my brain.

myflash13

Here's an interesting thought experiment. Take any definition of consciousness or intelligence that is not based on biological components, for example "reacts to stimuli, exhibits anger". You can apply that definition to other entities like the United States. Does the United States react to stimuli (e.g. invasion) and exhibit anger? Yes (e.g. Pearl Harbor). Therefore the United States is conscious?

If a person argues that an LLM is conscious or intelligent based on how it responds, is the United States conscious or intelligent?

skissane

There's a great philosophy paper making this argument: Schwitzgebel, Eric. “If Materialism Is True, the United States Is Probably Conscious.” Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, vol. 172, no. 7, 2015, pp. 1697–721. https://faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious...

Discussed previously on HN:

July 2015 (210 comments): https://news.ycombinator.com/item?id=9905847

Feb 2021 (78 comments): https://news.ycombinator.com/item?id=26217834

Dec 2023 (1 comment): https://news.ycombinator.com/item?id=38814769

naasking

> There's a great philosophy paper making this argument: Schwitzgebel, Eric. “If Materialism Is True, the United States Is Probably Conscious.”

Maybe, but this strikes me as a borderline category error, like saying, "If databases can order results, then they are probably sorting algorithms". The argument makes assumptions that consciousness is transitive or "sticky" in some sense, where if a property applies to a part of the system it must also apply to some aggregate of those parts.

skissane

> The argument makes assumptions that consciousness is transitive or "sticky" in some sense, where if a property applies to a part of the system it must also apply to some aggregate of those parts.

I think the argument is really about substrate-independence. If consciousness is just about functional properties, then why can’t a social collectivity exhibit those functional properties?

Many materialists do in fact endorse substrate-independence - the common belief that an AI/AGI could in principle be as conscious as we are (even if current generations of AI likely aren't) depends on it - and I think substrate-independent materialism likely does fall victim to this argument. Now, maybe not, if there is some functional property we can point to that individual humans and animals possess but which their social collectivities lack. But then, what is that property?

Other viewpoints don't endorse substrate-independence. For example, Penrose-Hameroff's orchestrated objective reduction, if you would call that materialism - I think you can interpret it as a materialist theory (e.g. the in-principle empirically testable claim that neurons have a physical structure with certain unique quantum properties, plus the less testable claim that those properties are essential for consciousness) or as a dualist theory (e.g. these alleged unique quantum processes as a vehicle for classical Cartesian interactionism). The more materialist reading could be viewed as a substrate-dependent materialism which escapes Schwitzgebel's argument. But I don't think most materialists want to go there (seems too dualism-adjacent), and the theory's claims about QM and neurobiology are unproven and rather dubious.

myflash13

The difference here is we know the definition of sorting algorithms, but we don't have a working definition for "consciousness". The argument is, if we use materialist definitions, then lots of unexpected things fit the definition.

Joker_vD

> "reacts to stimuli, exhibits anger".

Nitroglycerin comes to mind.

koakuma-chan

> Therefore the United States is conscious?

Assuming the United States react to stimuli and exhibit anger, and assuming that the definition of being conscious is reacting to stimuli and exhibiting anger, yes, the United States are conscious.

null

[deleted]

cjfd

One part of consciousness is the 'stream of consciousness'. I.e., a single-threaded, if you will, sequence of observations and/or language that is extracted from all the parallel processing that is happening in the brain. The US does not have that. If there were one single news broadcast that all people were listening to all the time and were basing their actions on, one might start considering that the US is conscious.

Also, considering who the current president is, the US is quite the opposite of intelligent.

svantana

This is one of the themes of the classic 1979 book "Gödel, Escher, Bach".

timewizard

If I break a bone, it heals itself. Are my bones intelligent?

FeepingCreature

I think the US is conscious.

carlosjobim

What is the god damned problem with writers today?

> From a snarl of roots that grip dry, shallow soil, the knobbly trunk of an ancient olive tree twisted into a surprisingly lush crown of dense, silvery-green leaves. Far above, the retrofuturistic pattern of a geodesic dome framed the blue sky outside. Dan Ryan considered the tree: “It’s probably close to 1,800 years old.”

Why do articles always have to start this way? These writers are writing the blandest and most boring clichés imaginable and everybody hates it, yet they keep doing it. Is it so fucking important for their egos to be perceived as some kind of antiquated stereotypical writer that they have to continue to humiliate themselves and their readers with this, just to impress who?

No wonder everybody is watching short form stuff on TikTok, that gets to the point.

No wonder people are listening to three hour podcasts, where the people who are being interviewed can talk uninterrupted, instead of having their quotes salted and peppered between some musings from the writer.

Where are the writers that make articles for normally intelligent people to read, that aren't filled with fluff, but aren't filled with equations either? Are these all making YouTube videos now?

gregwebs

I don't see the word "consciousness" in the article. I thought that was the thing to figure out to understand the emergence of the mind.

sctb

My general understanding is that "mind" is an objective concept; people have minds that cognize and think and learn and so on. Some minds are apparently more capable of those things than others. When speaking about intelligence, it makes sense to associate that with the mind.

Consciousness, on the other hand, is (even) less well-defined and is usually considered to be subjective. Being subjective, it tends to resist all of the usual objective approaches of description and analysis. Hard problem and all that.

mirekrusin

I don't understand why people have a problem with simply stating that it is an emergent phenomenon and that's it.

Similarly to how a computer is a computer and a half-sized computer is half of its bigger friend – you can keep halving it until there is no "computer" left in it.

Or a pencil – you have a pencil that you call a pencil; what about a pencil half its size? And so on until you hit a single atom. You had a pencil, now you don't; where on this line was there a pencil and then there wasn't?

heyjamesknight

Because that's the same as giving up and saying "we don't understand."

What is mind emerging into? When a video game experience emerges from the combination of processing, display, sound, and controller input, it emerges into a level of organization that a mind can participate in. It emerges into a system of organization emanating downward from the mind experiencing it. It can't just "emerge" into existence on its own. If a game falls in the woods, it's not a game.

If you call the mind an emergent phenomenon but can't describe the context into which it emerges, you've added nothing to our understanding.

LiquidHelium

It depends on what you mean by consciousness, if we are talking about the intelligence or self awareness or thoughts then I don’t see any problem with it being emergent. But if we are talking about conscious experience/qualia (not something that thinks or interacts, but something that just experiences) then I think it’s incoherent for it to be emergent. That there is a consciousness that is experiencing something is the only thing we can know as 100% true, and the world itself is something we can never know is 100% true: we could be a brain in a vat, we could be dreaming, in the matrix, a demon making us hallucinate everything etc. It seems a bit silly to say the 100% true thing is an illusion or is dependent because something that we don’t know is true tells us it is.

BriggyDwiggs42

Pencil is just an idea, minds objectively have qualia (measured internally).

Edit: you can’t measure “pencilness,” but you can’t help but know whether or not you’re in pain.

Symmetry

There's a whole scientific study of consciousness that actually comes out of behaviorism. The thought is, if I have a conscious experience, I can then exhibit the behavior of talking about it. From this developed a whole paradigm of investigation, including stuff like the research on subliminal images.

Stanislas Dehaene's book Consciousness and the Brain does a great job of describing this, though it's 10 years old now.

wat10000

Trouble is that you can also exhibit the behavior of talking about it just by being exposed to the idea, even if you don't have the experience. If you were never exposed to the idea and you started talking about it, then I'd be convinced you had the experience, but nobody is actually like that. The fact that the idea exists at all proves to me that at least one human somewhere had conscious experience, and I know there's at least one more (me), but that's it.

Aardwolf

The mind concept here could then apply to computers as well, since after all those can also be configured to learn things and behave in certain intelligent ways.

baddash

mind = container of values

consciousness = meta-attention

card_zero

I read it all, for a certain value of "read". It's very long, and heavy on examples and fascinating facts, but skimps on getting to the point. I enjoyed the line about plant biologists suffering from brain envy. The article gets better from about halfway through as skeptical views begin to be introduced, but eventually it lets go of that and turns back into a lot of hand-wavy awe about mycorrhizal networks, and I missed what the "new proposal" is. If it's only saying that intelligence is an emergent property of connections, and could therefore emerge in swarms or societies, we've had that idea since at least Hofstadter and his sentient ant nests.

SubiculumCode

'Get to the point' is my primary response to the article.

null

[deleted]

hikarudo

> we've had that idea since at least Hofstadter and his sentient ant nests.

A similar idea is present in Herbert Simon's 'The Sciences of the Artificial', where he describes a sentient city.

ivan_gammel

Intelligence is the ability of a system to make observations and adjust itself based on them (like our brain changing while learning, or our environment changing with technological progress). It's definitely not a binary state. If we put the internal complexity of the system on one axis and external complexity (what can be observed and meaningfully processed) on another axis, there's a circle on the plane representing what humans perceive as being intelligent. It intersects with a few other species, so we do think now that they are intelligent. Everything else outside that circle is either too primitive or too complex for us, so we do not see e.g. plants as intelligent, but we also may not recognize aliens as intelligent because their existence is too complex for us to even notice it. Humans unlocked an evolutionary path not based on DNA, so we evolve much quicker now through science and culture. Our circle is thus expanding, and we start realizing how more primitive systems think and create new intelligent systems.

marcus_holmes

I found it fascinating how this discussion dovetails with the discussion around free will.

In both cases, defining the actual thing under discussion is hard. If you can accurately predict a decision in advance, is that "free will"? If an organism reacts to a stimulus in an appropriate manner, is that "intelligence"?

In both cases, we're complex chemical organisms doing all of this with complex chemistry. If we rule out souls and spirits and similar, then it's just chemistry.

If we're following a predefined set of chemical rules in response to a set of stimuli, then how is that "free willed" or "intelligent"? The line between tropism and intelligence seems very arbitrary.

But on the other hand, we are made of meat. We experience free will and intelligence. We think, and make decisions, seemingly unrestricted by the method we use to do that. We are clearly intelligent, and clearly make decisions that we are apparently free to make.

anon291

One's observations of one's own agency is really the only thing one can be assured of. And this is where these arguments that seek to reduce intelligence to purely mechanistic processes break down. For sure, everything the article is saying is true, and indeed, the system could be classified as intelligent, but this is a wholly different question from 'agency' or 'free will'. Even exceptionally dumb people have free will, and exceptionally intelligent computers (ChatGPT, DeepSeek, et al) have no free agency.

NoGravitas

> One's observations of one's own agency is really the only thing one can be assured of.

Can one? One of the most disturbing short stories I've ever read is "Love Is the Plan, the Plan Is Death" by Alice Sheldon (writing as James Tiptree). It is the first-person narrative of an unusually self-aware member of a non-technological intelligent species trying to make a life different from "the plan", the species' instincts, in the face of an oncoming slow-motion multigenerational catastrophe. His efforts end up, all for contextually rational and agentive reasons, reiterating his species instinctual lifecycle.

My takeaway from that, from other readings, and from self reflection, is that we are puppets that may or may not become aware of our strings; but if we cut them, we die.

LiquidHelium

This is an argument from anecdote, so feel free to ignore me, but if you meditate for long enough or take certain substances you can experience that the conscious experience we are having doesn't actually control things in the way we think it does - you don't actually think your thoughts, you just observe them, like we don't control the sounds we hear - and the same goes for everything else we do. The "only thing one can be assured of" is the experience, not the control of the experience.

This is completely contradicted by the fact I could talk about that experience, which does imply some control from the observer to the physical world. Which makes the whole thing paradoxical. The only way I can square it is with my religious beliefs.

Miraltar

I feel like what you are describing is like letting go of the wheel while driving. Then the car does its own thing but that doesn't mean you don't have control. It's just that you decided (or sometimes were forced) to let go. I agree that we're never fully in control but I don't think we're simple observers either.

null

[deleted]

koakuma-chan

> Even exceptionally dumb people have free will

“Free will” does not exist because the world is deterministic. In other words, if you have made a decision, you couldn't have made any other decision, so there wasn't any choice in the first place. A person's IQ has nothing to do with this.

marcus_holmes

I always get stuck with this. That yes, I could not have made any other decision, but that decision is still mine to make.

It's like being able to predict what someone is going to do, because you know them and how they think, and what decision they will make when presented with the choice, doesn't stop that from being their choice.

The universe may be deterministic, but my personality is still my personality. It is encoded into the chemical make-up of my brain, and so that complex chemistry behaves in ways that align with my personality. My personality is shaped by my previous experience, but it's still my personality. I still choose, even though the universe can predict all of my choices, because the thing I do the choosing with is part of the process.

And this seems very like the argument about intelligence and instinct. If I respond in a certain way to an event, is it because I am intelligent and "thinking" about my response, or is it instinctual and coded into my meat to respond this way? How would I tell the difference?

Same with free will, how would I tell the difference between a choice I freely made and one I didn't?

NoGravitas

One may object that at the quantum level, the world really is nondeterministic. Epicurus also argued this over 2000 years ago - that sometimes atoms "swerved" unpredictably in their movements, accounting for free will. Of course, the counterpoint to this argument is that randomness is not free will any more than determinism is; neither offers any space for agency as something that's causal rather than just experienced.

infinitifall

You haven't defined this property you call "agency". How then can you definitively determine whether you possess it or that someone else doesn't? The only thing I can be assured of is my own existence.

anon291

Well agency is the feeling I have of being able to impact the world. I cannot know if you have it but I have extrapolated that based on my impression of you. For all I know, I'm the only one to exist.

ilaksh

Maybe the increased status management, self-pattern persistence, or general problem-solving abilities of intelligence come from processes that integrate data from multiple colony members and use the integration, broadcast and storage of information over a series of immediate steps and long time frames to synthesize concepts and plans at a higher level than individuals can manage.

But the intelligent work is in the connections, exchange, and integration of information.

I think that security for human groups should be thought of in this context. Many strong communication links or maybe a holistic network are required for preventing sub-colonies from becoming "other".

ilaksh

Reminds me of concepts like a Metasystem Transition https://en.m.wikipedia.org/wiki/Metasystem_transition or Global Brain https://en.m.wikipedia.org/wiki/Global_brain

keernan

When I finished reading this article, several of its resource articles, and the HN comments, I returned to my HN feed, and a few posts down I came across: DARPA Large Bio-Mechanical Space Structures. What an interesting intersection.

https://news.ycombinator.com/item?id=43185769