
Mind-reading devices can now predict preconscious thoughts

guiand

Split brain experiments show that a person rationalizes and accommodates their own behavior even when "they" didn't choose to perform an action[1]. I wonder if ML-based implants which extrapolate behavior from CNS signals may actually drive behavior that a person wouldn't intrinsically choose, yet the person accommodates that behavior as coming from their own free will.

[1]: "The interpreter" https://en.wikipedia.org/wiki/Left-brain_interpreter

brnaftr361

Split brain experiments have been called into question.[0]

[0]: https://www.sciencedaily.com/releases/2017/01/170125093823.h...

comboy

> The patients could accurately indicate whether an object was present in the left visual field and pinpoint its location, even when they responded with the right hand or verbally. This despite the fact that their cerebral hemispheres can hardly communicate with each other and do so at perhaps 1 bit per second

1 bit per second, and we are passing complex information about location in 3D space?

pinkmuffinere

Wow this is fascinating, and gets rid of one of my eldritch memetic horrors. Thanks for sharing, I’m going to submit it as its own post as well!

empath75

That's a great paper, but I don't think it calls into question anything about post-hoc rationalizations, and it might actually put that idea on more solid ground.

debo_

Maybe you are just rationalizing it.

bryanrasmussen

I guess I will start paying attention when it can predict word choice in my internal monologue.

Terr_

From some dystopic device log:

    [alert] Pre-thought match blacklist: 7f314541-abad-4df0-b22b-daa6003bdd43
    [debug] Perceived injustice, from authority, in-person
    [info]  Resolution path: eaa6a1ea-a9aa-42dd-b9c6-2ec40aa6b943
    [debug] Generate positive vague memory of past encounter
Not a reason to stop trying to help people with spinal damage, obviously, but a danger to avoid. It's easy to imagine a creepy machine that argues with you or reminds you of things, but consider how much worse it'd be if it derailed your chain of thought before you're even aware you had one.

fainpul

This reminds me of "Upgrade", a sci-fi movie about a paralyzed man who gets an AI brain implant, which can move his body for him. It's pretty decent.

https://www.imdb.com/title/tt6499752

callamdelaney

Can you imagine having ChatGPT in your brain to constantly police wrongthink? It would save the British media a job.

iberator

You should make a text-based game.

zh3

AI following the Libet (1983) paper [0] about preconscious thought apparently preceding 'voluntary' acts (which really elevated the question of what 'free will' means).

[0]: https://pubmed.ncbi.nlm.nih.gov/6640273/

Lerc

The prima facie case for free will* is that it feels free. If you can predict the action before the feeling, it removes that argument (unless you want to invoke time travel as an option).

*one of the predominant characterisations of free will, anyway. I'm a compatibilist, so I don't see caused feelings of decision-making as being in conflict with free will. I also have a variation of Tourette's, so I have a different perception of doing things wilfully compared to most people. It's really hard to describe how sometimes you can't tell if something was a tic or not.

kelseyfrog

There are a lot of things I feel that end up not being "real," like embarrassment, failure, and anxiety. Why should free will not be like any of those?

criddell

Well, what does free will mean to scientists?

mostertoaster

Ok, does anyone else's mind just immediately go to "The Minority Report" soon no longer being just a sci-fi dystopia?

handedness

> is it time to worry?

Shouldn't the device be the judge of that?

rpq

I think the real danger lies in how many will accept that output as the unadulterated, unmistakable truth for actions and for judgment. Talk about a sinister device.

smilebot

You don’t need a sinister device. This is essentially how propaganda works.

amarant

I find the take that a quirk in how state-of-the-art assistive technology works is grounds for privacy fear-mongering to be tired, unimaginative, and typical of today's journalism, which cares more for clicks than for reporting facts.

It's a very interesting quirk of an immensely useful device for those who need it, but it's not an ethical dilemma.

I for one am sick and tired of these so-called ethicists whose only work appears to be stirring up outrage over nothing and holding back medical progress.

Similar disingenuous articles appeared when stem-cell research was new, and still do from time to time. Saving lives and improving life for the least fortunate is not an ethical dilemma, it's an unequivocally good thing.

Quit the concern trolling, nature.com; you're supposed to be better than that.

fjfaase

I wonder how similar this experience is to Alien Hand Syndrome, where people experience part of their body, usually a hand, acting on its own.

j45

Maybe skulls will need a Faraday cage.

aareselle

We already have tinfoil hats for that.

cma

Rather than the Karpathy idea of in-class essays for everything, maybe random selections of students will be sent to the school fMRI machine and asked to recall the details of writing their essay homework away from school.