
Knowing less about AI makes people more open to using it

paulgb

This seems to be largely predicated on the idea that the more you understand how AI works the more the magic disappears[1], but I think the opposite is true -- I like to think I understand this stuff from the logic gates to backpropagation to at least BERT-era attention, and the fact that all of those parts come together into something I now spend hours a day conversing with in plain English to solve real problems is absolutely awe-inspiring to me.

[1] from the source article “we argue that consumers who perceive AI as magical will experience feelings of awe leading to greater AI receptivity”

fumeux_fume

I don't know. Maybe you're more in the magical group than you think. I spend hours a day helping my client with prompt engineering for customer service training, and in my experience most of the awe and positivity I see from people comes from a lack of rigorous evaluation of the output they're getting back.

blooalien

Also, for many people I know who don't properly understand "AI", their reaction is less "awe" and more "fear". It's a real challenge to get them to take any of the actual benefits of using it properly at face value, because of all the things they worry about due to sci-fi nightmare scenarios and Facebook disinformation (not to mention some genuine and valid fears piled on top of all that: misuse and abuse by corporations and governments, as well as other "bad actors" like spammers and scammers, etc.).

aprilthird2021

I find it's the opposite. A lot of people think it's like a Google / Wikipedia summarizer, and don't realize it can be wrong, because they think it's something like the Watson computer from Jeopardy.

blooalien

Yeah, that's definitely another sub-set of the "completely misinformed about / totally misunderstand AI" crowd. :)

resonious

I think there are a lot of technological advancements that are easy to "not like" when you know some select few details about them. Including literal sausage making as another commenter on here mentioned.

T-shirts are great until you hear about the conditions of where they're made and how their disposal is managed. Social media is great until you realize how much they know about you and how they use that knowledge. Modern medicine is easy to not like when you look at the animal experiments that made it happen. And again, sausages - I know some vegetarian folks who are vegetarian in protest of how most meat is produced.

I kind of wonder if there is a subset of comfortable modern society where every aspect is easily likable no matter how much you know about it. Bonus points if that society is environmentally sustainable.

voidhorse

Sure, but I think there is a key difference between those examples and LLMs. In those cases, people don't necessarily mind the machines involved in the process; they dislike the socioeconomic and labor structures around those machines, or the animal cruelty. That is a side effect of the social organization of labor, not of the machine itself. Here, by contrast, the claim is that the machine itself becomes less impressive once you know how it works. That is likely true of any machine, but the t-shirt and sausage examples are not about the demystification of technical function.

dartos

The difference is that once you know how a t-shirt is made, it doesn’t change your perception of the functionality of a t-shirt.

Laymen will hear AI and imagine I, Robot and Terminator. They hear "neural networks" and think AI acts like a physical brain.

Once you understand how AI works, your perception of the functionality changes as well (e.g. from Skynet to reality).

I hear “sweat-shop factories” and, while it’s a disgusting practice, the t-shirt still covers my torso.

metadat

Life is [generally] inherently exploitative. Vegetarianism and veganism are a first-world luxury; the bottom line is that animal protein is a readily available, high-quality caloric option for helping children grow into healthy adults.*

Learning to exist in the world and to hold the uncomfortable parts of being human has been a valuable and useful skill in my life across many dimensions.

* I'm not advocating eating excessive meat, but going to extremes to avoid it does suggest some other factor is in play.

aziaziazi

> Vegetarianism and veganism is a first world luxury

"Luxury" makes me choke: animal protein has always been seen as a luxury meal in most cultures, one that you don't have every day, as found in many 19th- and early-20th-century authors. You'll never see animal protein described as poor people's food, least of all meat. [0]

For 1 kg of plant protein fed to animals, you get back about 230 g of milk protein, 220 g of egg protein, 150 g of chicken protein, 120 g of pork protein, or 70 g of beef protein. There is some difference in protein availability, but on a much smaller scale. Regarding quality: most traditional societies have combined plant proteins in their daily diets: corn and beans (Latin America), chickpeas and bulgur (Middle East), rice and lentils (India), soy foods and rice (China, Japan, Korea), millet and peanuts (Africa), rice and tempeh (Java, a very poor place with one of the biggest population densities in the world). The list goes on. Modern science has shown that those combinations skyrocket the ratio of limiting amino acids, like lysine and cystine/methionine for soy and brown rice.
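The conversion figures quoted above imply a rough efficiency calculation; here is a minimal sketch (the gram figures are taken from the comment itself, and the derived percentages are simple arithmetic, not additional data):

```python
# Grams of animal protein returned per 1 kg (1000 g) of plant protein
# fed to livestock, using the figures quoted in the comment above.
returned_g = {
    "milk": 230,
    "egg": 220,
    "chicken": 150,
    "pork": 120,
    "beef": 70,
}

for animal, grams in returned_g.items():
    efficiency = grams / 1000      # fraction of plant protein recovered
    multiple = 1000 / grams        # plant protein needed per unit of animal protein
    print(f"{animal:>7}: {efficiency:.0%} recovered, "
          f"~{multiple:.1f}x plant protein input required")
```

For beef, for example, the quoted 70 g per 1000 g works out to roughly 7% recovery, i.e. about 14 times more plant protein in than animal protein out, which is the land-use argument the comment goes on to make.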

Plants have always been the poor's staple protein because they are dirt cheap, nutritious and convenient. More importantly for the future: they require many times less land to grow, because of the ratio of protein consumed to protein returned by animals (see above). Wild fish is an exception, but the relative yields have been decreasing for decades, compensated only by more and bigger boats. Also, 1/5 of the world's wild catch goes to feeding livestock and farmed fish.

It's true (and sad, in my opinion) that plant proteins are now a luxury in certain cities/places. The reasons for that are habits and taste preferences, global wealth imbalance (Europe imports 60 kg/person/year of Brazilian soy just for livestock feed, a quantity that would cover 50% of Europeans' protein needs if consumed directly), and the subsidies and price regulations that prop up an unsustainable dairy industry.

I'm not advocating that everyone eat soy, but let's face it: feeding livestock with human-consumable protein is not the most efficient diet.

[0] L'Assommoir by Émile Zola, loosely translated by myself: "On Sundays, when there was work at the mine, we'd have a bit of bacon, and on the rare good days, a chunk of beef the size of your fist. It was a real treat."

cageface

The vast numbers of low income vegetarians in India would no doubt find this argument very novel.

000ooo000

..if it weren't so regularly rolled out by the ignorant.

tracerbulletx

There's probably a reversal of that trend when you know a lot about it. I don't know how you could go through building your own GPT-2 and not see it as an incredible technology and useful advancement that is going to give us an incredible number of insights and tools.

I still can't get over how soon people just accepted that you can universally translate all languages, that audio can be accurately transcribed, that you can speak a sentence and get pretty good research on a topic. These things are all actually insane and everyone has just decided to ignore this and focus on the aesthetic grossness of some of the things people are trying to get them to do.

vrighter

Because the only time I can be confident in the answers it gives me, is when I already knew the answer before asking, which makes using the LLM pointless. Anything else would require me to do research to fact check it anyway. Might as well skip the LLM and do the research directly.

This is what a lot of people forget about LLMs: if you didn't already know what the answer should be, then you won't know when it's giving you crappy responses. And if you do know what you're talking about and actually catch it giving wrong information (or think you did), it'll just say "whoops... you're right!" Sometimes even when you aren't.

The only things I can feel confident asking an LLM is stuff that I already know.

aprilthird2021

I mean, if you are willing to accept mistakes, then a lot of this was doable a while ago. Google Translate has been around a long time (and it's improved due to transformers, but it was pretty good a long time ago too). Wikipedia has been around forever. Siri and similar agents were doing audio transcription well long before.

So all of these have improved. But they aren't completely novel

ripped_britches

Or on the far end of the spectrum of literacy, we have serious techno optimists that understand the fullest potential of lifesaving research (alpha fold, etc)

kjkjadksj

Well that end of the spectrum isn’t exactly language models

jonas21

Both Alpha Fold and LLMs are built around transformers.
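For readers curious what that shared core actually is: the common building block is scaled dot-product attention. Here is a minimal NumPy sketch; the function name and toy data are illustrative assumptions, not taken from either AlphaFold or any specific LLM:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the mechanism at the heart of
    transformer stacks in both LLMs and AlphaFold (illustrative sketch)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted mix of values

# Toy example: 3 tokens with 4-dimensional embeddings, self-attention (Q = K = V)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = attention(x, x, x)
```

The two systems diverge sharply above this layer (AlphaFold's Evoformer operates on residue pairs and multiple sequence alignments, LLMs on token sequences), which is the point the reply below makes.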

pclmulqdq

That's pretty much where the similarity stops, though.

analog31

I wonder if the "literacy" about AI is not so much about the inner workings of the technology, but about the broad ramifications of its use.

It takes at least a wee bit of sophistication to go from "this is a neat search engine, and it can help me with my writing tasks" to "the stuff that's generated by this thing is going to affect society in unpredictable ways that are not necessarily positive."

A similar spectrum of attitudes may already exist for social media, where the term "algorithm" predated the term "AI."

ksenzee

> These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption. To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance.

Why would we avoid educating people, in order to keep them willing to use AI? Why is getting people to use AI seen as a good in itself? Did AI write this article? (Don’t answer that.)

voidhorse

You're striking at the fundamental irrationality behind the current hype cycle. It is still, in the vast majority of cases, a solution looking for a real problem. Since there isn't yet a clear benefit, or a problem to solve, that justifies the high costs, people are trying to make their (silly) bets come true by brute force (get everyone to use this thing so I get my ROI, whether it makes any actual sense or not).

JohnMakin

I'm sure it's already a solution to a lot of things that just aren't sexy or justifiable at the current cost. For instance, if it were cheap enough, I would never have to use a keyboard again, which, due to a disability I have, is becoming more difficult. That would almost certainly help a lot of people, but since it isn't, like, the next iPhone, people turn their noses up at it.

Solutions like Neuralink, if they mature, could probably use some sort of LLM-powered translation layer to machine instructions (and neither technology is near perfect at this yet). Things like that I think we can technically do already; it's a matter of refinement. The magical stuff, though, I'm less sure of.

CaptainFever

Have you tried dictation apps built on Whisper? They're pretty damn magic to me. Also they are often small enough to run locally.

rsynnott

> for instance, if it was cheap enough, I would never have to use a keyboard again, which due to a disability I have, is becoming more difficult. That would help a lot of people almost certainly

I mean, sure, but it’s a disability aid which isn’t useful to the average person. I don’t think most people are saying that ‘AI’ is entirely useless, but you mostly are talking about niche uses like this, not the world-changing thing it is being sold as.

refulgentis

I don't understand how this facilitates or admits any discussion.

If I tried to learn more about how you feel, anything I say that indicates any questioning of what you wrote gets read as support for something vapid, negative, worthless, that doesn't solve anything, a "solution" looking for a problem.

blibble

> Why would we avoid educating people, in order to keep them willing to use AI? Why is getting people to use AI seen as a good in itself?

because you can't extract a few tenths of a cent every time someone engages their brain

LambdaComplex

> you can't extract a few tenths of a cent every time someone engages their brain

Don't give the startups any ideas

CaptainFever

A non cynical answer: probably to try and encourage people to find new and useful ways to use AI, while learning about their limitations (e.g. hallucinations) and their strengths (e.g. rephrasing, autocomplete, concept art).

ADeerAppeared

> (Don’t answer that.)

It is nevertheless important to say out loud:

> Why is getting people to use AI seen as a good in itself?

Because user counts pump up the stock price. And that is all AI has.

Whether you believe the claims that inference is profitable or not (and there are good reasons to distrust them), AI does not live up to the financial hype.

AI cannot stand on its own merits. It's not acceptable to let history run its course and let the AI skeptics be shown wrong in due time, because it'll dampen the hype, and perhaps those skeptics aren't so wrong. The people can't be educated into a healthy skepticism of AI, because then they wouldn't use it enough.

It's readily obvious that the emperor has no clothes. The actions of the companies and executives involved betray their statements about how great AI is.

AI is forced into products at deeply subsidized prices. You wouldn't do that if the tech were that big a deal. Apple charged premium prices for the iPhone.

Benchmarks are aggressively cheated. OpenAI funding FrontierMath and only giving a verbal agreement after having already broken so many of those is a joke. If the systems actually worked as promised, there would be no reason for this mess, and every reason in the world to gather accurate data on the generality of the intelligence.

And biggest of all: this entire mess has the implied framing of the Manhattan Project. That it's all a big race towards AGI, and whoever develops AGI will win capitalism forever. So important that they're getting support from the US government with their "Stargate" project. And until rather recently, everyone was making lots of noise about AI safety and the world-destroying dangers of letting someone else develop AGI.

In 1942, Georgii Flyorov deduced the Manhattan Project's existence from the sudden silence in nuclear fission research.

Today, despite stakes that are proclaimed to be even higher, all the big players will not shut up about their accomplishments. Everything is aggressively published and propagandized. Every single fart an AI model makes is spun into a research paper. You might as well mail the model weights directly to Beijing.

Those are not the actions of companies trying to win an R&D race. Those are the actions of companies pushing up their stock price by any means necessary.

nyarlathotep_

Thanks for the sober perspective.

I really wonder what "losing the AI race" (typically meaning USA vs China) is supposed to indicate.

They have a better LLM or something......and then what? A rogue chatbot takes over the world or something?

We're like two-plus years into being a few months away from LLMs taking every office job, and I'm still at a total loss as to where this is all supposed to go or what I'm even supposed to be sold on.

raxxor

Is there already something more advanced than langchain? I haven't really seen many integrated AI applications aside from bots of course.

refulgentis

None of this thread means anything at all, it's 90% nihilistic cynicism wedded to 10% regurgitating talking points from their training data.

The real high-school-sophomore smelling thing, which you'd miss through all the purple prose about the Manhattan Project, is "open research is bad and proves it's fake bunko crap...that the Chinese are stealing(!?)"

I've been here for 15 years and am shuddering to think there are commentators here who would start selling you "open is bad" the instant they had a soapbox to pound their chest on.

dralley

refulgentis

Titled: "Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?"

autobodie

the profit motive

dartos

It’s just like politics!

The less you know about any politician, the more you like them!

venkat223

As Phillip Kotler says marketing is cosmetic cheating AI marketing wants all not to know the intricacies and risks by analysing it.They want to celebrate what we get.Sounds like bounded rationalities canvassed by marketing cheaters.

Qwertious

>As Phillip Kotler says marketing is cosmetic cheating AI marketing wants all not to know the intricacies and risks by analysing it.They want to celebrate what we get.Sounds like bounded rationalities canvassed by marketing cheaters.

Your comment is incomprehensible. Please use commas.

sebmellen

Here’s a try:

> As Phillip Kotler says: “marketing is cosmetic cheating”. AI marketing wants [us] all not to know the intricacies and risks [that we would find] by analysing [the AI]. [The AI labs] want us to celebrate what we get.

Sounds like bounded rationalities [that have been] canvassed by marketing cheaters.

roywiggins

Americans tend to like their own House representatives far more than they like Congress writ large.

allears

And sausage-making...

Spivak

Eh, politics ain't that special when it comes to this phenomenon. Don't meet your heroes works for pretty much every field.

dartos

> balance between helping people understand AI and keeping them open to its adoption.

Why is the latter a goal?

shlomo_z

Reminds me of the saying (internet meme):

"Give a man a game and he'll have fun for a day. Teach a man to make games and he'll never have fun again"

floppiplopp

That's how scams work.

oidar

The paper referenced is behind a paywall. This article is really hard to understand because the paper it is reporting on uses "AI literacy" as the determinant of "openness to AI". I'm very curious about what they mean by "AI literacy".