The slow collapse of critical thinking in OSINT due to AI
201 comments
April 3, 2025
Aurornis
karaterobot
> In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources
He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.
potato3732842
Having a surface level understanding of what you're looking at is a huge part of OSINT.
These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.
jerf
At least if you're on reddit you've got a good chance of Cunningham's Law[1] giving you a chance at realizing it's not cut and dried. In this case, I refer to what you might call a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post what *someone somewhere thinks is* the wrong answer", asterisks marking my added strength reduction. At least if you stumble into a conversation where people are arguing, it is hard to avoid applying some critical thought to parse out who is correct.
The LLM-only AI just hands you a fully-formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider if it's wrong. I'm deliberately cultivating an instinctive distrust of LLM-only AI, and would suggest it to other people, because even though it may be too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right! If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem, but right now they sit in that maximum danger zone of correctness: right enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.
low_tech_love
The pull is too strong, especially when you factor in the fact that (a) the competition is doing it and (b) the recipients of such outcomes (reports, etc) are not strict enough to care whether AI was used or not. In this situation, no matter how smart you are, not using the new tool of the trade would be basically career suicide.
torginus
And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways so they might as well.
jart
Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.
It's always more comfortable for people to blame the thing rather than the person.
InitialLastName
More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.
jart
Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.
PeeMcGee
I like the facebook comparison, but the difference is you don't have to use facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.
friendzis
If you are in the news business you basically have to.
itishappy
I think humans actually tend to prefer blaming individuals rather than addressing societal harms, but they're not in any way mutually exclusive.
Animats
The big problem in open source intelligence is not in-depth analysis. It's finding something worth looking at in a flood of info.
Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.
The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.
DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.
[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...
[2] https://apnews.com/article/us-intelligence-services-ai-model...
D_Alex
The really big problem in open source intelligence has been for some time that data to support just about anything can be found. OSINT investigations start with a premise, look for data that supports the premise and rarely look for data that contradicts it.
Sometimes this is just sloppy methodology. Other times it is intentional.
dughnut
I think OSINT makes it sound like a serious military operation, but political opposition research is a much more accurate term for this sort of thing.
B1FF_PSUVM
> listen to Radio Albania just in case somebody said something important
... or just to know what they seem to be thinking, which is also important.
euroderf
I got Radio Tirana once (1990-ish) on my shortwave. The program informed me, in effect, that Albania is often known as the Switzerland of the Balkans because of its crystal-clear mountain lakes.
jruohonen
"""
• Instead of forming hypotheses, users asked the AI for ideas.
• Instead of validating sources, they assumed the AI had already done so.
• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.
This isn’t hypothetical. This is happening now, in real-world workflows.
"""
Amen, and OSINT is hardly unique in this respect.
And implicitly related, philosophically:
johnnyanmac
>This isn’t hypothetical. This is happening now, in real-world workflows.
Yes, that's part of why AI has its bad rep. It has uses to streamline workflows, but people are treating it like an oracle. When it very, very, very clearly is not.
Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.
cmiles74
Anyone using these tools would do well to take this article to heart.
mr_toad
I think there’s a lot of people who use these tools because they don’t like to read.
gneuron
Reads like it was written by AI.
palmotea
One way to achieve superhuman intelligence in AI is to make humans dumber.
ryao
This reminds me of the guy who said he wanted computers to be as reliable as TVs. Then smart TVs were made and TV quality dropped to satisfy his goal.
SoftTalker
The TVs prior to the 1970s/solid state era were not very reliable. They needed repair often enough that "TV repairman" was a viable occupation. I remember having to turn on the TV a half hour before my dad got home from work so it would be "warmed up" so he could watch the evening news. We're still at that stage of AI.
ryao
The guy started saying it in the 80s or 90s, when that issue had been fixed. He is the Minix guy, if I recall correctly.
xrd
If you came up with that on your own then I'm very impressed. That's very good. If you copied it, I'm still impressed and grateful you passed it on.
BrenBarn
What if ChatGPT came up with it?
palmotea
I don't use LLMs, because I don't want to let my biggest advantages atrophy.
card_zero
Raises hand
https://news.ycombinator.com/item?id=43303755
I'm proud to see it evolving in the wild, this version is better. Or you know it could just be in the zeitgeist.
boringg
The cultural revolution approach to AI.
imoverclocked
That’s only if our stated goal is to make superhuman AI and we use AI at every level to help drive that goal. Point received.
6510
I thought: A group working together poorly isn't smarter than the smartest person in that group.
But it's worse: a group working together poorly isn't smarter than the fastest participant in the group.
trentlott
That's a fascinatingly obvious idea and I'd like to see data that supports it. I assume there must be some.
jimmygrapes
anybody who's ever tried to play bar trivia with a team should recognize this
tengbretson
Being timid in bar trivia is the same as being wrong.
rightbyte
What do you mean? You can protest against bad but fast answers and check another box with the pen.
yieldcrv
Right, superhuman would be relative to humans
but intelligence as a whole is based on a human ego of being intellectually superior
caseyy
That’s an interesting point. If we created super-intelligence but it wasn’t anthropomorphic, we might just not consider it super-intelligent as a sort of ego defence mechanism.
Much good (and bad) sci-fi was written about this. In it, usually this leads to some massive conflict that forces humans to admit machines as equals or superiors.
If we do develop super-intelligence or consciousness in machines, I wonder how that will all go in reality.
yieldcrv
Some things I think about are how different the goals could be
For example, human and biological based goals are around self-preservation and propagation. And this in turn is about resource appropriation to facilitate that, and systems of doing that become wealth accumulation. Species that don't do this don't continue existing.
A different branch of the evolution of intelligence may take a different approach, one that allows its effects to persist anyway.
0hijinks
It sure seems like the use of GenAI in these scenarios is a detriment rather than a useful tool if, in the end, the operator must interrogate it to a fine enough level of detail that she is satisfied. In the author's Scenario 1:
> You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”
> It spits out a convincing response: “Paris, near Place de la République.” ...
> But a trained eye would notice the signage is Belgian. The license plates are off.
> The architecture doesn’t match. You trusted the AI and missed the location by a country.
Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."
How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French but some words are Dutch. Okay, so it could be somewhere else in Paris. Let's look into the license plate patterns...
At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.
EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...
MadnessASAP
The advantage of the AI in this scenario is the starting point. You now can start cross referencing signage, language, license plates, landmarks. To verify or disprove the conclusion.
A further extension to the AI "conversation" might be: "What other locations are similar to this?" And "Why isn't it those locations?" Which you can then cross reference again.
Using AI as an entry point into massive datasets (like millions of photos from around the world) is actually useful. Correlation is what AI is good at, though not infallible.
Of course false correlations exist and correlation is not causation but if you can narrow your search space from the entire world to the Eiffel tower in Paris or in Vegas you're ahead of the game.
pcj-github
This resonates with me. I feel like AI is making me learn slower.
For example, I am learning Rust, and have been for quite a while now. While AI has been very helpful in lowering the bar to /begin/ learning Rust, it's making it slower to achieve a working competence with it, because I always seem reliant on the LLM to do the thinking. I think I will have to turn off all the AI and struggle, struggle, struggle, until I don't, just like the old days.
imadethis
I've found the same effect when I ask the LLM to do the thinking for me. If I say "rewrite this function to use a list comprehension", I don't retain anything. It's akin to looking at Stack Overflow and copying the first result, or going through a tutorial that tells you what to write without ever explaining it.
The real power I've found is using it as a tutor for my specific situation. "How do list comprehensions work in Python?" "When would I use a list comprehension?" "What are the performance implications?" Being able to see the answers to these with reference to the code on my screen and in my brain is incredibly useful. It's far easier to relate to the business logic I care about than class Foo and method Bar.
Regarding retention, LLMs still don't hold a candle to properly studying the problem with (well-written) documentation or educational materials. The responsiveness, however, makes them a close second for overall utility.
ETA: This is regarding coding problems specifically. I've found LLMs fall apart pretty fast on other fields. I was poking at some astrophysics stuff and the answers were nonsensical from the jump.
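For readers following along, the kind of rewrite being discussed is small enough to show inline; a minimal sketch (the variable names are mine, not from the thread):

```python
# Explicit loop: collect the squares of the even numbers below 10
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# The one-line equivalent an LLM might hand back when asked to
# "rewrite this function to use a list comprehension"
squares_comp = [n * n for n in range(10) if n % 2 == 0]
```

Typing out both forms yourself is, ironically, a decent way to actually retain the pattern.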
jart
Try using the LLM as a learning tool, rather than asking it to do your job.
I don't really like the way LLMs code. I like coding. So I mostly do that myself.
However I find it enormously useful to be able to ask an LLM questions. You know the sort of question you need to ask to build an intuition for something? Where it's not a clear problem answer type question you could just Google. It's the sort of thing where you'd traditionally have to go hunt down a human being and ask them questions? LLMs are great at that. Like if I want to ask, what's the point of something? An LLM can give me a much better idea than reading its Wikipedia page.
This sort of personalized learning experience that LLMs offer, your own private tutor (rather than some junior developer you're managing) is why all the schools that sit kids down with an LLM for two hours a day are crushing it on test scores.
It makes sense if you think about it. LLMs are superhuman geniuses in the sense of knowing everything. So use them for their knowledge. But knowing everything is distracting for them and, for performance reasons, LLMs tend to do much less thinking than you do. So any work where effort and focus is what counts the most, you're better off doing that yourself, for now.
eschaton
Why are you using an LLM at all when it’ll both hamper your learning and be wrong?
dwaltrip
> While AI has been very helpful in lowering the bar to /begin/ learning Rust
neevans
Nah, you are getting it wrong. The issue here is YOU NO LONGER NEED TO LEARN RUST; that's why you are learning it slowly.
whatnow37373
Yeah. AI will write Rust and then you only have to review .. oh.
But AI will review it and then you only have to .. oh
But AI will review AI and then you .. oh ..
whatnow37373
The world will slowly, slowly converge on this but not before many years of hyping and preaching about how this shit is the best thing since sliced bread and shoving it into our faces all day long, but in the meantime I suggest we be mindful of our AI usage and keep our minds sharp. We might be the only ones left after a decade or two of this.
LurkandComment
1. I've worked with analysts and done analysis for 20+ years. I have used Machine Learning with OSINT as far back as 2008 and use AI with OSINT today. I also work with many related analysts.
2. Most analysts in a formal institution are professionally trained. In Europe, Canada and some parts of the US it's a profession with degree and training requirements. Most analysts have critical thinking skills, for sure the good ones.
3. OSINT is much more accessible because the evidence ISN'T ALWAYS controlled by a legal process so there are a lot of people who CAN be OSINT analysts or call themselves that and are not professionally trained. They are good at getting results from Google and a handful of tools or methods.
4. MY OPINION: The pressure to jump to conclusions with AI, whether financially motivated or not, comes from the perceived notion that with technology everything should be faster and easier. In most cases it is; however, just as technology is advancing, so is the amount of data. So you might not be as efficient as those around you expect, especially if they are using expensive tools, so there will be pressure to give in to AI's suggestions.
5. MY OPINION: OSINT and analysis is a tradecraft with a method. OSINT with AI makes things possible that weren't possible before or took way too much time to be worth it. It's more like: here are some possible answers where there were none before. Your job is to validate them now and see what assumptions have been made.
6. These assumptions have existed long before AI and OSINT. I've seen many cases where we have multiple people look at evidence to make sure no one is jumping to conclusions and to validate the data. MY OPINION: So this lack of critical thinking might also be because there are fewer people or passes to validate the data.
7. Feel Free to ask me more.
whatnow37373
1. I think you are onto something here.
treyfitty
Well, if I want to first understand the basics, such as “what do the letters OSINT mean,” I’d think the homepage (https://osintframework.com/) would tell me. But alas, it does not, and a simple chatgpt query would have told me the answer without the wasted effort.
OgsyedIE
Similar criticisms, namely that outsiders need to do their own research to acquire a foundational understanding before they start on the topic, can be made about other popular topics on HN that frequently use abbreviations, such as TLS, BSDs, URL and MCP, but somehow those get a pass.
Is it unfair to make such demands for the inclusion of 101-level stuff in non-programming content, or is it unfair to give IT topics a pass? Which approach fosters a community of winners and which one does the opposite? I'm confident that you can work it out.
Aeolun
I think if I can expect my mom to know what it is, I shouldn’t have to define it in articles any more.
So TLS and URL get a pass, BSD’s and MCP need to be defined at least once.
ChadNauseam
Your mom knows what TLS is? I'm not even sure that more than 75% of programmers do.
jonjojojon
Does your mom really know what TLS means? I would guess that even "tech savvy" members of the general public don't.
caseyy
OSINT = open source intelligence. It’s the whole of openly accessible data fragments about a person or item of interest, including their use for intelligence-gathering objectives.
For example, suppose a person shares a photo online, and your intelligence objective is to find where they are. In that case, you might use GPS coordinates in the photo metadata or a famous landmark visible in the image to achieve your goal.
This is just for others who are curious.
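To make the metadata example concrete: EXIF GPS data is stored as degrees, minutes, and seconds plus a hemisphere letter, and converting it to the decimal coordinates a map expects is simple arithmetic. A sketch (the function name is my own):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS (degrees, minutes, seconds, hemisphere
    reference "N"/"S"/"E"/"W") to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention
    return -value if ref in ("S", "W") else value
```

So a latitude tag of 48° 51' 29.6" N works out to roughly 48.8582, which a trained eye (or a reverse-geocoding lookup) can then check against the scene in the photo.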
walterbell
GPU-free URL: https://en.wikipedia.org/wiki/OSINT
Offline version: https://www.kiwix.org
lmm
> Offline version: https://www.kiwix.org
That doesn't actually work though. Try to set it up and it just fails to download.
walterbell
On which platform? It's a mature project that has been working for years on desktops and phones, with content coverage that has expanded beyond wikipedia, e.g. stackoverflow archives. Downloadable from the nearest app store.
dullcrisp
Ironically, my local barber shop also wouldn't explain to me what OSINT stands for.
Daub
There is a lot to be said for the academic tradition of only using an acronym/abbreviation after you have first used the complete term.
hmcq6
The OSINT framework isn’t meant to be an intro to OSINT. This is like getting mad that https://planningpokeronline.com/ doesn’t explain what Kanban is.
If anything you’ve just pointed out how over reliance on AI is weakening your ability to search for relevant information
jrflowers
Volunteering “I give up if the information I want isn’t on the first page of the first website that I think of” in a thread about AI tools eroding critical thinking isn’t the indictment of the site that you linked to that you think it is.
There is a whole training section right there like you just didn’t feel like clicking on it
ridgeguy
I think this post isn't limited to OSINT. It's widely applicable, probably where AI is being adopted as a new set of tools.
ttyprintk
The final essay for my OSINT cert was to pick a side: critical thinking can/cannot be taught.
sepositus
> Participants weren’t lazy. They were experienced professionals. But when the tool responded quickly, confidently, and clearly they stopped doing the hard part.
This seems contradictory to me. I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature. If they didn't research the tool and its limitations, that's lazy. At some point, they stopped believing in this limitation and offloaded more of their thinking to it. Why did they stop? I can't think of a single reason other than being lazy. I don't accept the premise that it's because the tool responded quickly, confidently, and clearly. It did that the first 100 times they used it when they were probably still skeptical.
Am I missing something?
NegativeK
The idea that everyone is either full lazy or not lazy is a bit reductionist. People change their behavior with the right (or wrong) stimulus.
Also, I won't remotely claim that it's the case here, but external pressures regularly push people into doing the wrong thing. It doesn't mean anyone is blameless, but ignoring those pressures, or the right (or wrong) stimuli, makes it a lot harder to actually deal with situations like this.
sepositus
> The idea that everyone is either full lazy or not lazy is a bit reductionist.
Fair point. My intention isn't to be absolute, though. Even in a relative sense, I can't imagine a scenario where some level of laziness didn't contribute to the problem, even in the presence of external factors.
It seems like the author was eliminating laziness with their statement and instead putting the primary force on the LLM being "confident." This is what I'm pushing back against.
lambda
> I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature.
Most people don't actually critically evaluate LLMs for what they are, and actually buy into the hype that it's a super-intelligence.
ip26
Could have performed accurately in their past usage, building trust. Sometimes it will also get something right that is downright shocking, far beyond what you hoped.
esafak
It's deceptively easy to trust the AI when it gives you mostly plausible answers.
axegon_
OSINT is a symptom of it. When GPT-2 came along, I was worried that at some point the internet would get spammed with AI-crap. Boy, was I naive... I see this incredibly frequently and I get a ton of hate for saying this (including here on HN): LLMs and AI in general are a perfect demonstration of a shiny new toy. What people fail to acknowledge is that the so-called "reasoning" is nothing more than predicting the most likely next token, which works reasonably well for basic one-off tasks. And I have used LLMs in that way - "give me the ISO 3166-1 of the following 20 countries:". That works. But as soon as you throw something more complex at it and start analyzing the results (which look reasonable at first glance), the picture becomes very different. "Oh just use RAGs, are you dumb?", I hear you say. Yeah?

    from pydantic import BaseModel

    class ParsedAddress(BaseModel):
        street: str | None
        postcode: str | None
        city: str | None
        province: str | None
        country_iso2: str | None

Response:

    {
        "street": "Boulevard",
        "postcode": 12345,
        "city": "Cannot be accurately determined from the input",
        "province": "MY and NY are both possible in the provided address",
        "country_iso2": "US"
    }

Sure, I can spend 2 days trying out different models and tweaking the prompts to see which one gets it, but I have 33 billion other addresses and a finite amount of time.
The issue occurs in OSINT as well: a well-structured answer lures people into a mental trap. Anthropomorphism is something humans have fallen for since the dawn of mankind, and they are doing so yet again with AI. The thought that you have someone intelligent nearby with god-like abilities can be comforting, but... um... LLMs don't work like that.
BrenBarn
It's become almost comical to me to read articles like this and wait for the part that, in this example, comes pretty close to the beginning: "This isn’t a rant against AI."
It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI.
To me it's like someone listing off deaths from fentanyl, how it's destroyed families, ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the ways that people use and are drawn into AI usage has all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs but "distribute them for free to everyone on the internet" is not among them.
ketzo
It’s already becoming politicized, in the lowercase-p sense of the word. One is assumed to be either pro- or anti-AI, and so you gotta do your best to signal to the reader where you lie.
ZYbCRq22HbJ2y7
> so you gotta do your best to signal to the reader where you lie
Or what?
brain5ide
Or the reader will put you into a category yourself and won't be willing to look at the essence of the argument.
I'd say the better word for that is polarising rather than political, but they're synonyms these days.
overgard
Well I mean, nitpick, but Fentanyl is a useful medication in the right context. It's not inherently evil.
I think my biggest concern with AI is that its biggest proponents have the least wisdom imaginable. I'm deeply concerned that our technocrats are running full speed at AGI with essentially zero plan for what happens if it "disrupts" 50% of jobs in a shockingly short period of time, or worse outcomes (there's some evidence the new tariff policies were generated with LLMs... it's probably already making policy. But it could be worse. What happens when bad actors start using these things to intentionally gaslight the population?)
But I actually think AI (not AGI) as an assistant can be helpful.
Terr_
> I think my biggest concern with AI is its biggest proponents have the least wisdom imaginable. [...] (not AGI)
Speaking of Wisdom and a different "AGI", I think there's an old Dungeons and Dragons joke that can be reworked here:
Intelligence is knowing that an LLM uses vector embeddings of tokens.
Wisdom is knowing LLMs shouldn't be used for business rules.
brain5ide
Are we talking about structural things or about individual perspective things?
At individual perspective - AI is useful as a helper to achieve your generative tasks. I'd argue against analytic tasks, but YMMV.
At the societal perspective, you as an individual cannot trust anything the society has produced, because it's likely some AI-generated bullshit.
Some time ago, if you did not trust a source, you could build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner. Now every possible argument can be stretched in any possible dimension, and your ability to build a conclusion has been ripped away.
walterbell
> build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner
A few thousand years of pre-LLM primary sources remain available for evaluation by humans and LLMs.
spooky_action
What evidence is there that the tariff policy was LLM-generated?
calcifer
There are uninhabited islands on the list.
af78
There are people who asked several AI engines (ChatGPT, Grok, etc.) “what should the tariff policy be to bring the trade balance to zero?” (quoting from memory) and the answer was the formula used by the Trump administration. If I find the references I will post them as a follow-up.
Russia, North Korea and handful of other countries were spared, likely because they sided with the US and Russia at the UN General Assembly on Feb 24 of this year, in voting against “Advancing a comprehensive, just and lasting peace in Ukraine.” https://digitallibrary.un.org/record/4076672
EDIT: Found it: https://nitter.net/krishnanrohit/status/1907587352157106292
Also discussed here: https://www.latintimes.com/trump-accused-using-chatgpt-creat...
The theory was first floated by Destiny, a popular political commentator. He accused the administration of using ChatGPT to calculate the tariffs the U.S. is charged by other countries, "which is why the tariffs make absolutely no fucking sense."
"They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater," Destiny, who goes by @TheOmniLiberal on X, shared in a post on Wednesday.
> I think they asked ChatGPT to calculate the tariffs from other countries, which is why the tariffs make absolutely no fucking sense.
> They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater. https://t.co/Rc45V7qxHl pic.twitter.com/SUu2syKbHS
> — Destiny | Steven Bonnell II (@TheOmniLiberal) April 2, 2025
He attached a screenshot of his exchange with the AI bot. He started by asking ChatGPT, "What would be an easy way to calculate the tariffs that should be imposed on other countries so that the US is on even-playing fields when it comes to trade deficit? Set minimum at 10%."
"To calculate tariffs that help level the playing field in terms of trade deficits (with a minimum tariff of 10%), you can use a proportional tariff formula based on the trade deficit with each country. The idea is to impose higher tariffs on countries with which the U.S. has larger trade deficits, thus incentivizing more balanced trade," the bot responded, along with a formula to use.
John Aravosis, an influencer with a background in law and journalism, shared a TikTok video that then outlined how each tariff was calculated; by essentially taking the U.S. trade deficit with the country divided by the total imports from that country to the U.S.
"Guys, they're setting U.S. trade policy based on a bad ChatGPT question that got it totally wrong. That's how we're doing trade war with the world," Aravosis proclaimed before adding the stock market is "totally crashing."
XorNot
Honestly, this post seems like misplaced wisdom to me: your concern is the development of AGI displacing jobs, and not the numerous reliability problems with the analytic use of AI tools, in particular the overestimation of LLM capabilities because they're good at writing pretty prose?
If we were headed straight to the AGI era then hey, problem solved - intelligent general machines which can advance towards solutions in a coherent if not human like fashion is one thing but that's not what AI is today.
AI today is enormously unreliable and limited in a dangerous way: namely, it looks more capable than it is.
croes
It's a rant against the wrong usage of a tool, not against the tool as such.
Turskarama
It's a tool that promotes incorrect usage though, and that is an inherent problem. All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.
Terr_
My personal pet peeve is how a great majority of people--and too many developers--are misled into believing that a fictional character coincidentally named "Assistant", inside a story-document half-created by an LLM, is the author-LLM itself.
If a human generates a story containing Count Dracula, that doesn't mean vampires are real, or that capabilities like "turning into a cloud of bats" are real, or that the algorithm "thirsts for the blood of the innocent."
The same holds when the story comes from an algorithm, and it continues to hold when the story is about a differently-named character called "AI Assistant" who is "helpful".
Getting people to fall for this illusion is great news for the companies though, because they can get investor-dollars and make sales with the promise of "our system is intelligent", which is true in the same sense as "our system converts blood into immortality."
croes
That's the real danger of AI: the false promises of the AI companies and the false expectations of management and users.
I ran into this just recently on a data migration, where the users asked whether they still needed to enter metadata for their documents, since they could just use AI to query the data that was previously based on that metadata.
They trust the AI before it's even there, and don't even consider a transition period in which they check whether the results are correct.
As with security, convenience prevails.
xpe
> All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.
If your LLM + pre-prompt setup sounds confident with every response, something is probably wrong; it doesn't have to be that way. It isn't for me. I haven't collected statistics, but I often get decent nuance back from Claude.
Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.
This is not dismissing the tendency for overconfidence, sycophancy, and more. I'm just sharing some mitigations.
mike_hearn
Well, it's actually a rant about AI making what the author perceives as mistakes. Honestly, it reads like the author is attempting to show off by listing imaginary mistakes an AI might have made, but they are all the sort of mistakes a human could make too. And the fact that they are not real incidents significantly weakens his argument. He is a consultant who sells training services, so obviously, if people come to rely more on AI for this kind of thing, he will be out of work.
It does not help that his examples of things an imaginary LLM might miss are all very subjective and partisan too.
dragonwriter
Because "rant" is irrational, and the author wants to be seen as staking out a rational opposition.
Of course, every ranter wants to be seen that way, and so a protest that something isn't a rant against X is generally a sign that it absolutely is a rant against X that the author is pre-emptively defending.
voxl
I've rarely read a rant that didn't contain some good logical points.
croes
Doesn't mean listing logical points makes it a rant.
YetAnotherNick
The classic hallmark of a rant is picking some study, not reading the methodology, and drawing wild conclusions from it. For example, about one study the article says:
> The study revealed a clear pattern: the more confidence users had in the AI, the less they thought critically
And the study didn't even check that. It just plotted the correlation between how much users think they rely on AI and how much effort they think they saved. Wouldn't that be positive even if they were thinking just as critically?
[1]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
aprilthird2021
The other thing is that the second anyone even perceives an opinion to be "anti-AI" they bombard you with "people thought the printing press lowered intellect too!" Or radio or TV or video games, etc.
No one ever considers that maybe they all did lower our attention spans and prevent us from learning as well as we used to, and that now we are at a point where we can't afford to keep losing intelligence and attention span.
mike_hearn
I think people don't consider that because the usual criticism of television and video games is that people spend too long paying attention to them.
One of the famous Greek philosophers complained that books were hurting people's minds because they no longer memorized information, so this kind of complaint is as old as civilization itself. There is no evidence that we would already be on Mars by now if we had never invented books or television.
pasabagi
Pluto? Plotto? Platti?
Seriously though, that's a horrible bowdlerization of the argument in the Phaedrus. It's actually very subtle and interesting, not just reactionary griping.
nostrebored
That's a much harder claim to prove. The value of an attention span is non-zero, but if the cost of access to information is close to zero, how do the two relate?
If I can solve two problems in a few hours each, what is the value of being able to solve a problem that takes days to reason through?
I suspect that as the problem spaces diverge enough, you'll have two skill sets: who can solve n problems the fastest, and who can determine which k problems require deep thought and narrow direction. Right now we have the same group of people solving both.
friendzis
> The value of an attention span is non zero, but if the speed of access to information is close to zero, how do these relate?
Gell-Mann Amnesia. Attention span limits the amount of information we can process, and as attention spans decrease, increases in information flow stop having a positive effect. People simply forget what they started with, even when it contradicts the previous information.
> If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through?
You don't end up solving the problem in near-constant time; you end up applying the last suggested solution. There's a difference.
SoftTalker
The difference is that between a considered critique and unhinged venting.
yapyap
It’s not a rant against fentanyl, it’s a rant against irresponsible use of fentanyl.
Just like this is a rant against irresponsible use of AI.
Hope this helps
johnisgood
Yes, that makes much more sense.
throwaway894345
TFA makes the point pretty clear IMHO: they aren’t opposed to AI, they’re opposed to over-reliance on AI.
> Participants weren’t lazy. They were experienced professionals.
Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.
In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.
> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart
I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.