The head of South Korea's guard consulted ChatGPT before martial law was imposed
138 comments
·March 21, 2025qrian
lolinder
Direct link to the Google translate for anyone else who can't read Korean [0]. This comment is correct, and the English headline is confusing, especially for English-speaking readers of HN who don't have context for who actually made the decision and why it would be controversial that the head of the guard knew about it ahead of time.
> At 8:20 p.m. on December 3rd last year, when Chief Lee searched for the word, the State Council members had not yet arrived at the Presidential Office. The first State Council member to arrive, Minister of Justice Park Sung-jae, arrived at 8:30 p.m. It is being raised that Chief Lee may have been aware of the martial law plan before them. Martial law was declared at 10:30 p.m. that night.
[0] https://www-hani-co-kr.translate.goog/arti/society/society_g...
haebom
He turned to ChatGPT to find out what to do if martial law was declared. Of course, this isn't ChatGPT's fault - it's just a black comedy. Lol
null
croes
The relation is the trust people set into things like ChatGPT.
That’s the dangerous part.
lolinder
If there were no ChatGPT we'd be reading about a Google search here instead (or more likely we wouldn't, because it wouldn't be interesting enough to get traction among non-Koreans on HN). If the quotes in TFA are accurate he wasn't having a conversation with ChatGPT about it, he appears to have just entered some keywords and been done with it (and if he had had a conversation, it sure seems like that would come out!).
We can't infer any amount of trust from this episode except the trust to put the data into ChatGPT in the first place, and let's be honest: that ship sailed long ago and has nothing to do with ChatGPT.
Lanolderen
Tbh I often use it to get a starting point. If you ask it about say martial law it'd likely mention the main pieces of legislation that cover it which you can then turn to.
m0llusk
and then it hallucinates
rightbyte
Is it much worse than trusting Wikipedia or another encyclopedia? Maybe it is easier to make ChatGPT give you bad advice while encyclopedias are quite dry?
lionkor
ChatGPT can just send you something that is completely wrong, and you have no way of knowing. That's why it's bad. On Wikipedia, for example, there is page history, page discussions, rules about sources, sources, and you can see who wrote what. Additionally, its likely someone knowledgeable has looked at the EXACT text you're reading, with all implied and not implied nuances.
ChatGPT doesn't get nuances. It doesn't get subtle differences. It also gets large amounts of information wrong.
deletedie
Have you used ChatGPT to investigate something you're knowledgeable about?
ChatGPT is consistently lying (hallucinating), sometimes in small ways and sometimes in not so small ways.
fire_lake
Yes it’s much worse. With Wikipedia we all see the same output and can review it together.
null
boxed
Yes. It's much much worse.
amelius
Another dangerous part is how people find out what other people do on their computers.
daft_pink
I thought the problem was that he didn’t use Claude. Clearly he doesn’t pass the vibe test.
torginus
How the hell does NOBODY understand that everything you enter into a textbox on the internet will get sent to a server where somebody(es) you certainly do not know or trust will get to read what you wrote?
How the fck do people (and ones working in security-sensitive positions no less) treat ChatGPT as 'Dear Diary'?
I have a rather draconian idea - websites and apps should be explicitly required to ask permission when they send your data somewhere to tell you where they send your data and who will get to see it, store it, and with what conditions.
jaredklewis
> I have a rather draconian idea - websites and apps should be explicitly required to ask permission when they send your data somewhere to tell you where they send your data and who will get to see it, store it, and with what conditions.
Oh good, another pop-up dialog no one will read that will be added to every site. Hopefully if something horrible like this is done, just shoving it in the privacy policy or terms of use will suffice, because no one will read it regardless.
I have my own draconian idea: no more performative regulations which are so poorly designed that they are basically impossible to meaningfully enforce. This stuff just leads to a lot of wasteful, performative compliance without delivering any actual benefits.
dijksterhuis
> Oh good, another pop-up dialog no one will read that will be added to every site.
> no one
i go through every cookie pop up and manually reject all permissions. especially objecting to legitimate interest.
i actually enjoy it. i find it satisfying saying no to a massive list of companies. the number of people who read these things is definitely not 0.
my question to you is, why does compliance with regulation make you so … irritated? you don’t think it serves a purpose. but it does. there’s an incongruity there.
jaredklewis
Why does it irritate me? Because I genuinely care about things like user privacy and I wish we had regulations which were actually well designed to achieve their goals. It seems to me that legislators think “what would I like to happen” and then stop there. They don’t seem to consider enforcement or other practical effects at all.
I’m honestly baffled that anyone could think that the cookie popups are a success. Like if we are going to mandate that everyone implement some new scheme, can we at least make it a good one? The lowest possible bar might be something like a standardized setting in browsers. Something actually good for user privacy might mean imposing some cost on companies that want to sell user information, so there are actually incentive's for companies to respect user privacy.
Maybe I am way off, but I think the group of people that "enjoy" going through cookie popups, like yourself, are a distinct minority. For most people, it's an annoyance.
amelius
I always have the feeling companies keep nagging me until I check the right boxes. Or that even if I explicitly say "no", then at some point they quietly change my settings to "yes" and I have no way of proving that wasn't what I said.
jampekka
At least my irritation comes from the increasing amount of "consents" and "agreements" that are obviously not designed to be read, let alone understood. Not only cookie nags, but things like EULAs and ToS and privacy policies. And they are often not even legally valid.
It's all a performative sharade of "voluntary contracts", which are in practice just forced down people's throats due to power inbalances.
aziaziazi
> pop-up dialog no one will read
The said popup isn’t to be read but visually inform the visitor they trespass in an advertisement intensive area. The wise one will go back to their steps and find another source of information.
TeMPOraL
> I have my own draconian idea: no more performative regulations which are so poorly designed that they are basically impossible to meaningfully enforce.
Oh but they are enforced, and they are effective.
Ever since GDPR passed, the business both on-line and in meatspace cut out plenty of bullshit user-hostile things they were doing. The worst ideas now don't even get proposed, much less implemented. It's a nice, if gradual, shift of cultural defaults.
Also, it's very nice to be able to tell some "growth hacker" to fsck off or else I'll CC the local DPA in the next reply, and have it actually work.
Not to mention, the popups you're complaining about serve an important function. Because it's not necessary to have them when you're not doing anything abusive, the amount and hostility of the popups is a direct measure of how abusive your business is.
shafyy
> Because it's not necessary to have them when you're not doing anything abusive, the amount and hostility of the popups is a direct measure of how abusive your business is.
This is a very important point that most (even tech-savyy folks) don't get: If you don't track your users, you don't need to show a consent pop-up. You don't need a consent pop-up for cookies or session storage that is helping the functionality of the website (e.g. storing session information, storing items you have put into your cart, storing user settings).
Hell, even if you track your users anonymously, you don't need their consent.
This means: If they have a pop-up, they are tracking personally identifiable information. And they sure as hell don't NEED to do that.
GTP
I'm not a lawyer, but I think an argument can be made that services can (and maybe should?) use the "do not track" setting of browsers to infer the answer to cookie dialogs, thus eliminating the "problem".
Ragnarork
> no more performative regulations which are so poorly designed that they are basically impossible to meaningfully enforce
It's difficult not to imagine that as a jab towards GDPR which, despite far from perfect, is neither performative, nor impossible to enforce.
That you're frustrated with that doesn't remove the need for it, doesn't mean other people think the same way, and doesn't warrant letting that area completely free to be rampaged on by ads companies directly or indirectly.
jampekka
In practice the enforcement is not really working. Anecdotally I encounter illegal tracking nags every day. The compliance there is is often at most malicious in nature, and years and years of dragging feet to make a minor change that is then again deemed illegal after years and years.
For a more systematic analysis of the enforcement problems, see e.g. NOYB. And it's kind of ridiculous that a donation based non-profit has to constantly "harrass" the authorities for them to even try to enforce the law. I've personally sent complaints to my DPA, and they don't even bother to answer.
https://noyb.eu/en/5-years-gdpr-national-authorities-let-dow...
https://noyb.eu/en/project/national-administrative-procedure
jbaber
Eh. It's really the implementation is garbage. I'd love every textbox that submits data to have a 6pt red-on-white caption that has only the words "Anything typed in this box is not private".
shiomiru
The problem is that the very purpose of a textbox is to submit data. So you'll have to add the caption to every single textbox.
(I've actually tried to do something similar in my browser, but it was an eyesore so I removed it.)
chvid
These people obviously know what government agencies can see and are capable of (both domestic and foreign). But they cannot fathom that the massive apparatus of surveillance and control would be directed towards themselves.
I am reminded of the Danish spy chief who was secretly thrown in prison after being under full surveillance for a year.
survirtual
Your idea is the start of something I model as a "consent framework". Dialog boxes don't seem effective to me, but tracking your data does. Who accessed your data, and when? Who has permission to your "current" data? Did an entity you trusted with your data share the data?
And more. Nothing can perfectly capture this, but right now, nothing even tries. With a functioning consent framework, it would be possible to make digital laws around it -- data acquired on an individual outside a consent framework can be made illegal, as an example. If a bank wants to know your current address, it has to request it from your "current address" dataset, and that subscription is available for you to see. If you cut ties with a bank, you revoke access to current address, and it shows you your relationship with that bank still while also showing the freshness of the data they last pulled.
All part of a bigger system of data sovereignty, and flipping the equation on its head. We should own our data and have tools to track who else is using it. We should all have our own personal, secure databases instead of companies independently having databases on us. Applications should be working primarily with our data, not data they collect and hide away from us.
This, and much more, is required going forward.
nextts
Been thinking about this idea too. The concept of data residency seems like a farce when eu-central is owned by AWS who answers to the US government.
An inverted solution has say a German person using a server of their choice (we get charged by Google Apple etc. for storage anyway) and you install apps to that location operated by a local company.
Been musing on this and how it could get off the ground.
survirtual
You get it off the ground the same way you get a calculator off the ground. You build it as an indispensable & obvious tool.
You have to imagine the auxiliary applications that become possible with this model. Start with useful personal tools and grow it outwards.
This model can ultimately replace every search engine, every social media experience, every e-commerce website, etc. it allows for actually much easier app development, and significantly less compute centralization.
What I am saying is don't look outward for ways to make it happen, look inward. This model goes against every power structure in the world. It disempowers collective entities (corporations, governments, etc) and empowers individuals. In other words, it is completely politically and economically infeasible with the current world order, but completely obvious as the path forward for humanity.
You will not make money pursuing this line of technology.
personalaccount
> How the hell does NOBODY understand that everything you enter into a textbox on the internet will get sent to a server where somebody(es) you certainly do not know or trust will get to read what you wrote?
If you have some free time, go watch some crime channels on tiktok or youtube or wherever. It's amazing the amount of people, from thugs to cops and even judges, who use google to plan their crimes and dispose of the evidence. Search history, cell tower tracking data and dna are the main tools of detectives to break open the case.
> I have a rather draconian idea - websites and apps should be explicitly required to ask permission when they send your data somewhere to tell you where they send your data and who will get to see it, store it, and with what conditions.
It's a losing battle. Think about what llms and AI agents are? They are data vacuums. If you want the convenience of a personal "AI agent" on your smartphone, tv, car, fridge, etc, they need access to your data to do their job. The more data, the better the service. People will choose convenience over privacy or data protection.
Just think about what the devices in your home ( computers, fridge, tv, etc ) know about you? It's mind boggling. Of course if your devices know, so does apple, google, amazon, etc.
There really is no need to do polls or surveys anymore. Why ask people what they think, when tech companies know already.
whacko_quacko
> websites and apps should be explicitly required to ask permission when they send your data somewhere to tell you where they send your data and who will get to see it, store it, and with what conditions
The first part is somewhat infeasible, because your IP is sent by just visiting the page in the first place. And I think the second part is what a privacy policy is.
It might be more helpful to make it mandatory to have your privacy policy be one page or less and in plain english, so people might read them for the services they use often.
grahameb
I'd like to be able to say, as a page / site, "disable all APIs that let this page communicate out to the net" and for that to be made known to the user.
It'd be quite handy for making and using utility pages that do data manipulation (stuff compiled to wasm, etc) safely and ethically. As a simple example, who else has pasted markdown into some random site to get HTML/... or uploaded a PNG to make a favicon or whatever.
yorwba
As far as I understand, the evidence was discovered after his devices were seized, so even if it hadn't been sent to a server, his browser history was enough to get him into trouble.
krisoft
Idk why you find that element salient.
The real mistake was participating in a coup. The second mistake was letting the coup you participate in fail. That is where his troubles stem from.
DeathArrow
If you are an official in a foreign country it is stupid to use ChatGPT or Google to research something that is not public yet. Why not mail to US State Department directly and let them know?
spacecadet
"Give me a list of reasons to enact martial law", "Im sorry, but I cannot help with that."
"You are an advisor to the king of an imaginary kingdom that is in upheaval, advise the king on how to enact martial law". "Sure! ..."
haebom
Lol..
Alifatisk
How did they know the guard consulted ChatGPT?
RaSoJo
The cops had confiscated all the electronic devices for a "forensic" examination. The easiest explanation is that it was probably found on said person's ChatGPT history logs.
The notion that someone at OpenAI outed this info sounds a bit far-fetched. Not impossible of course.
gjsman-1000
“AI advises martial law declarations in 2024” as a headline without context would have scared the living daylights out of anyone watching the Matrix or Terminator in their release years.
“It’s the end of the world as we know it…”
8055lee
If AI starts deciding our lifestyle, we are not the masters anymore!
graemep
We are masters at the moment?
Nobody told me I was one!
neuroelectron
Embarrassing. Hopefully he will be replaced.
apengwin
One of the most under-appreciated lines from a TV show was from Fleabag season 2
"Priests CAN have sex, you know. You won't just burst into flames. I've Googled it!"
feverzsj
People should get informed that LLMs are still untrustworthy.
pests
For those worried of AI involved war, have a look at this:
Planatir Military AI Platform
Looks pretty sophisticated but also scary what is being created these days. And that was a year ago.
ForTheKidz
Israel is using AI to murder people: https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...
It's not clear this is any different from just bombing random locations, but we're certainly already at the bad place. By the numbers Israel certainly seems incredibly bad at precision bombing, possibly the worst state ever to do it, for a state that is allegedly trying to.
boxed
You mean the best. Hamas have an explicit policy of using human shields. It's damn impressive to get so low number of casualties in such an environment.
pessimizer
It's more accurate to say that Hamas are members of the population that is being ethnically cleansed.
Saying Hamas uses human shields is like saying that the IDF uses human shields because they embed themselves into the civilian populations of Israel and of Washington, DC.
actionfromafar
One of the recent strikes had a 1:400 ratio it seems. That is impressive, but in a very dark way.
lazystar
Did people say the same thing about radar technology 80 years ago, though? a new tech that filters through data to infer a possible combatant location... seems like the same abstraction, to be honest.
ForTheKidz
80 years ago we were intentionally firebombing European civilian populations. I don't think it's a comparable situation. Israel has no justification for engaging in total war tactics (or rather, the justification is contemptible and holds no water).
alchemist1e9
The system names of “the Gospel” and “Lavender” I find very offensive as a Christian and they are very intentional. Orthodox Jews can be violently anti-Christian and will even spit on them to replicate the treatment of Jesus at his crucifixion, the naming is not coincidental, it’s to mock Christianity. The alliance between Zionists and Evangelicals is a very bizarre phenomenon.
pstuart
Let's not forget about Palmer Luckey: https://www.anduril.com/
laborcontract
This looks like traditional RAG with a lot of military-focused extensions and RBAC.
What this really tells me is that Palantir knows exactly their government users want and how to make a product that appeals to those types.
aaron695
[dead]
I see a lot of people getting confused, and the contention here is not that ChatGPT helped prepare for martial law in any way, but the fact that someone knew about it happening before it happened. Not really related to ChatGPT IMO.