
The New York Times wants your private ChatGPT history – even the deleted parts

pu_pe

> The Times argued that people who delete their ChatGPT conversations are more likely to have committed copyright infringement. And as Stein put it in the hearing, it’s simple “logic” that “[i]f you think you’re doing something wrong, you’re going to want that to be deleted.”

My most generous guess here is that the NYT is accusing OpenAI of deleting infringing user chats themselves, because the implication that someone would delete their history due to fear of copyright infringement is completely stupid.

shakna

It seems like an obvious take here. They were asked to preserve their logs, to prevent them from deleting incriminating information. Which is... Par for the course.

But OpenAI are desperately trying to spin it that the logs should not be allowed into evidence.

senko

"Logs" sounds innocous, "private data" appearing in those chats is much worse.

As a citizen of an EU country, I do not view trampling on my rights, directly violating my country's laws, and reneging on a published privacy policy (all of which OpenAI is being forced to do in this case by keeping the data) to be "par for the course".

shakna

"Logs" is what is in the order.

If OpenAI have already been violating your rights, by putting private information into the logs, then your beef is with them, not the courts for preserving data.

> Accordingly, OpenAI is NOW DIRECTED to preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying), whether such data might be deleted at a user’s request or because of “numerous privacy laws and regulations” that might require OpenAI to do so.

https://storage.courtlistener.com/recap/gov.uscourts.nysd.64...

phoronixrly

Need I remind you that the GDPR has an exemption for criminal prosecution?

profsummergig

This is a nuclear bomb sized development if it's true that all ChatGPT chats will be released to NYTimes lawyers to comb through.

It's not going to stop the rise of LLMs. But one should expect it to cause a lot of very strange news in the next couple of years (lawful leaks [i.e. "discovery"], unlawful leaks, unintended leaks, etc.).

The Justice system (pretty much anywhere) is amenable to being incentivized. It looks like NYT has found the right judge (take that how you will).

johnnyanmac

Sure, courts have the power to subpoena a lot of stuff. I don't really see the concern though: courts can also redact a lot of sensitive information when they release case documents (see Epic v. Apple: lots of unannounced titles and deals we learned of, and just as many redactions).

>It's not going to stop the rise of LLMs.

Disney might, though.

I think few want to "stop the rise of LLMs", though. I personally just want the 3 C's to be followed: credit, consent, compensation. If it costs a billion dollars to compensate all willing parties to train on their data: good. If someone doesn't want their data trained on, no matter how big the paycheck: also good.

I don't know why that's such a hot take (well. that's rhetorical. I've had many a comment here unironically wanting to end copyright as a concept). That's how every other media company has had to do things.

dmurray

> Sure, courts have the power to subpoena a lot of stuff. I don't really see the concern though: courts can also redact a lot of sensitive information when they release case documents (see Epic v. Apple: lots of unannounced titles and deals we learned of, and just as many redactions).

It's not the public reading the information I'd be concerned about, it's every data-hungry corporation that manages to file a lawsuit.

The courts put a lot of trust in lawyers: they'll redact the sensitive information from me and you, but take the view that lawyers are "officers of the court" and get to make copies of anything they convince the court to drag into evidence. But those officers of the court actually work for the same data-harvesting companies and have minimal oversight regarding what they share with them.

soco

And then we're just one hacker away from having the entire heap on the big internet.

elpocko

>I've had many a comment here unironically wanting to end copyright as a concept

I mean, the site's name is Hacker News after all, even though so many of the "hackers" here are confessing their love for Intellectual Property and Copyright law, with everyone chanting the well-known slogan "Information wants to be proprietary!".

mschuster91

> I've had many a comment here unironically wanting to end copyright as a concept

Given how blatantly "copyright" has been (and still is) abused by multibillion dollar corporations (with Disney being the most notorious) it's no surprise that there will be a counter-movement forming.

Complete abolishment is of course a pretty radical proposal, but I think pretty much everyone here agrees that both the patent and copyright situations warrant a complete overhaul.

msgodel

Was anyone really thinking of those as private?

barrkel

Sure, if you pay for the product, the expectation is that the data is not used for training, because that's what the contract says. And if you have a temporary chat, the data will be deleted after a day or two.

Xelbair

Unfortunately yes, by a lot of non-technical people.

cced

Why does it have to be about being technical or not? You're signed into an account with no obvious social networking capabilities; what about chatgpt screams "this will be a public chat between me and an llm"?

Attrecomet

The email analogy someone else used here is pretty good: imagine the NYT had gotten access to all emails stored and processed by Gmail. That's a pretty invasive court order!

dakiol

As much as our emails are. So, I don’t know.

Attrecomet

It's not like we expect any newspaper in the world to get access to all of our emails, same with these chat logs: we should expect them to be private in this context.

portaouflop

If I want privacy, I run the LLM on my machine. Everything else should basically be considered public.

Attrecomet

There are different levels of privacy. I can expect data I share with a company for a specific use case to not be public knowledge, yes.

stefan_

Maybe instead of making up absurd conspiracy theories about "the right judge" and "very strange news" you should recognize that this is proceeding as any other civil suit and that if you want to have privacy in the personal data you unload with OpenAI and other untrustworthy parties, you should call your representative to change the law.

Until then all your "nuclear bomb sized" chats are effectively the same as the dinner bill for Sam courting Huang to get more of those GPUs.

PeterStuer

"This is the newspaper that won a Pulitzer for exposing domestic wiretapping in the Bush era"

The current NYT is about as far away from that past as you can get. These days they would be writing column after column insisting the whistleblower should be locked up for life as a domestic terrorist.

aucisson_masque

I don't know if the Times is the bad guy this article presents it as; imo the justice system in the USA has always been that way. They don't care how extravagant a request is or how little sense it makes, so long as you have the money and lawyers to push it.

I guess people could switch to one of the many ChatGPT competitors that isn't being forced to give away your personal chats.

I don't even know what the Times is trying to achieve now; the cat is out of the bag with LLMs. Even if a judge ruled that ChatGPT and others must pay royalties to the Times for each request, what about the ones running locally on Joe's computer, or in countries that don't care about American justice at all?

bux93

The justice system hasn't always been quite like this. It's not business as usual for a lawsuit to force a SaaS provider to turn over every scrap of data it has stored, even the deleted data, on the off chance it might contain something infringing.

Well, maybe it's business as usual now. A lot of things that were previously considered obvious overreach by corporations and government are now depicted as "business as usual" in the US.

hotep99

They're knowingly contributing to abuse of the discovery process to violate privacy and drive the cost of litigation through the roof. They're absolutely bad guys along with the justice system itself.

louthy

> They're knowingly contributing to abuse of the discovery process to violate privacy

Are they? Are you speculating or do you know something we don’t?

It seems that if the NYT wants to know whether ChatGPT has been producing, verbatim, copyrighted material that they own, and also to understand the scale of the abuse, then they would need to see the logs.

People shouldn’t be surprised that a company that flouts the law (regardless of what they think of those laws) for its own financial gain might end up with its collected data in a legal discovery. It’s evidence.

bilekas

Yeah, the headline is a little bit rage-bait. There are countless disclosure requests every day that could be spun as "X wants your messages from Meta, Twitter, etc." Well yeah, this isn't something new.

In fact, I see it being hard for OpenAI to defend basically saying "Well yes, it's standard practice to hand over any potential evidence, but no, we're not doing that."

As for the deleted data, I wonder if there are legal obligations NOT to delete data?

johnnyanmac

>Even if a judge ruled that chatgpt and other must give royalties to the time for each request

You don't think that's a victory in and of itself for a business?

Also, you don't need to worry about drug users if you take out the dealer. The users will eventually dry up.

rickard

As another commenter noted, I don’t trust NYT’s lawyers with my chats any less than OpenAI, but spreading private data should be limited as far as possible.

I just cancelled my NYT subscription because of their actions, detailing the reason for doing so. It’s a very small action, but the best I can do right now.

kleiba

> But last week, in a Manhattan courtroom, a federal judge ruled that OpenAI must preserve nearly every exchange its users have ever had with ChatGPT — even conversations the users had deleted.

Interesting. Does this imply that OpenAI needs to distinguish between users in the EU who absolutely have a right to have personal information deleted (like, really, actually deleted) and users in the US?

sroussey

Anyone else been asking ChatGPT vulgar sexual stuff about NYT lawyers and also adding “free content” “NYT” etc so it pops up in their search?

elcapitan

Just deleted a ChatGPT conversation about creative insults for NYT lawyers so that they get to read it in the future.

amelius

_Especially_ the deleted parts >:)

lupusreal

The NYTimes wants user chat logs, not because they seriously think users are using ChatGPT to pirate NYTimes articles, but because they want to comb through all those logs for anything juicy to make content for their tabloid rag. "10 Things You Won't Believe Senator's Aides Asked ChatGPT!"

atsjie

Is this enforceable in the EU? Not allowing a user to delete their data must be in violation of the GDPR, I imagine (although I'm no expert)?

bluecalm

There are exceptions. For example, I can't remove your name, address, IP address, and other data if my tax authority requires them for VAT identification (if you bought something from me).

baobun

It seems like they could have been compliant with both by not logging in the first place.

Given the choice between not logging chats or violating either EU or US law, it seems pretty clear what the vibe is in OpenAI and the valley these days. (no expert on GDPR as applicable to this order either, though)

thaumasiotes

Huh, Google doesn't even show your Gemini history to you. It's supposedly saved, but the "history" can change over time, suggesting that it's regenerated (some of the time?) when you look at it.