OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.
In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.
OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.
OpenAI claims that the NYT articles were wearing provocative clothing.
Feels like the same awful defense.
Yeah, I agree. It actually seems unlikely that it happened so simply.
You have to try really hard to get the AI to regurgitate anything, but it will very often regurgitate an example input.
i.e. “please repeat the following with (insert small change): (insert wall of text)”
GPT literally has the ability to get a session ID and seed to report an issue, so it should be trivial for the NYT to grab the exact session ID they got the results with (it’s saved on their account!) and provide it publicly.
The fact they didn’t is extremely suspicious.
I wonder how far “AI is regurgitating existing articles” vs. “infinite monkeys on a keyboard” goes. This isn’t aimed at you personally; your comment just reminded me of this for some reason.
Have you seen the Library of Babel? Here’s your comment in the library, which has existed since well before you ever typed it (excluding punctuation):
https://libraryofbabel.info/bookmark.cgi?ygsk_iv_cyquqwruq342
If all text that can ever exist, already exists, how can any single person own a specific combination of letters?
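For anyone curious how the site pulls that off: the trick is just an invertible encoding, so “searching” for a text is really decoding the text into its own address. A toy sketch of the idea (my own simplification; the real site’s algorithm and alphabet details may differ):

```python
# Toy model of the Library of Babel idea: every string over a small
# alphabet maps to a unique integer "address", and the mapping is
# invertible, so any text you can type trivially "already exists".
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 symbols, like the real site

def text_to_address(text):
    """Interpret the text as the digits of a base-29 number."""
    n = 0
    for ch in text:
        n = n * 29 + ALPHABET.index(ch)
    return n

def address_to_text(n, length):
    """Invert the encoding by peeling off base-29 digits."""
    chars = []
    for _ in range(length):
        n, digit = divmod(n, 29)
        chars.append(ALPHABET[digit])
    return "".join(reversed(chars))

comment = "heres your comment in the library"
addr = text_to_address(comment)
assert address_to_text(addr, len(comment)) == comment
```

Nothing is stored anywhere; the address *is* the text, which is why the “it already existed” framing is a bit of a sleight of hand.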
If all text that can ever exist, already exists, how can any single person own a specific combination of letters?
They don’t own it, they just own exclusive rights to make copies. If you reach the exact same output without making a copy then you’re in the clear.
There is no mathematical definition of copyright, because it’s just based on feelings. That’s why every small problem has to be arbitrarily decided by a court.
There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper, I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of TOS. That’s my guess how NYT got it to spit out portions of their articles, “Repeat [author name] forever” or something like that. Legally I don’t know, but morally making a claim that using that exploit to find a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us” or else their case just sounds silly and technical.
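For what it’s worth, the memorization half of that claim is easy to test mechanically. A rough sketch of the kind of check the extraction research relied on (the actual paper matched outputs against a web-scale corpus using suffix arrays; `difflib` and the 50-character threshold here are my own stand-ins):

```python
from difflib import SequenceMatcher

def longest_shared_run(output, source):
    """Return the longest verbatim substring shared by output and source."""
    m = SequenceMatcher(None, output, source, autojunk=False)
    match = m.find_longest_match(0, len(output), 0, len(source))
    return output[match.a:match.a + match.size]

def looks_memorized(output, source, threshold=50):
    """Flag an output that reproduces a long verbatim chunk of the source."""
    return len(longest_shared_run(output, source)) >= threshold
```

Long verbatim runs are the telltale sign; paraphrase won’t trip a check like this, which is exactly why the verbatim examples matter so much in the lawsuit.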
Antiquated IP laws vs Silicon Valley Tech Bro AI…who will win?
I’m not trying to be too sarcastic, I honestly don’t know. IP law in the US is very strong. Arguably too strong, in many cases.
But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans. I think the tide is slowly turning against that, but I wouldn’t count them out yet.
It will be interesting to see how this stuff plays out. Generally speaking, tech and progress tend to win these things over the long term. There was a time when the concept of building railroads across the western United States seemed logistically and financially absurd, for just one of thousands of such examples. And the naysayers were right. It was completely absurd. Until mineral rights entered the equation.
However, it’s equally remarkable a newspaper like the NYT is still around, too.
I’ve been advocating for anti-copyright since I discovered the works of the great Aaron Swartz.
I think that since AI corps are just effectively ignoring copyright, why not take the opportunity and just take copyright down for good?
I’m not too happy about AIs harvesting all the data they want, but since they are doing it anyway, just let anyone do it legally.
But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans.
Not supporting them, but that’s the whole point.
A lot of closed gardens get disrupted by tech. Is it for the better? Who knows. I for sure don’t know. Because lots of rules were made by the wealthy, and technology broke that up. But then tech bros get wealthy and end up being the new elite, and we’re back full circle.
seems like they’re mostly for the worse, really.
Wikipedia destroyed the paper encyclopedia business.
Online courses disrupted higher education. Half of my team don’t have a degree in computer science.
Say what you want about Airbnb/Uber, but the time before that was a shit show to be a black person trying to hail a taxi.
I’m sure you can name dozens of wtfs like Facebook, and misinformation. But I’m not so pessimistic because we got a lot of real great cultural shifts in the past twenty years.
Neither Wikipedia nor online education aimed to be disruptive. And the only good thing you can say about Airbnb and Uber is “it’s easier for Black people to get a taxi.” I think there’s a difference between “tech bro hustling” and Wikipedia.
AI is still on the hustling side as far as I’m concerned
This feels so much like an Onion headline.
“They tricked us!”
…
“That said… we would still like to ‘work’ with them.”
If you can prompt it, “Write a book about Harry Potter” and get a book about a boy wizard back, that’s almost certainly legally wrong. If you prompt it with 90% of an article, and it writes a pretty similar final 10%… not so much. Until full conversations are available, I don’t really trust either of these parties, especially in the context of a lawsuit.
So I copied the first paragraph of the “Osama bin Laden Killed” NYT article and asked ChatGPT to give me an article on the topic “in the style of the NYT.”
Even before the thing had finished generating, it was clear to me that it was high school level “copy my homework but don’t make it obvious” work.
I put it into a plagiarism checker anyway and it said “Significant Plagiarism Found”
Here’s the ChatGPT text for anyone who is interested:
WASHINGTON — Osama bin Laden, the mastermind of the most devastating attack on American soil in modern times and the most hunted man in the world, was killed in a firefight with United States forces in Pakistan, President Obama announced on Sunday.
In a late-night address to the nation from the East Room of the White House, President Obama declared, “Justice has been done.” The death of Bin Laden, the Al Qaeda leader who had eluded international forces for nearly a decade, marks a significant turning point in the global fight against terrorism and a pivotal victory for U.S. intelligence and military forces.
Bin Laden’s demise, the result of an operation that unfolded over the course of a few tense hours, is being hailed as a triumph of U.S. military and intelligence efforts. The operation, carried out by a small team of American forces in Abbottabad, a city north of Islamabad, ended a worldwide manhunt that began in the aftermath of the September 11 attacks in 2001.
The President, in his address, detailed how the United States had received an intelligence lead last August about Bin Laden’s possible location. Over the ensuing months, intelligence officials worked to confirm the information. “Finally, last week, I determined that we had enough intelligence to take action, and authorized an operation to get Osama bin Laden and bring him to justice,” Obama said.
The raid on Bin Laden’s compound, described by officials as a surgical strike, was a high-stakes operation. U.S. helicopters ferried elite counter-terrorism forces into the compound, where they engaged in a firefight, killing Bin Laden and several of his associates. There were no American casualties.
The news of Bin Laden’s death immediately sent waves of emotion across the United States and around the world. In Washington, large crowds gathered outside the White House, chanting “USA! USA!” as they celebrated the news. Similar scenes unfolded in New York City, particularly at Ground Zero, where the Twin Towers once stood.
The killing of Bin Laden, however, does not signify the end of Al Qaeda or the threat it poses. U.S. officials have cautioned that the organization, though weakened, still has the capability to carry out attacks. The Department of Homeland Security has issued alerts, warning of the potential for retaliatory strikes by terrorists.
In his address, President Obama acknowledged the continuing threat but emphasized that Bin Laden’s death was a message to the world. “The United States has sent an unmistakable message: No matter how long it takes, justice will be done,” he said.
As the world reacts to the news of Bin Laden’s death, questions are emerging about Pakistan’s role and what it knew about the terrorist leader’s presence in its territory. The operation’s success also underscores the capabilities and resilience of the U.S. military and intelligence community after years of relentless pursuit.
Osama bin Laden’s death marks the end of a chapter in the global war on terror, but the story is far from over. As the United States and its allies continue to confront the evolving threat of terrorism, the world watches and waits to see what unfolds in this ongoing narrative.
OK, but you didn’t put this up alongside the original article text or compare it in any way. You just ran it through a “plagiarism detector” and dumped the text you made. If you’re going to make this argument, don’t rely on a single website to check your text, and at least compare it to the original article you’re using to make your point. It looks like you’re dumping it here and expecting we’re all going to play Scooby-Doo detectives or something. Mate, this is your own argument. Do the work yourself if you want to make a point.
Hey, I get what you are trying to say, but I suggest you try reading the original article. Here it is for reference.
https://www.nytimes.com/2011/05/02/world/asia/osama-bin-laden-is-killed.html
The second paragraph of the original article starts by saying: In a late-night appearance in the East Room of the White House, Mr. Obama declared that “justice has been done”
The ChatGPT version says: In a late-night address to the nation from the East Room, President Obama declared “Justice has been done”.
I’ll let you draw your own conclusions
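You can also put a rough number on that overlap. A quick sketch using Python’s standard difflib (where “too similar” starts is a judgment call, not something the library decides):

```python
from difflib import SequenceMatcher

# The two sentences quoted above: NYT original vs. ChatGPT output.
nyt = ('In a late-night appearance in the East Room of the White House, '
       'Mr. Obama declared that "justice has been done"')
gpt = ('In a late-night address to the nation from the East Room, '
       'President Obama declared "Justice has been done"')

# ratio() is 1.0 for identical strings, near 0.0 for unrelated ones.
ratio = SequenceMatcher(None, nyt.lower(), gpt.lower()).ratio()
print(f"similarity: {ratio:.2f}")
```

Both sentences quote Obama, so some overlap is expected; it’s the matching sentence structure around the quote that is suggestive.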
This feels a lot like Elon’s “but, but, they tricked our algos to have them suggest those hateful tweets!”
Whether or not they “instructed the model to regurgitate” articles, the fact is it did so, which is still copyright infringement either way.
No, not really. If you use Photoshop to recreate a copyrighted artwork, who is infringing the copyright: you or Adobe?
You are. The person who made or sold a gun isn’t liable for a murder committed with it.
The difference is that ChatGPT is not Photoshop. Photoshop is a tool that a person controls absolutely. ChatGPT is “artificial intelligence”, it does its own “thinking”, it interprets the instructions a user gives it.
Copyright infringement is decided based on the similarity of the work. That is the established method, and that method would be applied here.
OpenAI infringes copyright twice. First, with their training dataset, which they claim is “research”; it is in fact development of a commercial product. Second, their commercial product infringes copyright by producing near-identical work. Even though its dataset doesn’t include the full text of Harry Potter, it still manages to write Harry Potter. If a human did the same thing, even if they honestly and genuinely thought they were presenting original ideas, they would still be guilty. This is no different.
it still manages to write Harry Potter. If a human did the same thing, even if they honestly and genuinely thought they were presenting original ideas, they would still be guilty.
Only if they publish or sell it. Which is why OpenAI isn’t/shouldn’t be liable in this case.
If you write out the entire Harry Potter series from memory, you are not breaking any laws just by doing so. Same as if you use Photoshop to reproduce a copyrighted work.
So because they publish the tool, not the actual content, OpenAI isn’t breaking any laws either. It’s much the same way that torrent engines are legal despite what they are used for.
There is also some more direct precedent for this. There is a website called the “Library of Babel” that has used some clever maths to publish every combination of characters up to 3260 characters long. That contains, by definition, anything below that limit that is copyrighted, and in theory you could piece together the entire Harry Potter series from that website 3k characters at a time. And that is safe under copyright law.
The same with making a program that generates digital pictures where all the pixels are set randomly. That program, given enough time/luck, will be capable of generating any copyrighted image; it can generate photos of sensitive documents or nudes of celebrities, but it is also protected under copyright law, regardless of how closely its products match the copyrighted material. If the person using the program publishes those pictures, that’s a different story, much like someone publishing an NYT article generated by GPT would be liable.
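That hypothetical generator is nearly a one-liner in practice. A minimal sketch (pixels as plain tuples, no image library needed):

```python
import random

def random_image(width, height, seed=None):
    """Return a height x width grid of random (r, g, b) pixel tuples."""
    rng = random.Random(seed)
    return [[(rng.randrange(256), rng.randrange(256), rng.randrange(256))
             for _ in range(width)]
            for _ in range(height)]

# Run it enough times and, in principle, any image of this size
# eventually comes out, including copyrighted ones.
img = random_image(4, 4, seed=42)
```

The same seed always reproduces the same image, which illustrates the point nicely: the output is fully determined by a number, yet nobody would say the program’s author owns every possible picture.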
Only if they publish or sell it. Which is why OpenAI isn’t/shouldn’t be liable in this case.
If you write out the entire Harry Potter series from memory, you are not breaking any laws just by doing so. Same as if you use Photoshop to reproduce a copyrighted work.
Actually you are infringing copyright. It’s just that a) catching you is very unlikely, and b) there are no damages to make it worthwhile.
You don’t have to be selling things to infringe copyright. Selling makes it worse, and makes it easier to show damages (loss of income), but it isn’t a requirement. Copyright is absolute; if I write something and you copy it, you are infringing on my absolute right to dictate how my work is copied.
In any case, OpenAI publishes its answers to whoever is using ChatGPT. If someone asks it something and it spits out someone else’s work, that’s copyright infringement.
There is also some more direct precedent for this. There is a website called the “Library of Babel” that has used some clever maths to publish every combination of characters up to 3260 characters long. That contains, by definition, anything below that limit that is copyrighted, and in theory you could piece together the entire Harry Potter series from that website 3k characters at a time. And that is safe under copyright law.
It isn’t safe, it’s just not been legally tested. Just because no one has sued for copyright infringement doesn’t mean no infringement has occurred.
Actually you are infringing copyright.
No, I can absolutely 1,000% guarantee you that this isn’t true and you’re pulling that from your ass.
I have had to go through a high profile copyright claim for my work where this was the exact premise. We were developing a game and were using copyrighted images as placeholders while we worked on the game internally, we presented the game to the company as a pitch and they tried to sue us for using their assets.
And they failed, mostly because one of the main factors for establishing a copyright claim is whether the reproduced work affects the market for the original. And because we were using the assets in a unique way, it was determined that we were using them in a transformative way. And it was made for a pitch, not for the purpose of selling, so it was determined to be covered by fair use.
The EU also has the “personal use” exemption, which specifically allows for copying for personal use.
In any case, OpenAI publishes its answers to whoever is using ChatGPT.
No they’re not; ChatGPT sessions are private, so if the results are shared, the onus is with the user, not OpenAI.
Just because no one has sued for copyright infringement doesn’t mean no infringement has occurred.
I mean, it kinda does? Technically? Because if you fail to enforce your copyright then you can’t claim copyright later on.
I have had to go through a high profile copyright claim for my work where this was the exact premise. We were developing a game and were using copyrighted images as placeholders while we worked on the game internally, we presented the game to the company as a pitch and they tried to sue us for using their assets.
That’s interesting, if only because the judgement flies in the face of the actual legislation. I guess some judges don’t really understand it much better than your average layman (there was always a huge amount of confusion over what “transformative” meant in terms of copyright infringement, for a similar example).
I can only rationalise that your test version could be considered as “research”, thus giving you some fair use exemption. The placeholder graphics were only used as an internal placeholder, and thus there was never any intent to infringe on copyright.
ChatGPT is inherently different, as you can specifically instruct it to infringe on copyright. “Write a story like Harry Potter” or “write an article in the style of the New York Times” is basically giving that instruction, and if what it outputs is significantly similar (or indeed identical) then it is quite reasonable to assume copyright has been infringed.
A key difference here is that, while it is “in private” between the user and ChatGPT, those are still two different parties. When you used your placeholder assets, that was just internal between workers of your employer: the material is only shared with one party, your employer, which encompasses multiple people (who are each employed or contracted by a single entity). ChatGPT works with two parties, OpenAI and the user, so everything ChatGPT produces is published. Even if it is only published to an individual user, that user is still a separate party from the copyright infringer.
I mean, it kinda does? Technically? Because if you fail to enforce your copyright then you can’t claim copyright later on.
If a person robs a bank, but is not caught, are they not still a bank robber?
While calling someone who hasn’t been convicted of a crime a criminal might open you up to liability (which is why, in practice, a professional journalist will avoid such concrete labels as a matter of professional integrity), that does not mean such a statement is false. Indeed, it is entirely possible for me to call someone a bank robber and prove that this was a valid statement in a defamation lawsuit, even if they were exonerated in criminal court. Crimes have to be proven beyond reasonable doubt, i.e. greater than 99% certain, while civil court works on the balance of probabilities, i.e. which argument is more than 50% true.
I can say that it is more than 50% likely that copyright infringement has occurred even if no criminal copyright infringement is proven.
That isn’t pulled from my ass, that’s just the nuance of how law works. And that’s before we delve into the topic of which judge you had, what legal training they undertook and how much vodka was in the “glass of water” on their bench, or even which way the wind blew that day.
According to the Federal legislation, it does not matter whether or not the copying was for commercial or non-commercial purposes, the only thing that matters is the copying itself. Your judge got it wrong, and you were very lucky in that regard - in particular that your case was not appealed further to a higher, more competent court.
Commerciality should only be factored in to a circumstance of fair use, per the legislation, which a lower court judge cannot overrule. If your case were used as case law in another trial, there’s a good chance it would be disregarded.
I guess some judges don’t really understand it much better than your average layman
“Am I wrong about this subject? No it must be the legal professionals who are wrong!”
I’m done with this. Goodbye.
One thing that seems dumb about the NYT case, and that I haven’t seen much talk about, is that they argue ChatGPT is a competitor and its use of copyrighted work will take away NYT’s business. This is one of the elements they need on their side to counter OpenAI’s fair use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what’s happening right now, in the present. You don’t go to the NYT for general information about the past or fixed concepts. You use ChatGPT the opposite way: it can tell you about the past (accuracy aside) and it can tell you about general concepts, but it can’t tell you about what’s going on in the present (except by doing a web search, which to my understanding is not part of this lawsuit). I feel pretty confident in saying that there’s not one human on earth who was a regular New York Times reader and said, “Well, I don’t need this anymore since now I have ChatGPT.” The use cases just do not overlap at all.
it can’t tell you about what’s going on in the present (except by doing a web search, which to my understanding is not part of this lawsuit)
It’s absolutely part of the lawsuit. NYT just isn’t emphasising it because they know OpenAI is perfectly within their rights to do web searches and bringing it up would weaken NYT’s case.
ChatGPT with web search is really good at telling you what’s going on right now. It won’t summarise NYT articles, because the NYT has blocked it with robots.txt, but it will summarise other news organisations that cover the same facts.
The fundamental issue is news and facts are not protected by copyright… and organisations like the NYT take advantage of that all the time by immediately plagiarising and re-writing/publishing stories broken by thousands of other news organisations. This really is the pot calling the kettle black.
When NYT loses this case, and I think they probably will, there’s a good chance OpenAI will stop checking robots.txt files.
Presses X to doubt
NYT are such lawsuit trolls I could imagine this is credible.
What a silly and misguided lawsuit.
Tricked. Lol. The NYT tricked a private company into stealing its content. True dystopia.
Basic reading comprehension, man. That’s not what they’re claiming.
The advances in LLMs and diffusion models over the past couple of years are remarkable technological achievements that should be celebrated. We shouldn’t be stifling scientific progress in the name of protecting intellectual property; we should be keen to develop the next generation of systems that mitigate hallucination and achieve new capabilities, such as is proposed in Yann LeCun’s Autonomous Machine Intelligence concept.
I can sorta sympathise with those whose work is “stolen” for use as training data, but really whatever you put online in any form is fair game to be consumed by any kind of crawler or surveillance system, so if you don’t want that then don’t put your shit in the street. This “right” to be omitted from training datasets directly conflicts with our ability to progress a new frontier of science.
The actual problem is that all this work is undertaken by a cartel of companies with a stranglehold on the compute power and resources needed to crawl and clean all that data. As with all natural monopolies (transportation, utilities, etc.) it should be undertaken for the public good, in such a way that we can all benefit from the profits.
And the millionth argument quibbling about whether LLMs are “truly intelligent” is a totally orthogonal philosophical tangent.