
But look at your own argument. LLMs are not fair use because they might be prompted into regurgitating something substantially similar to the training data.

And yet, the IA is explicitly aiming to reproduce every part of the work, completely, in a way that replaces the original use of the work.

And you cannot bring yourself to admit that the IA is wrong. When you get to that point, you have to admit to yourself that you're not making an argument; you're pushing a dogma.



I’m not arguing that the IA is right or wrong here.

The point, more generally, is to highlight the asymmetry in how people are thinking about these issues.

If it turns out, after various lawsuits shake out, that LLMs as they currently exist are actually entirely legal, there's a case to be made that the criteria for establishing fair use are quite broken. In a world where the IA gets in legal trouble for interpreting existing rules too broadly, it seems entirely unjust that LLM companies would get off scot-free for doing something arguably far worse from some perspectives.


IA was lending digital copies (only one user at a time may read the book); it was acting like a library lending out physical books, only IA did it over the Internet, which is more convenient. IA is a non-profit.

What publishers argue is that you cannot treat digital books like physical ones; i.e. you cannot re-sell or lend (like IA did) a digital book.

What LLMs do is use copyrighted content for profit, and they do not lend anything.



