It means that just because a human can't read the code doesn't mean the code isn't correct. Obfuscators exist, for example, and it's conceivable that an LLM writes perfectly correct code even though it's unmaintainable to us.
Thanks, that's a good insight into my value system then. I understand that code doesn't have to be human-readable to be correct, but I don't want to work on a codebase filled with unreadable code that no human colleague understands. This is also why I don't like a lot of web frameworks: the final code output to the page is a huge spaghetti of un-inspectable Javascript and HTML.
I want to have the ability to understand each relevant layer of the system, even if I don't necessarily have the full understanding at every given moment.
Also, to add to my earlier point: you don't like frameworks, but it's frameworks all the way down to microcode, and that's a massive number of layers. Javascript isn't an absolute source of truth; you're just picking one layer out of the entire abstraction stack and saying "this is good enough for me".
It's perfectly fine to do that, but also realize that other people might just choose a different layer, and that's fine too if the end result fulfills its purpose.
Sure, but that's more your preference than an objective way to do software "correctly". We're still figuring out what the latter means when LLMs are involved (hence my article here).