There's nothing about LLMs that is inherently nondeterministic. Sure, if you're using some API you have no control over, anything could happen. But if you run it on your own hardware, you can make it as deterministic as any classic NLP approach.
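A minimal sketch of the point (toy stand-in model, not a real LLM API; all names hypothetical): with greedy decoding, i.e. always taking the argmax token instead of sampling, generation is a pure function of the prompt, so repeated runs on the same hardware produce identical output.

```python
import hashlib

VOCAB = [" the", " cat", " sat", " on", " mat", "."]

def toy_logits(context: str) -> list[float]:
    # Deterministic stand-in for a model's forward pass: hash the
    # context to produce a fixed score per vocabulary item.
    return [
        int.from_bytes(hashlib.sha256((context + tok).encode()).digest()[:4], "big")
        for tok in VOCAB
    ]

def greedy_generate(prompt: str, steps: int = 5) -> str:
    out = prompt
    for _ in range(steps):
        scores = toy_logits(out)
        out += VOCAB[scores.index(max(scores))]  # argmax: no sampling, no randomness
    return out

# Two runs with the same prompt agree exactly.
assert greedy_generate("The weather") == greedy_generate("The weather")
```

A real local deployment needs more than greedy decoding (fixed seeds, deterministic kernels, fixed batch composition), but the decode loop itself has no inherent source of randomness.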
And whether you always have to check everything is a separate question from nondeterminism. You could have a deterministic heuristic that is often wrong in a domain where mistakes are fatal, or you could have a nondeterministic model that is almost always correct for a task where errors cost next to nothing.
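The trade-off above can be made concrete with expected cost (all numbers here are hypothetical, purely for illustration): determinism says nothing about how often you must check the output.

```python
def expected_cost(error_rate: float, cost_per_error: float) -> float:
    # Expected loss per decision: how often it errs times what an error costs.
    return error_rate * cost_per_error

# Deterministic heuristic in a domain where mistakes are fatal:
heuristic = expected_cost(error_rate=0.20, cost_per_error=1_000_000)

# Nondeterministic model on a task where errors cost next to nothing:
model = expected_cost(error_rate=0.01, cost_per_error=1)

assert heuristic > model  # determinism alone doesn't make the heuristic safer
```

Which outputs need checking is governed by the error rate and the cost of an error, not by whether the system is deterministic.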
That's what I meant by "if you're using some API you have no control over, anything could happen." OpenAI could offer access to a fully deterministic version at a higher price point, but they choose not to and there's nothing you can do about that.
If you'd read the source you linked, you'd see that no, the system is inherently nondeterministic. They cannot sell you a deterministic version of GPT-4; there is no such thing. They could sell you a deterministic version of some other, inferior model.