That’s a really interesting thought. I think the key part (as a consumer of AI tools) would be identifying which things are guesses vs deductions vs completely accurate based on the training data. I would happily look up or think through the possibly hallucinated parts of the output myself, but we don’t currently get that kind of feedback. Whereas a human could list out the things they know and then highlight the things they’re making educated guesses about, which makes it easier to build on.