I feel like I may be missing some of what you're trying to communicate. I agree that injecting defects can help test a test suite and that this can be a good practice. I don't quite see the relevance to the issue I described.
Edited to add - thanks for responding, by the way. I'm confused by all the down-votes.
Assertions in code are for documenting and checking conditions that can't occur unless there's a bug. This helps programmers reason about the code as they read it, and helps us surface errors closer to the defect that caused them.
If a test case triggers an assertion violation down in some method, there is a bug. That should break the test, so that I'm told about the bug, and can investigate and fix it. If there happens to be a `try...except Exception` anywhere in the stack above that method, the test never learns that an assertion fired and might even pass. This makes every test less useful than it could be.
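To make that concrete, here's a minimal Python sketch (function and test names are made up for illustration): an assertion fires inside a helper, a catch-all higher in the stack swallows the `AssertionError`, and the test passes even though a bug was detected.

```python
def process(item):
    # Invariant: callers must never pass None. The assertion documents
    # and checks this; violating it means there's a bug somewhere.
    assert item is not None, "item must not be None"
    return item.upper()

def resilient_pipeline(items):
    results = []
    for item in items:
        try:
            results.append(process(item))
        except Exception:  # catch-all: swallows AssertionError too
            pass           # the assertion failure vanishes without a trace
    return results

def test_pipeline():
    # The None is a bug, but the catch-all hides the assertion
    # violation, so this test passes anyway.
    assert resilient_pipeline(["a", None, "b"]) == ["A", "B"]

test_pipeline()  # no failure reported
```

In Python, `AssertionError` subclasses `Exception`, so any `except Exception` block between the assertion and the test will absorb it.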
Oh, that's why you inject defects: if the injected defect doesn't cause your test to fail, e.g. because of a `try...except Exception`, you know that your test is too weak.
(Then you investigate and hopefully remove the catch-all.)
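A minimal sketch of that defect-injection check (the helper `suite_catches` and the function names are hypothetical, not from any particular framework): temporarily swap in a broken implementation and confirm the test actually fails.

```python
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

def broken_add(a, b):
    return a - b  # injected defect

def suite_catches(defect, test):
    """Run `test` against `defect` in place of `add`; report whether it fails."""
    global add
    original = add
    add = defect
    try:
        test()
        return False  # test passed despite the defect: too weak (or masked)
    except AssertionError:
        return True   # test failed, as it should
    finally:
        add = original

print(suite_catches(broken_add, test_add))  # prints True
```

If `suite_catches` returns False, either the test never exercises the defective code or something like a catch-all is masking the failure, and that's what you investigate.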
This isn't terribly relevant to narrow unit tests, where I know exactly what I expect to have happened, and if an assertion fires, that expectation presumably won't be met, so the test fails anyway. But it makes larger-scale fuzzing substantially less useful.