I have two objections: First, what is "one thing"? Render a Doom 3 frame? Or increment the "i" variable? The "Clean Code" author argued that a log statement or a try/catch statement deserved its own method.
Second: says who? Where's the evidence that doing "one thing" is better than doing more things?
In general I notice that there are many rules and best practices in programming, without a lot of evidence, or even discussion of the drawbacks.
The first of your two objections is perhaps a confusion about the term "best practice". It doesn't mean an ultimatum so much as something seriously worth considering.
What constitutes "one thing" is a judgment call, and it varies tremendously, as should be evident from all the other replies here trying to define it concretely. I imagine the definition changes a lot depending on how one's catalog of discrete things changes. The point is not to define this "one thing" for you; it is to get you thinking about it.
A lot of the best practices that have been put forth are the empirical outcome of repeated judgment calls by others. They notice a trend in their own practice, and put it forth as being more of an emergent principle than a coincidence. They might evangelize some, but it is a sign they are excited about the idea. It doesn't mean they don't know there are drawbacks; they might want to float the idea first and see if their concerns are legitimate. If I floated all of the possible drawbacks of ideas I've had, I'd probably scare people away, even though I overplayed a lot of them.
Oh, and for the second objection, there is data. I don't have it in front of me, but I believe that Steve McConnell cites data within Code Complete about the length and scope of methods relative to error rate. Granted, one citation does not constitute proof, but it's a sign Uncle Bob isn't alone on his soapbox. :-)
But then, the point of all of this evangelizing, really, is to get you off auto-pilot and thinking about your work, not to give you the definitive answers to everything. That you asked this question shows you're thinking about it. Work hard to keep doing this; it's more important than any rule or best practice.
You're the author, dude! What you choose as the "one thing" of any given method gives me, the reader, information. You can emphasize an activity by breaking it out into a method, or de-emphasize it by leaving it inline.
There's no evidence that Poe was a great short story writer either. You have communicative power with your choices; use it consistently and wisely.
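A toy illustration of that communicative power (the order/tax example and all names here are mine, not from the thread): the same arithmetic can be left inline, where it competes for the reader's attention, or extracted into a method whose name tells the reader what it means.

```python
def process_order_inline(items):
    # Inline version: the tax arithmetic is de-emphasized,
    # buried among the rest of the logic.
    subtotal = sum(price for _, price in items)
    return round(subtotal + subtotal * 0.08, 2)

def sales_tax(subtotal, rate=0.08):
    """Extracted version: the name emphasizes what the arithmetic means."""
    return subtotal * rate

def process_order(items):
    subtotal = sum(price for _, price in items)
    return round(subtotal + sales_tax(subtotal), 2)
```

Both behave identically; the choice is purely about what the author wants the reader to notice.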
- Shower
  - Get wet
  - Clean
  - Rinse
- Get dressed
- Have breakfast
  - Prepare food
  - Eat
So different tasks have different levels of detail. The advantage is that you end up with more reusable methods, like "shower" or "Prepare food", that can be used elsewhere.
There used to be more emphasis on top-down design; I feel that has been lost, and I do not know if it is still taught.
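A minimal sketch of that top-down decomposition (all function names are hypothetical, chosen to mirror the list above):

```python
def shower():
    return ["get wet", "clean", "rinse"]

def prepare_food():
    return ["prepare food"]

def have_breakfast():
    return prepare_food() + ["eat"]

def morning_routine():
    # The top level reads at one level of detail; the finer steps
    # live one level down, where they can be reused elsewhere.
    return shower() + ["get dressed"] + have_breakfast()
```

Each function names "one thing" at its own level, and `shower` or `prepare_food` can be called from other routines.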
> The advantage is that you end up with more reusable methods, like "shower" or "Prepare food", that can be used elsewhere.
But that is exactly the author of this post's point. When you have something that is more reusable, like "Prepare food", you end up using it in multiple situations, increasing the complexity of relationships between components.
So for example, when you want to make sure you eat bacon with breakfast, you can't just add "Prepare bacon" to "Prepare food", because now the "Prepare food" task may be executed in "Have lunch", "Have dinner", and "Have midnight snack". And you don't want bacon with your midnight snack! Instead, you have to add a parameter to the task, add some conditional, add the parameter at each call site, etc... i.e. more complex.
Is "prepare bacon" necessarily part of "prepare food" itself, though? Rather than hardcoding preparing bacon as a stage of food prep, it's probably more maintainable to have the kinds of food passed in (as a list, a Menu object, whatever) - what you're cooking strongly influences the entire process.
washHands()
for _, f in ipairs(foods) do
    coroutine.create(f)
    ...
end
waitUntilDone()
if not aLazyGit then doDishes() end
Really, cooking usually involves interwoven stages of things like setting water boiling, whisking a roux, kneading dough, chopping peppers, etc. Tacking on "...and make bacon" at the end of all cooking batches doesn't make sense. Instead, break prepareFood() down further into operations such as boilWater(vol), preheatOven(temp), etc. Those should be reusable, barring unusual circumstances (e.g. boiling water at high altitudes). Without a lot of specialization via arguments, prepareFood() is too coarse an abstraction, about as vague as doTaxes().
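One way to express that finer-grained split (a sketch; recipe names and quantities are assumptions): small reusable operations, composed according to a menu passed in as data rather than hardcoded into the method.

```python
def boil_water(vol_ml):
    return f"boil {vol_ml}ml water"

def preheat_oven(temp_c):
    return f"preheat oven to {temp_c}C"

# What to cook is data; each recipe composes the small reusable ops.
RECIPES = {
    "pasta": lambda: [boil_water(2000)],
    "bacon": lambda: [preheat_oven(200)],
}

def prepare_food(menu):
    steps = []
    for dish in menu:
        steps.extend(RECIPES[dish]())
    return steps
```

Adding bacon to breakfast is now a change to breakfast's menu, not to `prepare_food` itself.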
It probably is okay to duplicate a calculation 4 times, all other things being equal. At the same time, I'd say it's probably a sign of other problems. If you need to reuse a calculation, that calculation is part of your application's domain (using the term loosely) and should be represented there, rather than being embedded inside scenario-appropriate parts of the system.
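For example (a hypothetical domain, not from the thread): a shipping-cost formula that would otherwise be duplicated wherever a price is shown belongs in one domain-level function.

```python
def shipping_cost(weight_kg, rate_per_kg=2.5, base_fee=5.0):
    """Domain-level calculation, defined once instead of being
    re-derived inline in every scenario that needs it."""
    return base_fee + weight_kg * rate_per_kg

def invoice_total(item_price, weight_kg):
    return item_price + shipping_cost(weight_kg)

def checkout_preview(item_price, weight_kg):
    return {"item": item_price, "shipping": shipping_cost(weight_kg)}
```

The scenario-specific code (`invoice_total`, `checkout_preview`) stays thin; the shared calculation lives where the domain concept does.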
To answer your first question: the right size for a method is one that a typical maintainer of the code has no problem holding in his head.
The "one thing" is a one-statement description of the method, such that the maintainer does not need to look inside it a second time to understand the place where it is being invoked.
As to your second question, I have no answer. It is a bunch of voodoo right now, and I have to wonder which body could, even in theory, produce a scientifically valid proof of efficiency (or the lack thereof)?
I find a great way to understand the complexity you are coding into a method/function is to try and write a unit test for it. Short, simple methods are easy to write tests for, while with long, complex ones it becomes exponentially harder to capture all the combinatorial state possibilities (e.g.: does it work when foo is null, but bar is not, and baz is out of range? What about when bar is null but foo is not and baz is in range? ...and so on.)
Even if you are not intending to write a unit test for a particular function, it's useful to imagine how hard it would be to do so as a thought experiment.
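To illustrate the thought experiment (toy code, mine; `foo`/`bar`/`baz` borrowed from the comment above): the monolithic function needs tests across the cross product of its inputs' states, while the split version lets each piece be tested with a handful of cases.

```python
def process(foo, bar, baz):
    # One method, three interacting conditions: tests must cover
    # every combination of foo/bar nullity and baz's range.
    if foo is None or bar is None:
        return None
    if not (0 <= baz <= 100):
        raise ValueError("baz out of range")
    return foo + bar + baz

def validate_baz(baz):
    # Independently testable: two or three cases cover it.
    if not (0 <= baz <= 100):
        raise ValueError("baz out of range")
    return baz

def combine(foo, bar, baz):
    if foo is None or bar is None:
        return None
    return foo + bar + validate_baz(baz)
```

Same behavior, but the combinatorial test burden is broken into small independent pieces.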