Hacker News
Flexibility Is Bad Design (phillarson.blogspot.com)
12 points by dizm on June 30, 2008 | 12 comments


Nice. I guess the author never experienced the joy of having this lovely conversation with his users:

Programmer: That feature you asked for two weeks ago? It's done. Take a look. What do you think?

User: Oh... hmmm... yeah, that's kinda wrong... See, I know I said I wanted X, but I just realized I actually wanted Y. But Y is very similar to X, can't you just tweak it and I'll come back to take a look after lunch?

Programmer: Well, actually, yeah, Y is a lot like X, but I tried to just "start shipping great software," so I didn't take the extra day that would have made X->Y an hour or two of work. I'll have to start from scratch. See you in another two weeks.

User (two weeks later): Actually, Y isn't quite right. Can you do Z?

I'll go so far as to claim that code which can't adapt to changing requirements is either (1) pure genius because it just works and does exactly what users want, and therefore rare, or (2) unused, or (3) thrown away. Obviously it shouldn't be overdone, but the ability to balance between flexibility and delivering working applications is precisely what defines people who, as the author puts it, "ship great software."


This is where you see a strong advantage in a language like Ruby over Java.

The less code you have to write initially, the less you'll have to scrap when the rewrite comes. And the rewrite always comes. The question is when.
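To put a rough sketch behind that claim (the task and data here are invented for illustration): totaling order amounts per customer is a couple of expressions in Ruby, where 2008-era Java would typically spread the same logic across a HashMap declaration, an explicit loop, and null checks.

```ruby
# Hypothetical example: group order amounts by customer and total them.
orders = [
  { customer: "alice", amount: 30 },
  { customer: "bob",   amount: 20 },
  { customer: "alice", amount: 15 },
]

# Hash.new(0) gives every missing key a default of 0,
# so no "is the key present yet?" bookkeeping is needed.
totals = Hash.new(0)
orders.each { |o| totals[o[:customer]] += o[:amount] }

puts totals.inspect  # => {"alice"=>45, "bob"=>20}
```

Less code up front also means less code to throw out when the requirement changes under you.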


It isn't that simple. Excessive generality is one problem. Insufficient generality is another. Some things are easy to change later. Others aren't. It's part of being a good programmer to make good judgments about these things.

What people usually do (and this author, in my opinion, is doing) is observe one unhelpful behavior and conclude that the opposite behavior must be "correct". I think this is because building software is a complex activity, so we seek invariant principles to simplify it. Then we view the complex activity through the filter of our "correct" model (which we've identified with it) and get a feeling that we know what we're doing. In the end this causes a lot of problems.


I understand the author's motivations, but I disagree with the conclusion.

The simplest solution is often also the most flexible. I don't think you can argue that simple is bad design.

During requirements and feature planning, I have often found that features get more and more complicated until - guess what - things click, similarities become obvious, and you end up with a solution that's a lot simpler - and more flexible - than you expected. You don't always get that "aha!" moment, and you can't spend your entire development cycle chasing it, but it is often there.


In the name of attacking gold-plating and useless make-work design fetishism, he seems to be opposed to basic principles like separation of concerns.

Talking about getting things done in the least amount of time and then trotting out the tired old "it's all Turing complete" argument shows that he has some more thinking to do on this subject.
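For readers who want "separation of concerns" concrete, here is a minimal hedged sketch (class names invented for illustration): the tax math and the report formatting live in separate classes, so a change to the report format never touches the tax logic, and vice versa.

```ruby
# Concern 1: computation. Knows nothing about presentation.
class TaxCalculator
  def total(price, rate)
    price * (1 + rate)
  end
end

# Concern 2: presentation. Knows nothing about tax rules.
class ReportFormatter
  def line(label, amount)
    format("%s: $%.2f", label, amount)
  end
end

calc = TaxCalculator.new
fmt  = ReportFormatter.new
puts fmt.line("Total", calc.total(100, 0.05))  # => Total: $105.00
```

That separation is exactly the kind of cheap, non-speculative structure that makes later changes cheap, which is a different thing from gold-plating.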


I think that anyone with even a little bit of real-world experience would realize that the spec is rarely set in stone. You don't know what you want in a car until you've driven a few. The same goes for software: users don't know what they want until they've tried it out themselves.

This is a big part of what 'agile' methodologies are about: people remain flexible about requirements because they don't really know what will be needed in the future. Specs change and grow, and the more flexible your code base is, the easier it is to make changes and maintain it.

Granted, it can be overdone (some of the worst WTFs come from overly flexible code), but ideally a program should be flexible enough to allow for any reasonable change or extension.


Actually, I think you're putting your finger on a certain tension here (I won't say contradiction) in the agile principles. On the one hand, develop incrementally and adapt to change; on the other hand, build the simplest thing that meets the requirements now (and resist the temptation to generalize up front). The trouble is that code that does X in a direct, concrete way can't always be easily changed to meet the new requirements when X changes to X'.

The typical agile answer is, "We'll just refactor to allow the necessary changes as we go along." This is an oversimplification, because the cost of such refactoring isn't always low. Sometimes it's no less than the cost of rewriting the system. Other times the refactoring leads to suboptimal solutions for X', compared with what you would have if you designed for X' originally. I've seen agile projects get into trouble this way.

There's no question that up-front generality is often misplaced (since you can't know for sure what you'll need in the future) and can cause a lot of complexity damage. But the answer doesn't lie in making a "process" out of a half-dozen or dozen principles and pretending you've solved the problem.


Oh, I understand. My current job is at an agile shop, and I've seen firsthand how much of a pain dealing with poorly thought-out legacy design can be. We take great pains these days to make sure today's decisions don't get in the way of tomorrow's plans.

Personally, I'm not a huge fan of agile but it does hit on a few things right.


2-word summary:

YAGNI principle
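For anyone who hasn't run into the acronym: YAGNI ("You Aren't Gonna Need It") says to build only what today's requirements demand. A hedged Ruby sketch of the contrast (all names invented for illustration):

```ruby
# Speculative generality: a pluggable strategy framework
# built when the spec only ever asked for one discount.
class DiscountEngine
  def initialize(strategies = {})
    @strategies = strategies
  end

  def apply(kind, price)
    # Fall back to the identity lambda when no strategy is registered.
    (@strategies[kind] || ->(p) { p }).call(price)
  end
end

# YAGNI version: the one discount actually required today.
def holiday_discount(price)
  price * 0.9
end

puts holiday_discount(100)  # => 90.0
```

The engine isn't wrong, it's just unpaid-for: until a second discount shows up, every line of it is cost with no benefit.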


"These guys have written code that allows you to make operating systems, databases, windowing toolkits, financial software, even the software that you are working on write now."

Grammar error or Freudian slip?


So I guess the master plan is preferable?


If you're writing software without any kind of plan I feel sorry for you.



