She beautifully explained what draws people like myself to GNU/Linux. I have tried to convey some of these ideas to colleagues who are serious about Windows programming, to no avail.
It's not about Microsoft either. In my industry (flight simulation) there are some machines and systems where you can go deep and acquire design-level knowledge, and other systems where you're dealing with black boxes.
Some people prefer the latter; when things go wrong, you cycle the power switch and then contact the manufacturer if the problem still exists. I think some people take comfort in the notion that they are not ultimately responsible for whether the machines work or not. Sorry if that sounds terribly arrogant.
From time to time I have to explain to colleagues why I prefer a minimalistic approach to programming.
An analogy to cars :) ... for normal people, having an automatic transmission is great ... you can drive the vehicle with only one hand, and it allows you to concentrate more on the actual driving. But race cars still have manual or semi-automatic transmissions. Thus race drivers are empowered to do neat tricks while driving ... like not changing gears while passing another car, so as not to lose acceleration. A manual transmission is also great for off-road driving. And great drivers are also good mechanics ... cars break during races, and without knowing your car, you probably won't finish.
Of course, not all of us are meant to be race drivers. But if you want to be the best you can be, taking the easy route is not the way to go.
Thing is, a race car is designed with the fact in mind that you're not going to do anything but drive when you're in the seat. If there were racing-while-talking-on-cellphones competitions, the automatic transmission would probably be a better optimization in terms of available-attention-to-performance.
Likewise, for some types of programs (extremely parallel ones, for example), you have so many high-level considerations that also thinking about the low-level implications would simply be too much. Premature optimization is just as much a drain on the programmer as the program.
> you have so many high-level considerations that also thinking about the low-level implications would simply be too much
You still have to have a high-level perspective on how things are working, because abstractions are often leaky.
For example, I don't care much about tasks related to systems administration, allowing me to concentrate on what matters to me ... for instance I don't care about the difference between Postfix and Sendmail, or the difference between Apache and Lighttpd ... I just use the one I know until I have a problem with it, but when I do have a problem I have to find ways to solve it. And some knowledge of how the protocols and processes work really helps.
You don't (usually) think about your oil when you're driving, and you especially don't change your oil while driving. Just because you understand how the low level works doesn't mean you keep it loaded in your working memory simultaneously to the high-level knowledge. You change your oil when the car is parked; you optimize when you're not in the middle of writing a feature or fixing a bug.
'Linux: home-brewed, hobbyist, group-hacked. UNIX-like operating system created in 1991 by Linus Torvalds then passed around from hand to hand like so much anti-Soviet samizdat. Noncommercial, sold on the cheap mainly for the cost of the documentation, impracticable except perhaps for the thrill of actually looking at the source code and utterly useless to my life as a software engineering consultant.'
1. GNU has been going for nearly a couple of decades - Linus didn't magic an OS out of thin air. The modern equivalent is the articles written about Safari that suggest Apple did the same.
2. Nor was Linux just for home-brewers - Red Hat, the commercial distro, had emerged as the current leader for both businesses and nerds, with Apache (written by the ASF, which still had strong commercial roots) and to a lesser extent Samba (which I guess back then was still relatively uncommercial, so I give the article that).
3. As a Linux consultant at the end of 1998, I started on 75 Australian dollars an hour, because I knew Unix despite having no access to RISC systems. Utterly useless? I think not.
lol are we the only two ppl on this article who actually know anything about computers? all the other comments seem to have been written by my mom.
did u bust a gut at all that "peeling back the layers" so she could start replacing windows with linux? AHAHAHAHAHAHA
or that crap about rom basic? PUT IN THE FUCKING LINUX FLOPPY MOM!!!
god. this fucktard is a SOFTWARE ENGINEER. plz, kill me now.
no, actually, this kind of person is why ppl who have any skills can basically say YOU'RE GONNA PAY ME THIS AND I'M GONNA DO THAT NOW GET OUT OF MY OFFICE AND DON'T BOTHER ME.
> Some people prefer the latter; when things go wrong, you cycle the power switch and then contact the manufacturer if the problem still exists. I think some people take comfort in the notion that they are not ultimately responsible for whether the machines work or not. Sorry if that sounds terribly arrogant.
A specific instance of a much more general principle. Responsibility and understanding are hard, and passing around the problem as a black-box for someone else to solve is far simpler.
The arrogance isn't yours; it's with the people who feel entitled to someone who will take their black boxes and return a solution, and otherwise ensure they are never responsible for understanding anything. The ability to have someone do this is valuable, but it isn't something anyone is owed.
"But what if you're an experienced engineer? What if you've already learned the technology contained in the tool, and you're ready to stop worrying about it? Maybe letting the wizard do the work isn't a loss of knowledge but simply a form of storage: the tool as convenient information repository."
Right, and this is what a lot of code generation tools do. After understanding the problem, you write a tool which can write code to solve that problem in a more general sense.
Almost all the software we write can be thought of that way: programs are information repositories for problems we already solved.
However, with a lot of things that require deep understanding, there are walls. These represent the places where a person can no longer fake understanding, and either has to stop immediately and develop some, or give up. Before the days of wizards, users would encounter plenty of warning signs that their understanding was not enough, well before hitting the wall. Example: your 100-line C program is segfaulting; time to learn more about pointers and memory allocation before you write a 20k-line program that depends on the same.
The problem is that wizards are very undiscriminating about who experiences the easing. The output is mostly the same for the experienced engineer as for the non-engineer. The brick wall for the non-engineer still remains, but the warnings of danger ahead are hidden behind an easy-to-use interface. There is no encouragement to go understand better before progressing, so they rush straight into the wall. They still need all the understanding once they get there, but the encouragement to develop it comes later, and the rate at which it must be learned is faster.
The question this poses is really difficult: how do we abstract away the need to apply knowledge to the same problem repeatedly, yet avoid abstracting away the need to have that knowledge?
Otherwise, very worthwhile article. She captures the spirit of curiosity that drives the need for deep understanding in me, and probably most people here.
Her point about code being incrementally forgotten and IBM having "no one left who understands" was well taken. I think this is where and why open source comes into play. Most programmers understand this basic fact about code that non-programmers don't.
If IBM's air traffic control system had been open source, it likely would have never gotten to the point of obsolescence it reached.
"Run as the root user from the root directory, type in rm -r f * , and, at the stroke of the ENTER key, gone are all the files and directories. Recursively, each directory deleting itself once its files have been deleted, right down to the very directory from which you entered the command: the snake swallowing its tail."
Actually that would be rm -rf *, and it wouldn't delete /, the directory from which you entered the command, since with * you've selected all the directories in /, not / itself. To do that you'd have to invoke rm -rf on / directly, I believe (rm refuses to operate on . and ..). Though I'm not sure the system would let you delete / itself after all its files are gone? (moot point though)
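For what it's worth, here's a small Python sketch (the file names are invented for illustration) of what the shell's * glob actually selects: it never matches '.', '..', or dotfiles, which is why rm -rf * in / removes /'s children but not / itself.

```python
import glob
import os
import tempfile

# Set up a throwaway directory with a couple of visible entries
# and one hidden one (names are made up for the demo).
d = tempfile.mkdtemp()
for name in ("bin", "etc", ".hidden"):
    open(os.path.join(d, name), "w").close()

os.chdir(d)
# The * pattern matches only the visible children -- never the
# directory itself, its parent, or dotfiles.
print(sorted(glob.glob("*")))  # ['bin', 'etc']
```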
I think GNU rm now refuses rm -rf / by default? But I believe it used to work, if it doesn't still, in other distros. I mean, half the reason it's verboten to run as root is that you might accidentally type "rm -rf pics /".
Agreed. I especially liked "All the lovely graphical skins turned to so much bitwise detritus". I can visualize the 1's and 0's settling on the bottom.
Despite her discontent with "the wizards", she seems to realize their usefulness in her last sentence with "the tool as convenient information repository." Certainly, especially in their earlier incantations, the wizards could try to do too much, but a decade later I think they are hitting their intended mark as knowledge repositories.
Wizards and GUIs (for the most part) only allow the user to perform operations that the developers have anticipated.
The beauty of interacting with the computer through the command line -- the shell or some other programmable interface -- is that you can do things that no one else has thought of before. You can write any program that is expressible in the language you use or string together the input and output of any program on your machine.
Buttons are nice, but sometimes you just need to be more expressive than that. When this mode of interaction is necessary, I find that the Unix way has a lot to offer.
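A rough sketch of that kind of composition, done from Python rather than a shell (the programs are just the standard Unix sort and head; the fruit list is made up):

```python
import subprocess

# Stringing programs together the Unix way: the output of one
# becomes the input of the next, like `sort | head -n 1` in a shell.
fruits = "banana\napple\ncherry\n"

sorted_out = subprocess.run(["sort"], input=fruits,
                            capture_output=True, text=True).stdout
first = subprocess.run(["head", "-n", "1"], input=sorted_out,
                       capture_output=True, text=True).stdout

print(first.strip())  # apple
```

Neither sort nor head knows anything about the other; the pipe contract (text in, text out) is what lets you combine them in ways their authors never anticipated.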
Code generation is nearly always the Wrong Thing. Using the right abstractions is the Right Thing. The problem is that it leads to a bunch of duplicated code... and then you have to change something.
The one exception might be scaffolding code, which isn't really intended to be kept.
The nice thing about Code Generation is that you can do it as often as you like. So if you change something, you just re-generate all the boilerplate and you're back in business.
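A toy sketch of what I mean (the field names and template are invented): the spec drives the boilerplate, so when the spec changes you re-run the generator instead of editing every accessor by hand.

```python
# A trivially small code generator: one template, expanded once
# per field in the spec.
TEMPLATE = 'def get_{field}(record):\n    return record["{field}"]\n'

def generate(fields):
    """Return Python source defining one accessor per field."""
    return "\n".join(TEMPLATE.format(field=f) for f in fields)

# Change this list and regenerate -- that's the whole workflow.
source = generate(["name", "email"])

module = {}
exec(source, module)  # stand-in for writing the source to a file
print(module["get_email"]({"name": "Ada", "email": "ada@example.com"}))
```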
This is the one instance where I'll argue in favor of duplicated code. When something breaks in one of your generated classes, you'll find yourself on a breakpoint at a single line that does a single thing, so you can quickly figure out what happened and why (and what changes you'll need to make to the Codesmith template so that it never happens again). When your "right abstraction" breaks, you might find yourself seven interfaces deep looking at a hash full of Objects, one of which is probably not supposed to be null, maybe, and it's not going to be any fun to debug.
I'm inclined to suggest that the right abstraction be implemented well so it's not so hard to debug.
Proper macros provide a nice way to blend the advantages of each. There are even some new languages that have them without looking like Lisp (Plot and Ioke come to mind).
I can't find any sympathy for the author, though. Her description of getting a CD-ROM drive working in Linux embodies all the things I dislike about working with computers, and is probably the reason my dealings with Linux have been so painful.
I simply don't want another hobby. And even if I did, it wouldn't be "getting my computer to function at all." Linux seems to enforce this hobby on me, every step of the way. Granted, it's a lot easier 10 years on from this article, but it's still just not fun for me.
This is so true - all too often the problems encountered in a project were caused because someone didn't really understand what was going on under the surface.
While high-level languages and nice operating systems are a good thing, they often mask what's beneath them. If someone grows up in a world where they can't look behind the curtain, bad things happen.
Many times I have witnessed sysadmins banging their heads trying to tune parameters on a Linux server running in a virtual machine, wondering why some counters were off, without realizing that the machine underneath their OS didn't really have hardware counters they could, forgive me, count on.