Hacker News

I don’t think any non-GC language is trivial to learn, C included. Manual memory management is hard: either you learn it from the system (the C way) or upfront from the language (the Rust way), but it’s the same set of concepts you have to digest to write practical software.


It’s not just that C is hard. I can wrangle together pretty much any program in C, since I programmed in C until about a decade ago. But after so long away, my greatest fear is that I wouldn’t even be aware of the footguns I’d introduced into my code. C is littered with best practices that are far from obvious, and failing to follow them can create massive security risks. To a slightly lesser extent the same is true of C++ (although my friends tell me newer versions of C++ are much better at this; I worked with versions prior to C++11).


That's true, but the original idea of Unix is to have many small program invocations work together, not to erect a monolithic long-running daemon that does it all in a single address space. Memory management for one-shot command-line apps isn't hard at all; they can often get away with static allocations. Even if you screw up your heap, process isolation will take care of recycling memory when your program terminates.


Eh, that's the original idea of the Unix shell, sure. But Unix has many ideas. Long running daemons have been a part of Unix systems since the beginning.


BSD 4.3 introduced inetd along with TCP/IP to mainstream Unix. Quoting from [1]:

> When a TCP packet or UDP packet arrives with a particular destination port number, inetd launches the appropriate server program to handle the connection. For services that are not expected to run with high loads, this method uses memory more efficiently, since the specific servers run only when needed. Furthermore, no network code is required in the service-specific programs, as inetd hooks the sockets directly to stdin, stdout and stderr of the spawned process.

[1]: https://en.wikipedia.org/wiki/Inetd
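The inetd model quoted above can be sketched as a tiny service that speaks only stdin/stdout; inetd does all the network plumbing and hands the accepted socket to the process as its standard streams. This is an illustrative sketch, not code from the thread — the `echoLine`/`serve` names are invented, and a real deployment would also need an inetd.conf entry pointing at the binary.

```go
package main

// Minimal sketch of an inetd-style service. The program contains no
// network code at all: inetd accepts the TCP connection and attaches
// the socket to this process's stdin/stdout, exactly as the quoted
// Wikipedia passage describes.

import (
	"bufio"
	"fmt"
	"io"
	"os"
)

// echoLine is the whole "business logic": transform one request line.
func echoLine(line string) string {
	return "echo: " + line
}

// serve reads lines until EOF and writes a reply for each. Under
// inetd, r and w are the client socket; for local testing they are
// the terminal.
func serve(r io.Reader, w io.Writer) {
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		fmt.Fprintln(w, echoLine(sc.Text()))
	}
}

func main() {
	serve(os.Stdin, os.Stdout)
}
```

Because the service only touches stdin/stdout, the same binary works interactively, under inetd, or behind any other socket-activation scheme (systemd socket units work the same way).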


I'm familiar with inetd, but I don't understand your point. Unix is more than a dozen years older than 4.3BSD. Claiming that long-lived daemons are somehow anti-Unix is absurd.

Just ask init or getty.


My point was that the idea of small, self-contained apps that do one thing, and do it well, was in no way limited to shell programming.

Init and getty are small, self-contained system daemons that are part of the OS, rather than application servers inviting long-running, single-address-space processes for business logic.


Many small programs simply don't scale to the modern web, where just the baggage of spinning up processes will kill you at any reasonable load. Unix has nice ideas, but recognise that they were born in the shared-computer environment of the 70s, with a few dozen users, not in today's world of web servers handling thousands of requests a second.


What do you mean by the modern web? I don't see how modernity implies more load, at least for fork()-bottlenecked programs specifically.

If you're serving thousands of requests per second, you probably need a pretty beefy server anyway, regardless of the stack you're running. Forking makes good use of multiprocessor capabilities if nothing else.


Modern language runtimes used in the server space, like Go, basically multiplex green threads across a small pool of OS threads to minimize context switches between concurrent execution paths. By leveraging the fact that I/O is much slower than processor ops, and switching green threads on I/O without necessarily switching OS threads, they easily scale to hundreds of thousands of concurrent requests. I don't really know Erlang's model in detail, but it also uses green threads. There's a reason why high-traffic sites use languages like these.

https://talks.golang.org/2012/concurrency.slide#1


No, it really doesn't. Forking is expensive. Running a multithreaded (even if it's green threads rather than OS threads) http server and web application makes far better use of multi-processor capabilities.

Shell scripting is a handy tool, but it is also slow, so you wouldn't write hot paths in shell scripts. For the same reason, you wouldn't write busy web servers as CGI. Both are fork()-heavy.


Isn't this basically what "serverless" is?


Great observation. And in fact "serverless" is the way that Unix network servers used to work: https://en.wikipedia.org/wiki/Inetd

CGI is also the way the web used to work. Spawning processes has only gotten faster since then. The entire "process spawning doesn't scale to the modern web" argument is completely and totally bogus. Today, spawning a process in Linux is only 10-20 microseconds slower than creating a thread: http://www.bitsnbites.eu/benchmarking-os-primitives/

The performance problems are elsewhere.
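The spawn-cost claim is easy to sanity-check yourself. Below is a rough sketch in the spirit of the linked benchmark, not a reproduction of it: it times spawning `/bin/true` (assumed to exist, as on most Unix-like systems) against starting a goroutine. Absolute numbers vary by machine; the 10-20 microsecond figure is from the linked article.

```go
package main

// Crude comparison of per-iteration cost: fork+exec+wait of a trivial
// external process versus spawning a goroutine and waiting for it.
// Illustrative only -- a real benchmark would control for warm-up,
// scheduling noise, and page-cache state.

import (
	"fmt"
	"os/exec"
	"time"
)

// timeSpawns returns the average cost of running /bin/true n times.
func timeSpawns(n int) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		// /bin/true exits immediately, so this measures process
		// creation and teardown, not the program's work.
		if err := exec.Command("/bin/true").Run(); err != nil {
			panic(err)
		}
	}
	return time.Since(start) / time.Duration(n)
}

// timeGoroutines returns the average cost of starting a goroutine
// and synchronizing with it.
func timeGoroutines(n int) time.Duration {
	start := time.Now()
	done := make(chan struct{})
	for i := 0; i < n; i++ {
		go func() { done <- struct{}{} }()
		<-done
	}
	return time.Since(start) / time.Duration(n)
}

func main() {
	fmt.Println("per process:  ", timeSpawns(50))
	fmt.Println("per goroutine:", timeGoroutines(50))
}
```

On a typical Linux box both numbers are small; whether the difference matters depends entirely on how much work each request does afterwards, which is the "performance problems are elsewhere" point.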


What you are missing here is that I/O is orders of magnitude slower than the processor. Most of a server's time is spent waiting for I/O, and multiplexing green threads onto a few OS threads, switching between them while some execution paths wait on I/O, gives you far higher capacity on the same hardware without paying for OS context switches. See the other comment thread on the GP comment.


Serverless is primarily a play for decentralised, small-scale, or unpredictable workloads: you trade the fixed cost of managing a dedicated instance for a higher variable per-call cost. At any reasonable scale and predictability, running a dedicated server is cheaper.

The folks who started small on serverless but see traffic growing, and sit in the middle space before jumping to dedicated servers, have made an entire art form of keeping their functions "hot", since both Lambda and App Engine keep a function loaded for some time once it has spun up.

https://www.google.com.sg/search?q=serverless+warm+up&oq=ser...


The BASIC, Pascal, and Modula-2 lineages are surely relatively easy to learn, while preventing 90% of the typical classes of C errors.



