The most common reasons for requiring shared memory concurrency (instead of message passing) are:
1. It's just plain easier to write a lot of stuff with a shared memory approach.
2. There are algorithms where you get the best speedup using shared memory.
But yes, there are also downsides.
1. Shared memory concurrency can be a lot harder on the runtime. In particular, writing an efficient single-threaded GC is an order of magnitude easier than an efficient concurrent GC.
2. You eventually run into scaling issues with shared memory only (number of cores, limited memory bandwidth). That said, a hybrid shared memory/message passing solution can still be superior to a pure message passing one, and for "desktop" parallelism that's not an issue.
3. Shared memory concurrency requires programming language support (at least if you want to keep a modicum of sanity), whereas message passing concurrency can be done as a library. And many languages have really, really screwed up their handling of shared memory concurrency where it's extremely difficult to reason about correctness.
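The contrast in point 3 can be sketched with a toy Go example (Go chosen only because it offers both styles natively; the counter task is invented for illustration): the same counter implemented first over shared state behind a mutex, then with a channel to a single owning goroutine, i.e. message passing as "just a library" call surface.

```go
package main

import (
	"fmt"
	"sync"
)

// Shared-memory style: many goroutines mutate one counter behind a mutex.
func sharedCounter(n int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	count := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return count
}

// Message-passing style: a single owner goroutine holds the state;
// everyone else sends it increment messages over a channel.
func channelCounter(n int) int {
	inc := make(chan struct{})
	done := make(chan int)
	go func() {
		count := 0
		for range inc {
			count++
		}
		done <- count
	}()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			inc <- struct{}{}
		}()
	}
	wg.Wait()
	close(inc)
	return <-done
}

func main() {
	fmt.Println(sharedCounter(100), channelCounter(100)) // both print 100
}
```

Both versions are correct here; the point is that the shared-memory one leans on language/runtime guarantees about what `mu.Lock()` makes visible, while the channel one confines the state to one owner.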
My perceptions may be colored here because I did cut my teeth learning about parallel programming with distributed computing in the 1990s, but I also find that lack of shared memory concurrency less of an issue in actual practice (though, obviously, I'd hardly pass up on having the option).
Erlang essentially uses that hybrid model. The runtime uses a shared heap for strings and some other immutable data that can be reference counted. On top of that, each thread has its own GC heap, and sending a message copies everything that lives on a private heap.
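That copy-mutable, share-immutable split can be imitated in a short Go sketch (the `Msg` type and `send` helper are invented for illustration, not an Erlang API): the immutable part travels by reference, the mutable part is deep-copied on send so sender and receiver never alias it.

```go
package main

import "fmt"

// Msg carries a pointer to shared immutable data plus private mutable
// state that must be copied on send, Erlang-style.
type Msg struct {
	Shared *string // immutable: safe to pass by reference
	Counts []int   // mutable: deep-copied before sending
}

func send(ch chan<- Msg, m Msg) {
	cp := make([]int, len(m.Counts))
	copy(cp, m.Counts)
	ch <- Msg{Shared: m.Shared, Counts: cp}
}

func main() {
	blob := "large immutable payload" // stands in for a shared refcounted binary
	ch := make(chan Msg, 1)
	orig := Msg{Shared: &blob, Counts: []int{1, 2, 3}}
	send(ch, orig)
	got := <-ch

	orig.Counts[0] = 99              // mutating the sender's copy...
	fmt.Println(got.Counts[0])       // ...does not affect the receiver: 1
	fmt.Println(got.Shared == &blob) // true: immutable part shared by reference
}
```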
I am puzzled that other languages do not even try to explore this. It would be a nice fit for OCaml or Node.js. And even with Go, I wish there were an option to run several Go processes with a shared heap, with an explicit API to read/write things there and channels working across processes.
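Within a single Go process, something like that explicit read/write surface can be approximated today; the cross-process version the comment asks for would need runtime support. A minimal sketch, where `SharedStore` is a hypothetical name for an explicit shared heap backed by `sync.Map`, with a channel doing the signalling:

```go
package main

import (
	"fmt"
	"sync"
)

// SharedStore is a hypothetical explicit shared heap: all cross-worker
// data goes through Put/Get instead of ambient shared variables.
type SharedStore struct {
	m sync.Map
}

func (s *SharedStore) Put(key string, v any)      { s.m.Store(key, v) }
func (s *SharedStore) Get(key string) (any, bool) { return s.m.Load(key) }

func main() {
	store := &SharedStore{}
	ready := make(chan string) // the channel announces which key is ready

	go func() {
		store.Put("result", 42) // write into the explicit shared heap
		ready <- "result"
	}()

	key := <-ready // channel receive happens-after the Put
	v, ok := store.Get(key)
	fmt.Println(v, ok) // 42 true
}
```

The design point is that the store holds the data while channels carry only small notifications, which is roughly the hybrid split the thread is describing.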
Indeed, I'd welcome the option as well, provided it doesn't make the runtime noticeably slower for non-concurrent programs; but I'm confident the authors will make all this a no-op in that case.
Performance backwards compatibility has been a key goal in the Multicore OCaml project. Currently, the overhead for running legacy sequential OCaml programs on the multicore GC is a few percentage points on average. The overhead is low enough that we may not need to provide a "sequential-only" compilation flag.