
I keep seeing new message queue solutions pop up over the years, and my impression is that this is one area where Silicon Valley really is way behind the trading industry.

Reliable pub/sub that supports message rates over 100k/sec (even into the millions) has been available for a while now, and with a great deal of efficiency (e.g., the Aeron project). The incredible amount of effort spent supporting complex partitioning, extreme fault tolerance (instead of more clever recovery logic), etc. adds a lot of overhead, to the point of calling latencies on the order of 5 ms "low latency" instead of the microseconds or even nanoseconds expected in trading.

Worse, many startups adopt these technologies when their message rates are minuscule. For context, even two beefy machines running an older messaging solution like ZeroMQ can sustain throughput far in excess of what most companies produce.
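To put "minuscule" in perspective, here's a back-of-envelope sketch. The message size and link speed are illustrative assumptions, not benchmarks of any particular broker:

```python
# Back-of-envelope: bandwidth needed for a "high" message rate.
# Assumptions (illustrative): 1 KB messages at 100,000 msgs/sec.
msg_size_bytes = 1_000
msgs_per_sec = 100_000

throughput_mb_s = msg_size_bytes * msgs_per_sec / 1_000_000
print(throughput_mb_s)  # 100.0 MB/s

# A single 10 GbE NIC moves 10,000 Mbit/s ~= 1,250 MB/s, so this
# load uses well under a tenth of one commodity link.
nic_mb_s = 10_000 / 8  # Mbit/s -> MB/s
print(throughput_mb_s / nic_mb_s)  # 0.08
```

Even before any batching or compression, the wire is nowhere near the bottleneck at rates most companies will ever see.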

This is not to discredit the authors of Pulsar or Kafka at all... but it's a concerning trend that easy-to-use, horizontally scalable message queues are being deployed everywhere, similar to how everyone was running Hadoop a few years back even when the data fit in memory.



Using wild heavy-duty (or faux-heavy-duty but just as hard to manage) solutions where 2-3 colo'd servers running boring services + Cloudflare would do is so well-accepted as normal practice in Startup Land that it's not worth fighting. Just take the free résumé sugar and don't rock the boat. You won't get anywhere anyway, and on the off chance you do win all you're doing is ensuring that you, personally, are to blame for any problems that come up. Meanwhile the costs and problems of Kafka and Kubernetes and all that jazz are no-one's fault, because that's "industry standard".

[EDIT] In fact it's pretty much the norm outside startup land, too, as soon as you're involved with any kind of bigco "innovation" or greenfield-development division.


Aeron [1] is something I have been looking at as well.

I like that it can run without an external broker (using the embedded media driver option), yet you can add one for scalability/redundancy.

Our use cases do not require querying on top of streams; instead, all data would go into TimescaleDB [2].

[1] https://github.com/real-logic/aeron/wiki/Java-Programming-Gu...

[2] https://github.com/timescale/timescaledb


Worth noting that Kafka is not a queue, but an append-only log.
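The distinction matters for consumption semantics. A toy sketch of the difference (pure Python, not Kafka's actual implementation): in a queue, a delivered message is gone for everyone; in a log, records are retained and each consumer advances its own offset, so independent consumers can replay the same history.

```python
from collections import deque

# Queue semantics: popping removes the message; only one consumer sees it.
queue = deque(["a", "b", "c"])
first = queue.popleft()  # "a" is now gone for all other consumers

# Log semantics (toy sketch): records are appended and never removed;
# each consumer just tracks its own read position into the same log.
log = []
offsets = {"consumer1": 0, "consumer2": 0}

def append(record):
    log.append(record)

def poll(consumer):
    """Return the next unread record for this consumer, or None."""
    pos = offsets[consumer]
    if pos >= len(log):
        return None
    offsets[consumer] += 1
    return log[pos]

for r in ["a", "b", "c"]:
    append(r)

# Both consumers independently read the full history.
c1 = [poll("consumer1") for _ in range(3)]  # ['a', 'b', 'c']
c2 = [poll("consumer2") for _ in range(3)]  # ['a', 'b', 'c']
```

This is also why Kafka consumers are grouped around committed offsets rather than message acknowledgements that delete data.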


ZeroMQ is not a message queue, it's a networking library.


I'll bet he is aware. The problem with the trading industry is that it has hundreds of users with bespoke solutions catering to extreme performance criteria, rather than hundreds of thousands of users like NATS. They will keep reinventing the wheel every time a nanosecond can be saved by the latest hardware stack.



