Hacker News

Mongo absolutely nailed creating a database that is easy to get started with, one that even makes traditionally 'hard' things such as replication approachable. It is still very attractive for me to pick up for small projects, even after dealing with its (many) pain points in both development and operational settings.

Given this, it is tragic to see how dismissive they have been with regard to the consistency issues that have plagued the database since the early days. Whether it was the bad driver defaults that didn't confirm writes, or the easily corrupted data of the 1.6 days, or now the refusal to look seriously at the Jepsen results, the MongoDB organization has never taken these issues head on. It would be refreshing to see more transparency and admission of faults, rather than wiggling around them until a fix is eventually pushed out, buried in patch notes.

I often feel like a MongoDB apologist when I admit that I don't mind using Mongo for small (and unimportant) projects. While the MongoDB hate can be a bit extreme at times, the company's treatment of these sorts of issues may justify some of it.



I'm with you on this. I have a product that is based on MongoDB, and though customers haven't complained about these issues (it's easy to stick your head in the sand when nothing has gone wrong), the response by MongoDB is troublesome.

For instance, compare MongoDB's response to Elastic's response:

Initial response to "Call me maybe: Elasticsearch":

https://www.elastic.co/blog/resiliency-elasticsearch/

Their ongoing status on resiliency:

http://www.elastic.co/guide/en/elasticsearch/resiliency/curr...

That is how you respond to a negative Jepsen test. It's particularly illuminating since Elasticsearch doesn't actually bill itself as primary storage, yet takes resiliency seriously, whereas MongoDB does consider itself primary storage and does not.


The reason Mongo was able to "solve" those hard replication problems is that they simply ignored the hard parts. That's why Mongo is not reliable and most likely never will be. The issues they have stem from fundamental design choices.


After MongoDB published write-speed benchmarks based entirely on unacknowledged writes (i.e., how fast can you write to a socket?), it's been a long downhill ride, sustained by an immense amount of inexplicably ignorant support.
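The "how fast can you write to a socket?" jab can be made concrete with a toy sketch. This is plain stdlib Python, not the MongoDB wire protocol or any driver API; it just shows why a fire-and-forget default makes write benchmarks meaningless: the unacknowledged path measures local socket buffering, not whether the server ever applied the write.

```python
import socket
import threading

def serve(conn: socket.socket) -> None:
    """Minimal stand-in 'database': ack every write it manages to read."""
    while True:
        data = conn.recv(1024)
        if not data:
            break
        conn.sendall(b"ok")

client, server = socket.socketpair()
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Unacknowledged (w=0 style): "succeeds" as soon as the bytes hit the
# local socket buffer, even if the server never processes the write.
client.sendall(b"doc1")

# Acknowledged (w=1 style): block until the server confirms it read
# something -- this is the round trip the old benchmarks skipped.
client.sendall(b"doc2")
ack = client.recv(2)
```

A benchmark loop over the first call times almost nothing but memcpys into a kernel buffer, which is why those numbers looked so good. (MongoDB's drivers did eventually switch the default write concern to acknowledged.)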


If they are making money, why would they care? There's lots of shitty software raking in huge license fees based on misplaced reputation.


Can you post a link to these unacknowledged write benchmarks? I can't find them.


Need to find an archived version but it caused a lot of arguments in 2009/2010: e.g. http://rethinkdb.com/blog/the-benchmark-youre-reading-is-pro... references similar benchmarks

Can also link to the HN discussion from back then: https://news.ycombinator.com/item?id=1496035

> Full disclosure: I work for 10gen.

> We did this to make MongoDB look good in stupid benchmarks.


From my own memory, I don't recall 10gen ever posting misleading benchmarks. There were a bunch of other people who did so, and 10gen did little or nothing to try to shut that publicity down.

That said, I think that the 'how fast can you write to a socket?' default setting was probably intentionally put in place to make benchmarks look good.


Hey, it worked for MySQL...





