`grep` itself is admirably fast on single files, and if you're just searching a small number of files, the bottleneck will be IO, so `grep` will be about as fast as anything. But if you want to search a selected subset of millions of files recursively, it suffers. `grep` assumes that if you want to collect some subset of files to search, you'll do that with some other utility and pass the resulting list to `grep`, and that handoff creates a bottleneck.
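The classic version of that handoff is `find` piping into `xargs grep`. A minimal sketch (the `demo/` layout here is made up purely for illustration):

```shell
# Hypothetical directory layout, just to make the example runnable.
mkdir -p demo/src
printf 'needle here\n' > demo/src/a.py
printf 'nothing\n'     > demo/src/b.py

# find builds the file list; xargs batches it into grep invocations.
# The pipe, the batching, and re-spawning grep are where the overhead
# shows up once you scale to millions of files.
find demo/src -name '*.py' -print0 | xargs -0 grep -l 'needle'
```

Tools like `rg` fold the file selection and the search into one process, which is what avoids this bottleneck.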
More recent search programs like `ack` and `ag` solve the multi-file bottleneck, but tend to perform worse than `grep` on single large files.
I'd encourage you to check out the benchmarks in my blog, especially the subtitle benchmarks, because this isn't actually true in the case of ripgrep. :-)
Ahh, fantastic. I just tried out the new build. It's a real joy to be able to do things like `rg pattern -g '!{docs,tests}/' -tpy` and have it just work. Honestly, all performance aside, your thoughtful choice of command line options is the big selling point for me. I struggle to find the command line flags I need to make `ag` do just what I want, but `rg` seems to fit my expectations without much training.
Thanks for your encouragement. It's really hard to stand my ground because so many people have so many different use cases. The design space is large and it's very challenging to get it even a little right.
In any case, your example is quite fortuitous, since the `{docs,tests}` glob syntax was also added in 0.2.0. :-)