What do you mean by "unix -r custom"? It's not like "-r" is some kind of standard, and many commands or programs have other ideas.
For example, GNU cp accepts -r or -R for recursive copy, but OpenBSD cp only accepts -R. ls only accepts -R (-r means reverse sort order). scp only accepts -r (-R is for remote copies). rm allows both -r and -R.
But then, commands like mv or find don't support anything like -r or -R; they implicitly operate on the whole directory tree. Bash has ** (with shopt -s globstar) for globbing into subdirectories. Make has no concept of recursive subdirectory traversal at all.
You're absolutely right it's not a standard, but neither is ./... and one is in widespread use (a custom) and one is not.
Other software that supports -r: rsync, fdupes, zfs destroy, etc.
I'm not saying they're breaking spec, I'm saying they're being weird and divergent for no particular benefit (and at a quite obvious cost in terms of Google-specific learning curve, as Go is used widely outside of Google).
Blame Blaze (Bazel) and its file/directory/target globbing rules. Though, truthfully, I find ... far more meaningful as an extension of the unix "." for "this directory" and ".." for "parent directory".
As another comment noted, Perforce used this syntax too, which is probably where Blaze got it from. I can't speak to earlier examples, but it's "obvious" enough that I wouldn't be surprised if someone found one.
I often think that a substantial portion of my job is software archaeology...
I wish I could replace go generate with //go:embed for the case where I want to embed static assets after a pre-embed pass (e.g., for something like minification). Sadly there's no room in the current API for doing this.
> - no execution of untrusted software in the build environment.
Erm, I must say I disagree with the decision if it's true. Do you have a link supporting it? I don't recall seeing anything like that when the feature was released.
Also I wasn't referring to "external tools" but a way to pipeline the embed.FS tree through a filepath.WalkFn function that the developer adds.
> //go:embed assets/* minifyFn
> var minifiedAssets embed.FS
> var minifyFn filepath.WalkFn
Making paternalistic decisions for your users from the presumption that they're idiots is never a good idea in my opinion.
And I'm sure people can already use the aforementioned go generate to execute untrusted software.
> Erm, I must say I disagree with the decision if it's true. Do you have a link supporting it? I don't recall seeing anything like that when the feature was released.
From the `go generate` design doc:
> Second, go build should never cause generation to happen automatically by the client of the package. Generators should run only when explicitly requested.
I think both xyzzy_plugh and I were referring to go:embed. But I take your meaning that using go generate as a build step is not meant to be idiomatic. I guess I've been using it wrong all this time. :)
> It is important to note that as a matter of both design and policy, the go command never runs user-specified code during a build. This improves the reproducibility, scalability, and security of builds. This is also the reason that go generate is a separate manual step rather than an automatic one. Any new go command support for embedded static assets is constrained by that design and policy choice.
Yes, but I think that's mostly wishful thinking. Not all applications get built using strictly go tooling. I rely on make for mine because it's better suited for the platform I'm targeting.
Right, and because your dependencies don't shell out to some random tools during a go build it makes your build system less likely to break and makes your job easier. You have guarantees that unless you run a `go generate` no third-party code will run, and that every build step is fully hermetic. And since you're already using Make for the build, it's probably better equipped to perform the preprocessing than go embed or even go generate so you don't really need Go's build system to take care of that.
`go build` & co are designed to integrate easily into complex or overarching build systems (which are inevitable for any large project, as you mentioned), and the lack of any expectation of shelling out arbitrarily during a 'normal' build of some leaf library is part of what makes that integration easy. Contrast that with how difficult it is to integrate with Rust/Cargo's build system, where every second package has a `build.rs` that expects some undefined ambient environment to be present and makes any attempt at hermetic builds extremely painful.
We already have the unix -r custom. Now we have another one, which makes no sense, that you just have to learn.