Hacker News

I made this point poorly, because I agree with you that Go is quite easy to read and very predictable.

With more words: I have written a parser and type checker for an ML-style language, with parametric types and several other neat tricks in it, and I've now written a parser and type checker for a good subset of Go. The latter has been far more work, and I am not entirely sure how to account for the difference. Go has lots of syntactic and type conveniences that are easy to use and read, but quite difficult to implement.

As there are few implementers and many users, I think the original designers of Go picked well when they put the work on us implementers.



Can you elaborate on what syntactic conveniences are difficult to implement, and why? Language design is one of my hobbies, so I'd really like to know.


One good example is untyped constants. They form a large extension to the type system only present at compile time. It is a good example because there is an excellent blog post describing them in detail: https://blog.golang.org/constants

In particular note the difference between

    const hello = "Hello, 世界"
and

    const typedHello string = "Hello, 世界"
One of these (the untyped hello) can be assigned to named string types; the other cannot.

As a user of Go I find untyped constants to be extremely useful. They almost always do what I want without surprise. Implementing them, however, is not trivial.

A trickier example is embedding. It adds a lot of complexity to the type system. As a user, it can be a bit surprising, but I admit it is very useful.


At a guess, I would describe it as a trade-off between syntax and semantics.

Languages that have simpler syntax tend to have far more complex semantics (to infer what's missing, etc.).

E.g. Python's semantics are horribly complex: "hello".upper() is something like str.__dict__['upper']("hello") -- which is all resolved at run time. Whereas, say, a C++ version amounts to quite a simple run-time function call on some bytes.


What stopped you from using the parser in the standard library?


Because Neugram is a different language. It differs only in small ways, but they add up when you are in the parser, and especially in the type checker.

First is the fact that the top level is statements, not declarations. That would mean heavy modification to the go/parser package to make the inner-function statement parser the top level.

Second is the fact that I need to parse incrementally, to implement a REPL. That would not require serious modification to go/parser, but it would require serious modification to go/types.

Third is that I wanted to experiment with new syntax. It was quicker to get to that experimentation by working from scratch rather than adapting go/parser and go/types.

Fourth is there are some fundamental things I would like to do differently, particularly around how comments are handled. I haven't got there yet.

In retrospect, if the new Go parser inside cmd/compile existed when I started this, I probably would have started from there. (It does many of the things I wanted to do to go/parser.) I suspect I still would have ended up with my own type checker though, if for no other reason than incremental type checking.


I've toyed with making some kind of interpreter for Go. Something that integrates well with it, giving access to all the libraries, channels, goroutines and such. Discussing this with an acquaintance, we decided the best scripting language for Go was... Go. It is interesting to see that you've headed in the same general direction, though you've thought things through much further than we did.

I'll have to try it out.




