Continued Fractions in Haskell (cdsmithus.medium.com)
96 points by g0xA52A2A on April 20, 2021 | hide | past | favorite | 24 comments


Continued fractions are quite awesome. Gosper's monograph, which is also mentioned in the article, is very readable: https://perl.plover.com/classes/cftalk/INFO/gosper.txt

One thing to watch out for is that when you compute floor(sqrt(2) * sqrt(2)), the computation will never finish. The reason is that continued fractions only compute approximations, so the approximation will only tell you that the result is very close to 2, but it might still be slightly above or below 2. That is, unless you implement some logic to deal with repeating coefficients, but then you will have the same problem with 1/pi times pi, for example.
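Here is a quick sketch of why (my own code, not from the article): the convergents of sqrt 2 alternate below and above the true value, so the floors of their squares alternate 1, 2, 1, 2, ... forever, and no finite prefix of the computation can settle the floor.

```haskell
import Data.Ratio ((%))

-- Convergents of a simple continued fraction [a0; a1, a2, ...],
-- via the standard recurrence h_n = a_n * h_(n-1) + h_(n-2).
convergents :: [Integer] -> [Rational]
convergents as = zipWith (%) (drop 2 hs) (drop 2 ks)
  where
    hs = 0 : 1 : zipWith3 step as hs (tail hs)
    ks = 1 : 0 : zipWith3 step as ks (tail ks)
    step a prev cur = a * cur + prev

-- sqrt 2 = [1; 2, 2, 2, ...]; squaring each convergent and taking
-- the floor never stabilises, since the squares straddle 2.
sqrt2Floors :: [Integer]
sqrt2Floors = map (floor . (^ 2)) (convergents (1 : repeat 2))
```

take 8 sqrt2Floors is [1,2,1,2,1,2,1,2], and it keeps flipping like that forever.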


if you are amazed by continued fractions, your head will explode with continued logarithms


True, but why would you need a floor? It's a rather arbitrary function.


It's actually worse than that. Even knowing the very first term of the continued fraction result would already tell you that the number is at least 2. So the computation will hang without ever producing even a single term.

This is a fundamental problem shared by any form of exact real arithmetic with unique representations; you can always then be forced to make a choice, before emitting any information, that depends on infinite amounts of information about the operands. To solve this problem, you would need to choose a representation with redundancy. (With redundancy, equality comparison becomes harder, and it can still diverge, but you can still get approximations of the answer.)


The sequence is certainly not useless?? This is how you construct real numbers. It is what is called a completion. A real number is a sequence of rationals: you take the space of all sequences of rationals and quotient out by the ideal of sequences that converge to zero. I think about the limit object by thinking to some finite depth (truncation). The limit object exists because there are arrows between the different finite steps. A continued fraction is just a nice representation; it's the same thing.


I think their point is that if you have a convergent sequence of fractions, you can drop some finite number of them from the start and still converge to the same real number. In some sense, the rest of the sequence already contains all of the information, so early terms are redundant.

With the decimal and continued fraction representations, that's not true. The later parts of the representations only describe refinements on the earlier parts; they don't themselves contain the earlier parts.


You misquoted the slightly misphrased claim:

> In a sense, every single one of the rational numbers in that sequence is useless.

"Every single one" is not the same as "all".

But it's not quite right, as you note. Better to say "every single one is mostly redundant information"


As the author, I have just updated the article to try to make this clearer. Thanks.


The continued fraction form is particularly nice. It seems to be just a nice way to write a completion. Consider the approximation for x = sqrt(n):

Pick a, b such that a^2 < n and b = n - a^2. Then there is a continued fraction of the form:

x = a + b/(2a + b/(2a + \dots)).

(The repeating tail satisfies x = a + b/(a + x), i.e. x^2 = a^2 + b = n.)

The other way to find the sqrt is with Newton's method. The rationals given by Newton appear in the list generated by the continued fraction (the convergents).
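A little sketch of that correspondence (my own, with a = 1, b = 1 for sqrt 2): truncating the fraction at successive depths gives the convergents, and Newton's iterates all show up among them.

```haskell
-- Truncations of x = a + b/(2a + b/(2a + ...)),
-- via the recurrence x_(k+1) = a + b/(a + x_k), starting from x_0 = a.
cfApprox :: Rational -> Rational -> [Rational]
cfApprox a b = iterate (\x -> a + b / (a + x)) a

-- Newton's method for sqrt n: x_(k+1) = (x_k + n/x_k) / 2.
newton :: Rational -> Rational -> [Rational]
newton n = iterate (\x -> (x + n / x) / 2)
```

For sqrt 2, cfApprox 1 1 starts 1, 3/2, 7/5, 17/12, 41/29, ... while newton 2 1 gives 1, 3/2, 17/12, 577/408, ...; every Newton iterate appears among the truncations (at depths 0, 1, 3, 7, ...).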

The first step in the completion is going to involve a tangent space, which I think looks a bit like the first step in Newton. I think the idea with storing information in a "continued fraction" form could then be generalised a bit. It probably applies to more things than just irrationals.

The fact there are multiple continued fractions for real numbers is okay because it is a quotient, you have picked a representative.


The actual title did not fit within the character limits and I couldn't chop it down without really changing it, so I've submitted this with the title of the author's earlier version.


I never knew about continued fractions.

The very simple representations of sqrt 2, sqrt 3 and sqrt 5 are impressive.
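If you want to see those patterns for yourself, here's a naive sketch (mine) that pulls terms out of a Double; only the first few terms are trustworthy at floating-point precision:

```haskell
-- Extract continued fraction terms: x = a0 + 1/(a1 + 1/(a2 + ...)).
-- Naive floating-point version; rounding error corrupts later terms.
cfFloat :: Double -> [Integer]
cfFloat x = a : cfFloat (1 / (x - fromIntegral a))
  where a = floor x
```

take 6 (cfFloat (sqrt 2)) is [1,2,2,2,2,2], sqrt 3 gives [1,1,2,1,2,1], and sqrt 5 gives [2,4,4,4,4,4].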


There seems to be a continued fraction for pi[1] as well:

  pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + 7^2/(6 + 9^2/(6 + ...)))))
[1]: https://www.maa.org/sites/default/files/pdf/pubs/amm_supplem...


Mathematicians use words differently. What you're talking about is what the article calls a generalized continued fraction. Generalized continued fractions are not unique, and there are actually many of them for pi, which have different rates of convergence. The article here contains a (different) nice generalized continued fraction for pi, and an algorithm for converting from generalized continued fractions to standard ones, thereby computing the exact standard continued fraction for pi as a result.
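The article's algorithm streams standard terms out of the generalized fraction directly. As a much cruder sketch (mine, not the article's method): evaluate a deep truncation of the generalized fraction exactly over the rationals, then read standard terms off with the Euclidean algorithm. Only the first few extracted terms can be trusted from any finite truncation.

```haskell
-- Truncation of pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...))) at a given
-- depth, computed exactly over the rationals.
piApprox :: Int -> Rational
piApprox depth = 3 + go 1 depth
  where
    go _ 0 = 0
    go k n = (k * k) / (6 + go (k + 2) (n - 1))

-- Standard continued fraction terms of a rational number
-- (this is just the Euclidean algorithm).
cfTerms :: Rational -> [Integer]
cfTerms x
  | f == 0    = [a]
  | otherwise = a : cfTerms (recip f)
  where
    a = floor x
    f = x - fromInteger a
```

take 4 (cfTerms (piApprox 300)) gives [3,7,15,1], matching pi = [3; 7, 15, 1, 292, ...].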

(These were later edits to the article, so if you read it early on, you may not have read those parts.)


There are multiple continued fraction representations of pi, as the other commenter said. One very elegant way of deriving this one, from a simple series formula, is given in the link below. (Euler came up with it.)

[1]: https://golem.ph.utexas.edu/category/2020/08/eulers_continue...


So how fast does it go?


Actually this can be mitigated by compile-time conversion to a selected floating point or rational approximation (no idea if such things are supported).


Haskell is the kind of language that makes hard things easy when you are smart, and easy things hard for everyone else.


Which easy thing do you think is hard to do in Haskell?


Printing is the classic answer.

Mutable arrays, for the clever people who already know how to print.

Also, looking at the article, the cryptic "fix" operator for making a cyclic structure. Elegant, beautiful, efficient, but mysterious.
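It's mysterious mostly until you inline the definition: fix f = let x = f x in x, so a cyclic structure is just a fixed point. A tiny sketch:

```haskell
import Data.Function (fix)

-- The cyclic list 1, 0, 1, 0, ... as the fixed point of (\xs -> 1 : 0 : xs).
-- fix f = let x = f x in x, so this ties the knot back to itself.
oneZeros :: [Integer]
oneZeros = fix (\xs -> 1 : 0 : xs)
```

take 5 oneZeros is [1,0,1,0,1].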


Perhaps I’m missing something, but printing stuff in Haskell seems pretty simple.

    print "hello world"
As for mutable arrays, is there something wrong with Data.Array.MArray from the array[0] library?

[0]: https://hackage.haskell.org/package/array-0.5.4.0
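For what it's worth, here's a minimal MArray sketch in ST (my own, just to show the shape of the API):

```haskell
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, elems)

-- Allocate a 5-element array of zeros, mutate one slot in place,
-- then freeze it into an immutable unboxed array.
counts :: UArray Int Int
counts = runSTUArray $ do
  arr <- newArray (0, 4) 0
  writeArray arr 2 42
  old <- readArray arr 2
  writeArray arr 2 (old + 1)
  return arr
```

elems counts is [0,0,43,0,0]; the mutation is local to the ST computation, so counts itself is pure.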


Here's a cyclic structure. Seems very easy to me. No cryptic operator required!

    let one_zeros = 1 : zero_ones
        zero_ones = 0 : one_zeros


Monadic anamorphisms


Are you suggesting monadic anamorphisms are easy in other languages but hard in Haskell? Could you give an example for comparison?


I was being facetious, you can certainly do stuff like anamorphisms easily in other languages, but the hard work of pulling out and expressing the abstraction is definitely a job for Haskell.

Anyway here's one from Data.Functor.Foldable.Exotic just for fun:

    anaM :: (Corecursive t, Traversable (Base t), Monad m) => (a -> m (Base t a)) -> (a -> m t)
    anaM psi = a where a = (fmap embed . traverse a) <=< psi



