description: Continued fraction C library.
homepage URL: http://cs.stanford.edu/~blynn/frac/
owner: benlynn@gmail.com
last change: Sat, 23 May 2015 04:08:11 +0000 (22 May, 21:08 -0700)
README

The binary representation of 1/3 is infinite. Floating-point numbers, our standard tool for working with reals, cannot even handle numbers a young child could understand.

In contrast, every rational can be uniquely and exactly represented by a finite continued fraction (provided we forbid those with last term 1). Continued fractions are immune from oddities such as 1/3 = 0.333… and 1 = 0.999…, living in a realm free from the tyranny of binary, or any other base.
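For instance, here is a throwaway fragment, independent of the library, that prints these terms using nothing but the Euclidean algorithm: each term is the integer part, and the next step works on the reciprocal of what remains. The function name is invented for illustration.

#include <stdio.h>

/* Illustrative only, not part of frac: print the continued fraction
 * terms of p/q, for p >= 0 and q > 0. */
static void print_cf(int p, int q) {
  while (q) {
    printf("%d ", p / q);  /* next term: the integer part of p/q */
    int r = p % q;         /* what remains is r/q; continue on its reciprocal q/r */
    p = q;
    q = r;
  }
  printf("\n");
}

int main(void) {
  print_cf(1, 3);      /* prints "0 3":    1/3 = 0 + 1/3              */
  print_cf(355, 113);  /* prints "3 7 16": 355/113 = 3 + 1/(7 + 1/16) */
  return 0;
}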

Every irrational number can also be uniquely expressed as an infinite continued fraction. Many frequently encountered algebraic and transcendental numbers have easily computable continued fraction expansions, which can be truncated to yield approximations with built-in error bounds; floating-point approximations, by contrast, must be accompanied at all times by separately tracked error bounds.
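To see what built-in error bounds buy us, here is another fragment outside the library. It runs the usual convergent recurrence p_k = a_k p_(k-1) + p_(k-2), q_k = a_k q_(k-1) + q_(k-2) on sqrt(2) = [1; 2, 2, 2, ...]. The classical bound |sqrt(2) - p_k/q_k| < 1/(q_k q_(k+1)) is available the moment the terms are, with no separate analysis, and more precision simply means letting the loop run longer.

#include <stdio.h>

/* Convergents of sqrt(2) = [1; 2, 2, 2, ...].  Every term after the
 * first is 2, so the recurrence below uses a_k = 2 throughout. */
int main(void) {
  long p0 = 1, q0 = 1;  /* convergent after a_0 = 1: 1/1 */
  long p1 = 3, q1 = 2;  /* convergent after a_1 = 2: 3/2 */
  for (int k = 2; k < 10; k++) {
    long p2 = 2 * p1 + p0, q2 = 2 * q1 + q0;
    /* |sqrt(2) - p2/q2| < 1/(q2 * q_next), where q_next = 2*q2 + q1 */
    printf("%ld/%ld  (error < 1/%ld)\n", p2, q2, q2 * (2 * q2 + q1));
    p0 = p1; q0 = q1; p1 = p2; q1 = q2;
  }
  return 0;
}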

Furthermore, suppose we use floating-point to compute some answer to 100 decimal places. What if we now want 200 decimal places? Even if we possess the first 100 digits and carefully preserve the program state, we must start from scratch and redo all our arithmetic with 200 digit precision. If we had used continued fractions, we could arbitrarily increase the precision of our answer without redoing any work, and without any tedious error analysis.

But we must pay a price for these advantages. We have a representation from which we can instantly infer error bounds. A representation for which we may arbitrarily increase precision without repeating any calculations. A representation which in some sense is the closest we can hope to get to an exact number using rationals. Is it any wonder therefore that binary operations on continued fractions, whose output must also uphold these high standards, are clumsy and convoluted?
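To make "clumsy and convoluted" concrete: the standard technique for arithmetic on two continued fractions x and y, due to Gosper, maintains eight integers a through h representing the bihomographic form

  z(x, y) = (a x y + b x + c y + d) / (e x y + f x + g y + h)

Terms are read from x or y while the integer part of z is still ambiguous, and a term of z is emitted only once that integer part is the same for every possible tail of x and y. Setting b = c = h = 1 and the rest to zero computes x + y; setting a = h = 1 and the rest to zero computes x * y.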

These drawbacks are almost certainly why these fascinating creatures remain obscure. Yet surely there are some problems that scream for continued fractions. How about a screen saver that computes more and more digits of pi, each time picking up where it left off? Or imagine a real-time application where all continued fractions simply output the last computed convergent on a timer interrupt. Under heavy system load, approximations are coarser but the show goes on.

Design

Threads are the future, if not already the present. Multicore systems are already commonplace, and as time passes, the number of cores per system will steadily march upward. Happily, this suits continued fractions.

Inspired by Doug McIlroy's “Squinting at Power Series”, we spawn at least one thread per continued fraction, and also per operation on continued fractions. The threads communicate via crude demand channels, each providing a few output terms upon each request.

This scheme allows us to quickly write code for generating continued fraction expansions or operating on them. For example, consider the code for generating e = 2.71828…

// e = [2; 1, 2, 1, 1, 4, 1, ...]
static void *e_expansion(cf_t cf) {
  int even = 2;
  cf_put_int(cf, even);

  // Each pass appends the next triple 1, 2k, 1 of the expansion (k = 1, 2, ...).
  while(cf_wait(cf)) {
    cf_put_int(cf, 1);
    cf_put_int(cf, even);
    even += 2;
    cf_put_int(cf, 1);
  }

  return NULL;
}

The function cf_put_int places a term on the output channel, and cf_wait is our implementation of demand channels, a sort of cooperative multitasking. Without it, not only would we have to destroy and clean up after the thread ourselves, but more seriously, the thread might consume vast amounts of resources computing unwanted terms. The cf_wait function instructs the thread to stay idle until more terms are demanded. Our threads call it often, and if it returns zero, they clean themselves up and exit.
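For readers who have not met demand channels, here is a bare-bones sketch of the idea in ordinary pthreads. Every name in it (chan_t, chan_wait, chan_post, producer) is invented for illustration and is not the library's API; chan_wait merely plays the role cf_wait plays above, and delivery of the actual terms back to the consumer is omitted.

#include <pthread.h>
#include <stdio.h>

/* A demand channel stripped to its essentials: the consumer posts demand,
 * the producer blocks until demand arrives or the consumer hangs up. */
typedef struct {
  pthread_mutex_t mu;
  pthread_cond_t cond;
  int demand;  /* requests posted by the consumer but not yet served */
  int quit;    /* consumer is done; producer should exit */
} chan_t;

static void chan_init(chan_t *c) {
  pthread_mutex_init(&c->mu, NULL);
  pthread_cond_init(&c->cond, NULL);
  c->demand = 0;
  c->quit = 0;
}

/* Producer side, the role cf_wait plays above: block until asked for more,
 * return 1 to keep producing or 0 to shut down.  Outstanding demand is
 * served before a hang-up is honored. */
static int chan_wait(chan_t *c) {
  pthread_mutex_lock(&c->mu);
  while (!c->demand && !c->quit) pthread_cond_wait(&c->cond, &c->mu);
  int go = c->demand > 0;
  if (go) c->demand--;
  pthread_mutex_unlock(&c->mu);
  return go;
}

/* Consumer side: ask for one more round of output, or hang up. */
static void chan_post(chan_t *c, int quit) {
  pthread_mutex_lock(&c->mu);
  if (quit) c->quit = 1; else c->demand++;
  pthread_cond_signal(&c->cond);
  pthread_mutex_unlock(&c->mu);
}

static void *producer(void *arg) {
  chan_t *c = arg;
  int n = 0;
  while (chan_wait(c)) printf("term %d\n", n++);  /* stand-in for cf_put_int */
  return NULL;  /* the thread cleans itself up once demand dries up */
}

int main(void) {
  chan_t c;
  pthread_t t;
  chan_init(&c);
  pthread_create(&t, NULL, producer, &c);
  for (int i = 0; i < 3; i++) chan_post(&c, 0);  /* demand three rounds */
  chan_post(&c, 1);                              /* then hang up */
  pthread_join(t, NULL);
  return 0;
}

Built with cc -pthread, this prints three terms and then exits cleanly, which is exactly the behavior we want from an expansion thread under demand-driven evaluation.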

Disclaimer

Lest my salesmanship backfire, let me temper my anti-floating-point rhetoric. Increasing the precision of floating-point operations without redoing work is in fact possible for many common cases. For example, Newton's method is self-correcting, meaning we may use low precision at first, and increase it for later iterations. Even pi enjoys such methods: the Bailey-Borwein-Plouffe formula reveals any hexadecimal digit of pi without computing any of the previous digits, though it requires about the same amount of work.
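As a small illustration of self-correction, here is plain Newton's iteration for sqrt(2); the count of correct digits roughly doubles at every step, so a sloppy starting guess costs essentially nothing.

#include <stdio.h>

/* Newton's method for sqrt(2): x <- (x + 2/x) / 2.  Convergence is
 * quadratic, so each step roughly doubles the number of correct digits. */
int main(void) {
  double x = 1.0;  /* deliberately crude starting point */
  for (int i = 1; i <= 6; i++) {
    x = (x + 2.0 / x) / 2.0;
    printf("step %d: %.15f\n", i, x);
  }
  return 0;  /* sqrt(2) = 1.414213562373095... */
}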

Moreover, several floating-point algorithms converge quadratically or faster, so good implementations will asymptotically outperform continued fractions, which converge only linearly.

Nonetheless, for lower accuracies, the smaller overhead may give continued fractions the edge in certain problems such as finding the square root of 2. Additionally, precision decisions are automatic: for example, one simply needs enough bits to hold the last computed convergents. Built-in error analysis simplifies and hence accelerates development.

shortlog
2015-05-23 Ben Lynn  Moved old website to README.master
2009-11-07 Ben Lynn  I haven't touched this in a while.
2008-09-19 Ben Lynn  Moved README to website.
2008-09-18 Ben Lynn  Cosmetic changes.
2008-09-18 Ben Lynn  Compute HAKMEM constant.
2008-09-17 Ben Lynn  Converted tests to use tee.
2008-09-17 Ben Lynn  Implemented tee.
2008-09-17 Ben Lynn  Replaced Newton's method with binary search.
2008-09-17 Ben Lynn  Newton's method for quadratics involving continued...
2008-09-16 Ben Lynn  Improved Makefile, tests.
2008-09-16 Ben Lynn  Quadratic algorithm supports sign, as does decimal...
2008-09-16 Ben Lynn  Added sign of a continued fraction.
2008-09-15 Ben Lynn  Plugged more memory leaks.
2008-09-15 Ben Lynn  Patched memory leak.
2008-09-15 Ben Lynn  Fixed bug in bihom.c.
2008-09-15 Ben Lynn  Improved tests. Appears to be a bug in bihom.c.
...
heads
master (updated 8 years ago)