Solving Fibonacci numbers opens Chapter 3, on Dynamic Programming (more on that below), in Erickson's Algorithms book. I had started with Peasant Multiplication, then merge sort, and, most fun, the n Queens Problem. Calculating Fibonacci numbers, though, offers a couple of obvious ways to code it, and each raised questions for me with Rust.

I became interested in comparing a couple of these approaches, both in thinking about the Rust code and in looking at the timing. Then my eyes popped out when I saw the difference in the release build times!

## Recursive Backtracing

First is probably the closest to the pseudocode for the classic Fibonacci solution. Define what 0 and 1 return, and solve for the given number by recursing all the way back to 2 (which adds the fixed answers for 0 and 1):

```rust
fn backtrace_fib(fib_num: u128) -> u128 {
    if fib_num == 0 || fib_num == 1 {
        return fib_num;
    }
    backtrace_fib(fib_num - 1) + backtrace_fib(fib_num - 2)
}
```

## Backtracing With Memoization

Next, and especially important if your function will be called multiple times, is to add memoization. That is, once you calculate an answer for a given input, remember that answer so you can quickly respond with it later, skipping the math.

I don’t like that, the way I did this, I have to pass in the `HashMap` where I’m storing the answers. Is there any way to have some `static`-like, long-lived private variable that only this function would see, but that would last the whole run of the program? There are other places I could store the data away, like Memcached and Redis.
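One possible answer (a sketch, not the version in my repo): Rust's `thread_local!` macro can declare a static *inside* the function body, so the cache is invisible to callers but survives between calls, one copy per thread.

```rust
use std::cell::RefCell;
use std::collections::HashMap;

fn memo_fib(fib_num: u128) -> u128 {
    // A static scoped to this function: callers never see the cache,
    // but it lives for the whole run of the thread.
    thread_local! {
        static MEMO: RefCell<HashMap<u128, u128>> = RefCell::new(HashMap::new());
    }
    // Copy the cached answer out so the borrow ends before we recurse.
    if let Some(answer) = MEMO.with(|m| m.borrow().get(&fib_num).copied()) {
        return answer;
    }
    let result = match fib_num {
        0 | 1 => fib_num,
        n => memo_fib(n - 1) + memo_fib(n - 2),
    };
    MEMO.with(|m| m.borrow_mut().insert(fib_num, result));
    result
}
```

The `RefCell` is needed because thread-local statics are shared within the thread, so mutation has to go through interior mutability.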

```rust
fn backtrace_memo_fib(memo: &mut HashMap<u128, u128>, fib_num: u128) -> u128 {
    match memo.get(&fib_num).copied() {
        Some(result) => result,
        None => {
            let result = match fib_num {
                0 | 1 => fib_num,
                n => backtrace_memo_fib(memo, n - 1) + backtrace_memo_fib(memo, n - 2),
            };
            memo.insert(fib_num, result);
            result
        }
    }
}
```

## Dynamic Programming

So, Jeff Erickson, author of the Algorithms book, explains that another way to solve for a Fibonacci number involves dynamic programming. But this IS your grandfather’s “programming” (not ours):

> The word ‘programming’ does not refer to writing code, but rather to the older sense of planning or scheduling, typically by filling in a table. For example, sports programs and theater programs are schedules of important events (with ads); television programming involves filling each available time slot with a show (and ads); degree programs are schedules of classes to be taken (with ads).

Erickson, Jeff (2018, December 29). Algorithms, p. 100. Retrieved from http://algorithms.wtf

For this one, I just kept the memoization inside the function. We then purposely fill the HashMap on our way up, referencing already-calculated answers, and so never descend into levels of recursion.

```rust
fn dynamic_fib(fib_num: u128) -> u128 {
    let mut memo = HashMap::new();
    memo.insert(0, 0);
    memo.insert(1, 1);
    match fib_num {
        0 | 1 => {} // already set
        n => {
            for i in 2..=n {
                let result = *memo.get(&(i - 1)).unwrap() + *memo.get(&(i - 2)).unwrap();
                memo.insert(i, result);
            }
        }
    };
    *memo.get(&fib_num).unwrap()
}
```

## Cached Backtracing Function

So, as I was trying to figure out how to store some data in a function for future calls (if that is even possible), I came across the cached crate: “`cached` provides implementations of several caching structures as well as a handy macro for defining memoized functions.” How interesting, just what I needed, and the memoization is even hidden from ME. OK, I’ll do the simple, recursive, backtracing function but cache the results.

```rust
use cached::proc_macro::cached;

#[cached(size = 200)]
fn cached_fib(fib_num: u128) -> u128 {
    if fib_num == 0 || fib_num == 1 {
        return fib_num;
    }
    cached_fib(fib_num - 1) + cached_fib(fib_num - 2)
}
```

## Fibonacci Fight

So, I wrap all of those up in main() so I can start comparing them. That code is pretty boring, so check it out in the repo. Mostly I run each flavor of the algorithm, timing each result and printing it all out. The first one looks like:

```rust
let now = Instant::now();
let _ = backtrace_fib(fib_num);
let elapsed = now.elapsed();
print_results(fib_num, "simple backtracing/recursion", elapsed);
```

I also added a test, just to be sure each is giving the correct answer.

```rust
#[test]
fn test_each_version() {
    assert_eq!(backtrace_fib(20), 6765);
    assert_eq!(backtrace_memo_fib(&mut HashMap::new(), 20), 6765);
    assert_eq!(dynamic_fib(20), 6765);
    assert_eq!(cached_fib(20), 6765);
}
```

## Fibonacci Racing

And then some code to time each version, and call it a second and third time, so we can see the memoization at work.
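The actual repeat-run harness lives in the repo; a minimal sketch of its shape, with a hypothetical iterative `fib` and a `time_it` helper standing in for the real flavors and `print_results`:

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for any of the fib flavors above.
fn fib(n: u128) -> u128 {
    let (mut a, mut b) = (0u128, 1u128);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a // after n steps, a holds fib(n)
}

// Time one call and report it in the same style as the output below.
fn time_it(label: &str, n: u128, f: fn(u128) -> u128) -> Duration {
    let now = Instant::now();
    let answer = f(n);
    let elapsed = now.elapsed();
    println!("Solving fib:{} with {} took {} ns (= {})", n, label, elapsed.as_nanos(), answer);
    elapsed
}

fn main() {
    // Run three times so the memoized flavors can show their cache hits.
    for _ in 0..3 {
        time_it("iterative", 50, fib);
    }
}
```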

Of course, your results will vary by CPU, but look at this run for Fibonacci Number 50 with the debug build (where the differences are huge). This is on an Amazon t2.medium EC2 server:

```text
The first time solving will be the slowest
Solving fib:50 with simple backtracing/recursion took 287244288946 ns
Solving fib:50 with backtracing/recursion with memoization took 211289 ns
Solving fib:50 with dynamic programming with memoization took 167291 ns
Solving fib:50 with cached function took 214175 ns
What about solving it a second or third time, anyone faster this time?
Solving fib:50 with simple backtracing/recursion took 287262389809 ns
Solving fib:50 with backtracing/recursion with memoization took 202481 ns
Solving fib:50 with dynamic programming with memoization took 162527 ns
Solving fib:50 with cached function took 6285 ns
Solving fib:50 with simple backtracing/recursion took 287273113221 ns
Solving fib:50 with backtracing/recursion with memoization took 185609 ns
Solving fib:50 with dynamic programming with memoization took 158492 ns
Solving fib:50 with cached function took 6496 ns
By the way, Fibonacci Number 50 is 12586269025 which (divided by Fib Num 49) approximates phi as 1.618033988749895
```

But wow, check out these numbers with a release build!!

```text
The first time solving will be the slowest
Solving fib:50 with simple backtracing/recursion took 806 ns
Solving fib:50 with backtracing/recursion with memoization took 16128 ns
Solving fib:50 with dynamic programming with memoization took 8052 ns
Solving fib:50 with cached function took 19495 ns
What about solving it a second or third time, anyone faster this time?
Solving fib:50 with simple backtracing/recursion took 596 ns
Solving fib:50 with backtracing/recursion with memoization took 8142 ns
Solving fib:50 with dynamic programming with memoization took 7576 ns
Solving fib:50 with cached function took 724 ns
Solving fib:50 with simple backtracing/recursion took 596 ns
Solving fib:50 with backtracing/recursion with memoization took 7549 ns
Solving fib:50 with dynamic programming with memoization took 7327 ns
Solving fib:50 with cached function took 746 ns
By the way, Fibonacci Number 50 is 12586269025 which (divided by Fib Num 49) approximates phi as 1.618033988749895
```
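A caution on that sub-microsecond time for simple recursion at fib:50: since `let _ = backtrace_fib(fib_num)` discards the result and the function has no side effects, the release-mode optimizer is free to delete the call entirely, which would explain how 287 seconds in debug becomes 806 ns. To be sure you're timing real work, `std::hint::black_box` (stable since Rust 1.66) can hide the value from the optimizer; a sketch:

```rust
use std::hint::black_box;
use std::time::Instant;

fn backtrace_fib(fib_num: u128) -> u128 {
    if fib_num == 0 || fib_num == 1 {
        return fib_num;
    }
    backtrace_fib(fib_num - 1) + backtrace_fib(fib_num - 2)
}

fn main() {
    let now = Instant::now();
    // black_box hides the argument and result from the optimizer,
    // so the recursion can't be folded away or deleted.
    let result = black_box(backtrace_fib(black_box(30)));
    println!("fib(30) = {} in {:?}", result, now.elapsed());
}
```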

## A Really Big Number

And it handles a Fibonacci number of 186, though any higher and the casting of answers to **f64** to divide and get phi breaks down. The fastest “first” calculation for Fibonacci number 186 was **simple backtracing/recursion**, at only 820 ns, and the answer is kinda big:

```text
By the way, Fibonacci Number 186 is 332825110087067562321196029789634457848 which (divided by Fib Num 185) approximates phi as 1.6180339887498947
```
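Incidentally, 186 is also exactly where **u128** itself runs out: fib(187) no longer fits. A sketch (not code from the repo) that walks up the sequence with `checked_add` to find that boundary:

```rust
// Walk up the sequence with checked_add until the next u128 addition overflows.
fn largest_u128_fib() -> (u32, u128) {
    let (mut a, mut b): (u128, u128) = (0, 1); // fib(0), fib(1)
    let mut n = 1u32; // invariant: b holds fib(n)
    while let Some(next) = a.checked_add(b) {
        a = b;
        b = next;
        n += 1;
    }
    (n, b) // fib(n + 1) would overflow
}

fn main() {
    let (n, f) = largest_u128_fib();
    println!("fib({}) = {} is the largest Fibonacci number that fits in u128", n, f);
}
```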

## Update #1

unreliablebazooka on reddit points out my dynamic_fib is a poor use of a HashMap and offered a suggestion I have now added as better_dynamic_fib. It uses a tuple (SO much easier and cleaner), and, in the category of optimization, it only keeps the last two Fibonacci numbers in the chain around, not the whole list. Much, much faster: the fastest first execution yet, in fact!

```rust
fn better_dynamic_fib(fib_num: u128) -> u128 {
    let mut memo = (0, 1);
    match fib_num {
        0 | 1 => fib_num,
        _ => {
            for _ in 2..=fib_num {
                memo = (memo.1, memo.0 + memo.1)
            }
            memo.1
        }
    }
}
```

First and additional runs:

```text
Solving fib:20 with dynamic programming with memoization via tuple took 706 ns
```