Hi, Simon.

I see at least two ways to generate Fibonacci numbers efficiently. Here is the code.
First we need a helper function:

let AddAndSwap(one, two) = (two, one + two)

Then we use a sequence expression to yield all the numbers up to the given index:

let fibs n = seq {
    if n >= 0 then yield 1
    if n >= 1 then yield 1
    if n >= 2 then
        let pair = ref (1, 1)
        for i = 2 to n do
            pair := AddAndSwap !pair
            yield snd !pair
}

[0..5] |> List.map (fibs >> List.of_seq)
>
val it : int list list =
[[1]; [1; 1]; [1; 1; 2]; [1; 1; 2; 3]; [1; 1; 2; 3; 5]; [1; 1; 2; 3; 5; 8]]

Or you can use the memoization technique; below is a sample from Expert F#:

// generic memoize function from the Expert F# book
let memoize (f: 'a -> 'b) =
    let t = new System.Collections.Generic.Dictionary<'a,'b>()
    fun n ->
        if t.ContainsKey(n) then t.[n]
        else
            let res = f n
            t.Add(n, res)
            res

let rec fibFast =
    memoize (fun n -> if n < 2 then 1 else fibFast (n-1) + fibFast (n-2))

[0..5] |> List.map (fibFast)
>
val it : int list = [1; 1; 2; 3; 5; 8]

regards,
Alex

By on 7/29/2009 12:02 PM ()

If you want to generate sequences, I think this is a better implementation:

let fibs = (1I, 1I) |> Seq.unfold(fun (n0, n1) -> Some(n0, (n1, n0 + n1)))
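For example, something like this should give the first six numbers of the infinite sequence:

// take a finite prefix of the infinite unfold
fibs |> Seq.take 6 |> Seq.toList
// [1I; 1I; 2I; 3I; 5I; 8I]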

Also, you can use LazyList:

let fibs = (1I, 1I) |> LazyList.unfold(fun (n0, n1) -> Some(n0, (n1, n0 + n1)))
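Note that a plain seq is re-evaluated every time it is traversed, whereas a LazyList caches (memoizes) each element as it is forced, so repeated traversals don't recompute anything.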

Ideally, to memoize a recursive function, you should use the Y combinator [link:matt.might.net]
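A minimal sketch of that idea in F# (hypothetical names, not from the link): the recursion goes through a function parameter, and the memoizing fixpoint ties the knot, so every recursive call hits the cache.

let memoFix (f : ('a -> 'b) -> 'a -> 'b) : 'a -> 'b =
    let cache = System.Collections.Generic.Dictionary<'a,'b>()
    let rec g x =
        match cache.TryGetValue x with
        | true, v -> v
        | _ ->
            let v = f g x            // recursive calls go back through the cache
            cache.[x] <- v
            v
    g

let fib = memoFix (fun fib n -> if n < 2 then 1 else fib (n-1) + fib (n-2))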

By on 7/29/2009 4:22 PM ()

Two things. First, your original three-line lazyFunc/test1/test2 example computes two different lazy values and then 'forces' the values on the test1 and test2 lines (that's what calling .Value does). It seems like you have some different expectation? I'm not sure what it might be, though; clearly in order for test1 to have the final value '1' and test2 to have the final value '4' we have to run the body of the lazyFunc function two different times with different argument values.

Second, in terms of the big picture and terminology, it's not laziness that would make the recursive 'fib' example efficient, rather it's memoization. The F# lazy type does memoize, specifically in your second three-line example, test1 will 'force' the value stored in 'lazyFunc' (and print the message), and thus when test2 re-forces the value, it can just return the cached/memoized result (since it refers to the same 'lazyFunc' object, which already has been evaluated).

A memoized efficient implementation of 'fib' (in any language) requires storing some kind of dictionary so that, e.g. once you compute that (fib 4)=5, you store the key-value-pair (4,5) so that the next time you ask for (fib 4) again it can return the same (already computed) Lazy<int> object (that already knows that the answer is 5).
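In F# the scheme described above might be sketched like this (hypothetical code, just to illustrate the dictionary of Lazy<int> values):

let fibLazy : int -> int =
    let cache = System.Collections.Generic.Dictionary<int, Lazy<int>>()
    let rec get n =
        match cache.TryGetValue n with
        | true, l -> l               // same already-computed Lazy<int> object
        | _ ->
            let l = lazy (if n < 2 then 1 else (get (n-1)).Value + (get (n-2)).Value)
            cache.Add(n, l)
            l
    fun n -> (get n).Value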

By on 7/29/2009 8:51 AM ()

Memoization makes sense. After I posted, I thought maybe Haskell had some sort of reduction optimization that reduced "fib A" everywhere to the same computation. But it looks like it works the same way in Haskell as in F# (provided the Haskell thunk is actually evaluated): [link:www.haskell.org]

By on 7/29/2009 9:37 AM ()

I don't think lazy functions with arguments make sense. In your lazyFunc example, I would not expect x=1 and x=2 to return the same result (lazy or eager).

For the fib function, what does laziness buy you? "fib n" should always evaluate all the way down to "fib 0". I might expect a lazy implementation to be less efficient.

By on 7/29/2009 8:33 AM ()

Thanks for your answer!

Sorry, there is a typo in the lazyFunc example. The parameter x should be the same in both cases. So it should look like this:

let test1 = (lazyFunk 1).Value
let test2 = (lazyFunk 1).Value

But the output remains the same.

So what does laziness buy me? I will explain this with a little example. Say we want to compute the 5th number of the Fibonacci sequence. This would lead to the following computations:
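(The original tree image is not preserved; with fib(0) = fib(1) = 1 it was presumably along these lines:)

fib 5
├── fib 4
│   ├── fib 3
│   │   ├── fib 2
│   │   │   ├── fib 1
│   │   │   └── fib 0
│   │   └── fib 1
│   └── fib 2
│       ├── fib 1
│       └── fib 0
└── fib 3
    ├── fib 2
    │   ├── fib 1
    │   └── fib 0
    └── fib 1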

Even in this simple example a lot of recurrent computation occurs (e.g. fib(1) is evaluated multiple times). Build that tree for fib(10000) and you will see why this is inefficient. With lazy evaluation this problem wouldn't occur. Lazy evaluation means compute only if needed and only once.

In our example this means that fib(1) will be evaluated once and the result will be cached. Any further calls to fib(1) don't result in a reevaluation of fib(1), but in an access to a cache where the result is stored.

The result would be a huge performance boost for this algorithm.

Of course there is an easy solution to this problem: you could just build a cache manually. But then I could go ahead and code the whole thing in C# just as easily as in F#.

In my opinion, lazy evaluation is one of the most important benefits of functional programming languages compared to imperative ones. So it would be sad if the support for lazy evaluation in F# were quite limited.

By on 7/29/2009 9:59 AM ()

Lazy evaluation means compute only if needed and only once.

Agreed.

In our example this means that fib(1) will be evaluated once and the result will be cached. Any further calls to fib(1) don't result in a reevaluation of fib(1), but in an access to a cache where the result is stored.

You are still labelling this 'lazy evaluation' but the rest of us are calling this portion 'memoization', and it is distinct from lazy evaluation. In any language with arbitrary effects that cannot be reasoned about in the type system (such as, say, F#), memoization must be explicitly authored by the user (lest the compiler silently 'optimize away' an effect you authored and intended).

It's a limitation, but it's also a feature. As someone else has pointed out, you can write a generic 'memoize' function and use it to write the 'fast' version of fib.

By on 7/29/2009 11:42 PM ()

As I started this post, I thought the term "lazy evaluation" would be quite definite. However, this discussion shows that my understanding of "lazy evaluation" differs from what other people in this thread think about this term. So I am going to explain my point of view, which is based on the definition of lazy evaluation given in the book "Introduction to Functional Programming using Haskell" by Richard Bird, published by Prentice Hall. Here's a link to the book at Amazon.com.

First of all, let's consider the evaluation of a simple expression like the following:

let square x = x * x
let result = square (3+7)

If we want to evaluate the expression "square (3+7)", there are two policies for doing so. The first is called innermost reduction and the second is called outermost reduction. With innermost reduction you reduce an innermost redex in each step. Well, what's an innermost redex? Let's have a look at what the book says about that.

"The word 'redex' is short for 'reducible expression', and an innermost redex is one that contains no other redex." Okay, that's an innermost redex. An outermost redex is just the opposite. Once again, a definition from the book: "An outermost redex is one that is contained in no other redex." At first this might be a bit confusing, but I think looking at our example this comes into sharper relief.

In the expression "square (3+7)" the subexpression "3+7" is an innermost redex, because the numbers 3 and 7 are literals, so "3+7" contains no other redex. However, the application of square is an outermost redex, as there is no other redex containing it. Now that we know the difference between innermost and outermost reduction, let's take a look at the reduction sequences of "square (3+7)" using the two policies (see Example 1).

Example 1
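(The Example 1 image is not preserved; the two reduction sequences were presumably:)

Innermost reduction:
  square (3+7)
= square 10          -- reduce the innermost redex 3+7
= 10 * 10            -- unfold square
= 100

Outermost reduction:
  square (3+7)
= (3+7) * (3+7)      -- unfold square first
= 10 * (3+7)
= 10 * 10
= 100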

As you surely noticed, innermost reduction needs one step less than outermost reduction, and as you can imagine, the reduction of more complex expressions leads to a significantly larger difference. Fortunately, a number of smart people had an idea to fix this problem. The solution is called graph reduction. With graph reduction, subexpressions can be shared, so that almost no repeated computation occurs anymore. This is achieved by replacing every occurrence of a subexpression with a pointer to a single instance of that subexpression. Look at the next example, which uses outermost graph reduction, to get a grasp of this approach (Example 2).

Example 2
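(Again reconstructed, as the image is not preserved: with outermost graph reduction the argument is shared, not copied.)

  square (3+7)
= x * x   where x = 3+7
= x * x   where x = 10
= 100

So with sharing, outermost reduction also needs only three steps.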

So here's another extract from the book: "With graph reduction, outermost reduction never takes more steps than innermost reduction. Henceforth we will refer to outermost graph reduction by its common name, lazy evaluation, and to innermost graph reduction as eager evaluation."

Sounds interesting, but why is outermost graph reduction also called lazy evaluation? The next example will give us a better understanding. Consider the following:

let fst a b = a
let result = fst (square(3+7)) (square(2+5))

What fst does is simply take two arguments and return the first one. Take a look at the reduction sequence (Example 3).

Example 3
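(Reconstructed; the image is not preserved.)

Lazy (outermost graph) reduction:
  fst (square (3+7)) (square (2+5))
= square (3+7)        -- the second argument is discarded unevaluated
= x * x   where x = 3+7
= x * x   where x = 10
= 100

Eager (innermost) reduction would evaluate both arguments first:
  fst (square (3+7)) (square (2+5))
= fst (square 10) (square (2+5))
= fst (10 * 10) (square (2+5))
= fst 100 (square 7)
= fst 100 (7 * 7)
= fst 100 49
= 100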

As you can see, with lazy evaluation we save a lot of work, as the function fst doesn't need its second argument. So let's sum up the advantages of lazy evaluation by taking another look into our book:

"We will adopt lazy evaluation as our model of computation, because it has two desirable properties: (i) it terminates whenever any reduction order terminates, and (ii) it requires no more (and possibly fewer) steps than eager evaluation."

With this definition of lazy evaluation in hand, let's go back to our original problem: computing fibs with lazy evaluation. As you can see in the following reduction sequence (Example 4), no reevaluation occurs, although I haven't declared any memoization manually.

Example 4
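(Reconstructed from the description; the image is not preserved. With every occurrence of a given fib k pointing to one shared node, each node is reduced only once:)

  fib 5
= fib 4 + fib 3
= (fib 3 + fib 2) + fib 3                       -- both fib 3s are one shared node
= ((fib 2 + fib 1) + fib 2) + (fib 2 + fib 1)   -- fib 2 and fib 1 likewise shared
= ((2 + 1) + 2) + (2 + 1)                       -- each shared node reduced once
= 8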

As I'm not a Haskell expert, I have to admit that I'm not absolutely sure that Haskell actually reduces the sequence in the way I did, so any comments regarding this reduction sequence are highly appreciated. If you need some additional information about lazy evaluation, take a look here.

So I hope that someone has made it up here ;)

Here are my questions regarding F#:

(i) Does F# support lazy evaluation in a similar way, or is it possible to achieve similar behaviour with some workarounds?

and (ii) Is there a way to produce a lazy function in F# that has arguments?

@none: Thanks for the hint about the Y combinator. Sounds really interesting. I'll have a look in the next few days.

@avsbox: Thanks for the efficient implementations. Seems like this is the way to go in F#. But actually I'm testing how much lazy evaluation, in the sense of the definition above, F# offers.

By on 7/30/2009 10:44 AM ()

(i) Does F# support lazy evaluation in a similar way, or is it possible to achieve similar behaviour with some workarounds?

No. The picture in Example 4 automagically seems to discover that both (fib 1) calls are a 'common subexpression' that can be shared. I don't know if any real languages (Haskell included) do this; F# certainly does not because, as I mentioned before, it has no way to know that the call does not contain an intended side effect (in which case evaluating the function only once would be an error, since the program explicitly asked to evaluate it twice).

The stuff in the book here all looks like a pipe dream to me, for people discussing 'modes of computation' rather than 'programming with computers in the real world', but perhaps I am too jaded. :)

By on 7/30/2009 11:09 AM ()

Thanks for your estimation. I think I'm going to post the last reduction sequence to a Haskell forum. There might be a Haskell guru who is able to estimate whether this reduction sequence is reality or just a 'pipe dream' :)

As soon as I've got an answer, I will post it here.

By on 8/1/2009 11:58 AM ()

Naive definition

The standard definition can be expressed directly:

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

This implementation requires O(fib n) additions.

So, no, Haskell doesn't appear to memoize those subexpressions.

By on 8/2/2009 3:55 AM ()

I think in your example you get 2 lazy values (and in the fibonacci graph, you get n lazy values).

Consider this example:

let lazyFunkOne = lazyFunk 1

let test1 = lazyFunkOne.Value
let test2 = lazyFunkOne.Value

You really want memoization, so that the first call to fib evaluates and subsequent calls use the lookup.

I don't know of a compiler that will reduce every occurrence of, say, (lazyFunk 1) to _the same_ lazy value. When you look at the simple case, it seems obvious that the compiler "should" reduce them. But the general case gets very complex (even for the simple fib function). Memoization [link:en.wikipedia.org] is the way to avoid unnecessary reevaluation in functional languages.

What language implements laziness in the way that you describe?

By on 7/29/2009 10:33 AM ()