Hi,
First off, I believe your bstree_add is NOT tail recursive, regardless of laziness. Consider your bstree_add with laziness removed:

let rec bstree_add tree n = match tree with
 ...
| Node(L,x,R) -> ... Node(L, x, (bstree_add ...)) // HERE

On the HERE line you do not immediately return the result of the recursive call from bstree_add; you use it to construct a new Node object. This means the stack frame must still be around when the recursive call (bstree_add (R.Force()) n) returns.
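
For concreteness, here is a minimal strict sketch of what such an insert typically looks like (the Tree type and the exact cases are my guess, not your actual code):

type Tree =
    | Leaf
    | Node of Tree * int * Tree

// Not tail recursive: the result of each recursive call becomes an argument
// to the Node constructor, so the caller's frame has to survive the call.
let rec bstree_add tree n =
    match tree with
    | Leaf -> Node(Leaf, n, Leaf)
    | Node(l, x, r) when n < x -> Node(bstree_add l n, x, r)
    | Node(l, x, r) -> Node(l, x, bstree_add r n)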

> compiler creates a closure around the lazy (expr) which to my limited
> understanding means the stack frame is still left in place, even though
> a value is returned immediately.

I think you are rather confused here. Compare the following two functions (I distill lazy to closures here):

let f x = g x + x
let f' x = (fun () -> g x + x)

To evaluate f <something>, you need to actually call g, get its value, and then add x to it. To do that, you must keep f's stack frame on the stack while g runs, so that the addition can be performed afterwards.
OTOH, when f' <something> is evaluated, no actual call to g takes place. All that happens is that a closure is created on the heap, containing the value of x (and an entry point for the code that evaluates g x + x).

One way to see this: f' is a version of f that allocates its stack frame on the heap.
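
To see the difference concretely, here is a runnable sketch of the two (g is just a placeholder definition so the snippet compiles):

let g x = x * 2                  // placeholder, only so the sketch compiles

let f  x = g x + x               // strict: g is called right away, f's frame waits on the stack
let f' x = (fun () -> g x + x)   // deferred: only a closure capturing x is allocated on the heap

let r  = f 10                    // g runs here
let r' = f' 10 ()                // g runs only now, when the closure is finally invoked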

Now, lazy expr is essentially Lazy.create (fun () -> expr). So returning to your lazy bstree_add: it is NOT tail recursive, but it essentially evaluates in O(1) stack space. When you wrap your recursive calls in 'lazy ...' you create a closure, and in doing so allocate bstree_add's stack frames on the heap.
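
For illustration, a lazy version might look roughly like this; I am guessing at your type here, with the children stored as Lazy<LTree> values (your actual definition probably differs):

type LTree =
    | LLeaf
    | LNode of Lazy<LTree> * int * Lazy<LTree>

// Still not tail recursive, but each call returns as soon as it has built a
// node whose child is an unforced thunk, so the stack stays flat; the
// pending work lives on the heap inside the closures.
let rec bstree_add tree n =
    match tree with
    | LLeaf -> LNode((lazy LLeaf), n, (lazy LLeaf))
    | LNode(l, x, r) when n < x -> LNode((lazy (bstree_add (l.Force()) n)), x, r)
    | LNode(l, x, r) -> LNode(l, x, (lazy (bstree_add (r.Force()) n)))

The insert itself returns immediately; the deferred recursive calls only run when someone forces the child thunks.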

Hope this helps,
Friendly,
Dmitry

By on 5/17/2008 12:31 AM ()

Yes it does. I apologize for seeming confused; I was juggling a few things at the time I wrote that post. However, when I said bstree_add was technically tail recursive, I was using the term VERY loosely. What I meant was that the closure created by lazy (expr) would lead to the function operating in O(1) space (as you said).

Your answer does force me to ask the following question. Does the compiled/JITted code reuse the space on the heap created by the closure that a function like my bstree_add creates? Or does the next recursive call create a new closure on the heap and leave the old one to be garbage collected?

By on 5/17/2008 12:37 PM ()

> Does the compiled/JITted code reuse the space on the heap created by
> the closure that a function like my bstree_add creates? Or does the next
> recursive call create a new closure on the heap and leave the old one to
> be garbage collected?

It creates a new closure, leaving the old one to be garbage collected (closures in F# are essentially immutable instances of subclasses of the FastFunc base class).
Actually, creating a closure (= allocating a block in Gen0) is fairly cheap - on par with stack allocation, in fact.
As a matter of fact, with a generational GC, allocating a new object may even be cheaper than modifying an old object, if the modification adds a "wrong-way" inter-generational reference.

Hope this helps,
Friendly,
Dmitry

By on 5/18/2008 5:43 AM ()

The original code is interesting - I think it's a very good piece of code for a first venture into F# functional coding.

The question you ask is also very interesting. Avoiding stack overflows when you consume the tree looks like it might require the use of continuations. Some of this is covered in Expert F# chapter 8.
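
As a rough illustration of the continuation idea (this is my own sketch, not the code from the book, and it assumes a plain strict tree type):

type Tree =
    | Leaf
    | Node of Tree * int * Tree

// In-order fold written in continuation-passing style: the work that would
// otherwise accumulate in stack frames is threaded through the continuation k,
// so deep trees need not overflow the stack (the continuations live on the heap,
// relying on .NET tail calls for the recursive invocations).
let rec foldTree f acc tree k =
    match tree with
    | Leaf -> k acc
    | Node(l, x, r) ->
        foldTree f acc l (fun accL ->
            foldTree f (f accL x) r k)

// Example: sum all the elements of a tree.
let sumTree tree = foldTree (+) 0 tree id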

However, it won't be so common for trees to get enormously deep if you keep them balanced.

One fun and helpful reference point in this domain is Wadler's classic on adding laziness to a strict language: [link:citeseer.ist.psu.edu]

Kind regards

don

By on 5/20/2008 3:55 AM ()