Using VS 2010 Beta 2, I am not observing this problem.
Could you show how you implemented CollatzLength?

By on 12/15/2009 3:26 PM ()

Well, I can't get the @#$ HTML to do the right thing so I'm attaching the entire .fs file as a zip file.

By on 12/16/2009 1:39 AM ()

The problem is, I'm not sure attachments work :) I don't see any.

By on 12/16/2009 10:11 AM ()

Hmm...

Maybe I just screwed up. I reedited and reattached, left the forum and came back in and it was still there so hopefully you'll be able to find it this time.

By on 12/16/2009 10:08 PM ()

I tried this quickly on my Intel Atom netbook, and got fairly consistent numbers:

C:\temp>problem14.exe

1.478000 seconds elapsed for first

0.747000 seconds elapsed for second

(after changing the code to calculate the CollatzLength too - the attached code only benchmarks the sequence generation, I think?)

By on 12/16/2009 10:36 PM ()

There's no benchmarking in the attached code. I have a higher-level C# function that calls my various Euler problem solutions, and the timing is done there. Let's make sure we're on the same page: I swap which of the two lines after the "BAD!" comment is commented out, so the only difference is whether you generate the Collatz lengths as:

Seq.initInfinite (fun i -> i) |> Seq.skip 1 |> Seq.truncate 1000000 |> Seq.map (fun i -> (i, CollatzLength i))

or as

seq {for i in 1..1000000 -> (i, CollatzLength i)}

The time goes from about 3.2 secs for the latter to 30 secs for the former as a result. Running under Windows 7 on my Sony Vaio laptop.
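For reference, since the CollatzLength implementation itself never made it into the thread, here's a sketch of a typical unmemoized version (an assumption on my part - the attached code may well differ, e.g. by memoizing):

```fsharp
// Hypothetical CollatzLength: counts the terms in the Collatz chain
// starting at n (n/2 if even, 3n+1 if odd) until it reaches 1.
// Intermediate values can overflow int32, hence the int64 accumulator.
let CollatzLength (n : int) =
    let rec loop (n : int64) acc =
        if n = 1L then acc
        elif n % 2L = 0L then loop (n / 2L) (acc + 1)
        else loop (3L * n + 1L) (acc + 1)
    loop (int64 n) 1
```

Counting the starting term itself, this gives CollatzLength 13 = 10, the classic Project Euler 14 example chain.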

By on 12/16/2009 11:59 PM ()

I tested the two sequence generators separately this time (the previous result was invalid due to the memoizing ;)), and got (Atom 1.6GHz, battery power):

Seq.initInfinite (fun i -> i) |> Seq.skip 1 |> Seq.truncate 1000000 |> Seq.map (fun i -> (i, CollatzLength i))

= 3.7s

seq {for i in 1..1000000 -> (i, CollatzLength i)}

= 2.4s

Both compiled with "fsc.exe problem14.fs" from the command line.

By on 12/17/2009 6:18 AM ()

Okay, attached is a solution which displays the anomaly. It is a VS2010 solution - I'm running under Beta 2, so it won't work with VS 2008. It's pretty obvious what to do when you run it: there's a listbox with a single "14" in it - click on the 14, then click on the "retrieve answer" button, and the answer and a timing should appear. Go into problem14.fs and change the comment, and hopefully somebody else will observe the same crazy timings that I've been seeing.

By on 12/17/2009 9:25 AM ()

Sorry, I tried your solution, both ways, and one was about 4.8s and the other was about 5.1s - no big difference.

Are you running release mode without the debugger attached?

x86 or x64? How much RAM? (I am really grasping at straws to explain what you're seeing.)

By on 12/17/2009 3:24 PM ()

Wow. Well, I might just have to give up on this one. I see similar timing running either debug or release. 64-bit dual-core machine with 6 GB RAM.

Aha! Big clue - when I run without debugging, both methods give under 4 secs running time! My formerly slow method now runs at 3.496 s and the other method at 3.408 s without a debugger attached. Curiouser and curiouser.

By on 12/17/2009 4:49 PM ()

Ok, well there's your solution: it's the debugger.

There are a million possible things in the debugger it might be, but my hunch is it's probably something mundane, like the debugger loading some symbols/PDBs in one scenario that the other scenario doesn't need for some reason.

It may be useful to watch the 'modules' window refreshing while the debugger is attached, as well as the status bar at the bottom of VS.

By on 12/17/2009 5:16 PM ()

I suppose in some sense this could be considered a solution, but it's still highly mystifying to me, if to nobody else. A 27-second difference for a minuscule change? I realize that there are unpredictable things going on in the debugger, but I've never seen it lengthen anything by a factor of 10 in this way before - that's entirely unprecedented in my 30+ years of programming. I guess for the moment I'll chalk it up to beta software and not get too excited.

By on 12/17/2009 5:57 PM ()

Hm, ok, I tried this on my 32-bit box with recent VS bits, and I see the same behavior. It is not something basic like symbols loading; with the debugger attached, it really is taking about 20x longer on my box when attached to a Release build. I'll continue to investigate; I assumed this would be a debugger behavior I have seen before (attaching a debugger can sometimes do all kinds of nasty wacky unpredictable stuff, I have seen VS create some weird behaviors before), but offhand this doesn't smell like any of the 'usual suspects' to me. Hm!

By on 12/17/2009 7:01 PM ()

Couple of things I noted:

If I replace the initInfinite with

seq { for i in 1 .. 1000000 -> i }

it's fast again. If I replace the Seq.map call to CollatzLength with Seq.toArray, the time is negligible. So, weirdly, it seems to be the combination of initInfinite being piped into CollatzLength. I stuck a couple of

Seq.map (fun i -> i)

into the pipeline, thinking maybe the debugger was somehow doing something on function entries, but saw no significant difference. I find it quite puzzling.
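Putting the experiments side by side (a sketch; CollatzLength as elsewhere in the thread):

```fsharp
// The slow combination: initInfinite piped (eventually) into CollatzLength.
let slow =
    Seq.initInfinite (fun i -> i) |> Seq.skip 1 |> Seq.truncate 1000000
    |> Seq.map (fun i -> (i, CollatzLength i))

// Fast: the same elements from a range-based sequence expression.
let fast = seq { for i in 1 .. 1000000 -> (i, CollatzLength i) }

// Also fast: initInfinite, but materialized with Seq.toArray instead of
// mapping through CollatzLength.
let alsoFast =
    Seq.initInfinite (fun i -> i) |> Seq.skip 1 |> Seq.truncate 1000000
    |> Seq.toArray
```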

By on 12/18/2009 4:34 AM ()

Yeah, we're on the trail of understanding it, there is something noteworthy going on here, I'll report back once the investigation is completed. Thanks much for discovering this and pressing on it.

By on 12/18/2009 9:31 AM ()

Ok, it's vacation time, so it's hard to get all the investigating done, but briefly:

On .NET 4.0, Seq.init and Seq.initInfinite use the 4.0 System.Lazy under the hood. .NET 4.0's System.Lazy has some code that talks to the debugger: it warns the debugger that accessing .Value involves potential cross-thread synchronization and may block (e.g. if another thread is in the midst of forcing the value for the first time), and that the debugger ought not freeze up the UI if this happens. This communication between Lazy and the debugger takes a little time, and running 100000 of them in a row takes a fair bit of time. :) It is unclear yet whether that debugger performance cost is avoidable (the relevant folks are mostly on vacation).
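A conceptual sketch of why that adds up - this is an assumption about the 4.0-era internals, not the actual library source - is that each element forces a fresh System.Lazy, and every .Value access is a point where Lazy may chat with an attached debugger:

```fsharp
// Conceptual model only: one System.Lazy per element, forced on demand.
let initInfiniteViaLazy (f : int -> 'T) =
    seq {
        let i = ref 0
        while true do
            let cell = System.Lazy<'T>(fun () -> f !i)
            yield cell.Value   // forcing .Value is where the debugger chatter happens
            i := !i + 1
    }
```

Under a debugger, a million of those per-element .Value forcings is a million chances to pay that communication cost.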

So the net for now is: if you want to avoid this, don't call Seq.init or Seq.initInfinite. You can always fake out similar behavior with functions like these:

let initInfinite f =
    seq { for i in 0 .. System.Int32.MaxValue do yield f i }

let init count f =
    seq { for i in 0 .. (count-1) do yield f i }
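With those stand-ins, the original pipeline only needs its first call swapped (a sketch; the binding name is illustrative and CollatzLength is as elsewhere in the thread):

```fsharp
// Same results as the Seq.initInfinite version, but with no System.Lazy
// per element, so no per-element debugger traffic.
let answer14 =
    initInfinite (fun i -> i)
    |> Seq.skip 1
    |> Seq.truncate 1000000
    |> Seq.map (fun i -> (i, CollatzLength i))
```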

By on 12/21/2009 3:41 PM ()

Hey! Thanks for the update! I'd already made the change for the time being. Hope something can be done, but I understand if not. Good info to know, regardless!

By on 12/22/2009 1:13 AM ()

Ah! So I'm not insane! I was beginning to wonder there for a bit. Thanks for verifying!

By on 12/18/2009 4:09 AM ()

Well, this has all been quite interesting. I went back and put the timing code directly into the F# "let answer =" code and got 1.5 secs for the "long" version, which matches what everybody else is seeing. So the culprit would appear to be in the framework calling code, which is even odder than I'd originally imagined, since that's even further removed from the two lines in question. I don't know how to explain it, but I swear on a stack of Bibles that swapping those two lines and absolutely nothing else has this strange performance effect. Above this I have a single F# call which looks like this:

let GetAnswer (iProblem:int) =
    match iProblem with
    | 1 -> DAP.EulerProblems.Problem1.answer |> sprintf "%A"
    | 2 -> DAP.EulerProblems.Problem2.answer |> sprintf "%A"
    ...
    | 14 -> DAP.EulerProblems.Problem14.answer |> sprintf "%A"
    ...

called by the following C# code:

taskCalculate = Task.Factory.StartNew(() =>
{
    sw.Start();
    string Text = CSInterface.GetAnswer(iProblem);
    sw.Stop();
    tbAnswer.BeginInvoke(new MethodInvoker(() =>
    {
        tbAnswer.Text = Text;
        btnGetAnswer.Enabled = true;
        lblTime.Text = (sw.ElapsedMilliseconds / 1000.0).ToString() + " Secs";
    }));
    taskCalculate = null;
});

It's about as straightforward as it gets, but this very strange anomaly really has me scratching my head now. Thanks for taking the time to do the timings. Sadly, as so often happens with me, learning has only made things more confusing rather than less.

I'm going to see if I can get the actual solution into a zip file under 64 KB (the max this forum will allow). If I can, I'll post it. Thanks again for the enlightenment.

By on 12/17/2009 9:05 AM ()
IntelliFactory Offices Copyright (c) 2011-2012 IntelliFactory. All rights reserved.