Failure of lock() with TPL

Question
-
I have a bizarre situation where lock() is not locking: two or three threads manage to get inside the lock at the same time.
I can't reproduce it with toy code. The actual code is this:
public static ResamplerSet GetResamplerSet(IFull o)
{
    // For the resampler set we will find or create.
    ResamplerSet resamplerSet;

    // Compute the key.
    var key = ComputeKey(o);

    // Lock the resampler sets. Block at this point because if we need to create it, the constructor does so in parallel.
    lock (resamplerSets)
    {
        // See if we can get a resampler set with the correct source step value and sufficient destination dimension.
        if (!resamplerSets.TryGetValue(key, out resamplerSet) || resamplerSet.MaximumDestinationDimension < o.DestinationDimension)
        {
            Console.WriteLine("Starting creation for " + key + ": resamplerSets does " + (resamplerSets.ContainsKey(key) ? "" : "not ") + "contain key");
            if (resamplerSet != null)
                Console.WriteLine(resamplerSet.MaximumDestinationDimension + " " + o.DestinationDimension);

            // Failed for one or the other reason. Create or recreate it now.
            resamplerSet = new ResamplerSet(o);

            // Store it.
            Console.WriteLine("Finished creation for " + key + ": resamplerSets does " + (resamplerSets.ContainsKey(key) ? "" : "not ") + "contain key");
            resamplerSets[key] = resamplerSet;
            Console.WriteLine("Stored created object " + key + ": resamplerSets does " + (resamplerSets.ContainsKey(key) ? "" : "not ") + "contain key");
        }
    }

    // Return the resampler set we found or created.
    return resamplerSet;
}

static Dictionary<KeyValuePair<double, double>, ResamplerSet> resamplerSets =
    new Dictionary<KeyValuePair<double, double>, ResamplerSet>();
I can confirm that nothing in new ResamplerSet(o) calls GetResamplerSet(); there is no possibility of a re-entrant cycle.
One run of the program gives this:
Starting creation for [81.0215193155302, 40.0107596577651]: resamplerSets does not contain key
Finished creation for [81.0215193155302, 40.0107596577651]: resamplerSets does not contain key
Stored created object [81.0215193155302, 40.0107596577651]: resamplerSets does contain key
Starting creation for [29.1093314235128, 14.0546657117564]: resamplerSets does not contain key
Starting creation for [29.1093314235128, 14.0546657117564]: resamplerSets does not contain key
Finished creation for [29.1093314235128, 14.0546657117564]: resamplerSets does not contain key
Stored created object [29.1093314235128, 14.0546657117564]: resamplerSets does contain key
Finished creation for [29.1093314235128, 14.0546657117564]: resamplerSets does contain key
Stored created object [29.1093314235128, 14.0546657117564]: resamplerSets does contain key
It can be seen that two threads are inside the lock at the same time. The second-to-last line
Finished creation for [29.1093314235128, 14.0546657117564]: resamplerSets does contain key
should never occur: it explicitly confirms that the other thread has added the entry to the dictionary while this thread was creating the object.
Does the TPL perhaps do something tricky with its managed thread pool that makes lock() fail?
If I can't get lock() to work with the TPL, then I can't see how the TPL can be useful in general.
Thanks,
John Costella
- Moved by Reed Copsey, Jr (MVP, Moderator) Thursday, January 6, 2011 9:58 PM: TPL-specific question (From: Visual C# Language)
Tuesday, January 4, 2011 11:57 PM
All replies
-
John,
I don't see the problem just looking at your code - it shouldn't be possible for the messages to display as you're seeing without some other issue occurring.
That being said, I'd strongly recommend considering a switch to ConcurrentDictionary instead. This would allow you to handle this situation in a lock-free manner, with simpler code.
Basically, this code can be rewritten as:
public static ResamplerSet GetResamplerSet(IFull o)
{
    IFull keyedObject = o;

    // Compute the key.
    var key = ComputeKey(keyedObject);

    ResamplerSet resamplerSet = resamplerSets.GetOrAdd(key, (newKey) => new ResamplerSet(keyedObject));

    // Return the resampler set we found or created.
    return resamplerSet;
}

private static ConcurrentDictionary<KeyValuePair<double, double>, ResamplerSet> resamplerSets =
    new ConcurrentDictionary<KeyValuePair<double, double>, ResamplerSet>();
Reed Copsey, Jr. - http://reedcopsey.com
If a post answers your question, please click "Mark As Answer" on that post and "Mark as Helpful".
- Marked as answer by Dr John P. Costella Thursday, January 6, 2011 12:37 AM
Wednesday, January 5, 2011 12:44 AM (Moderator) -
Thanks, Reed: very nice. I haven't gotten much into the new concurrent collections yet. However, one question: the blocking was intentional: if 1000 threads call the method for the same object, I want them all to block while new ResamplerSet() goes about its business. Will the concurrent dictionary do that at GetOrAdd(), or will it spawn multiple calls to new ResamplerSet()?
Re the lock problem, I've tested the .exe under Windows and I can't get it to exhibit the same problem. I'm therefore assuming it's a bug in the Mono runtime. I will bump the issue to the Mono bugs people.
Thanks
John
- Marked as answer by Dr John P. Costella Thursday, January 6, 2011 12:37 AM
- Unmarked as answer by Dr John P. Costella Thursday, January 6, 2011 12:37 AM
Wednesday, January 5, 2011 1:11 AM -
John,
It should work, as is - so I suspect you're correct, this is probably a Mono bug.
I believe ConcurrentDictionary will spawn multiple calls to new ResamplerSet (though not, I think, for the same key). If you really wanted locking, I'd use a small lock around the constructor (instead of a large lock around your entire operation), since that would make the locking much more fine-grained.
That being said, if you do use the locking implementation, I'd recommend making a private, static object that you use for nothing but your locks. Locking on a shared object (resamplerSets) is problematic for various reasons...
That might actually correct your Mono problem.
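For example, a minimal sketch of that arrangement (same types as the code above, and simplified to drop the MaximumDestinationDimension check):

private static readonly object resamplerSetsLock = new object();

private static readonly Dictionary<KeyValuePair<double, double>, ResamplerSet> resamplerSets =
    new Dictionary<KeyValuePair<double, double>, ResamplerSet>();

public static ResamplerSet GetResamplerSet(IFull o)
{
    var key = ComputeKey(o);

    // Take a lock on a dedicated private object that nothing outside this method can ever lock on.
    lock (resamplerSetsLock)
    {
        ResamplerSet resamplerSet;
        if (!resamplerSets.TryGetValue(key, out resamplerSet))
        {
            resamplerSet = new ResamplerSet(o);
            resamplerSets[key] = resamplerSet;
        }
        return resamplerSet;
    }
}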
-Reed
Reed Copsey, Jr. - http://reedcopsey.com
If a post answers your question, please click "Mark As Answer" on that post and "Mark as Helpful".
- Marked as answer by Dr John P. Costella Thursday, January 6, 2011 12:37 AM
Wednesday, January 5, 2011 4:41 PM (Moderator) -
Thanks Reed. It's now been confirmed as a Mono bug: there is some problem with either the implementation of TPL or the underlying thread-pool libraries that causes the nested Parallel.For() to start executing one of the other outer (blocked) iterations on the thread that obtained the lock.
I looked up ConcurrentDictionary on MSDN and it will indeed spawn multiple calls for the same key; that's the design (see the notes at the end of the documentation): a parallel thread may have populated that key by the time you compute your value, but the collection itself is locked internally to prevent corruption. In most cases that is what you want (I use that logic for other collections, with an explicit fine-grained lock; I should switch them to ConcurrentDictionary).
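A toy illustration of that behavior (hypothetical names, not my real code): the value factory passed to GetOrAdd runs outside the dictionary's internal locks, so under contention it can run more than once for the same key, but only one result is ever stored.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class GetOrAddRace
{
    static int factoryCalls;
    static readonly ConcurrentDictionary<int, string> cache = new ConcurrentDictionary<int, string>();

    static void Main()
    {
        Parallel.For(0, 1000, i =>
        {
            cache.GetOrAdd(42, key =>
            {
                Interlocked.Increment(ref factoryCalls);
                Thread.Sleep(1); // widen the race window
                return "expensive value for " + key;
            });
        });

        // factoryCalls may well be greater than 1, but the dictionary holds exactly one entry for 42.
        Console.WriteLine("Factory ran " + factoryCalls + " time(s); dictionary count = " + cache.Count);
    }
}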
However, my case was an exception. Say you're resizing an image with 1000 rows and 800 columns. Pretend we have a future Intel chip with 1024 processors on it. The resampler will first do a Parallel.For() on all the rows. All 1000 threads go to get the resampler set for the row. (There is one resampler set for a given resampling factor and offset; each row uses that one resampler.) The first time that factor/offset is asked for, it needs to be generated. If I employed ConcurrentDictionary, all 1000 threads would go off and build the resampler set.
Instead, what I want to happen is that the first thread to arrive builds the resampler set, and the other 999 threads block. Why? Because the constructor for the resampler set itself builds it in parallel (i.e. a Parallel.For() over the 800 columns, each generating a unique resampling kernel). So what I am hoping will happen is that, as the 999 blocked threads are sleeping, their processors are handed over to the 800 threads in the nested Parallel.For(). This speeds up the construction of the resampler set by a factor of 800. Then that first thread can store the result and release the lock, and the blocked sleeping 999 other threads get the resampler set.
I don't know if there's a name for that design pattern, but it's going to become prevalent with any large-scale use of TPL (either explicitly or implicitly), because the TPL will be used in many different objects and nesting will be commonplace.
You can see that I love the TPL, right?! :) :)
(On the final point, the locking object resamplerSets is static and private to the static class Magic, and is only used by GetResamplerSet(); I'm not sure if you misread it.)
Thanks again,
John
Edit: Sorry I should have noted that the full source code and .exe for the toy application above is available in my bug report: https://bugzilla.novell.com/show_bug.cgi?id=662381
Thursday, January 6, 2011 12:37 AM -
Sorry, Reed, I was wondering if you know the answer to the following.
I boiled down the Mono bug to the following toy program:
using System;
using System.Threading.Tasks;

class MainClass
{
    public static void Main (string[] args)
    {
        for (var run = 0; run < 100; run++)
            Parallel.For(0, 10, i => Foo());
    }

    static void Foo()
    {
        lock (num)
        {
            num[0] = 0;
            Parallel.For(0, 10, k => {});
            if (num[0] > 0)
                Console.WriteLine("Error: num[0] is " + num[0] + " instead of 0");
            num[0]++;
        }
    }

    static int[] num = new int[1];
}
Under Windows, this never produces output. Under all .NET 4.0 versions of Mono, however, it produces output like this:
Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 2 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 2 instead of 0 Error: num[0] is 3 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 2 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 2 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 2 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 2 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 1 instead of 0 Error: num[0] is 2 instead of 0
The mono developer who has looked at this claims that this is valid: that the thread getting the lock can, at the inner Parallel.For(), call a Wait, at which point that thread can be allocated to another of the outer Parallel.For() iterations, which then goes back into the lock because it's the same thread that holds the lock.
I think that this is just plain wrong, because it destroys any usefulness of TPL. But I can't find anything in the MSDN documentation that backs me up.
What do you think?
Thanks again,
John
Thursday, January 6, 2011 9:34 PM -
The mono developer who has looked at this claims that this is valid: that the thread getting the lock can, at the inner Parallel.For(), call a Wait, at which point that thread can be allocated to another of the outer Parallel.For() iterations, which then goes back into the lock because it's the same thread that holds the lock.
I think that this is just plain wrong, because it destroys any usefulness of TPL. But I can't find anything in the MSDN documentation that backs me up.
What do you think?
Thanks again,
John
To me, this seems like a bug in Mono's implementation.
Parallel.For is going to potentially cause the current thread to block (I believe, internally, using a wait handle's WaitOne call).
In the Windows implementation, from my understanding, the calling thread will potentially get used to handle some of the tasks required for the internal Parallel.For(..) call, which makes complete sense and is a nice optimization. This happens because, and only because, the inner loop tasks are children of the outer task in the task hierarchy (they're created with AttachedToParent set). However, the thread running inside Foo (inside the lock) will not get used by another call to Foo, until the inner Parallel.For completes. This would completely prevent multiple instances from entering your lock.
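As a minimal stand-alone illustration of what an attached child implies for Wait (just the observable behavior, not the actual Parallel.For internals):

using System;
using System.Threading.Tasks;

class AttachedChildSketch
{
    static void Main()
    {
        var parent = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 4; i++)
            {
                int captured = i;
                Task.Factory.StartNew(
                    () => Console.WriteLine("child " + captured),
                    TaskCreationOptions.AttachedToParent);
            }
        });

        parent.Wait(); // does not return until every attached child has finished
        Console.WriteLine("parent and all children done");
    }
}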
It sounds like Mono's implementation is reusing the main calling thread for tasks that are not children of the currently executing task. This allows a second task to enter your lock (since it's the same thread ID), which in turn causes the behavior you're seeing. I, personally, feel that is a bug, and not appropriate. From the sound of it, Mono's implementation is ignoring the task hierarchy entirely.
-Reed
PS - I've moved this thread to the Parallel Extensions forum. Stephen Toub or one of the other TPL architects may be able to give you a more complete answer, including links to documentation that describes why Mono is behaving incorrectly...
Reed Copsey, Jr. - http://reedcopsey.com
If a post answers your question, please click "Mark As Answer" on that post and "Mark as Helpful".
- Marked as answer by Dr John P. Costella Thursday, January 6, 2011 10:23 PM
Thursday, January 6, 2011 9:57 PM (Moderator) -
Thanks Reed -- yes, thank you greatly for moving it to the Parallel Extensions forum. I welcome advice from the gurus.
This is my latest bugzilla comment:
-- snip --
I don't believe it makes any sense for a nested Parallel.For to be able to break a lock. In general, if I write lock (someObject) { Baz(); } then Baz should not be able to break my lock and allow another thread in. You're saying that if Baz calls something which calls something else which happens to have a Task.Wait in it, then my lock can fail. That would break encapsulation and would mean that locks are unsafe on any Mono runtime that supports .NET 4, since you don't a priori know whether a method you call might use TPL and explode your lock.
-- end snip --
If you look at the bugzilla report, you'll see that I haven't convinced Jeremie that this really is a bug -- he has some experimental fixes but doesn't feel they need to be deployed (and he has just closed my bug report again).
I found this bug because I've been increasingly using TPL in my code (which scours through tens of gigabytes of DICOM and RTOG files and processes them in parallel, at multiple levels), and I've been getting all sorts of crashes (sometimes right down to runtime GDI errors with C stack traces, etc.). I would have been pulling my hair out if I had any, because I haven't been able to figure out which locks I've been missing. I never suspected that Mono could be losing the locks. It was only when I was refactoring my resampling classes that I realized something was rotten.
If any of the TPL gurus could liaise with the good Mono people and get some agreement at a high level, it would be a great help. I love the TPL and think it is the best thing for scientific programming since sliced bread, but I'm equally committed to Mono. I'll be devastated if there's a schism between them.
Thanks
John
- Edited by Dr John P. Costella Thursday, January 6, 2011 10:44 PM formatting
Thursday, January 6, 2011 10:40 PM -
Interesting thread.
Two comments.
#1
Reed is correct about the TPL inlining behavior. In short, if a thread blocks, that thread is burned and won't be reused for anything else (other than normal OS-level things like APCs and the like, or if a SynchronizationContext overrides the CLR's waiting functionality, but that's very advanced and is not used by TPL). The only "caveat" to this that TPL implements is that if that "blocking" happens by one thread Wait'ing on a Task (e.g. using Task.Wait/WaitAll/etc.), then if the task being waited on hasn't started executing yet, it may be possible for us to run the task on the current thread ("inlining" it) rather than simply blocking for another thread to pick it up. TPL doesn't do any other kind of pumping (e.g. when waiting for one task, it never goes back to the scheduler to say "do you have any other tasks you'd like me to run in the meantime?"). That's optional functionality we've considered as an opt-in kind of thing, but it doesn't exist in TPL in .NET 4 today, and it would be a bad behavior to do automatically when blocking or waiting, leading to all kinds of reentrancy problems like the one you mention.
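For what it's worth, here is a tiny stand-alone illustration of that inlining case (just the observable behavior, not the scheduler internals): the thread calling Wait may end up running a not-yet-started task itself.

using System;
using System.Threading;
using System.Threading.Tasks;

class InliningSketch
{
    static void Main()
    {
        int waitingThread = Thread.CurrentThread.ManagedThreadId;

        var task = new Task(() =>
            Console.WriteLine("Task ran on thread " + Thread.CurrentThread.ManagedThreadId +
                              " (Wait was called on thread " + waitingThread + ")"));

        task.Start();
        task.Wait(); // if the task hasn't started executing yet, it may be inlined onto this thread
    }
}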
With regards to Parallel.For, Parallel.For itself generates some number of tasks and waits for them, so if you have a nested set of loops, e.g.
Parallel.For(0, N, i =>
{
...
Parallel.For(0, M, j =>
{
});
...
});

When the thread from the outer loop starts processing the inner parallel loop, it will almost certainly end up inlining one of the tasks generated by the Parallel.For. That way, we're able to reuse the outer thread rather than just blocking while waiting for another thread to come along to process the loop. It would be deadly for nested parallelism like this if we always blocked, as you could need a significant number of threads in the pool in order to make forward progress. This is similar to a problem that OpenMP has with nested loops.
#2
For ConcurrentDictionary, you might consider, instead of using a ConcurrentDictionary<TKey,TValue>, using a ConcurrentDictionary<TKey,Lazy<TValue>>. Lazy<TValue> is by default thread-safe, using a lock to protect the invocation of the initializer, e.g.
public static ResamplerSet GetResamplerSet(IFull o)
{
var key = ComputeKey(o);
return resamplerSets.GetOrAdd(key, (newKey) =>
new Lazy<ResamplerSet>(() =>
new ResamplerSet(o))).Value;
}

If the key is already present, this will just return the previously initialized Lazy<> (the only additional cost you'll be paying is for the allocation of the closure object and delegate due to passing in the "o" value by closure, and you might even be able to avoid that). If the key isn't present, then you'll initialize a new Lazy<> instance (which itself is very cheap, at least in release builds), and accessing its Value property will result in instantiating your result object. If there's contention, multiple Lazy<> instances could get created, but again that's very cheap, and only one will end up getting published to the dictionary... then all of the threads will get the same Lazy<> instance, which will provide the locking you need to ensure that the delegate function only runs once, and all threads will return the same instance.
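A quick way to convince yourself of that (a hypothetical stand-alone check, not your real types) is to count how many times the expensive initializer actually runs under contention:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class LazyGetOrAddCheck
{
    static int constructions;
    static readonly ConcurrentDictionary<int, Lazy<string>> cache =
        new ConcurrentDictionary<int, Lazy<string>>();

    static string Build(int key)
    {
        Interlocked.Increment(ref constructions);
        Thread.Sleep(10); // simulate expensive construction
        return "value for " + key;
    }

    static void Main()
    {
        Parallel.For(0, 1000, i =>
        {
            string value = cache.GetOrAdd(42, key => new Lazy<string>(() => Build(key))).Value;
        });

        // Extra Lazy<> wrappers may be allocated and thrown away, but Build runs exactly once.
        Console.WriteLine("Build ran " + constructions + " time(s)");
    }
}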
I hope that helps. Let me know if I misunderstood anything. Also, John, I'd be interested in knowing more about your overall usage of TPL, your experience with the library, what kinds of things you're building, what kinds of issues you're running into, suggestions for the future, in particular considering your statement "You can see that I love the TPL, right?! :) :)" which of course warms my heart :) If you'd like to email me privately, please feel free to do so at stoub at microsoft dot com.
Thanks.
- Marked as answer by Dr John P. Costella Friday, January 7, 2011 12:11 AM
Thursday, January 6, 2011 11:48 PM (Moderator) -
Hi Stephen, many thanks for your answers.
#1
That is all exactly how I envisaged it.
If you're friendly with the Mono TPL people, I'd be grateful if you could lean on them to avoid the "bad behavior". With the locks broken, my code throws random exceptions, stack dumps, and segmentation faults. Fortunately it's still in development phase. (I can always shift all the data to the Windows partition and run it there, right?) :)
#2
Ah yes that is the correct solution. I've been using Lazy extensively (don't want to load up a hundred thousand CT scans until you want them, right?) but haven't yet got all its use patterns burned into my brain PROM.
I will email you as suggested.
Cheers,
John
Friday, January 7, 2011 12:24 AM -
OK, I implemented the ConcurrentDictionary (but in doing so I had to take out the cute feature whereby a request with a destination dimension smaller than the one an existing object was created for could reuse that object; bad design anyway; the key is now a Tuple<double, double, int>):
public static ResamplerSet GetResamplerSet(IFull o)
{
    var key = ComputeKey(o);
    return resamplerSets.GetOrAdd(key, newKey => new Lazy<ResamplerSet>(() => new ResamplerSet(newKey))).Value;
}

static ConcurrentDictionary<Tuple<double, double, int>, Lazy<ResamplerSet>> resamplerSets =
    new ConcurrentDictionary<Tuple<double, double, int>, Lazy<ResamplerSet>>();
Works beautifully. :) :)
Of course, the Mono runtime lock bug means that occasionally the ConcurrentDictionary gets broken by the Parallel.For in the ResamplerSet constructor, e.g.
Exception(s) occurred : . [
  Exception(s) occurred : . [
    System.InvalidOperationException: The initialization function tries to access Value on this instance
      at System.Lazy`1[PeterMac.Magic+ResamplerSet].InitValue () [0x00000] in <filename unknown>:0
      at System.Lazy`1[PeterMac.Magic+ResamplerSet].get_Value () [0x00000] in <filename unknown>:0
      at PeterMac.Magic.GetResamplerSet (IFull o) [0x00000] in <filename unknown>:0
      at PeterMac.Magic`1[PeterMac.Double4].Resample (IFull o) [0x00000] in <filename unknown>:0
      at PeterMac.Extensions.Resample[Double4] (IFull o) [0x00000] in <filename unknown>:0
      at PeterMac.PixmapMagic`1+ResampleableY[PeterMac.Double4]..ctor (PeterMac.Pixmap`1 source, PeterMac.Pixmap`1 destination, Double factor, Int32 i) [0x00000] in <filename unknown>:0
      at PeterMac.PixmapMagic`1+<ResampleY>c__AnonStoreyB[PeterMac.Double4].<>m__12 (Int32 i) [0x00000] in <filename unknown>:0
      at System.Threading.Tasks.Parallel+<For>c__AnonStorey44.<>m__47 (Int32 index, System.Threading.Tasks.ParallelLoopState state) [0x00000] in <filename unknown>:0
      at System.Threading.Tasks.Parallel+<For>c__AnonStorey45.<>m__48 (Int32 i, System.Threading.Tasks.ParallelLoopState s, System.Object l) [0x00000] in <filename unknown>:0
      at System.Threading.Tasks.Parallel+<For>c__AnonStorey46`1[System.Object].<>m__4B () [0x00000] in <filename unknown>:0
      at System.Threading.Tasks.TaskFactory+<StartNew>c__AnonStorey13.<>m__B (System.Object o) [0x00000] in <filename unknown>:0
      at System.Threading.Tasks.Task.InnerInvoke () [0x00000] in <filename unknown>:0
      at System.Threading.Tasks.Task.ThreadStart () [0x00000] in <filename unknown>:0
  ]
]
  at System.Threading.Tasks.Parallel.HandleExceptions (IEnumerable`1 tasks, System.Threading.Tasks.ExternalInfos infos) [0x00000] in <filename unknown>:0
  at System.Threading.Tasks.Parallel.For[Object] (Int32 from, Int32 to, System.Threading.Tasks.ParallelOptions options, System.Func`1 init, System.Func`4 action, System.Action`1 destruct) [0x00000] in <filename unknown>:0
  at System.Threading.Tasks.Parallel.For (Int32 from, Int32 to, System.Threading.Tasks.ParallelOptions options, System.Action`2 action) [0x00000] in <filename unknown>:0
  at System.Threading.Tasks.Parallel.For (Int32 from, Int32 to, System.Threading.Tasks.ParallelOptions options, System.Action`1 action) [0x00000] in <filename unknown>:0
  at System.Threading.Tasks.Parallel.For (Int32 from, Int32 to, System.Action`1 action) [0x00000] in <filename unknown>:0
  at PeterMac.PixmapMagic`1[PeterMac.Double4].ResampleY (PeterMac.Pixmap`1 source, Double factor) [0x00000] in <filename unknown>:0
  at PeterMac.Extensions.ResampleY[Double4] (PeterMac.Pixmap`1 source, Double factor) [0x00000] in <filename unknown>:0
  at PeterMac.PixmapMagic`1[PeterMac.Double4].Resample (PeterMac.Pixmap`1 source, Double factor) [0x00000] in <filename unknown>:0
  at PeterMac.Extensions.Resample[Double4] (PeterMac.Pixmap`1 source, Double factor) [0x00000] in <filename unknown>:0
  at PeterMac.BitmapMagic.Resample (System.Drawing.Bitmap source, Double factor) [0x00000] in <filename unknown>:0
  at PeterMac.Extensions.Resample (System.Drawing.Bitmap source, Double factor) [0x00000] in <filename unknown>:0
  at PeterMac.MainClass.Main (System.String[] args) [0x00000] in <filename unknown>:0
Thanks again.
John
Friday, January 7, 2011 5:05 AM