Do you use LazyInitMode.AllowMultipleExecution?


In an effort to release simple, streamlined APIs, we spend a lot of time poring over every aspect of our types.

One of the types that we know is getting used a lot both internally and externally is LazyInit<T>. 

One of LazyInit<T>’s constructors takes a LazyInitMode enum that allows you to initialize a value in one of three modes (a usage sketch follows the list):

  • EnsureSingleExecution – which ensures that if multiple threads attempt to initialize a LazyInit<T> concurrently, only one of the initializer delegates will execute
  • AllowMultipleExecution – which allows multiple threads to execute the initializer delegate and race to set the value of the LazyInit<T>
  • ThreadLocal – which allows multiple threads to execute the initializer delegate and stores a local copy of the value for each thread
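
For illustration, here is roughly what using the type might look like.  This is only a sketch: the post tells us that one constructor takes a LazyInitMode, that there is an initializer delegate, and that the value is read through Value, but the namespace, the exact constructor signature, and the surrounding Cache/BuildTable code are assumptions made for the example.

    // A minimal sketch, not the actual API surface: it assumes LazyInit<T> lives in
    // System.Threading and pairs the initializer delegate with the LazyInitMode in
    // its constructor; the Cache class and BuildTable helper are invented for the example.
    using System;
    using System.Threading;

    class Cache
    {
        // Shared between threads.  With AllowMultipleExecution, several threads may run
        // the delegate concurrently on first use; one result is published and all
        // callers observe that same value.
        private LazyInit<int[]> _table =
            new LazyInit<int[]>(() => BuildTable(), LazyInitMode.AllowMultipleExecution);

        public int Lookup(int index)
        {
            // Reading Value triggers initialization the first time; later reads just
            // return the already-published array.
            return _table.Value[index];
        }

        private static int[] BuildTable()
        {
            // Keep the initializer cheap and side-effect free: a losing thread's work
            // is simply discarded, so duplicated executions should cost almost nothing.
            var table = new int[256];
            for (int i = 0; i < table.Length; i++) table[i] = i * i;
            return table;
        }
    }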

AllowMultipleExecution is motivated primarily by performance.  If we allow the threads to race, we don’t need to take a lock and will never need to block any of the threads attempting to initialize the LazyInit<T>.  Additionally, it’s theoretically useful if you have an operation in your initializer delegate that you don’t want to run while a lock is held.
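
To make the “let the threads race” idea concrete, here is the general lock-free publication pattern that mode implies, written against a hypothetical RacyLazy<T> type.  This is not LazyInit<T>’s actual implementation, just a sketch of the technique: every thread that arrives before a value is published runs the factory, and Interlocked.CompareExchange picks the winner without ever blocking anyone.

    // Sketch of the race-without-a-lock technique only; not LazyInit<T>'s real code.
    using System;
    using System.Threading;

    class RacyLazy<T> where T : class
    {
        private readonly Func<T> _factory;
        private T _value;  // null until some thread publishes a result

        public RacyLazy(Func<T> factory) { _factory = factory; }

        public T Value
        {
            get
            {
                T value = _value;
                if (value == null)
                {
                    // No lock is taken, so every thread that reaches this point runs
                    // the factory, possibly concurrently with others.
                    T candidate = _factory();

                    // Atomically publish our result only if nobody beat us to it.
                    // Losers discard their candidate and adopt the winner's value,
                    // so all callers agree on a single result.
                    value = Interlocked.CompareExchange(ref _value, candidate, null)
                            ?? candidate;
                }
                return value;
            }
        }
    }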

The former motivation is validated by a quick and dirty perf test: AllowMultipleExecution typically runs between one and two times as fast as EnsureSingleExecution for sufficiently small initializer delegates (longer-running delegates typically see no improvement and also waste more CPU time, since the work some of the threads produced will be discarded).  While 2x is great, to see significant perf gains you’d essentially need a lot (thousands?) of LazyInit<T> instances that could all potentially be initialized by multiple threads.  Remember that this only affects the first time a LazyInit<T> is initialized, so calling Value many times afterward would not affect performance.
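
For reference, the kind of quick-and-dirty measurement described above could look something like the following.  The LazyInit<T> surface is assumed as in the earlier sketch, the thread and instance counts are arbitrary, and any numbers it prints will obviously vary with delegate cost, core count, and hardware.

    // Rough benchmark sketch; it creates many LazyInit<int> instances and has a few
    // threads race to initialize all of them, once per mode.
    using System;
    using System.Diagnostics;
    using System.Threading;

    static class LazyInitPerf
    {
        static void Main()
        {
            Measure(LazyInitMode.EnsureSingleExecution);
            Measure(LazyInitMode.AllowMultipleExecution);
        }

        static void Measure(LazyInitMode mode)
        {
            const int Count = 100000;
            var items = new LazyInit<int>[Count];
            for (int i = 0; i < Count; i++)
                items[i] = new LazyInit<int>(() => 42, mode);  // deliberately tiny delegate

            var sw = Stopwatch.StartNew();

            // A handful of threads walk the whole array so that most instances see
            // concurrent first-time initialization attempts.
            var threads = new Thread[4];
            for (int t = 0; t < threads.Length; t++)
            {
                threads[t] = new Thread(() =>
                {
                    int sum = 0;
                    for (int i = 0; i < Count; i++) sum += items[i].Value;
                });
                threads[t].Start();
            }
            foreach (var thread in threads) thread.Join();

            Console.WriteLine("{0}: {1} ms", mode, sw.ElapsedMilliseconds);
        }
    }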

While the latter motivation is important, we have few concrete scenarios. 

On top of all this, the scenarios that might benefit from this mode are heavily limited.  Only under a specific set of circumstances would you choose to use AllowMultipleExecution: 

  • You’re sharing a LazyInit<T> instance between threads.
  • You are sure you won’t throw an exception in your initializer delegate (the exception semantics for this are very strange, e.g. if one thread fails and one succeeds, which wins the race?).
  • Your initializer delegate doesn’t rely on some thread-local state that can result in different generated values.
  • The speed of your initializer delegate is just slow enough that taking a lock might block another thread for too long.
  • The speed of your initializer delegate is just fast enough that it won’t result in multiple CPUs wasting tons of cycles.

Given all the usage restrictions and the limited scenarios that would see a performance improvement, we’re unsure whether this mode is useful.  Before we make any decisions on whether to keep it or remove it, we thought it best to reach out to you first and ask:  are you using AllowMultipleExecution?  If not, would you?

Comments (4)
  • I use the same technique as AllowMultipleExecution in a few places where I think the collision is unlikely but the code still needs to be thread safe.

    In general, there is also a concern about nested locks (deadlock).

  • I have no opinion at the moment about whether or not AllowMultipleExecution should be included (although the weird exception situation suggests not) but I just wanted to applaud the attitude being taken here:

    1) Designing an API carefully and trying to streamline it as much as possible (instead of "kitchen sink" syndrome).

    2) *Explaining* to the audience what the pros and cons of a particular decision might be.

    3) Asking the community for input.

    Bravo!

    Jon

  • While we don't use LazyInit itself, we use a similar coding pattern for the following reasons:

    - The operation is slow enough we want to cache the result

    - The operation cannot block

    - Usually, there is only 1 thread doing this

    - We don't care if it returns different values on different threads

    This is used purely as a speed boost to operations that run a 2nd time.

  • I think it is not unlikely to have thousands of LazyInit<T> instances. Imagine a tree data structure whose children are lazily initialized by a cheap operation which is idempotent.
