Some time back I worked on a case where the spool count would grow very high. I have been meaning to blog about it for a while, and today I decided to process this queued item awaiting in my mind.

I had a customer who was seeing a surge in the spool count whenever a particular orchestration instance started, and the spool would return to its original count only once that instance completed. As we all know, a high spool count impacts BizTalk processing, so the customer was quite worried about it.

Looking into the orchestration, we found a Loop shape that could iterate 40K times or more. Inside this loop, they were calling another orchestration, passing a BizTalk message as a ref parameter.

We found that with a ref parameter, BizTalk must keep the messages around until the calling orchestration completes; that is why the spool count keeps growing until the loop finishes. Once the orchestration instance completes, the BizTalk cleanup jobs remove the messages from the spool. This is by design.

To demonstrate this, we built a simple orchestration with a loop inside, iterating only 50 times.

[Image: orchestration with a Loop shape iterating 50 times]

Inside the loop, we call another orchestration, passing a BizTalk message as a ref parameter.

[Image: Call Orchestration shape inside the loop, passing the message as a ref parameter]
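The setup can be sketched roughly as below. The orchestration and parameter names (`ProcessMessage`, `msgData`) are hypothetical, and the designer-only configuration is shown as comments; this is a sketch of the pattern, not compilable XLANG/s:

```
// Called orchestration "ProcessMessage" -- parameter configured in
// Orchestration View > Parameters (hypothetical names):
//   Identifier: msgData    Type: Message    Direction: Ref

// Caller: Loop shape condition plus Call Orchestration shape
counter = 0;
while (counter < 50)
{
    // Call Orchestration shape: ProcessMessage(msgData)
    // Because msgData is passed by ref, the engine must assume the
    // callee may replace it, so each iteration's message stays pinned
    // in the spool until the calling instance completes.
    counter = counter + 1;
}
```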

If we test this while monitoring the spool count in Performance Monitor (perfmon), we see the spool keep growing until the loop completes; once the orchestration instance finishes, the BizTalk cleanup jobs quickly clear the spool.
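If memory serves, the relevant counter lives under the "BizTalk:Message Box:General" category, so something like this PowerShell one-liner should let you watch it outside the perfmon UI (the instance name depends on your server and MessageBox database, and this obviously requires a BizTalk box):

```
Get-Counter '\BizTalk:Message Box:General(*)\Spool Size' -SampleInterval 5 -Continuous
```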

[Image: perfmon graph showing the spool count rising during the loop and dropping after the instance completes]

As a workaround for the ref parameter, we can pass the message as an in parameter and add an out parameter of the same type in the called orchestration to return the result. With this approach, I see that spool growth is lower than with ref, but for very large loops the spool count can still grow substantially.

[Image: called orchestration configured with in and out parameters instead of ref]
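The in/out variant looks roughly like this (again with hypothetical names, and designer configuration as comments). Note that in XLANG/s a message can only be assigned inside a Construct Message shape, so the caller needs one to copy the result back:

```
// Called orchestration parameters (hypothetical):
//   msgIn   : Message    Direction: In
//   msgOut  : Message    Direction: Out

// Caller, inside the loop:
//   Call Orchestration shape: ProcessMessage(msgIn, msgOut)
//   Construct Message shape assigns the result back for the next pass:
construct msgIn
{
    msgIn = msgOut;
}
```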

Also, where possible, we can try using variables instead, since a variable can be reused across iterations, whereas a message cannot.
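As a rough sketch of that idea, assuming a serializable .NET variable such as an XmlDocument (names are hypothetical):

```
// Variable (Orchestration View > Variables):
//   Identifier: workItem    Type: System.Xml.XmlDocument

// Expression shape inside the loop -- the same variable is reused
// in place, instead of creating a new message on every iteration:
workItem.LoadXml(resultXml);
// ... hand workItem to a helper class, or use it to construct a
// single message once, after the loop completes.
```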

So the moral of the story: avoid large loops that use the Call Orchestration shape with a ref parameter. We would suggest using smaller loops, or removing the call altogether by using an Expression shape, a custom .NET object, etc.