Scaling and Queuing PowerShell Background Jobs


A couple of months ago I had asked the PowerShell MVPs for suggestions on blog topics. Karl Prosser, one of our awesome MVPs, brought up the topic of scaling and queuing background jobs.

The scenario is familiar: You have a file containing a bunch of input that you want to process and you don’t want to overburden your computer by starting up hundreds of instances of PowerShell at once to process them.

After playing around for about an hour on Friday afternoon, here is what I came up with… This example assumes you have a text file containing the names of many event logs and you want to get the content of each log.

# How many jobs we should run simultaneously

$maxConcurrentJobs = 3;

# Read the input and queue it up

$jobInput = get-content .\input.txt

$queue = [System.Collections.Queue]::Synchronized( (New-Object System.Collections.Queue) )

foreach($item in $jobInput)
{
    $queue.Enqueue($item)
}

# Function that pops input off the queue and starts a job with it

function RunJobFromQueue
{
    if( $queue.Count -gt 0)
    {
        $j = Start-Job -ScriptBlock {param($x); Get-WinEvent -LogName $x} -ArgumentList $queue.Dequeue()

        Register-ObjectEvent -InputObject $j -EventName StateChanged -Action { RunJobFromQueue; Unregister-Event $eventsubscriber.SourceIdentifier; Remove-Job $eventsubscriber.SourceIdentifier } | Out-Null
    }
}

# Start up to the max number of concurrent jobs
# Each job will take care of running the rest

for( $i = 0; $i -lt $maxConcurrentJobs; $i++ )
{
    RunJobFromQueue
}

The English version of this script is:

  • Given a file input.txt containing the names of many event logs, queue up each line of input
  • Kick off a small number of jobs to process one line of input each. Each job just gets the content of a particular log.
  • When a job finishes (determined by the StateChanged Event), start a new job with the next piece of input from the queue
  • Clean up the jobs corresponding to the event subscription so at the end we only have jobs containing event data

The “Synchronized” code you see when defining the queue is just for good measure to make sure that only one job can access it at a time.
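To see what that wrapper gives you, here is a minimal sketch (the values are arbitrary) showing that the synchronized queue behaves like an ordinary FIFO queue, just with serialized access:

```powershell
# Wrap a plain queue so that concurrent callers are serialized
$q = [System.Collections.Queue]::Synchronized( (New-Object System.Collections.Queue) )
1..3 | ForEach-Object { $q.Enqueue($_) }   # queue now holds 1, 2, 3
$q.Dequeue()                               # returns 1 (first in, first out)
$q.Count                                   # returns 2
```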

Have something you want to see on the PowerShell blog? Leave a comment… Can’t promise we’ll get to everything but it’s nice to see what everyone is interested in.


Travis Jones
Windows PowerShell PM
Microsoft Corporation

Leave a Comment
  • I have a script that pings all the servers in our domain and reports on how many are online/offline - a crude step towards identifying orphaned AD objects. This can take a long time with over 1000 servers when the script checks one computer at a time. How could I use background jobs to speed this up? ... Say ping 10 or 50 machines at a time?

  • Nice article.

    I had to reformat the code sample to understand what was happening. ; is not the easiest character to spot. :-)

    Did not know about the register-objectEvent command. What are the gains vs using wait-job?

  • @Ken:

    You could probably just use the same script - you just need to change the following line:

    $j = Start-Job -ScriptBlock {param($x); Get-WinEvent -LogName $x} -ArgumentList $queue.Dequeue()

    Change the bit in the script block from Get-WinEvent to a ping command.  You could use either the DOS ping command or .NET's Ping class, depending on what you want to do with the information. The following shows how to use the .NET Ping class from PowerShell:

    $ping = new-object System.Net.NetworkInformation.Ping

    $Reply = $ping.send($strComputer)
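To act on the result, you can check the reply's Status property; a small sketch building on the snippet above:

```powershell
$ping = New-Object System.Net.NetworkInformation.Ping
$reply = $ping.Send($strComputer)
# Status is Success when the host answered the echo request
if ($reply.Status -eq [System.Net.NetworkInformation.IPStatus]::Success) {
    "$strComputer is online ($($reply.RoundtripTime) ms)"
}
```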

  • Thanks for this elegant script. I spent a good bit of a day trying to create this same functionality and never got around to finishing it up. Could you also make the input file a CSV that contains a ScriptBlock and one or more param columns so the RunJobFromQueue could process a list of different Posh expressions?

  • <#

    Thanks a lot for your script and the invitation to raise a question. My question would be:

    How does event forwarding work from a background runspace to the host runspace?


    Register-EngineEvent -SourceIdentifier Progress -Action { Write-Host $Event.MessageData } | Out-Null

    # Forwarding of events from a background runspace to the host runspace does not work as expected:

    $Runspace = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspace()

    $Runspace.Open()

    $Pipeline = [System.Management.Automation.PowerShell]::Create()

    $Pipeline.Runspace = $Runspace

    $Pipeline.AddScript({

       # forward events named "Progress" back to host

       Register-EngineEvent -SourceIdentifier Progress -Forward;

       $percent = 0;

       while ($percent -lt 100) {

           $percent += 20;

           # raise progress event and wait a second

           New-Event -SourceIdentifier Progress -MessageData "$percent% of the background job completed"  | Out-Null;

           Start-Sleep -Seconds 1;

       }

    }) | Out-Null

    $Pipeline.BeginInvoke() | Out-Null



    # Forwarding of events from a job to the job owner works as expected:

    Start-Job -Name "Progress" -ScriptBlock {

       # forward events named "Progress" back to job owner

       Register-EngineEvent -SourceIdentifier Progress -Forward

       $percent = 0

       while ($percent -lt 100) {

           $percent += 20

           # raise progress event and wait a second

           New-Event -SourceIdentifier Progress -MessageData "$percent% of the job completed"  | Out-Null

           Start-Sleep -Seconds 1

       }

    } | Out-Null

    Write-Host 'done'

  • May I suggest a blog topic:  forwarding of engine events from a background runspace to its host runspace.

  • Here is an example where I run a script on a list of computers.  It is easy to identify the ones that failed to connect.  While I'm not exactly pinging them, I get the result that you are looking for.  For this example let's say $computers is the collection of computers from a file or Active Directory.  The script block command is just a gpupdate, but it could be any PowerShell command.

    $computers | %{Invoke-Command -ComputerName $_ -AsJob -ScriptBlock { gpupdate.exe /force /wait:120 }}

    # this will count the failed connections (offline computers, give it 30 sec to account for timeouts)

    get-job | group-object -property state

    #List the failures

    get-job -state Failed | ft location

    #clean up failures

    get-job -state Failed | Remove-Job

    #See everything left

    get-job
    #see the results

    get-job | Receive-Job

    #clean up

    get-job | Remove-Job

  • Hello,

    Very Helpful Post! I wrote a function that I believe may be helpful to other people reading this post. The function expands on the information in the post and contains the following additional functionality:

    1.) Enforces a Job Runtime time limit.

    2.) Accepts a scriptblock variable containing code to run as the background job.

    3.) Accepts a scriptblock variable containing code to run to log the output of the background job.

    4.) Accepts a collection of items you would like to process.

    The code for the function will likely wrap on this page so please make sure to fix the formatting before you try to run it.


    Function Manage-Jobs($InputToProcess,$MaxConcurrentJobs,$Queue,$JobScriptBlock,$LogOutputScriptBlock,$MaxAllowedJobRuntime){
        If ($Queue -eq $null){
            $Queue = [System.Collections.Queue]::Synchronized((New-Object System.Collections.Queue))
            $InputToProcess | %{$Queue.Enqueue($_)}
            $LoopCounter = $MaxConcurrentJobs
        }
        else{$LoopCounter = 1}
        for( $i = 0; $i -lt $LoopCounter; $i++ ){
            if($Queue.Count -gt 0){
                Write-Progress -activity ("Spawning Asynchronous Jobs") -Status ($Queue.Count.tostring() + " Items Remaining In The Queue.")
                $Job = Start-Job -ScriptBlock $JobScriptBlock -ArgumentList $Queue.Dequeue()
                #Create Event Subscriber for Job StateChange Event
                Register-ObjectEvent -InputObject $Job -EventName StateChanged -MessageData $Queue -Action {
                    If ($Sender.State -eq "Completed"){
                        Write-Host ("Job " + $Sender.ID + " State Has Changed")  -ForegroundColor Green
                        #Log Job Output By Passing the Output Into The LogOutputScriptBlock For Processing
                        Invoke-Command -ScriptBlock $LogOutputScriptBlock -ArgumentList ($Sender | Receive-Job)
                    }
                    else{Write-Host ("Job " + $Sender.ID + " State Has Changed")  -ForegroundColor Red}
                    Manage-Jobs -Queue $Event.MessageData -JobScriptblock $JobScriptBlock -LogOutputScriptBlock $LogOutputScriptBlock `
                    -MaxAllowedJobRuntime $MaxAllowedJobRuntime
                    $Sender | Remove-Job
                    #Remove Event Subscriber For Completed Job
                    Unregister-Event $eventsubscriber.SourceIdentifier
                    Remove-Job $eventsubscriber.SourceIdentifier
                } | Out-Null
                #Create Job Timeout Timer
                $Timer = New-Object System.Timers.Timer
                $Timer.Interval = $MaxAllowedJobRuntime * 1000
                $Timer.Enabled = $True
                #Create Event Subscriber for Job Timeout Timer Elapsed Event
                Register-ObjectEvent -InputObject $Timer -EventName Elapsed -MessageData $Job -Action {
                    if (get-job | ?{$_.InstanceID -eq $Event.MessageData.InstanceID}){
                        Write-Warning ("Job " + $Event.MessageData.ID + " has exceeded the max allowed runtime and will be terminated")
                        $Event.MessageData | Stop-Job
                    }
                    Unregister-Event $eventsubscriber.SourceIdentifier
                    Remove-Job $eventsubscriber.SourceIdentifier
                } | Out-Null
            }
        }
        If ($Queue.Count -eq 0){
            Write-Progress -activity ("Spawning Asynchronous Jobs") -Status "Asynchronous Jobs Have Been Started For All Queue Items." `
            -CurrentOperation "Please Wait For The Remaining Jobs To Complete"
        }
    }#End Function Manage-Jobs

    #Begin Example Use Case

    $JobScriptBlock = {param($ToOutput)
        $Rand = New-Object System.Random
        $TimeToSleep = $Rand.Next(1,30)  # sleep a random number of seconds (range is illustrative)
        start-sleep -seconds $TimeToSleep
        write-Output("<Job Slept For $TimeToSleep Seconds, and the following value was passed in: " + $ToOutput.tostring() +">")
    }

    $LogOutputScriptBlock = {param($Output)
        write-host ("The Log Scriptblock was called and was passed[" + $Output +"]") -ForegroundColor Blue -BackgroundColor White
    }
    $MaxConcurrentJobs = 5

    $MaxAllowedJobRuntime = 15

    Manage-Jobs -InputToProcess @(1..15) -MaxConcurrentJobs $MaxConcurrentJobs -JobScriptblock $JobScriptBlock `

    -LogOutputScriptBlock $LogOutputScriptBlock -MaxAllowedJobRuntime $MaxAllowedJobRuntime

    #End Example Use Case

  • @Ken above.

    For my ping monitor problem I used code by Dr. Tobias Weltner. It's a simple powershell function that takes an input list and uses "test-connection" as a series of parallel jobs.

    I was able to ping over 150 servers and get results back and formatted in under 30 seconds.

    See Dr. Tobias Weltner "network pack v3."

    I learned a lot about using parallel jobs just by reading his script.

    function Test-Online {

    # created by Dr. Tobias Weltner, MVP PowerShell

    param(

    [Parameter(Mandatory=$true, ValueFromPipeline=$true)]

    $ComputerName,

    $throttleLimit = 300

    )

    begin { $list = New-Object System.Collections.ArrayList }

    process {

    $null = $list.Add($ComputerName)

    }

    end {

    & {

    do {

    $number = [Math]::Min($list.Count, $throttleLimit)

    $chunk = $list.GetRange(0, $number)

    $job = Test-Connection $chunk -Count 1 -AsJob

    $job | Wait-Job | Receive-Job | Where-Object { $_.StatusCode -eq 0 } | Select-Object -ExpandProperty Address

    Remove-Job $job

    $list.RemoveRange(0, $number)

    } while ($list.Count -gt 0)

    } | Sort-Object { [System.Version]$_ }

    }

    }

  • Hello,

    I wanted to post one more comment regarding a "Gotcha" related to the code in the original blog entry and my earlier post....

    Neither of these code snippets would execute properly if they were run as a script (ex, executing a .ps1 file) due to a scope issue. The exception to this would be if they were either cut+pasted or dot sourced from within an interactive PowerShell console session.


    I will be referencing the code snippet from the original blog article entry for illustration purposes. The event handler (defined in the code block passed to the -Action parameter of the cmdlet "Register-ObjectEvent") is calling the function "RunJobFromQueue". When the event handler is fired, it is executed outside of the script scope, and therefore does not have any access to anything defined within the script scope.

    If the script is not executed by either dot-sourcing or cut & pasting into an interactive powershell console session the function "RunJobFromQueue" will be created in the script scope and the event handler will not be able to access/call the function when the event is fired.

    The reason it works when you are either dot-sourcing or cut & pasting the script into an interactive powershell console session is that the function would then be created in the Global (a.k.a root) scope. Anything within the PowerShell session has access to information stored within the Global scope, including the event handler. If the "RunJobFromQueue" function is declared in the global scope, the code will run without issue.

    Although it typically goes against good coding practices, you can declare items outside of the current scope within PowerShell. In this particular scenario, I believe it is a justified use of this capability (I'm a systems analyst, not a developer, so take it for what it is worth).

    If the original blog entry code snippet function declaration was modified as follows:

    Function Global:RunJobFromQueue




    The code snippet could then be executed as a script without incident, as this forces the function to be declared within the Global scope.
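As a minimal illustration of the difference (hypothetical function name):

```powershell
# Run as a .ps1: this declaration lives in the script scope,
# so an event handler's -Action block cannot resolve it.
function Test-Scope { "script scope" }

# The Global: scope modifier places the function in the global scope,
# where event handler actions can also resolve it.
function Global:Test-Scope { "global scope" }
```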



  • Here's an alternative using a named semaphore. Be mindful of named semaphore requirements and caveats (unmanaged code execution, kernel object namespaces, system privileges, et al).

    $scaleFactor = 2

    $numberOfWorkers = $scaleFactor * $env:NUMBER_OF_PROCESSORS

    $workerSemaphoreName = "workerSemaphore"

    $workerSemaphore = new-object System.Threading.Semaphore($numberOfWorkers, $numberOfWorkers, $workerSemaphoreName)

    # Self-contained worker script block.

    $worker = {

       param($semaphoreName, $data)

       # Do something with $data.


       # Release semaphore.

       [System.Threading.Semaphore]::OpenExisting($semaphoreName).Release() | out-null


    # For each line in .\input.txt, obtain semaphore and start new job.

    # $workerSemaphore.WaitOne() will block while semaphore count is zero.

    get-content .\input.txt | foreach-object { $workerSemaphore.WaitOne(); start-job -scriptblock $worker -argumentlist $workerSemaphoreName, $_ }

  • Thanks for sharing this information, as I believe it is very useful for the community. However, when putting the original code posted by Travis into a ps1 file and adding the Global scope to the function name (per Scriptabit), it will only kick off the $maxConcurrentJobs and then exit, never starting the jobs for the remaining tasks in the queue. When I copy and paste the code inside the ps1 file to the shell it executes as expected and continues to work off the queue until all the items have been completed. If anyone has any suggestions or ideas I would appreciate the feedback. I'm new to PowerShell so feel free to point out things most of you probably already know.

  • I personally hardly ever use jobs.  I've found them rather time consuming to use and of limited benefit.  Instead I use a function called Split-Job, created by Arnoud Jansveld.  He recently posted it to PoshCode and I’ve uploaded changes I’ve made to it.  You can find it at (His post is at  More info can be found on his blog at:

    When I was at PowerShell Deep Dive, I was surprised no one had really heard of it, so I thought it would be a great add on to this blog article.  I posted once a while ago, but I guess it didn’t make it through the filters.

    Some of the magic of Split-Job, is that it is pipeline friendly.  Jobs aren't.  You just pipe a set of objects to Split-Job and it takes them and uses hosted runspaces to run a script for each object in a runspace/pipeline.

    For example:  If you had 1000 computers you wanted to check the OS and Service Pack you could run the following without Split-Job:

    Get-Content c:\temp\Servers.txt | Foreach-Object { Get-WmiObject -ComputerName $_ -Class Win32_OperatingSystem | Select CSName, Caption, CSDVersion } | Export-CSV C:\Temp\ServersOSSP.csv

    If it takes 0.15 seconds per system, that's 150 seconds or 2 1/2 minutes.  If 10 of those systems are currently down and WMI has to timeout on them, then you're up to about 450 seconds (30 seconds each timeout) or 7 1/2 minutes.

    Using Split-Job is very easy in this case, just insert Split-Job before the Foreach-Object and wrap the Foreach in a script block.  You might want to sort the output since it won't be in order anymore (it outputs in the order it finishes, not the order it came in).

    Get-Content c:\temp\Servers.txt | Split-Job -MaxPipelines 20 { Foreach-Object { Get-WmiObject -ComputerName $_ Win32_OperatingSystem | Select CSName, Caption, CSDVersion } } | Sort CSName | Export-CSV C:\Temp\ServersOSSP.csv

    This will probably take about 45 seconds to 1 minute to run for 1000 servers if 10 of those servers are down.  As you can see it is faster using Split-Job with 10 servers down than to not use it with all the servers up (45 to 60 seconds versus 150 seconds).  If I need to be doing something else while this is running, I just open another copy of PowerShell.  I use it every day at work and it makes it so I can focus on what I need to get done rather than how to get it done quickly.

  • Hi Shaun,

    I'm not sure why the global scope isn't working for you (I would have to see your code to troubleshoot further). However, I have some good news and bad news for you. The good news is: I found a way to do this in a script without explicitly declaring global scope variables. The bad news is: this is going to be a long code post. I rewrote my Manage-Jobs function to make it a bit more robust and to address some issues that I found.

    1.) The new manage-jobs function accepts additional arguments for both the job scriptblock and the log output scriptblock.

    2.) I found that the unregister-event cmdlet can intermittently hang when called from within the action of the event it is trying to unregister. When performing load testing on my script and stressing the test machine, I found that it would hang nearly 50% of the time, causing the script to lock on the same input data. After moving the call to unregister-event outside of the event, the issue appears to be resolved.

    3.) Please keep in mind that PowerShell can only process one event at a time and pauses the script during event processing, so the job timeout timer built into manage-jobs is a rough guideline. The length of the events being processed (ex: the logging scriptblock) can affect how quickly jobs exceeding the time limit are forced to stop.

    4.) The function will now wait for jobs it starts to either finish or be terminated (more script friendly).

    5.) Please do not skip setting the “name” parameter. The function uses this parameter to monitor its own event subscribers / jobs so that it can tell the difference between jobs / event subscribers managed by it vs. ones started by another process or earlier in the script.

    6.) The Example use scenario creates jobs that wait between 10 and 100 seconds, those running longer than ~ 1 minute should be terminated.

    As always, please check for things like format, line wrapping issues and the like when copying code from a web page.

    [If any of the moderators would like to remove the post with my first manage-jobs function, it would be appreciated]


    Function Manage-Jobs($Name, $InputToProcess,$MaxConcurrentJobs,$JobScriptBlock,$LogOutputScriptBlock,$MaxAllowedJobRuntimeInMinutes,$LogOutputScriptBlockArgs,$JobScriptBlockArgs){
        $Queue = [System.Collections.Queue]::Synchronized((New-Object System.Collections.Queue))
        $InputToProcess | %{$Queue.Enqueue($_)}
        #Begin Cleanup Event Subscriber Script Block
        register-engineevent ($Name + "_Cleanup_Event_Subscriber") -action {
            try{
                Unregister-Event -SourceIdentifier $Event.MessageData.SourceIdentifier
                Remove-Job $Event.MessageData.SourceIdentifier
            }
            catch{$_.Exception | Out-Host}
        } | Out-Null
        #End Cleanup Event Subscriber Script Block
        #Begin Start Job Event Subscriber Script Block
        register-engineevent ($Name + "_Start_Job") -action {
            try{
                If ($Event.MessageData.Queue.Count -gt 0){
                    $Job = Start-Job -ScriptBlock $Event.MessageData.JobScriptBlock -Name $Event.MessageData.Name `
                    -ArgumentList @($Event.MessageData.Queue.DeQueue(), $Event.MessageData.JobScriptBlockArgs)
                    #Create Job Timeout Timer
                    $Timer = New-Object System.Timers.Timer
                    $Timer.Interval = $Event.MessageData.MaxAllowedJobRuntimeInMinutes * 60000
                    $Timer.AutoReset = $False
                    $Timer.Enabled = $True
                    #Begin Event Subscriber for Job Timeout Timer Elapsed Event
                    $TimerEventJob = Register-ObjectEvent -InputObject $Timer -EventName Elapsed -MessageData @{"Job"=$Job;"Name"=$Event.MessageData.Name} -Action {
                        if (get-job | ?{$_.InstanceID -eq $Event.MessageData.Job.InstanceID -and $_.state -eq "Running"}){
                            Write-Warning ("Job " + $Event.MessageData.Job.ID + " has exceeded the max allowed runtime and will be terminated")
                            try{$Event.MessageData.Job | Stop-Job}
                            catch{$_.Exception | Out-Host}
                        }
                        #Event Will Be Unregistered Via Job State Change Event
                    }
                    #End Event Subscriber For Job Timeout Timer Elapsed Event
                    $StateChangeJobMessageData = $Event.MessageData
                    $StateChangeJobMessageData.TimerEventJob = $TimerEventJob
                    #Create Event Subscriber for Job StateChange Event
                    Register-ObjectEvent -InputObject $Job -EventName StateChanged -MessageData $StateChangeJobMessageData -Action {
                        #Cleanup The Timer Event and Remove It From the MessageData Hash
                        Get-EventSubscriber -SourceIdentifier $Event.MessageData.TimerEventJob.Name | %{
                            New-Event -SourceIdentifier ($Event.MessageData.Name + "_Cleanup_Event_Subscriber") -MessageData $_ | Out-Null
                        }
                        If ($Sender.State -eq "Completed"){
                            Write-Host ("Job " + $Sender.ID + " State Has Changed")  -ForegroundColor Green
                            #Log Job Output By Passing the Output Into The LogOutputScriptBlock For Processing
                            Invoke-Command -ScriptBlock $Event.MessageData.LogOutputScriptBlock -ArgumentList $($Sender | Receive-Job), $Event.MessageData.LogOutputScriptBlockArgs
                        }
                        else{
                            Write-Host ("Job " + $Sender.ID + " State Has Changed")  -ForegroundColor Red
                        }
                        New-Event -SourceIdentifier ($Event.MessageData.Name + "_Start_Job") -MessageData $Event.MessageData | out-null
                        $Sender | Remove-Job
                        New-Event -SourceIdentifier ($Event.MessageData.Name + "_Cleanup_Event_Subscriber") -MessageData $eventsubscriber | Out-Null
                    } | Out-Null
                    #End Create Event Subscriber for Job StateChange Event
                    If ($Event.MessageData.Queue.Count -eq 0){
                        New-Event -SourceIdentifier ($Event.MessageData.Name + "_Queue_Empty") | out-null
                    }
                }#End Queue Count Check
            }
            catch{$_.exception | Out-Host}
        } | Out-Null #End Start Job Event Subscriber Script Block
        for( $i = 0; $i -lt $MaxConcurrentJobs; $i++ ){
            if($Queue.Count -gt 0){
                $StartJobMessageData = @{"Name"=$Name;
                    "Queue"=$Queue;
                    "JobScriptBlock"=$JobScriptBlock;
                    "LogOutputScriptBlock"=$LogOutputScriptBlock;
                    "MaxAllowedJobRuntimeInMinutes"=$MaxAllowedJobRuntimeInMinutes;
                    "JobScriptBlockArgs"=$JobScriptBlockArgs;
                    "LogOutputScriptBlockArgs"=$LogOutputScriptBlockArgs}
                New-Event -SourceIdentifier ($Name + "_Start_Job") -MessageData $StartJobMessageData | out-null
            }
        }
        #Keep the function from proceeding until the Queue is Empty
        Wait-Event -SourceIdentifier ($Name + "_Queue_Empty") | Remove-Event
        #Wait For Running Jobs
        while (get-job -state Running | ?{$_.Name -eq $Name})
        {
            get-job -state Running | ?{$_.Name -eq $Name} | Wait-Job -timeout 2 | Out-Null
        }
        #Wait For Event Jobs
        while (Get-EventSubscriber | ?{($_.SourceObject.Name -eq $Name) -and ($_.Action.State -ne "Complete") -and ($_.Action.State -ne "Failed")})
        {
            Start-Sleep -Seconds 2
        }
        #Cleanup Support Event Handlers
        "_Start_Job", "_Cleanup_Event_Subscriber" | %{
            try{
                Unregister-Event -SourceIdentifier ($Name + $_)
                Remove-Job ($Name + $_)
            }
            catch{$_.Exception | Out-Host}
        }
    }#End Function Manage-Jobs


    #Begin Example Use Case

    [scriptblock]$JobScriptBlock = {param($QueueItem, $JobSBArgs)
        $Rand = New-Object System.Random
        $TimeToSleep = $Rand.Next(10,100)  # jobs wait between 10 and 100 seconds
        start-sleep -seconds $TimeToSleep
        Write-Output("[Job Scriptblock]`tThe following Queue Item was passed: <" + $QueueItem +">")
        Write-Output("[Job Scriptblock]`tThe following Arg was passed in: <" + $JobSBArgs + ">")
        Write-Output("[Job Scriptblock]`tJob Slept For $TimeToSleep Seconds>")
    }

    [scriptblock]$LogOutputScriptBlock = {param($JobOutput, $LogSBArgs)
        Write-Host ("[Log Scriptblock]`tThe following Arg was passed in:`t<" + $LogSBArgs + ">") -ForegroundColor Blue -BackgroundColor Gray
        Write-Host "[Log Scriptblock]`tThe following Job Output was passed:" -ForegroundColor Blue -BackgroundColor Gray
        $JobOutput | %{Write-Host ("`t" + $_) -ForegroundColor DarkGreen -BackgroundColor Gray}
    }
    $MaxConcurrentJobs = 10

    $MaxAllowedJobRuntimeInMinutes = 1

    $JobSeriesName = "TestSeries"

    $JobArgs = "Job Test Argument"

    $LogArgs = "Log Test Argument"

    Manage-Jobs -Name $JobSeriesName -InputToProcess @(1..9) -MaxConcurrentJobs $MaxConcurrentJobs `

    -JobScriptblock $JobScriptBlock -LogOutputScriptBlock $LogOutputScriptBlock `

    -MaxAllowedJobRuntimeInMinutes $MaxAllowedJobRuntimeInMinutes -LogOutputScriptBlockArgs $LogArgs `

    -JobScriptBlockArgs $JobArgs

    #End Example Use Case

  • Hi Scriptabit & Shaun,

    I would suggest giving Split-Job a try ( on PoshCode - See Above).  Since it isn't using eventing and isn't using jobs, you won't see some of the downsides with eventing and jobs you currently see.  It was initially written for PowerShell V1, before the current job system existed (the current version needs V2).  It uses hosted runspaces inside of PowerShell (using InvokeAsync on the Pipeline) to get the same effect as jobs.  It actually uses a synchronized queue in each pipeline that is synchronized to the queue in the main process of Split-Job.  Each of the pipelines pops an entry off the queue on its own without a dependency on the main process.  The main process' job is to monitor the different pipelines and the status of the queue, and to take the output from the pipelines and put it back into the main pipe.

    There are a number of features that make it useful.

    -Can specify variable names to import to the different pipelines

    -Can specify function names to import to the different pipelines

    -Max Duration for the entire input to be processed - you still get the output of the ones that finish

    -Progress indicator showing percent completed and estimated time remaining.

    -InitializeScript parameter – scriptblock to execute in each pipeline before starting to process the input (Import-Module...)

    -Handles CTRL-C and ESC gracefully in both PowerShell.exe and PowerShell_ise.exe. (uses finally block to do a StopAsync on all running hosted pipelines)

    -Allows abort processing in PowerShell.exe by hitting ESC - it cancels running hosted pipelines, but lets the output that has been generated still be handled by the rest of the pipeline.

    In this example it runs for a maximum of 45 seconds; for each input a pipeline will sleep from 1 to 30 seconds; it uses a maximum of 10 pipelines, creates a new object and puts it back into the pipeline, sorts the output, and finally formats all the output as a table.

    1..100 | Split-Job { % { $sleep = Get-Random -Minimum 1 -Maximum 30; Sleep $sleep; New-Object PSObject -Property @{'Input'=$_;'Sleep'=$sleep} } } -MaxDuration 45 -MaxPipelines 10 | Sort Sleep | Format-Table -AutoSize
