This post is about how I created a utility that lets me send PowerShell commands to a specific Azure instance without publicly exposing a port or using Azure Connect.

First, I think I should explain why I did this.

In performance analysis, the more information you can capture, the better. Profiles, process dumps, logs, and so on are all essential to finding and fixing performance problems. In a typical application, performance analysis happens at the same time as other development work, which makes it very important to automate as much as possible. For instance, the infrastructure used to performance test WCF measures memory, throughput, and latency and captures profiles on a daily basis. When a regression is detected, we can instantly pull up the logs and/or profiles and compare them to find out what happened. I wanted to be able to achieve the same thing in my new role working on Azure applications.

The WCF performance lab has a private network with dedicated network hardware. We can use a tool like PsExec to run commands, and there is a network share for copying logs and other files. A typical test goes something like this:

  1. Copy bits to server and client machines
  2. Set up profilers and/or log collection agents on server(s)
  3. Start services on the server(s)
  4. Start client(s)
  5. Wait for test execution to complete
  6. Close profilers/log collection agents
  7. Copy files from client and server machines to network share
  8. Analyze results and upload to a database/website for reporting

Azure presents new challenges in several areas: copying bits on and off the machines, starting/stopping profilers or agents, and controlling clients. Deploying the cspkg to Azure is how we get bits onto the server machines. The clients are specialized roles that download test bits from blob storage and have communication ports open to accept commands for running tests. On the server instances, however, we need a way to get files on and off the machines and to send commands without changing the code that was deployed in the cspkg, because we want to test that code exactly as it would be deployed in production.

The solution starts with a previous post on how to use Azure tables as a transport mechanism, which allows me to create a WCF service that does not need to open a port. The lowest-impact approach I could think of was to add a startup task to the csdef and a few binaries to the cspkg that start a Windows service on the Azure instance.
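As a rough sketch, the csdef addition looks something like this (the role and script names here are just illustrations; the script would install and start the Windows service):

```xml
<ServiceDefinition name="MyAzureService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup>
      <!-- Installs and starts the remote PowerShell Windows service.
           "simple" blocks role startup until the task completes;
           "elevated" is needed to install a service. -->
      <Task commandLine="InstallRemotePowerShell.cmd"
            executionContext="elevated"
            taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```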

While I could create a WCF service that accepts specific commands like "iisreset" or "start test", that is not a very flexible solution. I chose PowerShell not only because it is very powerful but also because it allows for custom hosting.

In my code, I started with the sample linked above and made a few changes. The sample is meant to give you a custom shell that runs in a console window, so it does things like change the foreground/background colors, get/set cursor positions, and get the dimensions of the console window. All of that has to be adjusted. There is also an interface called IHostUISupportsMultipleChoiceSelection. If a command asks for verification, such as before overwriting a file, the custom host can decide not to support that: in all of my Prompt methods I throw NotImplementedException. That makes the HostUserInterface class pretty simple:
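A sketch of what that class can look like (the members follow PSHostUserInterface from the custom-host sample; the bodies here are my illustration rather than the exact repository code):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Management.Automation;
using System.Management.Automation.Host;
using System.Security;
using System.Text;

// Host UI that captures all output in a StringBuilder instead of a console.
internal class HostUserInterface : PSHostUserInterface
{
    private readonly StringBuilder output = new StringBuilder();
    private readonly RawUserInterface rawUi = new RawUserInterface();

    public StringBuilder Output
    {
        get { return this.output; }
    }

    public override PSHostRawUserInterface RawUI
    {
        get { return this.rawUi; }
    }

    // All of the Write* overloads funnel into the StringBuilder.
    public override void Write(string value) { this.output.Append(value); }

    public override void Write(ConsoleColor foregroundColor, ConsoleColor backgroundColor, string value)
    {
        // There is no console, so colors are ignored; only the text is captured.
        this.output.Append(value);
    }

    public override void WriteLine(string value) { this.output.AppendLine(value); }
    public override void WriteErrorLine(string value) { this.output.AppendLine(value); }
    public override void WriteDebugLine(string message) { this.output.AppendLine(message); }
    public override void WriteVerboseLine(string message) { this.output.AppendLine(message); }
    public override void WriteWarningLine(string message) { this.output.AppendLine(message); }
    public override void WriteProgress(long sourceId, ProgressRecord record) { }

    // A headless host cannot ask the user anything, so every prompt throws.
    public override Dictionary<string, PSObject> Prompt(string caption, string message, Collection<FieldDescription> descriptions)
    { throw new NotImplementedException(); }

    public override int PromptForChoice(string caption, string message, Collection<ChoiceDescription> choices, int defaultChoice)
    { throw new NotImplementedException(); }

    public override PSCredential PromptForCredential(string caption, string message, string userName, string targetName)
    { throw new NotImplementedException(); }

    public override PSCredential PromptForCredential(string caption, string message, string userName, string targetName, PSCredentialTypes allowedCredentialTypes, PSCredentialUIOptions options)
    { throw new NotImplementedException(); }

    public override string ReadLine() { throw new NotImplementedException(); }
    public override SecureString ReadLineAsSecureString() { throw new NotImplementedException(); }
}
```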

As you can see, the only thing that's really implemented here is the Write method, which just appends to a StringBuilder. I use this to capture all the output from a command. When a command finishes executing, I empty the contents of the StringBuilder and pass them back in the response.

PSListenerConsoleSample is copied pretty much verbatim from the example. It starts the UI. In my case, I never call the Run method, only the Execute method. The WCF service inherits from PSListenerConsoleSample:
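A sketch of the service and the two methods described below (the contract name, the string request/response shape, and the HostUI accessor are my naming assumptions; the real code is in the linked repository):

```csharp
using System.ServiceModel;
using System.Text;

[ServiceContract]
public interface IRemoteShell
{
    // Sends one PowerShell command string; the reply is the captured output.
    [OperationContract]
    string SendCommand(string command);
}

public class RemoteShellService : PSListenerConsoleSample, IRemoteShell
{
    public string SendCommand(string command)
    {
        // Execute comes from the sample's PSListenerConsoleSample; it runs the
        // command in the hosted runspace, and all output lands in the
        // HostUserInterface's StringBuilder.
        this.Execute(command);
        return this.DumpOutput();
    }

    // Returns everything written since the last command and empties the buffer.
    private string DumpOutput()
    {
        StringBuilder buffer = this.HostUI.Output; // HostUI: assumed accessor
        string result = buffer.ToString();
        buffer.Length = 0;
        return result;
    }
}
```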

The DumpOutput method just gets the current string from the StringBuilder and then empties it. SendCommand takes the string in the request, executes it, and returns the output in the response.

One interesting thing I noticed is that in the example, the output of a command typically goes to the console window that is running the PowerShell host. Since mine runs as a Windows service, I thought I would have to find a way to redirect that output. Much to my happiness, PowerShell already redirects it and I didn't have to do anything.

The RawUserInterface is the last thing that needs changes. You have to replace the data it normally gets from the Console. Here is my version:
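Sketched out, it replaces the console queries with fixed values (the 1000 by 9999 buffer is the size I chose; the window size and other placeholder values here are my illustration):

```csharp
using System;
using System.Management.Automation.Host;

// Supplies fixed values in place of Console queries, since there is no
// console attached when running as a Windows service.
internal class RawUserInterface : PSHostRawUserInterface
{
    public override ConsoleColor BackgroundColor { get; set; }
    public override ConsoleColor ForegroundColor { get; set; }
    public override Coordinates CursorPosition { get; set; }
    public override int CursorSize { get; set; }
    public override Coordinates WindowPosition { get; set; }
    public override string WindowTitle { get; set; }

    // A large fixed buffer; console clients should use at least this width.
    public override Size BufferSize
    {
        get { return new Size(1000, 9999); }
        set { }
    }

    public override Size MaxPhysicalWindowSize { get { return this.BufferSize; } }
    public override Size MaxWindowSize { get { return this.BufferSize; } }
    public override Size WindowSize
    {
        get { return new Size(1000, 50); }
        set { }
    }

    public override bool KeyAvailable { get { return false; } }
    public override void FlushInputBuffer() { }

    // Anything that requires a real console buffer is unsupported.
    public override BufferCell[,] GetBufferContents(Rectangle rectangle)
    { throw new NotImplementedException(); }

    public override KeyInfo ReadKey(ReadKeyOptions options)
    { throw new NotImplementedException(); }

    public override void ScrollBufferContents(Rectangle source, Coordinates destination, Rectangle clip, BufferCell fill)
    { throw new NotImplementedException(); }

    public override void SetBufferContents(Coordinates origin, BufferCell[,] contents)
    { throw new NotImplementedException(); }

    public override void SetBufferContents(Rectangle rectangle, BufferCell fill)
    { throw new NotImplementedException(); }
}
```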

The buffer size I chose is 1000 columns wide by 9999 rows high. If you connect to this PowerShell host with a console client, be sure to set the console's buffer to at least the same width.
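If your console client is a .NET console application, one way to do that is in the client itself (a sketch; note that Console.SetBufferSize throws if the requested buffer is smaller than the current window, and it only works on Windows):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Match the host's 1000-column buffer so output lines are not
        // wrapped differently on the client side.
        Console.SetBufferSize(1000, 9999);
        Console.WriteLine("Buffer: {0}x{1}", Console.BufferWidth, Console.BufferHeight);
    }
}
```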

The rest of the code should be pretty easy to understand if you've been following the posts. There is a console client and server and a Windows service. In subsequent posts, I'll cover how to add this to an Azure role and how to get profiles.

The code is available here: https://github.com/dmetzgar/azure-remote-powershell