I should finish up at least an overview of what I built for a hardware simulation infrastructure during Win8, so here goes...
At the core of everything, and essential to it all, is my "QA root bus" driver. It acts as a replacement for the kernel's PnP root driver (though it is itself a proper root-enumerated PnP driver), in the sense that all resource and bus-translation requests for simulated hardware get satisfied here. But it holds a great deal more than that.
After Win8 went to RTM, I added one more thing. As we presented many years ago at the DDC, before Windows 7 shipped, we used the DCOM support in the Device Simulation Framework to build a number of test drivers for KMDF DDI tests. You perhaps noticed that DSF is not in the Windows 8 WDK, and I'm not going to say much more than that. Being proactive, I basically "borrowed" the code from DSF that allows kernel drivers to be treated as user-mode COM objects (but not the code that allows COM events to flow from kernel back to user mode, as our tests don't use it) and stuffed it in here. I made a few other changes to go along with it. For instance, I made it register the drivers when they report themselves to it, instead of requiring an external regsvr32-like utility. I also tweaked things enough that you can use this and still use DSF, though I'm no longer using DSF myself. I'll probably get back to that in a future article, as it's another project that might be of some interest somewhere.
We used to use devcon everywhere, and got a lot of surprises, especially around things like it returning before the devices it installed had started. We use TAEF for our test apps; it's the standard for automated testing in Windows, and rightly so. But device installation gets logged somewhere else, meaning your test job has to go out and collect those logs (and then you get the whole history of the machine, not just the part you produced). My list of things that were annoying, time-consuming, or otherwise less than optimal about doing it this way could go on longer than this sentence.
A related problem was continually updating a single devnode with multiple drivers or multiple configurations, which meant doing an update for every test. An update actually results in re-checking, and usually re-staging, the driver package, and all that activity slows down your tests for no good reason.
So: I've got a bus driver that reports anything I want it to. I just create a bunch of hardware IDs, one for each unique test configuration, and put all of them, along with my root bus, into one huge "mother of all INFs". We catalog and sign all of our content in our setup job after we put it on the machine, which allows easy insertion of private builds, special cases to work around known bugs, and so on. So let the setup job preinstall said INF after the signing's done, create the root devnode for the root bus, and voilà: easy installation of everything. Tell the root bus to report an ID, and the drivers get installed if needed and settings adjusted if needed; and if you want to check the devnode state, you know its name without any searching. Fast and convenient. I like that.
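To make the shape of this concrete, here's a minimal sketch of what a fragment of such a "mother of all INFs" might look like. Every name here (the hardware IDs, install sections, and strings) is invented for illustration; only the overall pattern, one model entry per unique test configuration alongside the root bus itself, comes from the text above.

```inf
; Hypothetical fragment - all device names and hardware IDs are invented.
[Version]
Signature   = "$WINDOWS NT$"
Class       = Sample
ClassGuid   = {78A1C341-4539-11d3-B88D-00C04FAD5171}
Provider    = %ProviderName%
CatalogFile = qatestbus.cat

[Manufacturer]
%ProviderName% = QaDevices,NTamd64

[QaDevices.NTamd64]
; The root bus itself, plus one model entry per test configuration.
%QaRootBus.Desc% = QaRootBus_Install, ROOT\QA_ROOT_BUS
%TestCfg1.Desc%  = TestCfg1_Install,  QAROOT\TEST_CONFIG_0001
%TestCfg2.Desc%  = TestCfg2_Install,  QAROOT\TEST_CONFIG_0002
; ...one line per configuration; install sections omitted here.

[Strings]
ProviderName   = "QA Test Team"
QaRootBus.Desc = "QA Root Bus"
TestCfg1.Desc  = "Simulated device, test configuration 1"
TestCfg2.Desc  = "Simulated device, test configuration 2"
```

Because every configuration is a distinct hardware ID under the same preinstalled package, telling the root bus to report an ID is all it takes to get the matching driver installed, as described above.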
The final weapon in my arsenal is sharing memory and events between the user-mode test code (and UMDF drivers) and the kernel. This allows faster and more reliable transfer and sharing of information between these pieces, and much easier coding, letting us concentrate on diagnosability and control. The alternative is defining huge sets of IOCTLs and sending everything along I/O paths, with all the requisite packing, unpacking, and buffer checks, which is particularly unattractive when we're often doing deliberately nasty things to those very paths and their participants. Yes, this would be a horrible product idea, and I know it. But I'm not developing the product, I'm testing it; if I can do that more effectively and more cheaply, I ought to be able to, so I have.
Memory is shared by sending an IOCTL to a non-PnP driver, which locks down the buffer and associates it with a GUID. Kernel callers ask for the memory shared under that GUID and get a pointer to it and its size; they also provide a callback routine that tells them when the memory is no longer available. I track the process that shared the memory, and if it exits without sending the IOCTL to let it all go, we get notified of the termination, tell all the kernel participants the memory is going away, and unlock it before letting the process continue terminating. So there are no bugchecks, even on premature termination of the original process. That particular driver also holds a few other useful toys of mine; it's a registry filter, for instance. It's been a busy 7+ years, I suppose. I use GUIDs for the events too, although I could (and do) share them by name. After Win8, in fact, I beefed this up so that the GUID is used to create a name, so now everyone can find an event no matter where they live.
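The lifecycle described above can be sketched in plain, portable C. This is not the real driver code: the actual mechanism lives in a non-PnP kernel driver and locks down user buffers, while this user-mode model just keeps a table mapping a GUID (represented here as a string) to a buffer, with claimants leaving teardown callbacks that fire before the region is revoked. All function and type names are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_SHARES 8
#define MAX_CLAIMS 4

typedef void (*teardown_cb)(void *context);

typedef struct {
    const char *guid;                /* identity of the shared region     */
    void       *buffer;
    size_t      size;
    teardown_cb callbacks[MAX_CLAIMS];
    void       *contexts[MAX_CLAIMS];
    int         claim_count;
    int         in_use;
} share_entry;

static share_entry g_shares[MAX_SHARES];

static share_entry *find_share(const char *guid)
{
    for (int i = 0; i < MAX_SHARES; i++)
        if (g_shares[i].in_use && strcmp(g_shares[i].guid, guid) == 0)
            return &g_shares[i];
    return NULL;
}

/* The sharing IOCTL: the owning process publishes a buffer under a GUID. */
int share_memory(const char *guid, void *buffer, size_t size)
{
    for (int i = 0; i < MAX_SHARES; i++) {
        if (!g_shares[i].in_use) {
            g_shares[i] =
                (share_entry){ guid, buffer, size, {0}, {0}, 0, 1 };
            return 0;
        }
    }
    return -1;
}

/* A kernel participant asks for the region under its GUID, and leaves a
 * callback so it learns when the memory is no longer available. */
int claim_memory(const char *guid, void **buffer, size_t *size,
                 teardown_cb cb, void *context)
{
    share_entry *e = find_share(guid);
    if (!e || e->claim_count == MAX_CLAIMS)
        return -1;
    e->callbacks[e->claim_count] = cb;
    e->contexts[e->claim_count]  = context;
    e->claim_count++;
    *buffer = e->buffer;
    *size   = e->size;
    return 0;
}

/* Revocation: run on the release IOCTL or on process termination. Every
 * claimant is notified before the entry goes away (in the real driver,
 * before the locked pages are released), so nobody holds a stale pointer. */
void revoke_memory(const char *guid)
{
    share_entry *e = find_share(guid);
    if (!e)
        return;
    for (int i = 0; i < e->claim_count; i++)
        e->callbacks[i](e->contexts[i]);
    e->in_use = 0;
}
```

The key design point modeled here is ordering: the teardown callbacks always run before the region is unlinked, which is exactly why premature process termination doesn't bugcheck anyone.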
So in testing UMDF hardware access, one of these shared structures defines multiple sets of resource requirements for multiple devices, and has space for reporting what the assigned resources are. There are also some additions so we can test cases that might cause a simulated interrupt storm and handle them gracefully (but still catch a storm if it occurs when we aren't expecting one). The test app fills the structure in; the simulated device's driver reads it, looks at its bus address (in its PnP capabilities) to figure out which device it is, and reports its resources. When they're assigned, it records the assigned values in the shared region (which I call a whiteboard), and now the UMDF test driver can check whether the resources UMDF told it belong to it are actually the ones the PDO was assigned. Not that the framework has ever actually had it wrong, but I like to be prepared when I can.
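A whiteboard along those lines might be laid out something like this. The field and function names are hypothetical (the real structure surely differs); the sketch only shows the flow from the text: slots keyed by bus address, requested versus assigned resources, and the UMDF-side comparison.

```c
/* Hypothetical layout of the shared "whiteboard" for UMDF hardware-access
 * tests. The test app fills in the requested resources, the simulated
 * device's driver records what was actually assigned, and the UMDF test
 * driver compares. All names here are invented for illustration. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define WB_MAX_DEVICES 4

typedef struct {
    uint32_t bus_address;            /* matches the PDO's PnP bus address */
    uint64_t requested_bar_base;     /* what the test asks to be reported */
    uint32_t requested_bar_size;
    uint32_t requested_irq;
    uint64_t assigned_bar_base;      /* recorded once resources assigned  */
    uint32_t assigned_irq;
    uint32_t expect_interrupt_storm; /* nonzero: a storm is intentional   */
} wb_device_slot;

typedef struct {
    uint32_t       device_count;
    wb_device_slot devices[WB_MAX_DEVICES];
} whiteboard;

/* The simulated device locates its slot by its bus address. */
wb_device_slot *wb_find_slot(whiteboard *wb, uint32_t bus_address)
{
    for (uint32_t i = 0; i < wb->device_count; i++)
        if (wb->devices[i].bus_address == bus_address)
            return &wb->devices[i];
    return NULL;
}

/* The UMDF test driver's check: did the framework hand us the same
 * resources the PDO was actually assigned? */
int wb_resources_match(const wb_device_slot *slot,
                       uint64_t umdf_bar_base, uint32_t umdf_irq)
{
    return slot->assigned_bar_base == umdf_bar_base &&
           slot->assigned_irq == umdf_irq;
}
```

Because the structure lives in shared memory, both sides read and write it directly; no IOCTL round trips are needed to get the assigned values back to the test.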
I've been busy extending all of that in recent days, but I'll save that for another time. I need to get back to doing what I get paid for- stuff like what I've just been talking about.
In closing, holiday wishes to anyone who's actually read along this far. Thanks for being there!