Well, today, as I babysit a new simulator I'm integrating into one of my older WDF test suites, I'll tell you about how I put together what I refer to as my "simulation infrastructure".
I don't mean user versus kernel- obviously something has to be in the kernel, since that's the only place you're allowed to do most I/O operations. DSF lets you do a lot in user mode, but only because it provides all the support for doing so- in a way similar (once you're abstracted enough, anyway) to the way UMDF lets you control devices in user mode- and that's a lot of infrastructure for any one person to duplicate. Now, DSF was my initial choice for my simulation needs, but it turned out it didn't fully support everything I needed- and integrating that support into the kernel side of DSF would bring me right back to making it all work in user mode as well. My initial goal was to test hardware access features in UMDF, not to design and build a Cadillac, or add features to one. I don't mean that as a slam at DSF, either; I simply had to stay focused on what was necessary versus what would be nice to have. That theme- or excuse, or rationale, or whatever you'd like to label it- is going to be ever-present in this set of articles. Being the guy who made Windows 8 late or underpowered wasn't an option I was particularly interested in exploring.
So, moving past the underlying reason for these choices, on to the choices themselves: in DSF, as some readers may have noticed, the device simulator is a lower filter on the driver stack- so for USB, the EHCI simulator sits at the bottom of the stack, a redirected version of the OS EHCI driver is the function driver, and the rest of the stack and support is the same as it is for a real EHCI controller. That is a sound choice, particularly for DSF, which had some goals my own project did not share- but no need to get into those here.
My choice was different- my simulators live in a stack by themselves; each one reports a PDO, along with its hardware IDs and so forth, so the controlling driver stack looks the same as it does for real hardware. One factor driving this initially was that if we could use virtualization, we wouldn't need to do any redirection- or at the very least we could do it via detouring- and that meant we could use production or even third-party drivers on our simulated devices. A related factor in this decision was test asset inventory control, and it still applies: with this architectural choice, a single INF works with real or simulated hardware, since they both look the same. Also, we sometimes use multiple test drivers for the same hardware and update the driver for different tests- so again, only needing a single INF is a benefit.
The choice of a single PDO per simulated-device FDO was the simpler one to begin with- we do have cases with multiple devices of the same type, so in some respects the presence of the intervening stack is a bit unnatural. However, it turned out that by doing this we helped make some parts of Plug-and-Play even better than they already were: our tests typically surprise-remove the simulated device rather than the stack it is reporting, which is similar to disabling an entire bus- and that is not a particularly common scenario.
The tests I'm working on now do in fact have a single simulator reporting multiple PDOs- but that essentially turns it into a bus of only one device type, which meant I had to add a programming interface for it, adding to the work needed to produce a simulator versus the earlier model. So from a packaging standpoint, the choice I made previously makes writing new simulators easier- and while our team has reasons for making multiple copies of a single device (interrupt sharing scenarios, for instance), that's probably not terribly important to most potential users of an infrastructure such as this.
The mechanism I initially chose here I called Simulator Aware Hardware Access, or SAHA for something with a lot fewer syllables. The concept was to package the redirection logic, a way to differentiate real and simulated hardware, and any related glue functions into a library the driver could be built with. In the end a series of functions (C-style, although as usual I use C++ everywhere) was defined and a set of header files created to describe them- I went with long names: SimulatorAwareHardwareAccess.H declares the function signatures, while UsingSimulatorAwareHardwareAccess.H is included by the driver- the latter header #defines READ_REGISTER_ULONG to SahaReadDWordRegister, for instance- and then an import library (for an export driver) links them up.
The implementation evolved over time (at roughly 3 years between releases, Windows cycles have time for this sort of thing to happen)- at first, a pure export driver was used, with the somewhat unimaginative name of Saha.sys. At device start time, the Saha code would send a special IRP_MN_QUERY_INTERFACE to its PDO, and if the PDO was from a simulator, it returned a set of VTABLE pointers to its redirection interfaces. But as I began deploying tests using that early infrastructure, we started having trouble finding unassigned system resources when reporting requirements to the real system arbiters. In other words, our simulators could say their PDOs needed particular resources, but the OS wouldn't let us have them because the PCI bus, ACPI, and others had already claimed them for redistribution- and as a result, no simulation.
In my usual straightforward fashion, I decided that meant I needed my own arbiter. That would give me better control over which resources a simulator could use. After all, the simulated resources aren't "real", so why ask for real ones I won't actually need to use- particularly when they were getting to be scarce? That decision (and I'll get back to it; it's a key part of the overall picture) had some interesting synergistic side effects: the pure export driver disappeared, and instead the SAHA implementations moved into what now became my "alternative PnP root bus" (there's still an import library, but now the imports come from a "real" driver).
I also didn't have to do the query interface. Why? Because an arbiter already knows what resources are assigned to each PDO. If you use the !arbiter extension command in your kernel debugger, you can take a quick look at the real arbiters and see which PDOs each has assigned resources to, what those resources are, and so forth. With the redirection, SAHA, and arbiter/bus translation code now living in the same driver, you give me a PDO at "Prepare Hardware" time and I know whether it has simulated hardware, because I know which PDOs have virtual resources assigned to them. As a side benefit, if it's a "real" PDO, the OS has provided the translated resources it was assigned by the "real" arbiter tree- so I can cause a rebalance for any simulated devices that happen to have been given the same resources. In the interests of accuracy, I haven't implemented that rebalance (just a TODO note)- none of my test scenarios now or in the immediate future need it. In practice, simulation has turned out to be beneficial enough that we're not using real hardware in automation. We do use it, of course- it would be incredibly foolish not to- but we cover the critical stuff quite effectively with simulation, and tests using hardware, while often still automated, are not run in our central test labs.
The core SAHA mechanism, though, remains unchanged from what I'd originally expected- at the abstract level, the SAHA I/O code knows, each time it is called, what resource (I/O address, memory address, or interrupt vector) it is working with, and since it knows all the simulated resources, it can redirect those- and if they're not simulated, it calls the system APIs that were originally coded in the driver doing its I/O through SAHA. I did spend some time trying to keep that process efficient, but it could be improved- there is inevitably some search time involved in most cases.
My initial plan was far too ambitious, although pieces of it still lie about in case I ever regain that ambition. I wanted a set of classes allowing me to describe a device in terms of its registers and interrupts, and then write the simulation in terms of those- in C++, using inheritance. I could scoop up the register set within a block of addresses and report the appropriate resources, without the simulator writer having to deal with it. Similarly, I could route all those SAHA calls into virtual methods the simulator writer could override- so their code for an "Interrupt Control Register" just had to deal with which bits did what when they were written. Again, I had to be practical, so I've wound up doing this reporting and routing explicitly for each device I've done. That hasn't meant a lot of duplication, but primarily because we're deliberately using very simple devices- after all, we want to make sure that when your UMDF driver writes a register, we really write that register after it's been through the framework- no need to construct an elaborate set of internal device operations just to check something that simple.
But one choice about how simulators get written really did make life easy- all of the simulators are KMDF drivers. So reporting that PDO and initializing it was quite simple, and we get plenty of usable behavior for free (and since our FDOs shouldn't get any I/O unless we want to define some to achieve a simulator-control function, they're also quite simple).
I defined the redirection interfaces right from the beginning, and those I've largely kept. They're C++ VTable interfaces, with nothing particularly fancy or unobvious about them- but rather than sitting on an individual port or register, they're interfaces on the blocks of port or memory addresses managed by our built-in arbiter. A set of "simulator core" functions included in our exported set allows a simulator to intercept I/O within a range if it needs to (it can still call the underlying implementations, which treat the port or register as basic read/write storage), and also to do things like fire interrupts.
In these articles so far, I've stuck to a fairly high-level description of what this project has looked like, and tried to give a sense of how it all morphed as time went along- so this has been sort of a "how I spent my summer" narrative. From this point forward, however, I'm hoping to provide deeper dives, with these articles serving as a point of reference for the framework into which the more detailed pieces will fit.