This week has been a bit of fun for me.  After building the USB simulation infrastructure I started to write about in my last post (and never got close to finishing the description of), I began working on my dream test setup, for use in some work I knew was coming.

The core concept for me was a context shared among all the participants in a test- the test application and all the drivers in the stack- fully readable and writable by all of them, with provision for them to signal one another in a structured way, and without using the usual I/O paths to do any of it.  Then to package that in some reusable library code with suitable application-specific extension points.
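
Roughly, the shape of the thing is something like this- heavily simplified, with all the names, sizes and field choices changed, so don't take it as the real layout:

    // Hypothetical layout- illustrative only, not the real structures.
    #include <stdint.h>

    constexpr uint32_t TESTCTX_MAX_RECORDS = 1024;
    constexpr uint32_t TESTCTX_DESC_CCH    = 128;

    // One record, written by a test driver or by the test application itself.
    struct TESTCTX_RECORD {
        uint32_t Producer;                      // which driver/app wrote this
        uint32_t Sequence;                      // global write order
        uint64_t Timestamp;                     // when it was written
        uint32_t Processor;                     // CPU the writer ran on
        uint32_t ThreadId;                      // writer thread
        uint32_t ProcessId;                     // writer process context
        uint32_t PayloadCb;                     // bytes used in Payload
        char     Description[TESTCTX_DESC_CCH]; // human-readable, shows up in "db"
        uint8_t  Payload[256];                  // record-specific data
    };

    // The block every participant maps: a header plus an array of records,
    // with a write cursor advanced by interlocked operations.
    struct TESTCTX_HEADER {
        uint32_t       Magic;                   // sanity check when dumping memory
        uint32_t       Version;
        volatile long  WriteCursor;             // next free slot
        TESTCTX_RECORD Records[TESTCTX_MAX_RECORDS];
    };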

I'm not done by any means, but what I've got so far is proving usable, and it makes me happier than what I've had to deal with previously:

  • If I have a test driver that needs to report something, I don't have it print to the debugger, or push stuff to ETW for some later round of post-processing, or package it in some ad-hoc structure to return down some I/O path- it writes it into the shared context (it can even print a nice string if it wants to, although Rtlxxx DDI limitations mean it has to be at passive level when it does that), signals that results are available, and the test app reads it right out of the context (there's a sketch of that driver-side flow right after this list).  It's simple, and it's easy- I suppose neither should appeal to me, but they actually do.
  • All the test drivers and infrastructure are preinstalled in one setup job, and I use the bus driver I mentioned earlier to create devices as needed (the second sketch after this list shows the general shape of that)- so I don't litter the test machine with dozens of root-enumerated devices that won't go away and that behave oddly once the test application they were designed to work with isn't around to coddle them.
  • As a result, I can provide more detailed insight in job logs as to what's going on, which ought to translate into easier troubleshooting and diagnosis down the road.  For instance, in a multi-device, multi-driver scenario, the infrastructure can automatically identify which driver in which device stack produced a piece of information- and when it was produced, on what processor, and in what thread and process context...
  • Every record in the context can carry descriptive text- and I make liberal use of it, which lets me dump the context memory and read what's in there- so on a bugchecked or hung machine, I can see what's going on all in one place, without hunting down dozens of threads and individual driver logs, et cetera.  For details, I'll eventually want a debugger extension- but for now, I can get a lot more than usual with "db".
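
Concretely, the driver side of "write a record, then signal" boils down to something like this- the globals and the producer-ID scheme are stand-ins for plumbing I'm not going to show:

    // Kernel-side sketch: format a record into the shared context and wake
    // the test app.  g_SharedCtx / g_ResultsEvent stand in for setup code
    // that isn't shown (mapping the context, and opening a named
    // notification event with IoCreateNotificationEvent).
    #include <ntddk.h>
    #include <ntstrsafe.h>

    extern TESTCTX_HEADER* g_SharedCtx;     // layout from the earlier sketch
    extern PKEVENT         g_ResultsEvent;

    NTSTATUS
    ReportResult(ULONG producerId, NTSTATUS callStatus)
    {
        // Claim the next slot with an interlocked increment on the cursor.
        LONG slot = InterlockedIncrement(&g_SharedCtx->WriteCursor) - 1;
        if (slot >= (LONG)TESTCTX_MAX_RECORDS) {
            return STATUS_INSUFFICIENT_RESOURCES;   // full; real code would do better
        }

        TESTCTX_RECORD* rec = &g_SharedCtx->Records[slot];
        rec->Producer  = producerId;
        rec->Sequence  = (ULONG)slot;
        rec->Timestamp = KeQueryInterruptTime();
        rec->Processor = KeGetCurrentProcessorNumberEx(NULL);
        rec->ThreadId  = HandleToULong(PsGetCurrentThreadId());
        rec->ProcessId = HandleToULong(PsGetCurrentProcessId());
        rec->PayloadCb = 0;

        // The nice human-readable string- and the reason this part has to
        // happen at passive level.
        NT_ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);
        RtlStringCbPrintfA(rec->Description, sizeof(rec->Description),
                           "DDI call returned 0x%08X", (ULONG)callStatus);

        // Tell the test app that results are available.
        KeSetEvent(g_ResultsEvent, IO_NO_INCREMENT, FALSE);
        return STATUS_SUCCESS;
    }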

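And since I brought up the bus driver: creating a device on demand isn't much code either.  This isn't my bus driver's actual code- just the general shape of KMDF dynamic child enumeration, with a made-up identification structure:

    // Sketch only: how a simulation bus driver might surface a new child
    // (PDO) on request, using KMDF dynamic enumeration.  The identification
    // structure and serial-number scheme are invented for illustration.
    #include <ntddk.h>
    #include <wdf.h>

    typedef struct _SIM_CHILD_ID {
        WDF_CHILD_IDENTIFICATION_DESCRIPTION_HEADER Header;
        ULONG SerialNumber;     // enough to tell children apart / build a HWID
    } SIM_CHILD_ID, *PSIM_CHILD_ID;

    NTSTATUS
    SimBusPlugInChild(WDFDEVICE busFdo, ULONG serialNumber)
    {
        SIM_CHILD_ID id;
        WDF_CHILD_IDENTIFICATION_DESCRIPTION_HEADER_INIT(&id.Header, sizeof(id));
        id.SerialNumber = serialNumber;

        // KMDF will call the bus driver's EvtChildListCreateDevice callback
        // (not shown) to actually build the PDO.
        NTSTATUS status = WdfChildListAddOrUpdateChildDescriptionAsPresent(
            WdfFdoGetDefaultChildList(busFdo), &id.Header, NULL);

        // STATUS_OBJECT_NAME_EXISTS just means the child was already there.
        return (status == STATUS_OBJECT_NAME_EXISTS) ? STATUS_SUCCESS : status;
    }
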
Well, this week, on Monday afternoon, I started combining that with the USB simulation infrastructure to address a current test need (yes, I'm being purposefully vague).  By Wednesday evening, I had:

  • a TAEF test DLL that pushed records into the shared context asking for a "generic USB device" to be reported, then issued a series of records targeted at the simulator (specifically, at the owner of the PDO in the test stack, since my simulated devices show up as reported PDOs).
    • Those records describe the device, configuration, interface and string descriptors for that device, and the simulated device just uses what it was told to use (the first sketch after this list shows roughly what one of those records looks like).  I'll add pipes soon- I didn't need them at the moment, and leaving them out made it easier to build the whole shebang.
    • The simulator then obligingly signals an event when it has reported the PDO- I usually do this later, but the test DLL doesn't immediately follow up with anything that needs the PDO, so the signal just gives me an affirmative indicator of progress.
  • a KMDF and a UMDF test driver (each a function driver), differentiated by HWID (which of course I can select by putting different vid and pid values in the device descriptor), that run through some WDF USB DDIs of interest and report the results of those calls- then signal completion- in their PrepareHardware callbacks (the second sketch after this list shows the KMDF side of that).
  • The test DLL then processes the results and yanks the device- even though the stack may not have fully started yet, mind you- but I like shaving that time off the test, and my tests do run faster than they used to thanks to little tweaks like that.
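
To give a feel for what the test DLL pushes, here's roughly what the device-descriptor record looks like on its way into the context- simplified, reusing the made-up record handling from the earlier sketches, with placeholder values:

    // User-mode sketch: build a device descriptor record whose idVendor and
    // idProduct decide the HWID- and therefore whether the KMDF or the UMDF
    // test driver ends up matching the simulated device.
    #include <windows.h>
    #include <usbspec.h>
    #include <string.h>

    void PushDeviceDescriptorRecord(TESTCTX_HEADER* ctx, USHORT vid, USHORT pid)
    {
        USB_DEVICE_DESCRIPTOR dd = {};
        dd.bLength            = sizeof(USB_DEVICE_DESCRIPTOR);
        dd.bDescriptorType    = USB_DEVICE_DESCRIPTOR_TYPE;
        dd.bcdUSB             = 0x0200;     // present as a USB 2.0 device
        dd.bMaxPacketSize0    = 64;
        dd.idVendor           = vid;        // HWID selection happens here...
        dd.idProduct          = pid;        // ...KMDF vs. UMDF driver, in effect
        dd.bNumConfigurations = 1;

        // Claim a slot and drop the descriptor in as the payload; the
        // simulator reads it back out when it builds the device.
        // (Bounds checking and the provenance fields omitted for brevity.)
        LONG slot = InterlockedIncrement(&ctx->WriteCursor) - 1;
        TESTCTX_RECORD* rec = &ctx->Records[slot];
        strcpy_s(rec->Description, sizeof(rec->Description),
                 "device descriptor for simulated device");
        rec->PayloadCb = sizeof(dd);
        memcpy(rec->Payload, &dd, sizeof(dd));
    }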

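And on the driver side, the KMDF test driver's PrepareHardware callback comes out to roughly this- again a sketch, with a made-up producer ID and the ReportResult helper from earlier (the UMDF 2 version looks much the same, since the WDF USB DDIs largely line up):

    // KMDF sketch: run a WDF USB DDI of interest against the simulated
    // device in EvtDevicePrepareHardware, push the result into the shared
    // context, and signal.  MY_PRODUCER_ID_KMDF is an arbitrary tag.
    #include <ntddk.h>
    #include <wdf.h>
    #include <usb.h>
    #include <usbdlib.h>
    #include <wdfusb.h>

    #define MY_PRODUCER_ID_KMDF 2   // placeholder producer tag for this driver

    NTSTATUS ReportResult(ULONG producerId, NTSTATUS callStatus);  // earlier sketch

    NTSTATUS
    EvtTestPrepareHardware(WDFDEVICE device, WDFCMRESLIST raw, WDFCMRESLIST translated)
    {
        UNREFERENCED_PARAMETER(raw);
        UNREFERENCED_PARAMETER(translated);

        WDF_USB_DEVICE_CREATE_CONFIG config;
        WDF_USB_DEVICE_CREATE_CONFIG_INIT(&config, USBD_CLIENT_CONTRACT_VERSION_602);

        // The DDI under test: does it behave against the simulated device?
        WDFUSBDEVICE usbDevice;
        NTSTATUS status = WdfUsbTargetDeviceCreateWithParameters(
            device, &config, WDF_NO_OBJECT_ATTRIBUTES, &usbDevice);

        // Record the outcome and signal that results are available.
        ReportResult(MY_PRODUCER_ID_KMDF, status);

        if (NT_SUCCESS(status)) {
            // Pull the descriptor back out so the test app can compare it
            // with what it told the simulator to present.
            USB_DEVICE_DESCRIPTOR dd;
            WdfUsbTargetDeviceGetDeviceDescriptor(usbDevice, &dd);
            // ...more records get pushed here in the real thing...
        }
        return status;  // a failure still leaves a record for the test app
    }
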
Not a lot, but it proved the thing could be done and gives me a foundation to build on- and it even found a product bug (again, I can't tell you where- but you're never going to see it anyway; I got there and got it fixed before it made its way out to you).

Not particularly trend-setting- Twitter's not going to be ablaze with this, and I'm not going to be winning any awards because of it- but I'm going to be happy about it anyway.  I'll get paid for it, and I don't think anyone really got hurt by any of it.  When I move on, I still believe most of it will get thrown away the first time anyone looks closely at my code (I've decided to keep my reasons for believing that to myself).  But that's OK- I get to eat, the kid gets through college- I'm not expecting fame or glory, and that's good, because it's not likely I'll be seeing either.  LOL...

Today it's verifying bug fixes, but I'll get back to work on it tomorrow- pipes to add, and more test-type stuff that needs doing before I can put this toy aside...