Yuk Lai's Blog
I am a Software Development Engineer in Test (SDET) on the Windows Sound Team. If you don't know what the Windows Sound Team is, think of it as the Windows Audio Team. We own the user mode and kernel mode audio platforms and drivers that together deliver the audio experience in Windows. When I say "own", I don't mean that all these components are ours; it's a Microsoft term that indicates responsibility. The audio experience on the Windows platform is delivered jointly by many partners, internal and external, but when it comes to audio, it most likely involves the work my team does. I have debated with my colleagues whether our team is a platform team or a driver team, and the answer is both: we are one of the rare teams that owns a very vertical stack.
I have worked on practically the same team since I joined the company in July 2005, after finishing my MSEE in Computer Engineering at the University of Texas at Austin, where my research was in the area of symbolic execution of software. I think of my career as that of a Software Engineer who happens to work on audio features in Windows. I enjoy software development, but I also like to think about project management. I have my opinions about what the right and wrong things to do are, both in software and in process, but it occurs to me that my thinking changes with time, for better or for worse. I will write what I think in this blog, but I can't guarantee that what I think is actually right. I guess that's in the definition of a blog.
The question is usually, "Your test is failing. Can you take a look?" The question is never, "Your test is passing. Can you take a look?"
There appears to be a misconception that "a good test is one that passes" or that "tests should pass". A few years ago I even had requests to send 100% passing test results to prove that the tests were of good quality. You can only scratch your head at such requests.
I spend most of my time trying to design tests to fail. It's what I'm paid for and what my products - tests - are used for. Obviously, it's easy to write tests that fail; the challenge is in making sure the tests pass and fail under the right conditions, and that they do so consistently.
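To make "passing under the wrong conditions" concrete, here is a small, made-up sketch of a false pass. EnumerateEndpoints and the test around it are hypothetical stand-ins for illustration, not anything from our actual test code:

```cpp
// Hypothetical example of a "false pass": the check never runs, so the
// test reports success while verifying nothing.
#include <cassert>
#include <vector>

// Imaginary function under test; it is supposed to return one id per
// active audio endpoint, but a regression makes it return nothing.
std::vector<int> EnumerateEndpoints()
{
    return {};
}

void TestEndpointIdsAreValid()
{
    // Intended check: every enumerated endpoint has a non-negative id.
    // Because enumeration silently returns an empty list, the loop body
    // never executes, no assertion fires, and the test "passes".
    for (int id : EnumerateEndpoints())
    {
        assert(id >= 0);
    }

    // One mitigation: fail loudly when there is nothing to verify.
    // assert(!EnumerateEndpoints().empty());
}

int main()
{
    TestEndpointIdsAreValid();
    return 0;  // exit code 0 gets reported as a pass
}
```

The commented-out assertion at the end is one way to turn this false pass back into a real failure: if there is nothing to verify, the test should say so instead of quietly passing.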
False passes are hard to find. They usually stay hidden in the regular test passes. There are some mitigations for false passes, though:
That's why I haven't lost sleep over false passes. But I always wonder: since we have regular test "passes", why don't we have regular test "failures"?