Larry Franks and Brian Swan on Open Source and Device Development in the Cloud
I have a confession to make: in college, I was the guy who wrote his flowcharts after writing the program (yes, flowcharts describing program flow; it’s been that long since I was in college). It just seemed like a worthless exercise to me at the time. I also didn’t do a lot of testing. If it worked when I ran it, I was done. Over the years I’ve learned the value of specifications and testing, but the Ruby programming world is a bit different from what I’m used to: you write your tests before you write the program, and the tests describe how the program should behave. So the tests are also the specification, in a way.
I decided that I should buckle down and learn how to do things the Ruby way, or rather the Behavior Driven Development way. BDD, in a nutshell, is a methodology that involves writing down how the stakeholders expect an application to behave, writing tests based on those expectations, and then writing code to fulfill those expectations. So you code from the outside in, starting with failing tests. You then write the program to satisfy the test requirements, refactoring the program and the tests as you go along.
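As a minimal sketch of that cycle in plain Ruby (using a hypothetical Greeter class, not anything from my actual projects): the expectation comes first, then just enough code to satisfy it.

```ruby
# BDD sketch: write the expectation first, watch it fail,
# then write just enough code to make it pass.

# Step 1: the failing expectation (the "specification"):
#   Greeter#greet("Larry") should return "Hello, Larry!"

# Step 2: just enough code to satisfy it:
class Greeter
  def greet(name)
    "Hello, #{name}!"
  end
end

# Step 3: the expectation now passes:
raise "greet spec failed" unless Greeter.new.greet("Larry") == "Hello, Larry!"
puts "greet spec passed"
```

In RSpec the expectation would live in a spec file instead of a `raise`, but the rhythm is the same: red, green, refactor.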
Now, there’s a bazillion test frameworks for Ruby. I’m not going to attempt to describe them all here. I settled on RSpec and Cucumber, and have been learning them in my spare time. I recently picked up The RSpec Book to accelerate the learning process, and it’s turned out to be a really great book. While it says it’s about RSpec and Cucumber, it actually does a really good job of teaching BDD practices in general.
For my first project, I decided to revisit the Access Control Service sample I posted previously and walk through the process of rewriting it from scratch, starting with tests. This was a worthwhile learning experience, as it made me think a lot more about how someone would actually use the code and how it could be better put together.
In the original ACS example, there were several values I used constants for. These need to go away and turn into something read from the environment, a configuration file, or passed in at runtime. Also, the code didn’t seem very DRY in spots, but for the most part it didn’t require a complete rethink of the behavior I initially came up with.
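A rough sketch of that constants-to-configuration change, reading from the environment with an optional fallback (the setting names here are hypothetical, not the actual ACS values):

```ruby
# Look up a setting in the environment; fall back to a default if one is
# given, otherwise fail loudly so a missing required setting is obvious.
def acs_setting(key, default = nil)
  ENV.fetch(key) { default or raise "Missing required setting: #{key}" }
end

# Hypothetical usage; "ACS_NAMESPACE" stands in for whatever the
# hard-coded constant used to be.
namespace = acs_setting("ACS_NAMESPACE", "example-namespace")
```

The same helper could read a YAML configuration file first and fall back to `ENV`, which keeps the constants out of the source either way.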
After the ACS code, I decided to try the Windows Phone 7 notification sample. I quickly realized that my initial design was bad, and that instead of sending notifications to a WP7 device, the behavior I really wanted was to send notifications to people. For example, I own a WP7 device, two iOS devices, and I found out the other day that Windows 8 will accept notifications. What I need is a service that I can tell “Send Larry this message,” and have the message reach me no matter what device I’m on.
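A minimal sketch of that user-centric idea, with a hypothetical in-memory device registry (none of these names come from the actual sample):

```ruby
# One person maps to many registered devices; a notification fans out
# to all of them instead of being addressed to a single device.
DEVICES = {
  "Larry" => [:wp7, :ios, :ios]
}

def notify(person, message)
  DEVICES.fetch(person, []).map { |device| [device, message] }
end

# "Send Larry this message" produces one delivery per registered device.
notify("Larry", "Build passed")
```

A real service would store registrations durably and hand each pair off to the device-specific push channel, but the routing idea is the same.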
I also want to be able to do some normalization of notifications across devices. For instance, if I send a notification that has a tile background (WP7), a comment (WP7), and a numeric badge (WP7 and iOS), I want it to gracefully ‘degrade’ to the lowest common denominator and just send the badge to iOS instead of returning an error or requiring me to build a separate message for each device. So I’m repurposing that project completely into a sort of ‘generic push notification’ package that is more user-centric than device-centric.
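One possible sketch of that degradation, assuming a hypothetical table of which notification fields each device type supports:

```ruby
# Which fields each device type can display (hypothetical; a real table
# would cover tiles, toasts, badges, and so on per platform).
SUPPORTED_FIELDS = {
  wp7: [:tile_background, :comment, :badge],
  ios: [:badge]
}

# Keep only the fields the target device supports, instead of erroring
# on the ones it doesn't.
def degrade(notification, device)
  notification.select { |field, _| SUPPORTED_FIELDS[device].include?(field) }
end

message = { tile_background: "bg.png", comment: "Hi!", badge: 3 }
degrade(message, :ios)  # the iOS device just gets the badge
```

The caller builds one rich message, and each device receives the largest subset it understands.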
Once I’ve finished cleaning up the ACS code and rewriting the notification project, I plan on publishing both to https://github.com/. I’ll add a pointer here once those are live. Beyond that, I plan to continue using BDD methodology in my projects, as it seems so useful in establishing both a description of how the code should behave and tests that ensure you remain on the right path during development.
Let me know if you have any thoughts or comments on the above, such as other testing frameworks I should look at or if you have feedback on ways I could improve the ACS and notification projects.
The problem that I always run into is too much implementation in my tests. If the concept isn't abstracted enough, the tests become as brittle as untested code. I haven't gotten over that hump yet, I feel like that's my big Ah-Ha moment.
The problem I run into is that the stakeholders are not clear on what they want the application to do, other than 'people will buy it because it uses technology X'.
For brittle tests, I suspect this is something you have to develop a personal feel for, sort of like art: how much intimacy with your application code are you willing to accept in your tests, versus abstracting to the point where they’re no longer valid tests? There’s a happy middle ground somewhere in there, but you’re the one who has to be happy with it (unless you’re graded on your tests). I think being concerned about the quality of your tests is a virtuous trait though. Or at least it’s better than just writing passing tests and calling it a day.
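One way to picture that middle ground, using a hypothetical Cart class: the brittle check couples itself to internal state, while the behavioral one only exercises the public contract.

```ruby
# Hypothetical Cart example contrasting an implementation-coupled check
# with a behavioral one.
class Cart
  def initialize
    @items = []   # internal detail; tests shouldn't depend on this
  end

  def add(price)
    @items << price
  end

  def total
    @items.sum
  end
end

cart = Cart.new
cart.add(5)
cart.add(7)

# Brittle: reaches into internals, so it breaks the moment @items
# becomes a Hash or moves behind a database:
#   cart.instance_variable_get(:@items) == [5, 7]

# Behavioral: only cares what the cart does, not how:
raise "behavior spec failed" unless cart.total == 12
```

If refactoring the internals without changing behavior breaks a test, that test was probably specifying implementation rather than behavior.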
For stakeholders not being clear, that’s a tough one. Ultimately, all you can do is hold them accountable for the reception of the end product, but that’s not always a winning path for the developer. Maybe try a brief presentation on how a clear understanding of what the application should do has led to success on past projects, along with some negotiation up front on what each stakeholder needs to bring to the table.