Being Cellfish

Stuff I wished I've found in some blog (and sometimes did)

March, 2013

Change of Address
This blog has moved to blog.cellfish.se.
Posts
  • An offshoring experiment

    Last week I read this article on an experiment a team did to compare off-shoring to co-location of the team. It summarizes my experiences pretty well. I've only ever seen two acceptable off-shoring projects.

    The first one was not really off-shoring. It was when I first joined Microsoft: our team was located in Sweden and one by one the team members moved over to the US, so for a while half the team was in the US and half in Sweden. We definitely saw some of the bad sides of a distributed team, and we definitely saw some of the good sides of having different time zones, since severe bugs could be worked on around the clock. But I think the main reason this worked was that we were a co-located team that got distributed: we were already working well together and knew each other, so syncing across the pond was easier. Still an overhead, but not as bad as I imagine hiring random people across the world would be.

    The second good implementation I saw was at a partner company of my old (pre-Microsoft) company. What they did was fly in parts of their off-shore team for a few months each year and have them work co-located with the rest of the team. This way the team members got to know each other better and the team culture in Sweden could spread to the off-shore team. The second thing they did was to give the off-shore team separate things to work on as much as possible, keeping daily synchronization between the different time zones to a minimum.

  • Certificates are hard

    Almost a year to the day, Azure had another certificate related outage. Last year's was more interesting I think, and this year it was something different. My initial guess (remember I don't work for Azure, nor do I have any knowledge about the details other than what has been communicated to the public) was that a few years ago, when they were first creating Azure, they generated some certificates to be used with SSL. My guess was that the certificates generated were valid for a long time, probably like 5 years or something. Since the certificates were created long before Azure was available to customers, nobody thought about adding monitoring to detect when the certificates expired. Turns out I was wrong: there was good alerting in place, but the process had other flaws.

    So how can you prevent this from happening in your project? I would suggest you do the following:

    • During development, use certificates with a very short lifetime, just one or two months. This way you get used to your certificates expiring, and the monitoring you add to warn you when a certificate is about to expire will be tested on a regular basis (see the sketch after this list).
    • When you do a risk analysis of your system, remember to be detailed and consider the implications of your certificates expiring.
    • Learn from others! When things like this happen to the big companies, think about whether it could happen to you.
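
    To make the first bullet concrete, here is a minimal sketch in Java of the kind of expiry check you could run on a schedule. The certificate path and the 30 day threshold are made-up examples; hook the warning up to whatever alerting you already have.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;
    import java.time.Duration;
    import java.time.Instant;

    public class CertExpiryCheck {
        // Warn when fewer than this many days remain before the certificate expires.
        private static final long WARNING_THRESHOLD_DAYS = 30;

        public static void main(String[] args) throws Exception {
            // args[0] is the path to a DER or PEM encoded certificate.
            try (InputStream in = new FileInputStream(args[0])) {
                X509Certificate cert = (X509Certificate) CertificateFactory
                        .getInstance("X.509")
                        .generateCertificate(in);
                long daysLeft = Duration.between(
                        Instant.now(), cert.getNotAfter().toInstant()).toDays();
                if (daysLeft < WARNING_THRESHOLD_DAYS) {
                    // Replace with a call to your real alerting system.
                    System.err.println("WARNING: certificate expires in " + daysLeft + " days");
                } else {
                    System.out.println("Certificate OK, " + daysLeft + " days left");
                }
            }
        }
    }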

    The funniest thing about this outage is an article in Australia suggesting it was caused by hackers... I guess I shouldn't be surprised that newspapers don't check their facts...

  • Analysing code coverage

    I was recently asked to look at a project that had around 60% code coverage and to give my recommendations on which areas to focus on to increase the coverage. There were a lot of unit tests; in fact there was around 10% more unit test code than production code, so I was a little surprised that the coverage was as low as 60%. Obviously something must be wrong with the tests. A closer look showed that the code implementing the logic of the application had very high coverage in general, and that only the code dealing with external dependencies had low coverage. Also, there wasn't that much code overall, so the boilerplate code just to initialize the application was around 15% of the code base, and the code dealing with external dependencies, which had the low coverage, was slightly less than 10%. So all in all, 60 of the possible 75 percentage points were covered. Relatively speaking that is 80% - a number I've often heard managers be very happy with (but remember that code coverage itself is a shady property to measure).

    So what was missing? Well, there were a few cases where small classes were completely missing unit tests, so those would be easy to fix. There were also a bunch of classes that had unit tests exclusively for the happy path of execution, and those can be fixed easily too. But I still think it would be possible to bring coverage up above 90%. How is that possible, you might think? Well, the boilerplate code had some room for improvement, but more importantly the code dealing with external dependencies had a lot. Those classes were much more than simple pass-through objects needing no testing, and with a simple refactoring, like extracting the external call as a protected virtual method of the class, you could easily add unit tests for a huge portion of that code too, I think.
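
    To illustrate the kind of refactoring I mean, here is a minimal sketch in Java (the class and names are made up, not taken from the project I looked at): the external call is extracted into a protected method, which is virtual by default in Java, so a unit test can subclass the class and override the seam.

    // Production class: the external call is isolated behind a protected method,
    // giving tests a seam to override.
    public class WeatherReporter {
        public String describeWeather(String city) {
            int celsius = fetchTemperature(city); // external dependency, now a seam
            return celsius > 20 ? city + " is warm" : city + " is cold";
        }

        // In the real class this would call the external web service.
        protected int fetchTemperature(String city) {
            throw new UnsupportedOperationException("calls an external service");
        }

        // A unit test would subclass and override the seam in the same way;
        // shown here as a plain main method to keep the sketch self-contained.
        public static void main(String[] args) {
            WeatherReporter stubbed = new WeatherReporter() {
                @Override
                protected int fetchTemperature(String city) { return 25; }
            };
            // Prints "Oslo is warm" without ever touching the external service.
            System.out.println(stubbed.describeWeather("Oslo"));
        }
    }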

    So my conclusion? The original coverage was not bad. It covered the important pieces, and functional tests (not included when I analysed code coverage) covered a big portion of the code not covered by unit tests. But when you use a code coverage report the right way, and look at what is not covered and why, you can always find areas of improvement.

  • Using HTML as your web service format

    In the past I've seen examples of clever use of style sheets to turn XML responses into browsable results that are human readable and, more importantly, human navigable. But this was the first time I heard about using HTML straight up as the content format of a web service API. It is interesting that there is so much focus on defending HTML as being shorter than JSON, when I think size should not really be a factor here. Sure, mobile clients want smaller payloads, but with faster and faster networks and clients with more memory, the availability of parsers is far more important. And if we can assume that all clients have both JSON and XML parsers, size is not a deciding factor. I think the most important factor is what the client wants. The linked article mentions that using compressed payloads is common, and uses that as an argument for why HTML is not really larger than JSON. But whether or not compression is used is something the client requests in its request headers. Which content types a client can handle is also an HTTP header, so I think it is a no-brainer for a web service to support multiple content types if its clients need them. Supporting both XML and JSON should be trivial these days, and implementing an HTML serializer should be just as trivial.
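
    As a rough sketch of what I mean by letting the client decide, here is what picking the representation from the Accept header could look like in Java. The hand-rolled serialization stands in for whatever JSON/XML libraries you actually use, and details like quality values are ignored.

    // A stripped-down content negotiation sketch: the same resource rendered in
    // whichever format the Accept header asks for.
    public class ContactResource {
        public String render(String acceptHeader, String firstname, String lastname) {
            String accept = acceptHeader == null ? "" : acceptHeader;
            if (accept.contains("text/html")) {
                return "<dl><dt>firstname</dt><dd>" + firstname
                     + "</dd><dt>lastname</dt><dd>" + lastname + "</dd></dl>";
            }
            if (accept.contains("application/xml")) {
                return "<contact><firstname>" + firstname + "</firstname>"
                     + "<lastname>" + lastname + "</lastname></contact>";
            }
            // Default to JSON when the client has no preference we recognize.
            return "{\"firstname\":\"" + firstname + "\",\"lastname\":\"" + lastname + "\"}";
        }
    }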

    I also found the HTML format the author used a little bit surprising. My first thought was to convert the example JSON used in the article into something like this:

    <html><body>
    <dl>
     <dt>contacts</dt>
     <dd><ol>
      <li><dl><dt>firstname</dt><dd>Jon</dd><dt>lastname</dt><dd>Moore</dd></dl></li>
      <li><dl><dt>firstname</dt><dd>Homer</dd><dt>lastname</dt><dd>Simpson</dd></dl></li>
     </ol></dd>
    </dl>
    </body></html>

    That even turned out to be slightly smaller than what was suggested in the article. Not that size matters...
