Posts
  • Eric Gunnerson's Compendium

    You suck at TDD #3 – On Design sensitivity and improvement

    • 0 Comments

    Through an above-average display of ineptness, I managed to overwrite my first version of this post with a response I wrote to the comments. Since I'm too lazy to try to reconstruct it, I'm going to go a different direction. In a recent talk about design, I told the attendees that I was going to insult them during the talk, and it worked pretty well, so I'm going to try the same thing here:

    You don't know how to do design. You would not know a good design if it came up and bit you.

    To explain this, I feel it necessary to talk a bit about ignorance. I will loop back to the topic at hand sometime in the future.

    In a really cool article written back in 2000, Phillip Armour wrote about 5 orders of ignorance, which I'll summarize here:

    • 0th Order Ignorance (0OI) – Lack of ignorance, or in other words, knowledge. I have 0OI about skiing.
    • 1st Order Ignorance (1OI) – Known lack of knowledge. I have 1OI about how to knit, but I can think of a number of ways to convert this to 0OI.
    • 2nd Order Ignorance (2OI) – Lack of awareness. I have 2OI when I don't know that I don't know something. I cannot – and this is the point – give examples here.
    • 3rd Order Ignorance (3OI) – Lack of process. I have 3OI when I don't know a suitably efficient way to find out that I don't know that I don't know something.
    • 4th Order Ignorance (4OI) – Meta ignorance. I don't know about the 5 orders of ignorance.

     

    Like any model, this is a simplification of what is really going on, but it's a useful model.

    Back to the insult, but this time I'll reword it using the orders of ignorance:

    You have 2OI about design; the code that you are writing has obvious problems, but because you are 2OI about those problems, you cannot see them yourself.

    It's a little less catchy, and that's why this series is named "You suck at TDD", not "Systemic influences and limitations in human knowledge acquisition have limited your effectiveness at TDD in interesting ways".

    You can probably think back to when you learned something about coding – for example, there was probably a time when you learned it was good not to repeat code. Before you learned that, you had 2OI about it, and now you no longer do.

    We *all* have at least some 2OI about design; it's just a feature of how knowledge acquisition works. That is not the real problem.

    The real problem is that most developers have both 3OI and 4OI when it comes to design. They don't understand how 2OI works and therefore think that they have good design skills, and – probably more importantly – they don't have an efficient way of identifying specific instances of 2OI and converting them to 1OI and ultimately 0OI.

    Or, to put it more concisely, most developers think they are good at design because they can't see the problems in their designs, they don't really understand that this is the case, and they don't have any plan to improve.

    If you couple this with our industry's typical focus on features, features, features, it is no longer surprising that there is so much bad code out there; it becomes surprising that there is code that works at all.

    So, tying this whole thing back to TDD, the reason that TDD does not work for a lot of people is that they don't have sufficient design skills to see the feedback that TDD is giving them. They get to the refactor step, shrug, and say, "looks good to me; let's write the next test". And these 2OI issues add up, making it progressively harder to do TDD on the code.

    This is not surprising. TDD is about evolutionary design, and if you aren't very good at design, it's not going to work very well.

    Improving your design skills

    The remainder of this series is about improving your design skills. I will probably focus on specific patterns that I find especially useful in the TDD world, but I'd like to provide some general advice here. These are things that will help you move from 2OI to 1OI in specific areas of design. And please, if you have other ideas, add them in the comments:

    1. Pair with somebody who cares about design. They will be 1OI or 0OI in areas where you are 2OI, and by having discussions you will learn together.
    2. Read and study. There is a *ton* of useful information out there. If you like the abstract, read – and then re-read – Fowler's book on Refactoring. If you prefer more specificity, go read about a single code smell, and then find it – and fix it – in your code. If you want a bigger example, just search for refactoring, and you'll find examples like this.
    3. Share and teach. You may think that you know something about a specific area of design, but when you go to teach it to others, that is when you really learn it.
    4. Practice refactoring skills. If you are lucky enough to work in a language with good refactoring tools, train yourself to use them instead of hand-editing. Not only will this speed you up and reduce the errors you make, it will train you to think differently about code.
    5. Practice TDD and experiment with different approaches. What does "tell, don't ask" do to your code? Can a more functional programming approach have benefits?
  • Eric Gunnerson's Compendium

    Agile Open Northwest 2016 recap

    • 0 Comments

    Last week, I spent three days at Agile Open Northwest 2016, a small (300-ish people) agile conference held at the Exhibition Hall at Seattle Center.

    Well, perhaps "conference" is the wrong word; one of the first things that I read about the conference said the following:

    Agile Open Northwest runs on Open Space Technology (OST).

    I am generally both amused and annoyed by the non-ironic appending of the word "technology" to phrases, but in this case, I'm going to cut them some slack, because Open Space really is different.

    I've done a lot of conferences in my time, and one of the problems with conferences is that they are never really about the things that you want them to be about. So, you pull out the conference program, try to figure out what the talks are really going to be like, and go from there.

    Open space technology is basically a solution to that problem. It's a self-organizing approach where a minimal bit of structure is put into place, and then the conference is put on by the attendees.

    So, somebody like me could host a session on TDD and design principles and #NoMocks, and – in collaboration with a few others (including Arlo) – we could have a very fun and constructive session.

    And – far more cool in my book – somebody who knew just a little about value stream mapping could host a session on applying value stream mapping to software projects and have the people who showed up teach me.

    The other non-traditional thing about the conference is the law of personal mobility, which says that it's okay – no, it's required – that if you aren't learning or contributing at a session you have chosen, you leave and find a better use of your time. This means that people circulate in and out of the sessions.

    With the exception of one session, I enjoyed and learned something at all of the sessions that I went to.

    The one downside of this format is that you need engaged people to make it work; if you take a bunch of disinterested people and ask them to come up with sessions, nobody is going to step up.

    I also got to do a couple of Lean Coffee sessions at breakfast Thursday and Friday. These are a great way to get ideas quickly and cover a lot of topics in a short amount of time.

    Overall, I had a great time. If you have passion around this area, I highly recommend this conference.

  • Eric Gunnerson's Compendium

    You suck at TDD #4 – External dependencies

    • 2 Comments

    When I started doing TDD, I thought it was pretty clear what to do with external dependencies. If your code writes to a file system – for example – you just write a file system layer (what would typically be called a façade, though I didn’t know the name of the pattern back then), and then you can mock at that layer, and write your tests.

    This is a very common approach, and it mostly works in some scenarios, and because of that I see a lot of groups stick at that level. But it has a significant problem, and that problem is that it is lacking an important abstraction. This lack of abstraction usually shows up in two very specific ways:

    • The leakage of complexity from the dependency into the application code
    • The leakage of implementation-specific details into the application code

    Teams usually don’t notice the downside of these, unless a very specific thing happens: they get asked to change the underlying technology. Their system was storing documents in the file system, and it now needs to store them in the cloud. They look at their code, and they realize that the whole structure of their application is coupled to the specific implementation. The team hems and haws, and then comes up with a 3 month estimate to do the conversion. This generally isn’t a big problem for the team because it is accepted pretty widely that changing an underlying technology is going to be a big deal and expensive. You will even find people who say that you can’t avoid it – that it is always expensive to make such a change.

    If the team never ends up with this requirement, they typically won’t see the coupling, nor will they see the downside of the leakage. In my earlier posts I talked about not being sensitive to certain problems, and this is a great example of that. Their lives will be much harder, but they won’t really notice.

    Enter the hexagon

    A long time ago in internet time, Alistair Cockburn came up with a different approach that avoids these problems, which he called the Hexagonal Architecture. The basic idea is that you segment your application into two different kinds of code – there is the application code, and then there is the code that deals with the external dependencies.

    About this time, some of you are thinking, “this is obvious – everybody knows that you write a database layer when you need to talk to a database”. I’ll ask you to bear with me for a bit and keep in mind the part where if you are not sensitive to a specific problem, you don’t even know the problem exists.

    What is different about this approach – what Cockburn’s big insight is – is that the interface between the application and the dependency (what he calls a “port”) should be defined by the application using application-level abstractions. This is sometimes expressed as “write the interface that you wish you had”. If you think of this in the abstract, the ideal would be to write all of the application code using the abstraction, and then go off and implement the concrete implementation that actually talks to the dependency.

    What does this give us? Well, it gives us a couple of things. First of all, it typically gives us a significant simplification of the interface between the application and the dependency; if you are storing documents, you typically end up with operations like “store document, load document, and get the list of documents”, and they have very simple parameter lists. That is quite a bit simpler than a file system, and an order of magnitude simpler than most databases. This makes writing the application-level code simpler, with all of the benefits that come with simpler code.
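
    To make that concrete, here is a minimal sketch of what such a port might look like; the IDocumentStore name and the exact operations are illustrative assumptions, not a prescribed interface:

    // A port defined in application-level terms: named documents and their
    // contents – nothing about directories, file handles, or connection strings.
    public interface IDocumentStore
    {
        void SaveDocument(string name, string contents);
        string LoadDocument(string name);
        string[] GetDocumentNames();
    }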

    Second, it decouples the application code from the implementation; because we defined the interface at the application level, if we did it right there are no implementation-specific details at the app layer (okay, there is probably a factory somewhere with some details – root directory, connection string, that sort of thing). That gives us the things we like from a componentization perspective, and incidentally makes it straightforward to write a different implementation of the interface in some other technology.

    At this point there is somebody holding up their hand and saying, “but how are you going to test the implementation of the port to make sure it works?” BTW, Cockburn calls the implementation of a port an “adapter” because it adapts the application view to the underlying dependency view, and the overall pattern is therefore known as “port/adapter”.

    This is a real concern. Cockburn came up with the pattern before TDD was really big so we didn’t think about testing in the same way, and he was happy with the tradeoff of a well-defined adapter that didn’t change very often and therefore didn’t need a lot of ongoing testing because the benefits of putting the “yucky dependency code” (my term, not his) in a separate place was so significant. But it is fair to point to that adapter code and say, “how do you know that the adapter code works?”

    In the TDD world, we would like to do better. My first attempt did what I thought was the logical thing to do. I had an adapter that sat on top of the file system, so I put a façade on the file system, and wrote a bunch of adapter tests with a mocked-out file system, and verified that the adapter behaved as I expected it to. Which worked because the file system was practical to mock, but would not have worked with a database system because of the problem with mocking.

    Then I read something that Arlo wrote about simulators, and it all made sense.

    After I have created a port abstraction, I need some way of testing code that uses a specific port, which means some sort of test double. Instead of using a mocking library – which you already know that I don’t like – I can write a special kind of test double known as a simulator. A simulator is simply an in-memory implementation of the port, and it’s generally fairly quick to create because it doesn’t do a ton of things. Since I’m using TDD to write it, I will end up with both the simulator and a set of tests that verify that the simulator behaves properly. But these tests aren’t really simulator tests, they are port contract tests.

    So, I can point them at other implementations of the port (i.e. the ones that use the real file system or the real database), and verify that the other adapters behave exactly the way the simulator does. And that removes the requirement to test the other adapters in the traditional unit-tested way; all I care about is that all the adapters behave the same way. And it actually gives me a stronger sense of correctness, because when I used the façade I had no assurance that the file system façade behaved the same way the real file system did.
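
    One way to structure those shared contract tests is an abstract fixture that each adapter's test class inherits from. This is a sketch using NUnit and the hypothetical IDocumentStore port above; the adapter class names match the ones used later in this post, and the file-system adapter's root-directory constructor is an assumption:

    using NUnit.Framework;
    using System.IO;

    // Port contract tests: written once, run against every adapter.
    public abstract class DocumentStoreContractTests
    {
        // Each adapter's test fixture supplies its own implementation of the port.
        protected abstract IDocumentStore CreateStore();

        [Test]
        public void SavedDocument_CanBeLoadedAgain()
        {
            IDocumentStore store = CreateStore();

            store.SaveDocument("invoice", "some contents");

            Assert.AreEqual("some contents", store.LoadDocument("invoice"));
        }
    }

    // The simulator and the real adapter both have to pass the same tests.
    public class DocumentStoreSimulatorTests : DocumentStoreContractTests
    {
        protected override IDocumentStore CreateStore() => new DocumentStoreSimulator();
    }

    public class DocumentStoreFileSystemTests : DocumentStoreContractTests
    {
        protected override IDocumentStore CreateStore() =>
            new DocumentStoreFileSystem(Path.Combine(Path.GetTempPath(), "document-store-tests"));
    }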

    In other words, the combination of the simulator + tests has given me an easy & quick way to write application tests, and it has given me a way to test the yucky adapter code. And it’s all unicorns and rainbows from then on. Because the simulator is a real adapter, it supports other uses; you can build a headless test version of the application that doesn’t need the real dependency to work. Or you can make some small changes to the simulator and use it as an in-memory cache that sits on top of the real adapter.

    Using Port/Adapter/Simulator

    If you want to use this pattern – and I highly recommend it – I have a few thoughts on how to make it work well.

    The most common problem people run into is in the port definition; they end up with a port that is more complex than it needs to be or they expose implementation-specific details through the port.

    The simplest way to get around this is to write from the inside out. Write the application code and the simulator & tests first, and then only go and write the other adapters when that is done. This makes it much easier to define an implementation-free port, and that will make your life far easier.

    If you are refactoring into P/A/S, then the best approach gets a little more complex. You probably have application code that has implementation-specific details. I recommend that you approach it in small chunks, with a flow like this (a sketch of the end result follows the list):

    1. Create an empty IDocumentStore port, an empty DocumentStoreSimulator class, and an empty DocumentStoreFileSystem class.
    2. Find an abstraction that would be useful to the application – something like “load a document”.
    3. Refactor the application code so that there is a static method that knows how to drive the current dependency to load a document.
    4. Move the static method into the file system adapter.
    5. Refactor it to an instance method.
    6. Add the method to IDocumentStore.
    7. Refactor the method so that the implementation-dependent details are hidden in the adapter.
    8. Write a simulator test for the method.
    9. Implement the method in the simulator.
    10. Repeat steps 2-9.
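
    Here is a rough sketch of where steps 3-7 end up for the "load a document" example, using the class names from the list above; the ".txt" naming convention and the root-directory constructor are assumptions for illustration:

    using System.IO;

    // Before (steps 2-3): the application code drives the dependency directly,
    // with implementation details (paths, File APIs) mixed into it:
    //
    //     string path = Path.Combine(rootDirectory, documentName + ".txt");
    //     string text = File.ReadAllText(path);
    //
    // After (steps 4-7): those details are hidden inside the adapter, behind
    // the IDocumentStore port.
    public class DocumentStoreFileSystem : IDocumentStore
    {
        private readonly string _rootDirectory;

        public DocumentStoreFileSystem(string rootDirectory)
        {
            _rootDirectory = rootDirectory;
        }

        public string LoadDocument(string name)
        {
            return File.ReadAllText(PathFor(name));
        }

        public void SaveDocument(string name, string contents)
        {
            File.WriteAllText(PathFor(name), contents);
        }

        public string[] GetDocumentNames()
        {
            string[] paths = Directory.GetFiles(_rootDirectory, "*.txt");
            for (int i = 0; i < paths.Length; i++)
            {
                paths[i] = Path.GetFileNameWithoutExtension(paths[i]);
            }
            return paths;
        }

        // The only place in the system that knows how documents map to files.
        private string PathFor(string name)
        {
            return Path.Combine(_rootDirectory, name + ".txt");
        }
    }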

    Practice

    I wrote a few blog posts that talk about port/adapter/simulator and a practice kata. I highly recommend doing the kata to practice the pattern before you try it with live code; it is far easier to wrap your head around it in a constrained situation than in your actual product code.

  • Eric Gunnerson's Compendium

    Lean, Toyota, and how it relates to agile.

    • 0 Comments

    I like to read non-software-development books from time to time, and during my holiday vacation, I read “The Toyota Way to Continuous Improvement: Linking Strategy and Operational Excellence to Achieve Superior Performance”.

    What? Doesn’t everybody relax by reading books about different organizational approaches and business transformation during their holidays?

    I highly recommend the book if you are interested in agile processes in the abstract; there’s an interesting perspective that I haven’t been seeing in the part of the agile world I’ve been paying attention to. 

    I will note that there are a number of people who have explored and written about the overlap between lean and agile, so I don’t think I’m breaking ground here, but there are a few things that I think are worth sharing.

    Value stream mapping

    Part of my definition of “being agile” involves change; if you aren’t evolving your process on an ongoing basis, you aren’t agile. I’ve spent a lot of time looking at processes and can now look at a team’s process and have a pretty good idea where they are wasting time and what a better world might look like.

    Unfortunately, that isn’t especially useful, because “Do what Eric says you should do” is not a particularly good approach. I don’t scale well, and I have been wrong on occasion.

    Shocking, I know.

    It also removes the tuning to a specific team’s needs, which is pretty important.

    I do know how to teach the "determine an experiment to try during retrospective" approach, and have had decent luck with that, but teams tend to go for low-hanging fruit and tend to ignore big opportunities. Just to pick an example, you can experiment your way into a better process for a lot of things, but if the project build takes 20 minutes and the code review process takes 8 hours, those are now dominating your inner loop, and those are the things that you should think about.

    I don’t currently have a great way to make these things obvious to the team, a way to teach them how to see the things that I’m seeing. I’m also missing a good way to think about the process holistically, so that the big issues will at least be obvious.

    Enter value stream mapping, which is a process diagram for whatever the team is doing. It includes the inputs to the team, the outputs from the team, and all of the individual steps that are taken to produce the output. It also typically includes the amount of time each operation takes, how long items sit in a queue between steps, whether there are rework steps, etc. Here’s a simple diagram from Net Objectives:

    The times in the boxes are the average times for each step, and I think the times underneath the boxes are the worst cases. The times between are the average queue times. We also show some of the rework that we are spending time (wasting time) on.

    Given all of this data, we can walk all the boxes, and figure out that our average time to implement (ignoring rework) is about 266 hours, or over 6 weeks. Worse, our queue time is just killing us; the average queue time is 1280 hours, or a full 8 *months*. So, on average, we can expect that a new request takes over 9 months to be deployed. We can then look at what the world would be like if we combined steps, reduced queue sizes, or reduced rework. It gives us a framework in which we can discuss process.
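
    For reference, the arithmetic behind that "over 9 months" figure, assuming the 40-hour weeks and roughly 4-week months implied by the conversions above:

    $$\frac{(266 + 1280)\ \text{hours}}{40\ \text{hours/week}} \approx 38.7\ \text{weeks} \approx 9.7\ \text{months}$$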

    This is a simple example; I’m sure that the real-world flow also has a “bugfix” path, and there is likely a “high-priority request” section that is also at work.

    I’m also interested in the details inside the boxes. We could decompose the Code box into separate steps:

    1. Get current source and build
    2. Write code
    3. Test code
    4. Submit code for code review
    5. Handle code review comments
    6. Submit code to gated checkin system

    Each of these steps has queue times between them, there are likely rework loops for some of them, and the “test” step likely varies significantly based on what you are testing.

    Coming up with a value stream mapping is typically the first thing you do with the Toyota approach. I like the fact that it’s a holistic process; you cover all the inputs and the outputs rather than focusing on the things you know about.

    I have not tried doing this for a software team yet, but I find the approach very promising and hope to try it soon. I’m especially hoping that it will highlight the impact that big organizational systems have on team agility.

    Implementing an improvement

    The continuous improvement approach used by Toyota is known as PDCA (either plan-do-check-act or plan-do-check-adjust). Here’s a more detailed explanation of “plan”:

    1. Define the problem or objective
    2. Establish targets
    3. Understand the physics (use 5 whys to figure out what is really going on).
    4. Brainstorm alternatives
    5. Analyze and rank alternatives
    6. Evaluate impact on all areas (this is a “what could go wrong?” step)
    7. Create a visual, detailed, double-ended schedule

    I like that it's organized into steps and that its overall focus is "let's think about this a bit". That is good, and I especially like #3 and #6.

    On the other hand, agile teams who aren’t used to making changes can easily get stuck in analysis paralysis and can’t actually agree on something to try, and a detailed set of steps could easily make that worse. I’m more concerned that they try *something* at the beginning rather than it be the absolute best thing to try.

    So, I’m not sure about this one yet, but it is interesting.

    Organic vs Mechanistic

    In the book, they talk about two ways of implementing lean. The organic approach is close to the Toyota approach; you send a very experienced coach (sensei) into the group, and they teach the group how to do continuous improvement and stick around for a while. This gives great results, but requires a lot of work to teach the team members and reinforcement to make sure the whole team understands that continuous improvement is their job. It is also sensitive to the environment the group is embedded inside; some groups had made the transition but it didn’t take because management didn’t understand the environment it took to foster the improvement in the first place.

    I’m sure that seems familiar to many of you trying to do agile.

    The mechanistic approach comes from the Six Sigma folks. It focuses more on top-down implementation; you train up a centralized group of people and they go out across the company to hold Kaizen (improvement) events. That gives you breadth and consistency across the organization – which are good things – but the results aren’t as significant and – more importantly – teams do not keep improving on their own.

    As you might have figured out, I’m a big fan of the organic approach, as I see it as the only way to get the ongoing continuous improvement that will take you someplace really great – the only way that you will get a radically more productive team. And I’ve seen a lot of “scrum from above” implementations, and – at best – they have not been significant successes. So, I’m biased.

    Interestingly, the book has a case study of a company that owned two shipyards. One took an organic approach and the other took the mechanistic approach. The organic approach worked great where it was tried, but it was difficult to spread across that shipyard without support and it was very easy for the group to lose the environment that they needed to do continuous improvement.

    The mechanistic shipyard had not seen the sort of improvements that the organic one saw, but because they had an established program with executive sponsorship, the improvements were spread more broadly in the enterprise and stuck around a bit better.

    The consultants said that after 5 years it was not clear which shipyard had benefitted more. I find that very interesting: you can do something organic, but it’s really dependent on the individuals, and to make something lasting you need the support of the larger organization.

    The role of employees

    In the Toyota world, everybody works on continuous improvement, and there is an expectation that every employee should be able to provide an answer around what the current issues are in their group and how that employee is helping make things better.

    That is something that is really missing in the software world, and I’m curious what sort of improvements you would see if everybody knew it was their job to make things better on an ongoing basis.

    The role of management

    One of the interesting questions about agile transitions is how the role of management evolves, and there are a lot of different approaches taken. We see everything from “business as usual” to people managers with lots of reports (say, 20-50) to approaches that don’t have management in the traditional sense.

    I’m a big believer in collaborative self-organizing approaches, and the best that I’m usually hoping for is what I’d label as “benign neglect”. Since that is rare, I hadn’t spent much time thinking about what optimal management might be.

    I think I may now have a partial answer to this. Toyota lives and breathes continuous improvement, and one of the most important skills in management is the ability to teach their system at that level. They have employees whose only role is to help groups with continuous improvement. I think the agile analog is roughly, “what would it be like if your management was made up of skilled agile coaches who mostly focused on helping you be better?”

    Sounds like a very interesting world to be in – though I’m not sure it’s practical for most software companies.  I do think that having management focusing on the continuous improvement part – if done well – could be a significant value-add for a team.

  • Eric Gunnerson's Compendium

    Response to comments : You suck at TDD #3–Design sensitivity and improvement

    • 9 Comments

    I got some great comments on the post, and I answered a few in comments but one started to get very long-winded so I decided to convert my response into a post.

    Integration tests before refactoring

    The first question is around whether it would be a good idea to write an integration test around code before refactoring.

    I hate integration tests. It may be the kinds of teams that I've been on, but in the majority of cases, they:

    1. Were very expensive to write and maintain
    2. Took a long time to run
    3. Broke often for hard-to-determine reasons (sometimes randomly)
    4. Didn't provide useful coverage for the underlying features.

     

    Typically, the number of issues they found was not worth the amount of time we spent waiting for them to run, much less the cost of creating and maintaining them.

    There are a few cases where I think integration tests are justified:

    1. If you are doing something like ATDD or BDD, you are probably writing integration tests. I generally like those, though it's possible they could get out of hand as well.
    2. You have a need to maintain conformance to a specification or to previous behavior. You probably need integration tests for this, and you're just going to have to pay the tax to create and maintain them.
    3. You are working in code that is scary.

     

    "Scary" means something very specific to me. It's not about the risk of breaking something during modification, it's about the risk of breaking something in a way that isn't immediately obvious.

    There are times when the risk is significant and I do write some sort of pinning tests, but in most cases the risk does not justify the investment. I am willing to put up with a few hiccups along the way if it avoids a whole lot of time spent writing tests.

    I'll also note that to get the benefit out of these tests, I have to cover all the test cases that are important. The kinds of things that I might break during refactoring are the same kind of things I might forget to test. Doing well at this makes a test even more expensive.

    In the case in the post, the code is pretty simple and it seemed unlikely that we could break it in non-obvious way, so I didn't invest the time in an integration test, which in this case would have been really expensive to write.  And the majority of the changes were done automatically using Resharper refactorings that I trust to be correct.

    Preserving the interface while making it testable

    This is a very interesting question. Is it important to preserve the class interface when making a class testable, or should you feel free to change it? In this case, the question is whether I should pull the creation of the LegacyService instance out of the method and pass it in through the constructor, or instead use another technique that would allow me to create either a production or test instance as necessary.

    Let me relate a story…

    A few years ago, I led a team that was responsible for taking an authoring tool and extending it. The initial part had been done fairly quickly and wasn't very well designed, and it had only a handful of tests.

    One day, I was looking at a class, trying to figure out how it worked, because the constructor parameters didn't seem sufficient to do what it needed to do. So, I started digging and exploring, and I found that it was using a global reference to an uber-singleton that gave it access to 5 other global singletons, and it was using these singletons to get its work done. Think of it as hand-coded DI.

    I felt betrayed and outraged. The constructor had *lied* to me, it wasn't honest about its dependencies.

    And that started a time-consuming refactoring where I pulled out all the references to the singletons and converted them to parameters. Once I got there, I could now see how the classes really worked and figure out how to simplify them.

    I prefer my code to be honest. In fact, I *need* it to be honest. Remember that my premise is that in TDD, the difficulty of writing tests exerts design pressure, and that in response to that pressure I will refactor the code to be easier to test, which aligns well with "better overall". So I hugely prefer code that makes dependencies explicit, both because it is more honest (and therefore easier to reason about), and because it's messy and ugly, which means I'm more likely to convert it to something that is less messy and ugly.

    Or, to put it another way, preserving interfaces is a non-goal for me. I prefer honest messiness over fake tidiness.
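
    For concreteness, here is a minimal before/after sketch of the option I prefer; everything here is a hypothetical name apart from LegacyService, and this is a sketch of the idea rather than the code from the earlier post:

    // Before: the dependency is created inside the method. The constructor lies
    // about what the class needs, and a test cannot substitute anything.
    public class ReportGenerator
    {
        public string BuildReport(int customerId)
        {
            LegacyService service = new LegacyService();
            return "Report: " + service.FetchCustomerName(customerId);
        }
    }

    // After: the dependency is explicit. The constructor is honest, and a test
    // can pass in a simulator or other test double.
    public class ReportGenerator
    {
        private readonly LegacyService _service;

        public ReportGenerator(LegacyService service)
        {
            _service = service;
        }

        public string BuildReport(int customerId)
        {
            return "Report: " + _service.FetchCustomerName(customerId);
        }
    }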

  • Eric Gunnerson's Compendium

    You suck at TDD #2–Mocking libraries

    • 1 Comments

    Note: I am focusing only on the design impact of TDD. To better understand the overall impact, see this series of posts by Jay Bazuzi.

    My first experience with TDD was back in 2002 or so, and it was in C++, so there weren't any mocking libraries available. That meant that I had to use hand-written mocking classes.

    When hand-mocking, you need to create separate classes, write each of the methods that you need, etc. If the scenario is complex, you may have to write several classes that coordinate with each other to accomplish the mock. It's more than a little painful at times, and creating a new class always seems like a bit of an interrupt in my train of thought.
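
    For context, here is roughly what a hand-written mock looks like; the IMailSender interface and its single method are hypothetical, but even this trivial case is a separate class that you have to create and maintain:

    public interface IMailSender
    {
        void Send(string address, string body);
    }

    // A hand-written mock: it records what was done to it so that a test
    // can assert on the interaction afterwards.
    public class MockMailSender : IMailSender
    {
        public int SendCallCount;
        public string LastAddress;
        public string LastBody;

        public void Send(string address, string body)
        {
            SendCallCount++;
            LastAddress = address;
            LastBody = body;
        }
    }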

    That is as it should be. One of my foundational principles of running development teams is that pain – which, in this context, means, "tedious things I have to do instead of writing new code" – is a great incentive. That which is tedious is an automatic target for reduction and elimination. So, you have developers fix their own bugs because it will cause them pain to do so.

    (This is, of course, not a panacea; there are plenty of organizations where this will not result in better quality because the incentive towards writing bugs is so strong, but I digress…)

    Then mocking libraries showed up on the scene. No need to write new classes, just write some mocking code and you're done. That reduced the pain and reduced the design pressure that "the test is hard to write" was exerting, which reduces the improvement.

    Then a few mocking libraries showed up that let you mock statics, which allows you to test things that were totally untestable before, and that further reduced the design pressure. You can do some pretty ugly things with those libraries…

    Refactoring gets harder

    Many of the mocking libraries have another problem – they use a string-based approach for defining their mocks. This means that they are not refactoring-friendly; after you refactor your code, you find out that your tests won't even run, and you have to go and hand-modify them so that they match your refactored code. This makes refactoring more painful, and makes it more likely that you will just skip the refactoring.

    Discussion

    As you have probably gathered, I am not a fan of mocking libraries. They make it too easy to do things that I think you shouldn't, and they short circuit the feedback that you would otherwise be feeling. Embrace the pain of writing your own mocks, and use that to motivate you towards better solutions. I'll be talking more about better solutions in future posts.

    There is one situation where mocking libraries are great: when I need to bring an existing codebase under test so that I won't break it as I work on it. In that case, I need their power, and I will plan to get rid of them in the longer term.

  • Eric Gunnerson's Compendium

    You suck at TDD #1: Rewrite the steps

    • 6 Comments

    I've been paying attention to TDD for the past few years – doing it myself, watching others doing it, reading about it, etc. - and I've been seeing a lot of variation in the level of success people are having with it. As is my usual approach, I wrote a long and tedious post about it, which I have mercifully decided not to inflict on you.

    Instead, I'm going to do a series of posts about the things I've seen getting in the way of TDD success. And, in case it isn't obvious, I've engaged in the majority of the things that I'm going to be writing about, so, in the past, I sucked at TDD, and I'm sure I haven't totally fixed things, so I still suck at it now.

    Welcome to "You suck at TDD"…

    Rewrite the steps

    The whole point of TDD is that following the process exerts design pressure on your code so that you will refactor to make it better (1). More specifically, it uses the difficulty in writing simple test code as a proxy for the design quality of the code that is being tested.

    Let's walk through the TDD steps:

    1. Write a test that fails

    2. Make the test pass

    3. Refactor

    How does this usually play out? Typically, we dive directly into writing the test, partly because we want to skip the silly test part and get on to the real work of writing the product code, and partly because TDD tells us to do the simplest thing that could possibly work. Writing the test is a formality, and we don't put a lot of thought into it.

    The only time this is not true is when it's not apparent how we can actually write the test. If, for example, a dependency is created inside a class, we need to do something to be able to inject that dependency, and that usually means some refactoring in the product code.

    Now that we have the test written, we make the test pass, and then it's time to refactor, so we look at the code, make some improvements, and then repeat the process.

    And we're doing TDD, right?

    Well…. Not really. As I said, you suck at TDD…

    Let's go back to what I wrote at the beginning of the section. I said that the point of TDD was that the state of our test code (difficult to write/ugly/etc.) forced us to improve our product code. To succeed at that, our test code has to either be drop-dead simple (setup/test/assert in three lines) or it needs to evolve to become simpler as we go. With the exception of the cases where we can't write a test, our tests typically are static. I see this all the time.

    Let's try a thought experiment. I want you to channel your mindset when you are doing TDD. You have just finished making the test pass, and you are starting the refactor step. What are you thinking about? What are you looking at?

    Well, for me, I am focused on the product code that I just wrote, and I have the source staring me in the face. So, when I think of refactoring, I think about things that I might do to the product code. But that doesn't help my goal, which is to focus on what the test code is telling me, because it is the proxy for whether my product code is any good.

    This is where the three-step process of TDD falls down; it's really easy to miss the fact that you should be focusing on the test code and looking for refactorings *there*. I'm not going to say that you should ignore product code refactorings, but I am saying that the test ones are much more important.

    How can we change things? Well, I tried a couple of rewrites of the steps. The first is:

    1. Write a test that fails

    2. Make the test pass

    3. Refactor code

    4. Refactor test

    Making the code/test split explicit is a good thing as it can remind us to focus on the tests. You can also rotate this around so that "refactor tests" is step #1 if you like. This was an improvement for me, but I was still in "product mindset" for step 4 and it didn't work that great. So, I tried something different:

    1. Write a test that fails

    2. Refactor tests

    3. Make the test pass

    4. Refactor code

    Now, we're writing the test that fails, and then immediately stopping to evaluate what that test is telling us. We are looking at the test code and explicitly thinking about whether it needs to improve. That is a "good thing".

    But… There's a problem with this flow. The problem is that we're going to be doing our test refactoring while we have a failing test in our test suite, which makes the refactoring a bit harder as the endpoint isn't "all green", it's "all green except for the new test".

    How about this:

    1. Write a test that fails

    2. Disable the newly failed assertion

    3. Refactor tests

    4. Re-enable the previously failing assertion

    5. Make the test pass

    6. Refactor code

    That is better, as we now know when we finish our test refactoring that we didn't break any existing tests.
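
    As a concrete illustration of step 2, "disabling" can be as small as commenting out the new assertion (or marking the test ignored) while you refactor the tests; this sketch uses NUnit-style attributes and hypothetical names:

    [Test]
    public void Total_IsSumOfLineItems()
    {
        Order order = new Order();
        order.AddLineItem(10);
        order.AddLineItem(5);

        // Step 2: temporarily disabled so the suite stays green while the
        // tests are refactored; re-enabled in step 4, made to pass in step 5.
        // Assert.AreEqual(15, order.Total);
    }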

    My experience is that if you think of TDD in terms of these steps, it will help put the focus where it belongs – on the tests. Though I will admit that for simple refactorings, I often skip disabling the failing assertion, since it's a bit quicker and it's a tiny bit easier to remember where I was after the refactoring.

  • Eric Gunnerson's Compendium

    Agile Transitions Aren't

    • 0 Comments

    A while back I was talking with a team about agile. Rather than give them a typical introduction, I decided to talk about techniques that differentiated more successful agile teams from less successful ones. Near the end of the talk, I got a very interesting question:

    "What is the set of techniques where, if you took one away, you would no longer call it 'agile'?"

    This is a pretty good question. I thought for a little bit, and came up with the following:

    • First, the team takes an incremental approach; they make process changes in small, incremental steps
    • Second, the team is experimental; they approach process changes from a "let's try this and see if it works for us" perspective.
    • Third, the team is a team; they have a shared set of work items that they own and work on as a group, and they drive their own process.

    All of these are necessary for the team to be moving their process forward. The first two allow the process to be changed in a low-risk and reversible way, and the third provides the group ownership that makes it possible to have discussions about process changes in the first place. We get process plasticity, and that is the key to a successful agile team – the ability to take the current process and evolve it into something better.

    Fast forward a few weeks later, and I was involved in a discussion about a team that had tried Scrum but hadn't had a lot of luck with it, and I started thinking about how agile transitions are usually done:

    • They are implemented as a big change; one week the team is doing their old process, the next week they (if they are lucky) get a little training, and then they toss out pretty much all of their old process and adopt a totally different process.
    • The adoption is usually a "this is what we are doing" thing.
    • The team is rarely the instigator of the change.

    That's when I realized what had been bothering me for a while…

    The agile transition is not agile.

    That seems more than a little weird. We are advocating a quick incremental way of developing software, and we start by making a big change that neither management nor the team really understands, on the belief that, in a few months, things will shake out and the team will be in a better place. Worse, because the team is very busy trying to learn a lot of new things, it's unlikely that they will pick up on the incremental and experimental nature of agile, so they are likely going to go from their old static methodology to a new static methodology.

    This makes the "you should hire an agile coach" advice much more clear; of course you need a coach because otherwise you don't have much chance of understanding how everything is supposed to work. Unfortunately, most teams don't hire an agile coach, so it's not surprising that they don't have much success.

    Is there a better way? Can a team work their way into agile through a set of small steps? Well, the answer there is obviously "Yes", since that's how the agile methods were originally developed.

    I think we should be able to come up with a way to stage the changes so that the team can focus on the single thing they are working on rather than trying to deal with a ton of change. For example, there's no reason that you can't establish a good backlog process before you start doing anything else, and that would make it much easier for the agile teams when they start executing.

  • Eric Gunnerson's Compendium

    Resharper tip #1: Push code into a method / Pull code out of a method

    • 5 Comments

    Resharper is a great tool, but many times the operation that I want to perform isn’t possible with a single refactoring; you need multiple refactorings to get the result that you want. I did a search and couldn’t find these anywhere, so I thought I’d share them with you.

    If you know where more of these things are described and/or you know a better way of doing what I describe, please let me know.

    Push code into a method

    Consider the following code:

    static void Main(string[] args)
    {
        DateTime start = DateTime.Now;

        DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
        string startString = start.ToShortDateString();

        Process(oneDayEarlier, startString);
    }

    private static void Process(DateTime oneDayEarlier, string startString)
    {
        Console.WriteLine(oneDayEarlier);
        Console.WriteLine(startString);
    }

    Looking at the code in Main(), there are a couple of variables that are passed into the Process() method. A little examination shows that things would be cleaner if they were in the Process() method, but there’s no “move code into method” refactoring, so I’ll have to synthesize it out of the refactorings that I have. I start by renaming the Process() method to Process2(). Use whatever name you want here:

    class Program
    {
        static void Main(string[] args)
        {
            DateTime start = DateTime.Now;

            DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
            string startString = start.ToShortDateString();

            Process2(oneDayEarlier, startString);
        }

        private static void Process2(DateTime oneDayEarlier, string startString)
        {
            Console.WriteLine(oneDayEarlier);
            Console.WriteLine(startString);
        }
    }

    Next, I select the lines that I want to move into the method, plus the method call itself, and do an Extract Method refactoring to create a new Process() method:

    static void Main(string[] args)
    {
        DateTime start = DateTime.Now;

        Process(start);
    }

    private static void Process(DateTime start)
    {
        DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
        string startString = start.ToShortDateString();

        Process2(oneDayEarlier, startString);
    }

    private static void Process2(DateTime oneDayEarlier, string startString)
    {
        Console.WriteLine(oneDayEarlier);
        Console.WriteLine(startString);
    }

    Finally, inline the Process2() method:

    class Program
    {
        static void Main(string[] args)
        {
            DateTime start = DateTime.Now;

            Process(start);
        }

        private static void Process(DateTime start)
        {
            DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
            string startString = start.ToShortDateString();

            Console.WriteLine(oneDayEarlier);
            Console.WriteLine(startString);
        }
    }

    Three quick refactorings got me to where I wanted, and it’s about 15 seconds of work if you use the predefined keys.

    Pull Code Out of a Method

    Sometimes, I have some code that I want to pull out of a method. Consider the following:

    class Program
    {
        static void Main(string[] args)
        {
            int n = 15;

            WriteInformation("Information: ", n);
        }

        private static void WriteInformation(string information, int n)
        {
            File.WriteAllText("information.txt", information + n);
        }
    }

    I have a few options here. I can pull “information.txt” out easily by selecting it and using Introduce Parameter:

    class Program
    {
        static void Main(string[] args)
        {
            int n = 15;

            WriteInformation("Information: ", n, "information.txt");
        }

        private static void WriteInformation(string information, int n, string filename)
        {
            File.WriteAllText(filename, information + n);
        }
    }

    I could use that same approach to pull out “information + n”, but I’m going to do it in an alternate way that works well if I have a chunk of code. First, I introduce a variable:

    class Program
    {
        static void Main(string[] args)
        {
            int n = 15;

            WriteInformation("Information: ", n, "information.txt");
        }

        private static void WriteInformation(string information, int n, string filename)
        {
            string contents = information + n;
            File.WriteAllText(filename, contents);
        }
    }

    I rename the method:

    class Program
    {
        static void Main(string[] args)
        {
            int n = 15;

            WriteInformation2("Information: ", n, "information.txt");
        }

        private static void WriteInformation2(string information, int n, string filename)
        {
            string contents = information + n;
            File.WriteAllText(filename, contents);
        }
    }

    And I now extract the code that I want to remain in the method to a new method:

    class Program
    {
        static void Main(string[] args)
        {
            int n = 15;

            WriteInformation2("Information: ", n, "information.txt");
        }

        private static void WriteInformation2(string information, int n, string filename)
        {
            string contents = information + n;
            WriteInformation(filename, contents);
        }

        private static void WriteInformation(string filename, string contents)
        {
            File.WriteAllText(filename, contents);
        }
    }

    And, finally, I inline the original method:

    class Program
    {
        static void Main(string[] args)
        {
            int n = 15;

            string contents = "Information: " + n;
            WriteInformation("information.txt", contents);
        }

        private static void WriteInformation(string filename, string contents)
        {
            File.WriteAllText(filename, contents);
        }
    }
  • Eric Gunnerson's Compendium

    Port/Adapter/Simulator and error conditions

    • 0 Comments

    An excellent question on an internal alias came up today, and I wanted to share my response more widely.

    The question is around simulating error conditions when doing Port/Adapter/Simulator.

    For example, if my production adapter is talking to a database, the database might be unreachable and the real adapter would throw a timeout exception. How can we get the simulator to do that, so we can write a test that verifies that our code behaves correctly in that scenario?

    Before I answer, I need to credit Arlo, who taught me at least part of this technique…

    Implement common behaviors across all adapters

    The first thing to do is to see if we can figure out how to make the simulator behavior mirror the behavior of the real adapter.

    If we are implementing some sort of store, the real adapter might throw an “ItemNotFound” exception if the item isn’t there, and we can just make the simulator detect the same situation and throw the same exception. And we will – of course – write a test that we can use to verify that the behavior matches.

    Or, if there is a restriction on the names in a store (say one of our adapters stores items in a file, and the name is just the filename), then all of the adapters must implement that restriction (though I’d consider whether I wanted to do that or use an encoding approach to get rid of the restriction for the file adapter).
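
    In either case, the verification is just another port contract test that every adapter – simulator and real – has to pass. A sketch, using NUnit, a hypothetical IDocumentStore port, an ItemNotFoundException, and a CreateStore() factory supplied by each adapter's test fixture:

    [Test]
    public void LoadDocument_ThrowsItemNotFound_WhenTheDocumentIsMissing()
    {
        IDocumentStore store = CreateStore();

        // Every adapter must report a missing item the same way.
        Assert.Throws<ItemNotFoundException>(
            () => store.LoadDocument("no-such-document"));
    }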

    Those are the simple cases, but the question was specifically about timeouts. Timeouts are random and non-deterministic, right?

    Yes, they are non-deterministic in actual use, but there might be a scenario that will always throw a timeout. What does the real adapter do if we pass in a database server that does not exist – something like “DatabaseThatDoesNotExist”? If we can figure out a developer/configuration error that sets up the scenario we want, then we can implement the same behavior in all of our adapters, and our world will be simple.

    However, the world is not always that simple…

    Cheat

    I’ll note here that Arlo did not teach me this technique, so any stupidity belongs to me…

    If I can’t find out a deterministic way to get a scenario to happen, then I need to implement a back door. I do this by adding a method to the simulator (not the adapter) that looks something like this:

    public void SimulateTimeoutOnLoad();

    Note that it is named with “Simulate”<scenario><method-name>, so that it’s easy to know that it isn’t part of the adapter interface and what it does. I will write a unit test to verify that the simulator does this correctly, but – alas – I cannot run this test against the real adapter because the method is not part of the adapter. That means it’s a decent idea to put these tests in a different file from the ones that target the adapter interface.

    Now that I have the new method, the test is pretty simple:

    FetcherSimulator fetcher = new FetcherSimulator();
    ObjectToTest ott = new ObjectToTest(fetcher);

    fetcher.SimulateTimeoutOnLoad();

    ott.Load();
    Assert.Whatever(…);

    I’m just using the method to reach into the simulator and tell it to do something specific in a specific scenario.
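
    For completeness, here is a minimal sketch of the simulator side of that back door, assuming Fetcher is the port interface and Load() is the operation in question:

    using System;

    public class FetcherSimulator : Fetcher
    {
        private bool _timeoutOnLoad;
        private string _data = "";

        // The back door: defined only on the simulator, not on the Fetcher port.
        public void SimulateTimeoutOnLoad()
        {
            _timeoutOnLoad = true;
        }

        public string Load()
        {
            if (_timeoutOnLoad)
            {
                throw new TimeoutException("Simulated timeout");
            }

            return _data;   // normal in-memory behavior
        }
    }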

    My goal is to do this as little as possible because it reduces the benefit we get from P/A/S, so I try to look very hard to find a way to not cheat.

    I should also note that if you are using P/A/S on the UI side, you are pretty much stuck using this technique, because the things that you are simulating are user actions, and none of them can be triggered through using the real adapter.

  • Eric Gunnerson's Compendium

    Agile team evaluation

    • 1 Comments

    I’ve been thinking a bit about team evaluation. In the agile world, this is often done by looking at practices – is the team doing pairing, are they doing story mapping, how long is their iteration length?

    This is definitely a useful thing to do, but it can sometimes be too prescriptive; a specific practice needs to be good for a team where they are right now, and that’s not always clear. I’m a big fan of not blocking checkin on code review, but I need something to replace it (continuous code review through pairing or mobbing) before it makes sense.

    Instead, I’ve tried to come up with a set of questions that focus on the outcomes that I think are most important and whether the team is getting better at those outcomes. I’ve roughly organized them into four categories (call them “Pillars” if you must).

     

    1: Delivery of Business Value

    • Is the team focused on working on the most important things?
    • Are they delivering them with a quality they are proud of?
    • Are they delivered in small, easy-to-digest chunks?
    • Is the team getting better?

    2: Code Health

    • Is the code well architected?
    • Are there tests that verify that the code works and will continue to work?
    • Is the team getting better over time?
      • Is the architecture getting cleaner?
      • Is it easier to write tests?
      • Is technical debt disappearing?
      • Are bugs becoming less frequent?
      • Are better technologies coming in?

    3: Team Health

    • Is the team healthy and happy?
    • Is there “esprit de corps” in the team?
    • Are team members learning to be better at existing things?
    • Are team members learning how to do new things?
    • Does the team have an experimental mindset?

    4: Organization Health

    • Are changes in approaches by the team(s) leading to changes in the overall organization?
    • Are obstacles to increased speed and efficiency going away?
    • Are the teams trying different things and sharing their findings? Or is the organization stuck in a top-down, monocultural approach?
    • Is there a clear vision and charter for the organization?
    • Does the organization focus on “what” and “why” and let the teams control the “how”?
  • Eric Gunnerson's Compendium

    Agile management

    • 2 Comments

    A friend at work posted a link to the following article:

    I’m Sorry, But Agile Won’t Fix Your Products

    and I started to write a quick note but thought it deserved a bigger response.

    +++++++++++++

    Okay, so, I agree with pretty much all of the article.

    As is often the case, I went and wrote some analysis, didn’t like it, tried to make it better, and ended up abandoning it to write something else. I agree with the comments in the article about command-and-control, but I think there is another aspect that is worth discussing. I’ll note that some of this is observational rather than experiential.

    Collectively, management tends to value conformity pretty highly. If, for example, your larger group creates two-year product plans, you will be asked about your two-year product plans and – even if you have a great reason for only doing three-month plans that your manager agrees with – you will become an outlier. Being an outlier puts you at risk; if, for example, things don’t go as well as expected for you, there is now an obvious cause for the problem – your nonconformity. Or, your manager gets promoted, and the new manager wants two-year plans.

    This effect was immortalized in a saying dating back to the days of mainframes:

    Nobody ever got fired for buying IBM…

    Because of this effect, you end up with what I call a “Group Monoculture”, where process is mostly fixed, and inefficiency and lack of progress are fine as long as they are the status quo.

    It is a truism that, whatever skills they might also possess, there is one commonality amongst all the managers; they possess the ability to be hired and/or promoted in the existing corporate culture. That generally means that they are good at following the existing process and good at conformity. This reinforces and cements the monoculture. Any changes that happen are driven from high up the chain and just switch the group to a different monoculture.

    Different is bad, which, last time I checked, was not one of the statements in the Agile Manifesto…

    How can agility happen in such an org? Well, it happens due to the actions of what I call process adapters. A process adapter adapts the process that exists above a group in the organization to be closer to the process that the team wants to have. For example, the adapter might keep that two-year plan up to date but allow the team below to work as if short planning cycles were the norm. Or an adapter might adapt the team’s one-week iteration cycle to the overall group’s 12-week cycle.

    Adaptation is not a panacea. The adaptation is always imperfect and some of the process from above leaks down, and it can be pretty stressful to the adapter; they are usually hiding some details from their manager, fighting battles so that they can be different, and running a very real career risk. As their team gets more agile and self-guided, the adaptation gets more leaky, and the adapter runs more risks; the whole thing can be derailed by investing time in reducing technical debt which slows them down, some unexpected questions by the agile team members to management, or the adapter getting a new manager.

    I’ve seen quite a few first-level (aka “lead”) adapters; leads tend to be focused more down at their team than up and out and can usually get away with more non-conformity; leads are viewed as less experienced and there’s often a feeling that they should have a lot of latitude in how they run their teams. Leads are also more likely to be senior and technically astute, which gives them more options to “explore different opportunities” both inside and outside the company.

    I haven’t seen any second-level adapters be successful for more than a year or so, though I have seen a few try really hard.

    Sometimes, adapters get promoted into the middle of the hierarchy or are hired from outside. This is often a frustrating position for the adapter. As Joe Egan and Gerry Rafferty wrote back in 1972:

    Clowns to the left of me,
    Jokers to the right,
    Here I am
    Stuck in the middle…

    One of two things tends to happen.

    Either the adapter gets frustrated with the challenges of adapting and trying to drive broader change and decides to do something else, or the adapter gets promoted higher. Further promotion often doesn’t have the hoped-for effect; as the adapter moves up they get broader scope, and the layers underneath them are managed by – you guessed it – the rank and file managers who are devoted to the existing monoculture. Not to mention that the agile “teams are self-organizing and drive their own approach” tenet means that adapters tend to give less direction to their reports.

  • Eric Gunnerson's Compendium

    A little something that made me happy…

    • 0 Comments

    Last week, I was doing some work on a utility I own. It talked to some servers in the background that could be slow at times and there was no way to know what was happening, so I needed to provide some way of telling the user that it was busy.

    I started writing a unit test for it, and realized I needed an abstraction (R U busy? Yes, IBusy):

    public interface IBusy
    {
        void Start();
        void Stop();
    }

    I plumbed that into the code, failed the test, and then got it working, but it wasn’t very elegant. Plus, the way I have my code structured, I had to pass it into each of the managers that do async operations, and there are four of those.

    The outlook was not very bright, but I can suffer when required, so I started implementing the next set.

    Halfway through, I got an idea. When I added in the asynchronous stuff, I needed a way to abstract that out for testing purposes, so I had defined the following:

    public interface ITaskCreator
    {
        void StartTask<T>(Func<T> func, Action<T> action  );
    }

    This is very simple to use; pass in the function you want to happen asynchronously and the action to process your result. There is a TaskCreatorSynchronous class that I use in my unit tests; it looks like this:

    public void StartTask<T>(Func<T> func, Action<T> action)
    {
        action(func());
    }
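
    For context, a consumer of ITaskCreator looks something like this sketch (the manager class and its method names are invented for illustration):

    public class ServerStatusManager
    {
        private readonly ITaskCreator m_taskCreator;

        public ServerStatusManager(ITaskCreator taskCreator)
        {
            m_taskCreator = taskCreator;
        }

        public void Refresh()
        {
            // The func runs asynchronously; the action handles the result when it arrives.
            m_taskCreator.StartTask(
                () => FetchStatusFromServer(),
                status => UpdateDisplay(status));
        }

        private string FetchStatusFromServer() { return "OK"; }   // stand-in for a slow server call
        private void UpdateDisplay(string status) { /* update the UI */ }
    }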

    What I realized was that the times I needed to show the code was busy were exactly the times when I was running a task, and I already had a class that knew how to do that.  I modified TaskCreator:

    public class TaskCreator : ITaskCreator
    {
        public EventHandlerEmpty StartedTask;
        public EventHandlerEmpty FinishedTask;

        public void StartTask<T>(Func<T> func,
            Action<T> action  )
        {
            if (StartedTask != null)
            {
                StartedTask();
            }

            Task<T>.Factory.StartNew(func)
                .ContinueWith((task) =>
                {
                    action(task.Result);
                    if (FinishedTask != null)
                    {
                        FinishedTask();
                    }
                }, TaskScheduler.FromCurrentSynchronizationContext());
        }
    }

    It now has an event that is called before the task is started and one that is called after the task is completed. All my main code has to do is hook up appropriately to those events, and any class that uses that instance to create tasks will automatically get the busy functionality.
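
    As a sketch of that hookup (assuming EventHandlerEmpty is a simple parameterless delegate and that the IBusy implementation is the one described above):

    private void WireUpBusyIndication(TaskCreator taskCreator, IBusy busyIndicator)
    {
        // Any manager that uses this ITaskCreator instance now gets busy
        // indication for free; there is no need to pass IBusy into each of them.
        taskCreator.StartedTask += busyIndicator.Start;
        taskCreator.FinishedTask += busyIndicator.Stop;
    }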

    I am happy when things turn out so neatly.

  • Eric Gunnerson's Compendium

    What makes a good metric?

    • 0 Comments

    I got into a discussion at work today about metrics – a discussion about correctness vs utility – and I wrote something that I thought would be of general interest.

    ------

    The important feature of metrics is that they are useful, which generally means the following:

    a) Sensitive to the actual thing that you are trying to measure (ie when the underlying value changes, the metric changes).

    b) Positively correlated with the thing you are trying to measure (a change in the underlying value produces a move in the correct direction of the metric).

    c) Not unduly influenced by other factors outside of the underlying value (ie a change in those other factors does not have a significant effect on the metric).

    Those give you a decent measure. It’s nice to have other things – linearity, where a 10% change in the underlying value results in a 10% change in the metric – but they aren’t a requirement for utility in many cases.

    To determine utility, you typically do a static analysis, where you look at how the metric is calculated, figure out how that relates to what you are trying to measure, and generally try to come up with scenarios that would break it. And you follow that up with empirical analysis, where you look at how it behaves in the field and see if it is generating the utility that you need.

    The requirements for utility vary drastically across applications. If you are doing metrics to drive an automated currency trading system, then you need a bunch of analysis to decide that a metric works correctly. In a lot of other cases, a very gross metric is good enough – it all depends on what use you are going to make of it.

    ------

    Two things to add to what I wrote:

    Some of you have undoubtedly noticed that my definition for the goodness of a metric – utility – is the same definition that is used for scientific theories. That makes me happy, because science has been so successful, and then it makes me nervous, because it seems a bit too convenient.

    The metrics I was talking about were ones coming out of customer telemetry, so the main factors I was worried about were how closely the telemetry reflected actual customer behavior and whether we were making realistic simplifying assumptions in our data processing. Metrics come up a lot in the agile/process world, and in those cases confounding factors are your main issue; people are very good at figuring out how to drive your surrogate measure in artificial ways without actually driving the underlying thing that you value.

  • Eric Gunnerson's Compendium

    Port/Adapter/Simulator and UI

    • 0 Comments

    I’ve been working on a little utility project, and I’ve been using port/adapter/simulator on both the server-side parts and on the UI parts. It has been working nicely, though it took me a while to get there.

    Initially, I started with a single UI class. After a bit of extension, it looked a bit ugly, so I decided to break it apart by functional area – there’s a main working area, there’s a favorites area, there’s an executing area, and there’s a config area. For each area, it looks something like this:

    IUIFavorites

    -> UIFavorites
    -> UIFavoritesSimulator (really more of a mock than a simulator)

    FavoritesManager(IUIFavorites, IUIStore, etc. )

    The UI side handles just that – the UI – and the manager part handles the business logic. The UI part exposes events for user actions, and properties and methods for modification.
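
    As a rough sketch (the member names here are my own illustration, not the actual interface from the project), a UI port in this style ends up looking something like:

    public interface IUIFavorites
    {
        // Events for user actions...
        event Action AddFavoriteClick;
        event Action<string> FavoriteSelected;

        // ...and properties/methods the manager uses to modify what is shown.
        void PopulateFavorites(List<string> favoriteNames);
        string SelectedFavoriteName { get; }
    }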

    There was one slightly sticky part of this. There are times when the working area manager needs to add itself to the favorites. Options I thought of:

    1) Passing the UIWorking object to the favorites manager.

    2) Passing the working manager to the favorites manager.

    3) Hooking the UIWorking event to a favorites manager method in the main creation code.

    4) Hooking a working manager event to a favorites manager method in the main creation code.

    I didn’t like #1 or #2, so I ended up doing #4. #3 also seemed okay.
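
    For #4, the wiring lives in the main creation code and is just an event hookup; something like this sketch (the event and method names are invented for illustration):

    // In the main creation code: neither manager needs a reference to the other.
    workingManager.AddToFavoritesRequested += favoritesManager.AddFavorite;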

  • Eric Gunnerson's Compendium

    The no bugs journey part 3–what kind of bug is this?

    • 0 Comments

    If you are in a buggy group, you have a lot of bugs.

    After writing the preceding, I’ll endeavor to make the rest of this post slightly less obvious.

    Anyway, so, you have a lot of bugs, and you need to get better. How can you get started?

    Well, there’s a common technique in agile called root-cause analysis. It’s a good technique, but your team isn’t ready for it and you can’t afford to do it.

    Instead, I recommend a technique I came up with (probably not uniquely) that I call “bug categorization”. I started this when a team that I was on had a notably buggy iteration, and I noticed that a fair number of the bugs were just sloppy – not reading the spec, not running the app after they had made a change, that sort of thing.

    What I wanted was a way to help the team to clean up their act without being prescriptive.

    So, I went through all the bugs – something like 120 – and did a really quick classification on each of them, putting them into one of the following categories:

    • Foreseeable: This is a bug that we should have caught.
    • External: Somebody else broke us.
    • Existing: This bug existed in past versions of our software.
    • Spec: There was a problem with the spec.
    • Other: Something else.

    At our retrospective, I presented a graph of the bugs, and then put up a post-it that said, “too many bugs”.

    And then I let the team work on the problem. And they did great, cutting the foreseeable bugs significantly in just a few iterations despite me expanding the definition of foreseeable to include more bugs.

    And while my initial impetus was to reduce the sloppy bugs, the team spent a lot of time fixing bugs related to definition and communication. For example, they invented a dev/test/pm meeting at the start of a story to make sure everybody was on the same page.

    So, that’s bug categorization.

  • Eric Gunnerson's Compendium

    No bugs journey episode 2: It’s a matter of values…

    • 6 Comments

    Read episode 1 first.

    Here we are at episode 2, and time for another question. Of the three, which one do you value the most?

    1. Shipping on schedule
    2. Shipping with a given set of features
    3. Shipping with high quality

    Write down your answer.

    Ha ha! It was a trick question. Pretty much everybody is going to say, “it depends”, because it’s a tradeoff between these three. Let’s try an alternate expression:

    1. Build it fast
    2. Build a great product
    3. Build it right

    All of these are obviously important. Hmm. Let’s try a different tack…

    If you get in a situation where you need to choose between schedule, features, and quality, which one would you choose?

    1. Shipping on schedule with the right set of features but adequate quality
    2. Shipping late with the right set of features and high quality.
    3. Shipping on schedule and with high quality but with fewer features than we had hoped

    Write down your answer.

    Let’s talk about option #1. First off, it doesn’t exist. The reason it doesn’t exist is that there is a minimum quality level at which you can survive and grow as a company. If you are trying to hold features and schedule constant, as you develop features you are forcing the quality level down – it *has to* go down because something has to give if you get behind (short estimates, etc.). That means you start with a product that you just shipped, and then it degrades in quality as you do features, and then at some point you want to ship, so you realize you need quality, so you focus on getting back to *adequate*. Unfortunately, fixing bugs is the activity with the most schedule uncertainty, so there is no way you are shipping on time.

    It’s actually worse than this. Typically, you’ve reached a point where your quality is poor and you know that there is no way that you are going to get the features done and reach acceptable quality, so you cut features.

    And you end up with a subset of features that ships late with barely adequate quality. There’s an old saying in software that goes, “schedule/features/quality – pick 2”, but what few people realize is that it’s very easy to pick zero.

    I hope I’ve convinced you that the first approach doesn’t work, and given that I’m saying this series is about “no bugs”, you probably aren’t surprised. Let’s examine the other options.

    I’ve seen #2 work for teams; they had a feature target in their head, built their software to high quality standards, and then shipped when they were done. This was common in the olden days when box products were common, when the only way to get your product out there was to produce a lot of CDs (or even diskettes) and ship them out to people. It worked, in the sense that companies that used the approach were successful.

    It is, however, pretty disruptive on the business as a whole; it’s hard to run a business where you don’t know:

    • If new revenue is going to show up 1 year or 3 years from now
    • When you’ll be spending money on sales and marketing
    • Whether the market has any interest in buying what you build

    Not to mention making it tough on customers who don’t know when they’ll get that bug fix and enhancement they requested.

    Which leaves option #3, which through an amazing coincidence, is the one that is the most aligned with agile philosophy. Here’s one way to express it:

    Given that:

    • It is important to the rest of the business and our customers that we be predictable in our shipping schedule
    • Bugs slow us down, make customers unhappy, and pose significant schedule risk
    • The accuracy at which we can make estimates of the time to develop features is poor
    • Work always expands (people leave the team or get sick, we need to spend time troubleshooting customer issues, engineering systems break), forcing us to flex somewhere

    The only rational option that we have is to flex on features. If we are willing to accept that features don’t show up as quickly as we would like (or had originally forecast), it is possible to ship on time with high quality.

    I chose the phrase “it is possible” carefully; it is possible to build such a system but not get the desired result.

    Flexing on features effectively

    Time for another exercise. Studies have shown that you will get a much better review of a plan if you spend some focused time thinking about how it might go wrong.

    Thinking about the world I just described, what could go wrong? What might keep us from shipping on time with high quality?

    Write down your answers.

    Here’s my list:

    1. At the time we need to decide to cut features, we might have 10 features that are 80% done. If we move out all 10 of them to the next iteration, we have nothing to ship.
    2. We might have a hard time tracking where we are early enough to make decisions; most people have seen a case where all the features were on schedule until a week before shipping and suddenly 25% were behind by a week or more. If this happens, it may be too late to adapt.
    3. We might have teams that are dependent on each other; the only way to make my schedule is to cut work from team A, but that means that teams B & C can’t finish their work as planned, and they will have to adjust, using time we don’t have.
    4. This release was feature A and a few bugfixes, and feature A isn’t on schedule. We’ll look silly if we just shipped bugfixes.

    (At this point I’m really hoping that you don’t have something important on your list that I’ve missed. If so, that’s what comments are for…)

    How can we mitigate? Well, for the first one, we can focus on getting one feature done before we move on to the next one. That means that we would have 8 features 100% done instead of 10 features 80% done. This is one of the main drivers for the agile “work together as a team” approach.

    This mitigation works for the second one as well. If we make our tracking “is the feature fully complete and ready to ship”, we can tell where we are (3/10 features currently done and ready to ship (“done done” in agile terminology)) and we have a better chance of predicting where we are going. This is another driver for “work together as a team”. Note that for both the first and the second one, the more granular our features are, the easier it is to make this work; it works great if the team has 5-10 items per iteration but poorly if it only has two. This is the driver for “small self-contained stories” and “velocity measurement” in agile.

    I have a few thoughts on the third one. You can mitigate by using short cycles and having teams B and C wait until A is done with their work. You can try to break the work A does into parts so the part the other teams need can be delivered first. Or you can go with a “vertical team” approach, which works great.

    For the fourth one, the real problem is that we put all of our eggs in one basket. Chopping feature A up will give us some granularity and the chance to get part way there. I also think that a shorter cycle will be our friend; if we are giving our customers updates every month, they will probably be fine with a message that says, “this month’s update only contains bugfixes”.

    To summarize, if we have small stories (a few days or less) and we work on them sequentially (limiting how many we are working on at one time), our problems about tracking and having something good enough to ship become much more tractable. We can predict early what features are not going to make it, and simply shift them to the next cycle (iteration). That is our pressure-relief valve, the way that we can make sure we have enough time to get features completed on the shipping schedule with great quality.

    The world isn’t quite as simple as I’ve described it here, but I’ve also omitted a number of advanced topics that help out with that, so I think it’s a pretty fair overview.

    Before I go on, I’d like to address one comment I’ve heard in relation to this approach.

    If we flex on features, then I’m not going to be able to forecast where we will be in 6 months

    The reality is that nobody has ever been able to do that, you were just pretending. And when you tried, you were often spending time on things that weren’t the most important ones because of new priorities.

    Instead, how about always working on whatever is the highest priority for the business, and having the ability to adjust that on a periodic basis? How does that sound?

    The culture of commitment

    Time for another question:

    What are the biggest barriers to making this work in your organization?

    Write down your answers.

    I have a list, but I’m only going to talk about one, because it is so much more important than the rest. It’s about a culture of commitment.

    Does your organization ask development teams to *commit* to being done in the time that they estimated? Does it push back when developer estimates are too large? Do developers get emails or visits from managers telling them they need to be done on time?

    If so, you are encouraging them to write crappy code. You have trained them that being on the “not done” list is something to be avoided, and they will do their best to avoid it. They can either work harder/longer – which has limited effectiveness and is crappy for the company in other ways – or they can cut corners. That’s all they can do.

    Pressure here can be pretty subtle. If my team tracks “days done” and I have to update my estimate to account for the fact that things were harder than I thought, that puts pressure on me to instead cut corners.

    This is one reason agile uses story points for estimation; it decouples the estimation process from the work process.

    Changing the culture

    Here are my suggestions:

    1. Get rid of any mention of the words “committed” or “scheduled” WRT work. The team is *planning* what work they will attempt in the next iteration.
    2. Change the “are you going to be done?” interaction to ask, “is there anything you would like to move out to the next iteration?” Expect that initially, people aren’t going to want to take you up on this, and you may have to have some personal interactions to convince somebody that it’s really okay to do this.
    3. When you have somebody decide to move something out, make sure that you take public note of it.

      “To adjust the amount of work to the capacity of the team and maintain shippable quality, the paypal feature has been moved out to the next iteration.”
    4. When teams/individuals check in unfinished features (buggy/not complete), instead of letting those features make it into the common build, force them to remove them or disable them. Make this part of the team-wide status report for the iteration (“the ‘email invoice’ feature did not meet quality standards and has been disabled for this iteration”).
    5. I highly recommend switching to a team-ownership approach. Teams are much more likely to change how they work and improve over time than individuals.

    You will need to be patient; people are used to working in the old ways.

    This cultural change is the most important thing you can do to reduce the number of bugs that you have.

  • Eric Gunnerson's Compendium

    No Bugs Journey Episode 1: Inventory

    • 3 Comments

    Over the past few years I had the opportunity to work in an environment in which we achieved a significant reduction in bugs and an associated increase in quality and developer satisfaction.

    This series will be about how to start the journey from wherever you currently are to a place with fewer bugs, and a bunch of things to think about and perhaps try on your team. Some will be applicable for the individual developer, some will be about an engineering team, some will be for the entire group.

    As with all things agile, some techniques will work great for your team, some may require some modification, and some may not work at all. Becoming agile is all about adopting an experimental mindset.

    Whether you are on a team awash in bugs or you are already working towards fewer bugs, I hope you will find something of value.

    Inventory

    We’re going to start by stepping back, taking stock of the situation, and practicing the “Inspect” part of the agile “Inspect and adapt” cycle. We’ll be using a feedback technique that is often used in agile groups.

    <aside>

    The agile software community is very effective at finding useful techniques outside of the field and adopting them, which is great. They are not great at giving credit for where those techniques were created, so there are a lot of “agile” practices that really came from somewhere else. Which is why I wrote “is often used in agile groups” rather than “agile technique”…

    The technique we’re going to use is known as “affinity mapping”, which has been around for a long time.

    </aside>

    (Note: the following exercise works great as a team/group activity as well. If you want to do it as a group exercise, go find a few videos on affinity mapping and watch them first).

    Find some free time and a place where you won’t be disturbed. Get a stack of sticky notes and a pen.

    Your task is to write down the issues that are contributing to having lots of bugs, one per sticky note. Write down whatever comes into your head, and don’t spend a lot of time getting it perfect. You can phrase it either as a problem (“lots of build breaks”) or a solution (“set up a continuous integration server”).

    Start with what is in the forefront of your mind, and write them down until you run out of things to write. There is no correct number to end up with; some people end up with 4 notes, some people end up with 20.

    Now that you have done that, look at this list below, and see if thinking about those areas leads you to get any more ideas. Write them down as well.

    1. Planning
    2. Process tracking and management
    3. Reacting to changes
    4. Interactions between roles (dev, test, management, design, customer)
    5. Recognition – what leads to positive recognition, what leads to negative recognition
    6. Who is respected on the team, and why?
    7. Promotion – what sort of performance leads to promotion
    8. Developer techniques
    9. How is training handled
    10. Who owns getting something done

    This second part of the exercise is about thinking differently (aka “out of the box”) and looking for non-obvious causes. I’m explicitly having you do it to see if you come up with some of the topics that I’m going to cover in the future.

    Clustering

    We now have a big pile of sticky notes. Our next task is to review the notes to see if there is any obvious clustering in them. Put all of the notes up on a wall/whiteboard, and look at them. If you find two notes that are closely related, put them next to each other. If you are doing this as an individual, you will probably only find one or two clusters; if doing it as a group, you will find more.

    Note that we are not doing categorization, where each sticky is in a group. At the end I expect to see a lot of notes that aren’t in clusters. That is expected.

    Reflection

    Take a few minutes and look at what you have. My guess is that you have a decent list of issues, and my hope is that you thought of things that you weren’t thinking about before.

    At this point, you might be saying, “If I know what the issues are, why shouldn’t I just go off and start working to address them? Why do I need to read what Eric says?”

    If you said that, I am heartily in support of the first statement. Determine which sticky/cluster you (or your team) wants to tackle, figure out a small/cheap/low-risk experiment that you can try, and start making things better.

    As for my value-add, my plan is for it to add value in two ways:

    First, I’ve been through this a few times, and my guess is that you missed a few issues during the exercise. I know that I missed a ton of stuff the first time I tried this.

    Second, I’m hoping to give some useful advice around what techniques might be used for a specific issue, common pitfalls, and ordering constraints (you may have to address issue B to have a decent chance of addressing issue A).

    Next time

    Next time I’m going to talk about the incentives that are present in a lot of teams and how they relate to bugs.

    No bugs journey episode 2: Stop encouraging your developers to write bugs

    It’s about to get real.

  • Eric Gunnerson's Compendium

    Unit test success using Ports, Adapters, & Simulators–kata walkthrough

    • 0 Comments

    You will probably want to read my conceptual post on this topic before this one.

    The kata that I’m using can be found at github here. My walkthrough is in the EricGuSolution branch, and I checked in whenever I hit a good stopping point. When you see something like:

    Commit: Added RecipeManager class

    you can find that commit on the branch and look at the change that I made. The checkin history is fairly coarse; if you want a more atomic view, go over to the original version of the kata, and there you’ll find pretty much a per-change view of the transformations.

    Our patient

    We start with a very simple Windows Forms application for managing recipes. It allows users to create/edit/delete recipes, and the user can also decide where to store their recipes. The goal is to add unit tests for it. The code is pretty tiny, but it’s pretty convoluted; there is UI code tied in with file system code, and it’s not at all clear how we can get it tested.

    I’m going to be doing TDD as much as possible here, so the first thing to do is to dive right in and start writing tests, right?

    The answer to that is “nope”. Okay, if you are trying to add functionality, you can use the techniques in Feathers’ excellent book, “Working Effectively with Legacy Code”, but let’s just pretend we’ve done that and are unhappy with the result, so we’re going to refactor to make it easier to test.

    The first thing that I want you to do is to look at the application & code, and find all the ports, and then write down a general description of what each port does. A port is something that a program uses to interface with an external dependency. Go do that, write them down, and then come back.

    The Ports

    I identified three ports in the system:

    1. A port that loads/saves/lists/deletes recipes
    2. A port that loads/saves the location of the recipes
    3. A port that handles all the interactions with the user (ie “UI”)

    You could conceivably break some of these up; perhaps the UI port that deals with recipes is different than the one that deals with the recipe storage directory. We’ll see what happens there later on.

    If you wanted, you could go to the next level of detail and write out the details of the interface of each port, but I find it easier to pull that out of the code as I work.

    How do I do this without breaking things?

    That’s a really good question. There are a number of techniques that will reduce the chance of that happening:

    1. If your language has a refactoring tool available, use it. This will drastically reduce the number of bugs that you introduce. I’m working in C#, so I’m going to be using Resharper.
    2. Run existing tests (integrated tests, other automated tests, manual tests) to verify that things still work.
    3. Write pinning tests around the code you are going to change.
    4. Work in small chunks, and test often.
    5. Be very careful. My favorite method of being very careful is to pair with somebody, and I would prefer to do it even if I have pretty good tests.

    Wherever possible, I used resharper to do the transformations.

    Create an adapter

    An adapter is an implementation of a port. I’m going to do the recipe one first. My goal here is to take all the code that deals with these operations and get it in one place. Reading through the code in Form1.cs, I see that there is the LoadRecipes() method. That seems like something our port should be able to do. It has the following code:

    private void LoadRecipes()
    {
        string directory = GetRecipeDirectory();
        DirectoryInfo directoryInfo = new DirectoryInfo(directory);
        directoryInfo.Create();
    
        m_recipes = directoryInfo.GetFiles("*")
            .Select(fileInfo => new Recipe { Name = fileInfo.Name, Size = fileInfo.Length, Text = File.ReadAllText(fileInfo.FullName) }).ToList();
    
        PopulateList();
    }

    I see three things going on here. First, we get a string from another method, then we do some of our processing, then we call the “PopulateList()” method. The first and the last thing don’t really have anything to do with the concept of dealing with recipes, so I’ll extract the middle part out into a separate method (named “LoadRecipesPort()” because I couldn’t come up with a better name for it).

    private void LoadRecipes()
    {
        string directory = GetRecipeDirectory();
        m_recipes = LoadRecipesPort(directory);
    
        PopulateList();
    }
    
    private static List<Recipe> LoadRecipesPort(string directory)
    {
        DirectoryInfo directoryInfo = new DirectoryInfo(directory);
        directoryInfo.Create();
    
        return directoryInfo.GetFiles("*")
            .Select(
                fileInfo =>
                    new Recipe
                    {
                        Name = fileInfo.Name,
                        Size = fileInfo.Length,
                        Text = File.ReadAllText(fileInfo.FullName)
                    })
            .ToList();
    }

    Note that the extracted method is static; that verifies that it doesn’t have any dependencies on anything in the class.

    I read down some more, and come across the code for deleting recipes:

    private void DeleteClick(object sender, EventArgs e)
    {
        foreach (RecipeListViewItem recipeListViewItem in listView1.SelectedItems)
        {
            m_recipes.Remove(recipeListViewItem.Recipe);
            string directory = GetRecipeDirectory();
    
            File.Delete(directory + @"\" + recipeListViewItem.Recipe.Name);
        }
        PopulateList();
    
        NewClick(null, null);
    } 

    There is only one line there – the call to File.Delete(). I pull that out into a separate method:

    private static void DeleteRecipe(string directory, string name)
    {
        File.Delete(directory + @"\" + name);
    }

    Next is the code to save the recipe. I extract that out:

    private static void SaveRecipe(string directory, string name, string directions)
    {
        File.WriteAllText(Path.Combine(directory, name), directions);
    }

    That is all of the code that deals with recipes.

    Commit: Extracted recipe code into static methods

    <aside>

    You may have noticed that there is other code in the program that deals with the file system, but I did not extract it. That is very deliberate; my goal is to extract out the implementation of a specific port. Similarly, if I had been using a database rather than a file system, I would extract only the database code that dealt with recipes.

    This is how this pattern differs from a more traditional “wrapper” approach, and is hugely important, as I hope you will soon see.

    </aside>

    The adapter is born

    I do an “extract class” refactoring and pull out the three methods into a RecipeStore class. I convert all three of them to instance methods with resharper refactorings (add a parameter of type RecipeStore to each of them, then make them non-static, plus a bit of hand-editing in the form class). I also take the directory parameter and push it into the constructor. That cleans up the code quite a bit, and I end up with the following class:

    public class RecipeStore
    {
        private string m_directory;
    
        public RecipeStore(string directory)
        {
            m_directory = directory;
        }
    
        public List<Recipe> Load()
        {
            DirectoryInfo directoryInfo = new DirectoryInfo(m_directory);
            directoryInfo.Create();
    
            return directoryInfo.GetFiles("*")
                .Select(
                    fileInfo =>
                        new Recipe
                        {
                            Name = fileInfo.Name,
                            Size = fileInfo.Length,
                            Text = File.ReadAllText(fileInfo.FullName)
                        })
                .ToList();
        }
    
        public void Delete(string name)
        {
            File.Delete(m_directory + @"\" + name);
        }
    
        public void Save(string name, string directions)
        {
            File.WriteAllText(Path.Combine(m_directory, name), directions);
        }
    }

    Commit: RecipeStore instance class with directory in constructor

    Take a look at the class, and evaluate it from a design perspective. I’m pretty happy with it; it does only one thing, and the fact that it’s storing recipes in a file system isn’t apparent from the method signature. The form code looks better as well.

    Extract the port interface & write a simulator

    I now have the adapter, so I can extract out the defining IRecipeStore interface.

    public interface IRecipeStore
    {
        List<Recipe> Load();
        void Delete(string name);
        void Save(string name, string directions);
    }

    I’ll add a new adapter class that implements this interface:

    class RecipeStoreSimulator: IRecipeStore
    {
        public List<Recipe> Load()
        {
            throw new NotImplementedException();
        }
    
        public void Delete(string name)
        {
            throw new NotImplementedException();
        }
    
        public void Save(string name, string directions)
        {
            throw new NotImplementedException();
        }
    }

    The simulator is going to be an in-memory implementation of the recipe store, which will make it very good for unit tests. Since it’s going to be in-memory, it doesn’t have any dependencies and therefore I can write unit tests for it. I’ll do that with TDD.

    Commit: RecipeStoreSimulator with tests

    It was a very simple interface, so it only took me about 15 minutes to write it. It’s not terribly robust, however; it has no error-handling at all. I now have a simulator that I can use to test any code that uses the RecipeStore abstraction. But wait a second; the tests I wrote for the simulator are really tests for the port.

    If I slightly modify my tests so that they use an IRecipeStore, I can re-purpose them to work with any implementation of that port. I do that, but I start seeing failures, because the tests assume an empty recipe store. If I change the tests to clean up after themselves, it should help…
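
    One way to structure that sharing (a sketch, rather than the kata’s actual test code) is a base test class that only knows about IRecipeStore, with each adapter’s test class supplying its own store:

    public abstract class RecipeStorePortTests
    {
        // Each adapter's test class creates (and cleans up) its own store.
        protected abstract IRecipeStore CreateEmptyRecipeStore();

        [TestMethod()]
        public void when_I_save_a_recipe__load_returns_it()
        {
            IRecipeStore recipeStore = CreateEmptyRecipeStore();

            recipeStore.Save("Grits", "Stir");

            List<Recipe> recipes = recipeStore.Load();
            Assert.AreEqual(1, recipes.Count);
            Assert.AreEqual("Grits", recipes[0].Name);
        }
    }

    [TestClass()]
    public class RecipeStoreSimulatorPortTests : RecipeStorePortTests
    {
        protected override IRecipeStore CreateEmptyRecipeStore()
        {
            return new RecipeStoreSimulator();
        }
    }

    The file-system version of CreateEmptyRecipeStore() would point at a scratch directory and clean it up afterwards, which is the “clean up after themselves” part.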

    Once I’ve done that, I can successfully run the port unit tests against the filesystem recipestore.

    Commit: Unit tests set up to test RecipeStore

    RecipeStoreLocator

    We’ll now repeat the same pattern, this time with the code that figures out where the RecipeStore is located. I make the methods static, push them into a separate class, and turn them back into instance methods.

    When I first looked at the code, I was tempted not to do this port, because the code is very specific to finding a directory, and the RecipeStore is the only thing that uses it, so I could have just put the code in the RecipeStore. After a bit of thought, I decided that “where do I store my recipes” is a separate abstraction, and therefore having a locator was a good idea.

    Commit: RecipeStoreLocator class added

    I create the Simulator and unit tests, but when I go to run them, I find that I’m missing something; the abstraction has no way to reset itself to the initial state because the file persists on disk. I add a ResetToDefault() method, and then it works fine.
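
    The resulting port is tiny; something like this sketch (GetRecipeDirectory() shows up later in the form code, the other members are my guesses):

    public interface IRecipeStoreLocator
    {
        string GetRecipeDirectory();
        void SetRecipeDirectory(string directory);

        // Lets tests put the locator back in its initial state, since the real
        // adapter persists the chosen location on disk.
        void ResetToDefault();
    }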

    Commit: Finished RecipeStoreLocator + simulator + unit tests

    Status check & on to the UI

    Let’s take a minute and see what we’ve accomplished. We’ve created two new port abstractions and pulled some messy code out of the form class, but we haven’t gotten much closer to being able to test the code in the form class itself. For example, when we call LoadRecipes(), we should get the recipes from the store, and then push them out into the UI. How can we test that code?

    Let’s try the same sort of transformations on the UI dependency. We’ll start with PopulateList():

    private void PopulateList()
    {
        listView1.Items.Clear();
    
        foreach (Recipe recipe in m_recipes)
        {
            listView1.Items.Add(new RecipeListViewItem(recipe));
        }
    }

    The first change is to make this into a static method. That will require me to pass the listview and the recipe list as parameters:

    private static void PopulateList(ListView listView, List<Recipe> recipes)
    {
        listView.Items.Clear();
    
        foreach (Recipe recipe in recipes)
        {
            listView.Items.Add(new RecipeListViewItem(recipe));
        }
    }

    And I’ll pull it out into a new class:

    public class RecipeManagerUI
    {
        private ListView m_listView;
    
        public RecipeManagerUI(ListView listView)
        {
            m_listView = listView;
        }
    
        public void PopulateList(List<Recipe> recipes)
        {
            m_listView.Items.Clear();
    
            foreach (Recipe recipe in recipes)
            {
                m_listView.Items.Add(new RecipeListViewItem(recipe));
            }
        }
    }

    This leaves the following implementation for LoadRecipes():

    private void LoadRecipes()
    {
        m_recipes = m_recipeStore.Load();
    
        m_recipeManagerUI.PopulateList(m_recipes);
    }

    That looks like a testable bit of code; it calls load and then calls PopulateList with the result. I extract it into a RecipeManager class (not sure about that name right now), make it an instance method, add a constructor to take the recipe store and ui instances, and pull the list of recipes into this class as well. I end up with the following:

    public class RecipeManager
    {
        private RecipeStore m_recipeStore;
        private RecipeManagerUI m_recipeManagerUi;
        private List<Recipe> m_recipes; 
    
        public RecipeManager(RecipeStore recipeStore, RecipeManagerUI recipeManagerUI)
        {
            m_recipeManagerUi = recipeManagerUI;
            m_recipeStore = recipeStore;   
        }
    
        public List<Recipe> Recipes { get { return m_recipes; } }
    
        public void LoadRecipes()
        {
            m_recipes = m_recipeStore.Load();
    
            m_recipeManagerUi.PopulateList(m_recipes);
        }
    }

    Commit: Added RecipeManager class

    Now to test LoadRecipes, I want to write:

    [TestMethod()]
    public void when_I_call_LoadRecipes_with_two_recipes_in_the_store__it_sends_them_to_the_UI_class()
    {
        RecipeStoreSimulator recipeStore = new RecipeStoreSimulator();
        recipeStore.Save("Grits", "Stir");
        recipeStore.Save("Bacon", "Fry");
    
        RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();
    
    RecipeManager recipeManager = new RecipeManager(recipeStore, recipeManagerUI);
    
        recipeManager.LoadRecipes();
    
    Assert.AreEqual(2, recipeManagerUI.Recipes.Count);
        RecipeStoreSimulatorTests.ValidateRecipe(recipeManagerUI.Recipes, 0, "Grits", "Stir");
        RecipeStoreSimulatorTests.ValidateRecipe(recipeManagerUI.Recipes, 1, "Bacon", "Fry");
    }

    I don’t have the appropriate UI simulator, so I’ll extract the interface and write the simulator, including some unit tests.

    Commit: First full test in RecipeManager

    In the tests, I need to verify that RecipeManager.LoadRecipes() passes the recipes off to the UI, which means the simulator needs to support a property that isn’t needed by the new class. I try to avoid these whenever possible, but when I have to use them, I name them to be clear that they are something outside of the port interface. In this case, I called it SimulatorRecipes.
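
    In the simulator, that ends up being just a few lines; roughly this (a sketch of the shape, not the checked-in code):

    class RecipeManagerUISimulator : IRecipeManagerUI
    {
        // SimulatorRecipes is not part of the IRecipeManagerUI port; the name
        // flags it as a test-only window into what the "UI" was told to display.
        public List<Recipe> SimulatorRecipes { get; private set; }

        public void PopulateList(List<Recipe> recipes)
        {
            SimulatorRecipes = recipes;
        }
    }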

    We now have a bit of logic that was untested in the form class in a new class that is tested.

    UI Events

    Looking at the rest of the methods in the form class, they all happen when the user does something. That means we’re going to have to get a bit more complicated. The basic pattern is that we will put an event on our UI port, and it will either hook to the actual event in the real UI class, or to a SimulateClick() method in the simulator.

    Let’s start with the simplest one. NewClick() looks like this:

    private void NewClick(object sender, EventArgs e)
    {
        textBoxName.Text = "";
        textBoxObjectData.Text = "";
    }

    To move this into the RecipeManager class, I’ll need to add abstractions to the UI class for the click and for the two textbox values.

    I start by pulling all of the UI event hookup code out of the InitializeComponent() method and into the Form1 constructor. Then, I added a NewClick event to the UI port interface and both adapters that implement the interface. It now looks like this:

    public interface IRecipeManagerUI
    {
        void PopulateList(List<Recipe> recipes);
    
        event Action NewClick;
    
        string RecipeName { get; set; }
        string RecipeDirections { get; set; }
    }

    And, I’ll go off and implement these in the UI class, the simulator class, and the simulator test class.
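
    On the simulator side, the additions are straightforward; something like this sketch (members added to the RecipeManagerUISimulator, shown without the rest of the class):

    public event Action NewClick;

    public string RecipeName { get; set; }
    public string RecipeDirections { get; set; }

    // Test back door: pretend the user clicked the "New" button.
    public void SimulateNewClick()
    {
        if (NewClick != null)
        {
            NewClick();
        }
    }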

    <aside>

    I’m not sure that NewClick is the best name for the event, because “click” seems bound to the UI paradigm. Perhaps NewRecipe would be a better name…

    </aside>

    Commit: Fixed code to test clicking the new button

    Note that I didn’t write tests for the simulator code in this case. Because of the nature of the UI class, I can’t run tests across the two implementations to make sure they are the same (I could maybe do so if I did some other kind of verification, but I’m not sure it’s worth it). This code mostly fits in the “if it works at all, it’s going to work” category, so I don’t feel that strongly about testing it.

    The test ends up looking like this:

    [TestMethod()]
    public void when_I_click_on_new__it_clears_the_name_and_directions()
    {
        RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();
    
        RecipeManager recipeManager = new RecipeManager(null, recipeManagerUI);
    
        recipeManagerUI.RecipeName = "Grits";
        recipeManagerUI.RecipeDirections = "Stir";
    
        Assert.AreEqual("Grits", recipeManagerUI.RecipeName);
        Assert.AreEqual("Stir", recipeManagerUI.RecipeDirections);
    
        recipeManagerUI.SimulateNewClick();
    
        Assert.AreEqual("", recipeManagerUI.RecipeName);
        Assert.AreEqual("", recipeManagerUI.RecipeDirections);
    }

    That works. We’ll keep going with the same approach – choose an event handler, and go from there. We’re going to do SaveClick() this time:

    private void SaveClick(object sender, EventArgs e)
    {
        m_recipeStore.Save(textBoxName.Text, textBoxObjectData.Text);
        m_recipeManager.LoadRecipes();
    }

    We’ll try writing the test first:

    [TestMethod()]
    public void when_I_click_on_save__it_stores_the_recipe_to_the_store_and_updates_the_display()
    {
        RecipeStoreSimulator recipeStore = new RecipeStoreSimulator();
        RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();
    
        RecipeManager recipeManager = new RecipeManager(recipeStore, recipeManagerUI);
    
        recipeManagerUI.RecipeName = "Grits";
        recipeManagerUI.RecipeDirections = "Stir";
    
        recipeManagerUI.SimulateSaveClick();
    
        var recipes = recipeStore.Load();
    
        RecipeStoreSimulatorTests.ValidateRecipe(recipes, 0, "Grits", "Stir");
    
        recipes = recipeManagerUI.SimulatorRecipes;
    
        RecipeStoreSimulatorTests.ValidateRecipe(recipes, 0, "Grits", "Stir");
    }  

    That was simple; all I had to do was stub out the SimulateSaveClick() method. The test fails, of course. About 10 minutes of work, and it passes, and the real UI works as well.

    Commit: Added Save
    Commit: Added in Selecting an item in the UI
    Commit: Added support for deleting recipes

    To be able to support changing the recipe directory required the recipe store to understand that concept. This was done by adding a new RecipeDirectory property, and implementing it in both IRecipeStore adapters.

    Commit: Added support to change recipe store directory

    All done

    Let’s look at what is left in the form class:

    public partial class Form1 : Form
    {
        private RecipeManager m_recipeManager;
    
        public Form1()
        {
            InitializeComponent();
    
            var recipeManagerUI = new RecipeManagerUI(listView1, 
                buttonNew, 
                buttonSave, 
                buttonDelete, 
                buttonSaveRecipeDirectory, 
                textBoxName, 
                textBoxObjectData, 
                textBoxRecipeDirectory);
    
            var recipeStoreLocator = new RecipeStoreLocator();
            var recipeStore = new RecipeStore(recipeStoreLocator.GetRecipeDirectory());
            m_recipeManager = new RecipeManager(recipeStore, recipeStoreLocator, recipeManagerUI);
            m_recipeManager.Initialize();
        }
    }

    This is the entirety of the form class; it just creates the RecipeManagerUI class (which encapsulates everything related to the UI), the RecipeStoreLocator class, the RecipeStore class, and finally, the RecipeManager class. It then calls Initialize() on the manager, and, at that point, it’s up and running.

    Looking through the code, I did a little cleanup:

    1. I renamed RecipeDirectory to RecipeLocation, because that’s a more abstract description.
    2. I renamed Recipe.Text to Recipe.Directions, because it has been buggin’ me…
    3. Added in testing for Recipe.Size

    Commit: Cleanup

  • Eric Gunnerson's Compendium

    Unit Test Success using Ports, Adapters, and Simulators

    • 2 Comments

    There is a very cool pattern called Port/Adapter/Simulator that has changed my perspective about unit testing classes with external dependencies significantly and improved the code that I’ve written quite a bit. I’ve talked obliquely about it and even wrote a kata about it, but I’ve never sat down and written something that better defines the whole approach, so I thought it was worth a post. Or two – the next one will be a walkthrough of an updated kata to show how to transform a very simple application into this pattern.

    I’m going to assume that you are already “down” with unit testing – that you see what the benefits are – but that you perhaps are finding it to be more work than you would like and perhaps the benefits haven’t been quite what you hoped.

    Ports and Adapters

    The Ports and Adapters pattern was originally described by Alistair Cockburn in a topic he called “Hexagonal Architecture”. I highly recommend you go and read his explanation, and then come back.

    I take that back, I just went and reread it. I recommend you read this post and then go back and read what he wrote.

    I have pulled two main takeaways from the hexagonal architecture:

    The first is the “hexagonal” part, and the takeaway is that the way we have been drawing architectural diagrams for years (User with a UI on top, app code in between (sometimes in several layers), database and other external dependencies at the bottom) doesn’t really make sense. We should instead delineate between “inside the application” and “outside of the application”. Each thing that is outside of the application should be abstracted into what he calls a port (which you can just think of as an interface between you and the external thing). The “hexagonal” thing is just a way of drawing things that emphasizes the inside/outside distinction.

    Dealing with externals is a common problem when we are trying to write unit tests; the external dependency (say, the .NET File class, for example) is not designed with unit testing in mind, so we add a layer of abstraction (wrapping it in a class of our own), and then it is testable.

    This doesn’t seem that groundbreaking; I’ve been taking all the code related to a specific dependency – say, a database – and putting it into a single class for years. And,  if that was all he was advocating, it wouldn’t be very exciting.

    The second takeaway is the idea that our abstractions should be based on what we are trying to do in the application (the inside view) rather than what is happening outside the application. The inside view is based on what we are trying to do, not the code that we will write to do it.

    Another way of saying this is “write the interface that *you wish* were available for the application to use”.  In other words, what is the simple and straightforward interface that would make developing the application code simple and fun?

    Here’s an example. Let’s assume I have a text editor, and it stores documents and preferences as files. Somewhere in my code, I have code that accesses the file system to perform these operations. If I wanted to encapsulate the file system operations in one place so that I can write unit tests, I might write the following:

    class FileSystem
    {
        public void CreateDirectory(string directory) { Directory.CreateDirectory(directory); }
        public string ReadTextFile(string filename) { return File.ReadAllText(filename); }
        public void WriteTextFile(string filename, string contents) { File.WriteAllText(filename, contents); }
        public IEnumerable<string> GetFiles(string directory) { return Directory.GetFiles(directory); }
        public bool FileExists(string filename) { return File.Exists(filename); }
    }

    And I’ve done pretty well; I can extract an interface from that, and then do a mock/fake/whatever to write tests of the code that uses the file system. All is good, right? I used to think the answer is “yes”, but it turns out the answer is “meh, it’s okay, but it could be a lot better”.

    Cockburn’s point is that I’ve done a crappy job of encapsulating; I have a bit of isolation from the file system, but the way that I relate to the code is inherently based on the filesystem model; I have directories and files, and I do things like reading and writing files. Why should the concept of loading or saving a document be tied to this thing we call filesystem? It’s only tied that way because of an accident of implementation.

    To look at it another way, ask yourself how hard it would be to modify the code that uses FileSystem to use a database, or the cloud? It would be a pretty significant work item. That also means that my encapsulation is bad.

    What we are seeing – and this is something Cockburn notes in his discussion – is that details from the implementation are leaking into our application. Instead of treating the dependency technology as a trivial choice that we might change in the future, we are baking it into the application. I’m pretty sure that somewhere in our application code we’ll need to know file system specifics such as how to parse path specifications, what valid filename characters are, etc.

    A better approach

    Imagine that we were thinking about saving and loading documents in the abstract and had no implementation in mind. We might define the interface (“port” in Cockburn’s lingo) as follows:

    public interface IDocumentStore
    {
        void Save(DocumentName documentName, Document document);
        Document Load(DocumentName documentName);
        bool DoesDocumentExist(DocumentName documentName);
        IEnumerable<DocumentName> GetDocumentNames();
    }

    This is a very simple interface – it doesn’t need to do very much because we don’t need it to. It is also written fully using the abstractions of the application – Document and DocumentName instead of string, which makes it easier to use. It will be easy to write unit tests for the code that uses the document store.

    Once we have this defined, we can write a DocumentStoreFile class (known as an “adapter” because it adapts the application’s view of the world to the underlying external dependency).
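    To make that concrete, here’s a minimal sketch of what such an adapter might look like. This is illustrative only: the root directory, the “.doc” extension, and the DocumentName.Name property are assumptions I’m making for the example (Document.Text and the constructors come from the test code later in this post).

    // A sketch of a file-backed adapter (assumes System.Collections.Generic,
    // System.IO, and System.Linq). The root directory, the ".doc" extension,
    // and the DocumentName.Name property are assumptions for illustration.
    class DocumentStoreFile : IDocumentStore
    {
        private readonly string m_rootDirectory;

        public DocumentStoreFile(string rootDirectory)
        {
            m_rootDirectory = rootDirectory;
            Directory.CreateDirectory(m_rootDirectory);
        }

        public void Save(DocumentName documentName, Document document)
        {
            File.WriteAllText(GetPath(documentName), document.Text);
        }

        public Document Load(DocumentName documentName)
        {
            return new Document(File.ReadAllText(GetPath(documentName)));
        }

        public bool DoesDocumentExist(DocumentName documentName)
        {
            return File.Exists(GetPath(documentName));
        }

        public IEnumerable<DocumentName> GetDocumentNames()
        {
            return Directory.EnumerateFiles(m_rootDirectory, "*.doc")
                            .Select(path => new DocumentName(Path.GetFileNameWithoutExtension(path)));
        }

        private string GetPath(DocumentName documentName)
        {
            return Path.Combine(m_rootDirectory, documentName.Name + ".doc");
        }
    }

    Note how the file system specifics – paths, extensions, existence checks – are now confined to this one class.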

    Also note that this abstraction is just what is required for dealing with documents; the abstraction for loading/saving preferences is a different abstraction, despite the fact that it also uses the file system. This is another way this pattern differs from a simple wrapper.

    (I should note here that this is not the typical flow; typically you have code that is tied to a concrete dependency, and you refactor it to something like this. See the next post for more information on how to do that).

    At this point, it’s all unicorns and rainbows, right?

    Not quite

    Our application code and tests are simpler now – and that’s a great thing – but that’s because we pushed the complexity down into the adapter. We should test that code, but we can’t, because it is talking to the non-testable file system. More complex + untestable doesn’t make me happy, but I’m not quite sure how to deal with that right now, so let’s ignore it for the moment and go write some application unit tests.

    A test double for IDocumentStore

    Our tests will need some sort of test double for code that uses the IDocumentStore interface. We could write a bunch of mocks (either with a mock library or by hand), but there’s a better option.

    We can write a Simulator for the IDocumentStore interface, which is simply an adapter that is designed to be great for writing unit tests. It is typically an in-memory implementation, so it could be named DocumentStoreMemory, or DocumentStoreSimulator, either would be fine (I’ve tended to use “Simulator”, but I think that “Memory” is probably a better choice).

    Nicely, because it is backed by memory, it doesn’t have any external dependencies that we need to mock, so we can write a great set of unit tests for it (I would write them with TDD, obviously) that will define the behavior exactly the way the application wants it.
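    Here’s a minimal sketch of what such a simulator might look like (named DocumentStoreMemory here, though DocumentStoreSimulator would work just as well); the dictionary keyed off a hypothetical DocumentName.Name property is my assumption for the example:

    // An in-memory simulator for IDocumentStore (assumes System.Collections.Generic
    // and System.Linq). The DocumentName.Name key is an assumption for illustration.
    class DocumentStoreMemory : IDocumentStore
    {
        private readonly Dictionary<string, Document> m_documents = new Dictionary<string, Document>();

        public void Save(DocumentName documentName, Document document)
        {
            m_documents[documentName.Name] = document;
        }

        public Document Load(DocumentName documentName)
        {
            return m_documents[documentName.Name];
        }

        public bool DoesDocumentExist(DocumentName documentName)
        {
            return m_documents.ContainsKey(documentName.Name);
        }

        public IEnumerable<DocumentName> GetDocumentNames()
        {
            return m_documents.Keys.Select(name => new DocumentName(name));
        }
    }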

    Compared to the alternative – mock code scattered through the tests – simulators are much nicer. They pull poorly-tested code out of the tests and put it into a place where we can test it well, and it’s much easier to do the test setup and verification by simply talking to the simulator. We will write a test that’s something like this:

    DocumentStoreSimulator documentStore = new DocumentStoreSimulator();
    DocumentManager manager = new DocumentManager(documentStore);
    Document document = new Document("Sample text");
    DocumentName documentName = new DocumentName("Fred");
    manager.Save(documentName, document);
    
    Assert.IsTrue(documentStore.DoesDocumentExist(documentName));
    Assert.AreEqual("Sample text", documentStore.Load(documentName).Text);

    Our test code uses the same abstractions as our product code, and it’s very easy to verify that the result after saving is correct.

    A light bulb goes off

    We’ve now written a lot of tests for our application, and things mostly work pretty well, but we keep running into annoying bugs, where the DocumentStoreFile behavior is different than the DocumentStoreMemory behavior. This is annoying to fix, and – as noted earlier – we don’t have any tests for DocumentStoreFile.

    And then one day, somebody says,

    These aren’t DocumentStoreMemory unit tests! These are IDocumentStore unit tests – why don’t we just run the tests against the DocumentStoreFile adapter?

    We can use the simulator unit tests to verify that all adapters have the same behavior, and at the same time verify that the previously-untested DocumentStoreFile adapter works as it should.

    This is where simulators really earn their keep; they give us a set of unit tests that we can use both to verify that the real adapter(s) function correctly and that all adapters behave the same way.
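    One way to share the tests – a sketch assuming NUnit, and reusing the hypothetical adapters from the earlier sketches – is to put them in an abstract base fixture and let each adapter’s test class say how to create the store:

    // The tests live in the abstract base class; each concrete fixture just
    // supplies the adapter under test. CreateStore() is a hypothetical factory
    // method for the tests, not part of the IDocumentStore interface.
    public abstract class DocumentStoreTests
    {
        protected abstract IDocumentStore CreateStore();

        [Test]
        public void SavedDocument_CanBeLoaded()
        {
            IDocumentStore documentStore = CreateStore();
            DocumentName documentName = new DocumentName("Fred");

            documentStore.Save(documentName, new Document("Sample text"));

            Assert.IsTrue(documentStore.DoesDocumentExist(documentName));
            Assert.AreEqual("Sample text", documentStore.Load(documentName).Text);
        }
    }

    [TestFixture]
    public class DocumentStoreMemoryTests : DocumentStoreTests
    {
        protected override IDocumentStore CreateStore() { return new DocumentStoreMemory(); }
    }

    // Run by hand rather than as part of the normal unit test suite, since it
    // touches the real file system.
    [TestFixture]
    public class DocumentStoreFileTests : DocumentStoreTests
    {
        protected override IDocumentStore CreateStore()
        {
            return new DocumentStoreFile(Path.Combine(Path.GetTempPath(), "DocumentStoreTests"));
        }
    }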

    And there was much rejoicing.

    In reality, it’s not quite that good initially, because you are going to miss a few things when you first write the unit tests; things like document names that are valid in one adapter but not another, error cases and how they need to be handled, etc. But, because you have a set of shared tests and they cover everything you know about the interface, you can add the newly-discovered behavior to the unit tests, and then modify the adapters so they all support it.

    Oh, and you’ll probably have to write a bit of code for test cleanup; the document that you stored during a test will still be there the next time you run against the file system adapter, though not with the memory adapter. These are simple changes to make.

    Other benefits

    There are other benefits to this approach. The first is that adapters, once written, tend to be pretty stable, so you don’t need to be running their tests very much. Which is good, because you can’t run the tests for any of the real adapters as part of your unit test suite; you typically need to run them by hand because they use real versions of the external dependencies and require some configuration.

    The second is that the adapter tests give you a great way to verify that a new version of the external dependency still works the way you expect.

    The simulator is a general-purpose adapter that isn’t limited to the unit test scenario. It can also be used for demos, for integration tests, for ATDD tests; any time that you need a document store that is convenient to work with. It might even make it into product code if you need a fast document cache.

    What about UI?

    The approach is clearest when you apply it to a service, but it can also be applied to the UI layer. It’s not quite as cool because you generally aren’t able to reuse the simulator unit tests the same way, but it’s still a nice pattern. The next post will delve into that a bit more deeply.

  • Eric Gunnerson's Compendium

    Tricks you can play on yourself #789–Linq

    • 2 Comments

    I was profiling some code this morning, and came across some interesting behavior.

    Basically, we had some low level code that looked something like this:

    IEnumerable<Guid> GetSpecialQuestionIds()
    {
        return
          GetAllSpecialItems()
            .Select(specialItemXml => SpecialItem.CreateFromXml(specialItemXml))
            .SelectMany(specialItem => specialItem.Questions.Select(question => question.UniqueIdentifier))
            .Distinct();
    }

    So, it’s taking special item xml, deserializing each special item, and then grabbing all the unique question ids that are referenced by the special items. Perfectly reasonable code.

    Elsewhere, we did the following (the original code was spread out over 3 different classes but I’ve simplified it for you):

    var specialQuestionIds = GetSpecialQuestionIds();

    foreach (Item item in items)
    {
        var questions = item.Questions.Where(question => specialQuestionIds.Contains(question.UniqueIdentifier));
    }

    That also looks fine, but when I looked at the profile, I found that it was heavily dominated by the CreateFromXml() call. Well, actually, I did the profile first, and then looked at the code.

    The problem is the call to Contains(). It will walk every entry in specialQuestionIds, which normally would be fine, but because the enumerable is lazily evaluated and has never been realized, every call to Contains() re-runs the query and deserializes all the special items… for every question in every item.

    The fix is pretty simple – I changed GetSpecialQuestionIds() to call .ToList(), so that the deserialization only happened once, and the deserialization dropped from 65% down to 0.1% in the profile. And there was much rejoicing.
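    For concreteness, the fixed version looks roughly like this (keeping the hypothetical SpecialItem helper from the snippet above):

    IEnumerable<Guid> GetSpecialQuestionIds()
    {
        return
          GetAllSpecialItems()
            .Select(specialItemXml => SpecialItem.CreateFromXml(specialItemXml))
            .SelectMany(specialItem => specialItem.Questions.Select(question => question.UniqueIdentifier))
            .Distinct()
            .ToList();   // realize the query once so callers don't re-run the deserialization on every enumeration
    }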

    The lesson is that you should be careful whenever you return an IEnumerable<T> that isn’t super-cheap, because the person who gets it may enumerate it over and over.

  • Eric Gunnerson's Compendium

    Simulators or not?

    • 0 Comments

    I’ve been spending some time playing with Cockburn’s hexagonal architecture  (aka “ports and adapters”), and the extension I learned from Arlo, simulators. I’ve found it to be quite useful.

    I was writing some code, and I ended up at a place I didn’t expect. Here’s the situation. I have the following external class (ie “port”).

    class EntityLoader
    {
        public EntityLoader(string connectionInformation) {}

        public IEnumerable<Entity> Fetch(EntityType itemType) { … }
    }

    I need to use this class to load some different kinds of entities, do some light massaging of the data, and then query against the data. I’ll start by figuring out what the adapter should be, and I’ll define it by the question that I want to ask it:

    interface IPeopleStore
    {
        IEnumerable<Employee> GetAllEmployeesForManager(Employee employee);
    }

    Now that I have the interface, I can use TDD to write a simulator that implements the interface:

    class PeopleStoreSimulator: IPeopleStore
    {
        public IEnumerable<Employee> GetAllEmployeesForManager(Employee employee) { ...}
    }

    The implementation for this will be pretty simple; I just add a way to get the list of employees for a manager into the simulator. Now I have unblocked my team members; they can code against the interface and use the simulator for their testing while I figure out how to write the version that talks to the EntityLoader.
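    Here’s a rough sketch of what that simulator might look like; the AddEmployee setup method and the dictionary keyed by the manager are my assumptions for the example:

    // A simulator for IPeopleStore (assumes System.Collections.Generic and
    // System.Linq). AddEmployee is a hypothetical test-setup hook; it is not
    // part of the IPeopleStore interface.
    class PeopleStoreSimulator : IPeopleStore
    {
        private readonly Dictionary<Employee, List<Employee>> m_employeesByManager =
            new Dictionary<Employee, List<Employee>>();

        // Test setup: record that an employee reports directly to the given manager.
        public void AddEmployee(Employee manager, Employee employee)
        {
            List<Employee> employees;
            if (!m_employeesByManager.TryGetValue(manager, out employees))
            {
                employees = new List<Employee>();
                m_employeesByManager[manager] = employees;
            }
            employees.Add(employee);
        }

        public IEnumerable<Employee> GetAllEmployeesForManager(Employee employee)
        {
            // Direct reports only, for simplicity.
            List<Employee> employees;
            return m_employeesByManager.TryGetValue(employee, out employees)
                ? employees
                : Enumerable.Empty<Employee>();
        }
    }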

    And this is where it got interesting…

    One of the cool things about port/simulator/adapter is that you can write one set of tests and run them against all of the adapters, including the simulator. This verifies that the simulator and the real adapter have the same behavior.

    That’s going to be problematic here, because EntityLoader doesn’t give me any way to put data into it, so I can’t run the simulator tests against the real adapter. That adapter would also be doing two things – fetching the data through the EntityLoader and implementing the GetAllEmployeesForManager() method – and because I can’t put data into it, I don’t have a way to write a test for that method.

    It also violates one of my guidelines, which is to separate the fetching of data and the processing of data whenever possible. The problem is that the lookup logic lives in a class that we can’t test, so we can’t verify that we adapt the data into what we need. That’s a good sign that an adapter may not be a good choice here. How about a simpler approach, such as a wrapper?

    Let’s start with the lookup logic. We’ll make PeopleStore a simple container class, and that will make the lookup logic trivial to test.

    class PeopleStore
    {
        IList<Employee> m_employees;
        IList<Employee> m_managers;

        public PeopleStore(IList<Employee> employees, IList<Employee> managers)
        {
            m_employees = employees;
            m_managers = managers;
        }
       
        public IEnumerable<Employee> GetAllEmployeesForManager(Employee employee)
        {
            …
        }
    }

    Now, I’ll work on the wrapper level. After going with an interface, I end up switching to an abstract class, because there is a lot of shared code.

    abstract class EntityStoreBase
    {
        protected IEnumerable<Employee> m_employees;
        protected IEnumerable<Employee> m_managers;

        public IEnumerable<Employee> FetchEmployees() { return m_employees; }
        public IEnumerable<Employee> FetchManagers() { return m_managers; }
    }

    class EntityStoreSimulator: EntityStoreBase
    {
        public EntityStoreSimulator(IEnumerable<Employee> employees, IEnumerable<Employee> managers)
        {
            m_employees = employees;
            m_managers = managers;
        }
    }

    class EntityStore : EntityStoreBase
    {
        public EntityStore(string connectionInformation)
        {
            EntityLoader loader = new EntityLoader(connectionInformation);

            m_employees = loader.Fetch(EntityType.Employee)
                                .Select(entity => new Employee(entity));
            m_managers = loader.Fetch(EntityType.Manager)
                                .Select(entity => new Employee(entity));
        }
    }

    That seems to work fine. Now I need a way to create the PeopleStore appropriately. How about a factory class?

    public static class EntityStoreFactory
    {
        public static EntityStoreBase Create(IEnumerable<Employee> employees, IEnumerable<Employee> managers)
        {
            return new EntityStoreSimulator(employees, managers);
        }

        public static EntityStoreBase Create(string connectionInformation)
        {
            return new EntityStore(connectionInformation);
        }
    }

    This feels close; it’s easy to create the right accessor and the EntityLoader class is fully encapsulated from the rest of the system. But looking through the code, I’m using 4 classes just for the entity-side part, and the code there is either not testable (the code to fetch the employees from the EntityLoader), or trivial. Is there a simpler solution? I think so…

    public static class PeopleStoreFactory
    {
        public static PeopleStore Create(IEnumerable<Employee> employees, IEnumerable<Employee> managers)
        {
            return new PeopleStore(employees, managers);
        }

        public static PeopleStore Create(string connectionInformation)
        {
            EntityLoader loader = new EntityLoader(connectionInformation);

            var employees = loader.Fetch(EntityType.Employee)
                                .Select(entity => new Employee(entity));
            var managers = loader.Fetch(EntityType.Manager)
                                .Select(entity => new Employee(entity));

            return Create(employees, managers);
        }
    }

    This is where I ended up, and I think it’s a good place to be. I have a class that is well-adapted to what the program needs (the PeopleStore), and very simple ways to create it (PeopleStoreFactory).

    Thinking at the meta level, I think the issue with the initial design was the read-only nature of the EntityStore; that’s what made the additional code untestable. So, as fond as I am of port/adapter/simulator, there are situations where a simple factory method is a better choice.

  • Eric Gunnerson's Compendium

    Identifying your vertical story skeleton

    • 0 Comments

    I’ve been leading an agile team for a while now, and I thought I would share some of the things we’ve learned. This one is about vertical slices, and learning how to do this has made the team more efficient and happier.

    To make this work you need a team that is cross-functional and has the skills to work on your whole stack (database/server/ui/whatever).

    As an example, assume that the team is working on the following story as part of a library application:

    As an authenticated library user, I can view the list of the books that I have checked out.

    The team discusses what this means, and here’s what they come up with:

    1. Authenticate the current user
    2. Fetch the list of book ids that are checked out by that user
    3. Fetch the book title and author for each book id
    4. Display the nicely-formatted information in a web page

    My old-school reaction is to take these four items, assign each of them to a pair, and when all of them are done, integrate them together, and the story will be complete. And that will work; lots of teams have used that approach over the years.

    But I don’t really like it. Actually, that’s not quite strong enough – I really don’t like it, for a bunch of reasons:

    • It requires a lot of coordination on details to keep everybody in sync. For example, changes need to be coordinated across the teams.
    • We won’t have anything to show until the whole story is done, so we can’t benefit from customer feedback.
    • Teams will likely be waiting for other teams to do things before they can make progress.
    • The different areas will take different amounts of time to finish, so some people are going to be idle.
    • Our architecture is going to be a bit clunky in how the pieces fit together.
    • It encourages specialization.
    • Nobody owns the end-to-end experience
    • Integrations are expensive; when we integrate the parts together we will likely find issues that we will have to address.

    The root problem is that the units of work are too coupled together. What I want is a work organization where the units of work are an end-to-end slice, and a pair (or whatever grouping makes sense) can go from start to finish on it.

    That seems to be problematic; the story describes a simple business goal, and it’s unclear how we can make it simpler. We *need* all the things we thought of to achieve success.

    This situation blocks most teams. And they are right in their analysis; there is no way to be simpler and to achieve success. Therefore, the only thing that might work is to redefine success.

    That’s right, we’re going to cheat.

    This cheating involves taking the real-world story and simplifying it by making it less real-world. Here’s a quick list of ways that we could make this story simpler:

    • We don’t really need the book title and author, so we redefine “list of books” to “list of book ids”.
    • The display doesn’t have to be nicely formatted, it could just be a list of book ids on a web page.
    • The display could even just be the results of a call to a web API that we make in a browser.
    • We could build the initial version as a console app, not as a web app.
    • The story doesn’t have to work for every user, it could just work for one user.
    • The list of returned books doesn’t have to be an actual list of checked-out books, it could be a dummy list.

    This is just a quick list I came up with, so there may be others. Once we have this list, we can come up with our first story:

    As a developer, I can call an API and get a predefined list of book ids back

    I’ve taken to calling this a “skeleton story”, because it’s a bare-bones implementation that we will flesh out later.

    We will go off and implement this story, deploy it as a working system, and – most importantly – verify that it behaves as it should.
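    As a rough illustration – assuming ASP.NET Web API, with the controller name and the ids invented for the example, and book ids shown as Guids purely for illustration – the skeleton story can be as small as this:

    // The entire skeleton: an API endpoint that returns a predefined list of
    // book ids. No authentication, no database, no UI yet.
    // (Assumes System, System.Collections.Generic, and System.Web.Http.)
    public class CheckedOutBooksController : ApiController
    {
        // GET api/checkedoutbooks
        public IEnumerable<Guid> Get()
        {
            return new[]
            {
                new Guid("11111111-1111-1111-1111-111111111111"),
                new Guid("22222222-2222-2222-2222-222222222222"),
            };
        }
    }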

    Getting to this story is the hard part, and at this point the remaining vertical slices are merely adding back the parts of the story that we took out. Here’s a possible list of enhancements:

    1. Fetch the list of book ids for a predefined user
    2. Fetch the list of book ids for a user passed into the API.
    3. Display the book ids in web page
    4. Display a book description instead of just a book id.
    5. Make the web page pretty.

    These will all be converted to stories, and we will verify that each one makes the system more real in a user-visible way. They aren’t perfect; some of the slices depend on the result of earlier slices, so we can’t parallelize across all 5 of them, and we will still need to have some coordination around the changes we make. These issues are more tractable, however, because they are in relation to a working system; discussions happen in the context of actual working code that both parties understand, and it’s easy to tell if there are issues because the system is working.

  • Eric Gunnerson's Compendium

    Rational behavior and the Gilded Rose kata…

    • 2 Comments

    The following is based on a reply to an internal post that I almost wrote this morning, before I decided that it might be of more general interest. It will take a little time to get to my point so perhaps this would be a good time to grab whatever beverage is at the top of your beverage preference backlog.

    Are you back? Okay, I’ll get started…

    A while back my team spent an afternoon working on the Gilded Rose kata. We used it to focus on our development skills using a pairing & TDD approach. This kata involves taking a very tangled routine (deliberately tangled AFAICT) and extending it to support a new requirement. It is traditionally viewed as an exercise in refactoring, and in the introduction to the kata, I suggested that my team work on pinning tests before refactoring or adding new functionality. My team found the kata to be quite enjoyable, as it involved a puzzle (unraveling the code) and the opportunity to work on code that is much worse than our existing codebase.

    This week, another team did the String Calculator kata (a nice introduction to katas and one that works well with pairing/TDD), and somebody suggested that Gilded Rose would be a good next step. I started to write a quick reply, which became a longer reply, which then evolved into this post.

    My initial reply was around how to encourage people to take the proper approach (if it’s not obvious, the “proper” approach is the one I took when I last did the kata…) – which was “write pinning tests, refactor to something good, and then add new functionality”. When writing it, however, it seemed like I was being overly prescriptive, and it would make more sense to remind people of a general principle that would lead them to a good approach.

    At this point, I realized that I had a problem. Based on the way the kata is written, I was no longer sure that my approach *was* the proper choice. The reason is that the kata provides a detailed and accurate description of the current behavior of the system and the desired changed behavior. After a bit of thought, I came up with four alternatives to move forward:

    1. Attempt to extract out the part of the code that is related to the area that I need to change, write pinning tests for it, and then TDD in the new functionality.
    2. Write pinning tests on the current implementation, refactor the code, then add new functionality.
    3. Write pinning tests on the current implementation, write new code that passes all the pinning tests, then add new functionality.
    4. Reimplement the current functionality from scratch using TDD and based on the detailed requirements, then add new functionality.

    Which one to choose?

    Well, the first one is something I commonly do in production code to make incremental changes. You can make things a little better with a little investment, and many times that’s the right choice WRT business value. If you want more info on this approach, Feathers’ “Working Effectively With Legacy Code” is all about this sort of approach. It’s one of the two books that sits on my desk (anybody want to guess the other one?).

    Presuming that finishing the kata as quickly as possible isn’t really my goal, that leaves me with three options. My experience says that, if I have the Gilded Rose code and a complete specification of the desired behavior, I’m going for the last option. The problem isn’t very complex, and doing TDD is going to be straightforward, so I’m confident that I can get to an equivalent or better endpoint with less effort than dealing with refactoring the existing code.

    That conclusion was a bit surprising to me, given that this is commonly thought to be a refactoring kata, but it’s the logical outcome of having a perfect specification for the current functionality available. Last year I had the chance to fix a buggy area of code that handled UI button enabling/disabling, and I took this exact approach; I knew what the functionality would be, wrote a nice clean class & tests, and tossed the old code out. Worked great.

    In reality, however, this case is somewhat rare; most times you just have the code and your requirement is to add something new without changing the existing behavior, whatever it might be. Or, you have some requirements and some code, but the requirements are out of date and don’t match the code. So… I think the Gilded Rose kata would be a lot better if you didn’t have the description of the current behavior of the system, because:

    1. That’s when pinning becomes important, to document the existing behavior.
    2. Automated refactoring becomes more interesting because it allows me to make *safe* changes even if my tests aren’t perfect.
    3. I get led towards these approaches automatically.

    If I don’t have the description, I’m going to choose between the last two approaches. The tradeoff likely depends on how good my pinning tests are; if they are perfect, I should just choose the option that is quicker. If they are less than perfect – which is generally going to be the case – then doing it through refactoring is likely a better choice.

    Or, to put it another way, the effort to write a reasonable set of pinning tests and refactor the code is generally going to be less than the effort to write a perfect set of pinning tests and write the code from scratch.
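    To make the pinning idea concrete, here’s the flavor of test I have in mind – a sketch assuming NUnit and the standard C# version of the kata (an Item with Name/SellIn/Quality and a GildedRose class with an UpdateQuality() method):

    [Test]
    public void UpdateQuality_AgedBrie_QualityGoesUpAndSellInGoesDown()
    {
        // Pin the current behavior without judging it: "Aged Brie" gains
        // quality as it ages, and SellIn decreases by one each day.
        var items = new List<Item> { new Item { Name = "Aged Brie", SellIn = 2, Quality = 0 } };
        var app = new GildedRose(items);

        app.UpdateQuality();

        Assert.AreEqual(1, items[0].Quality);
        Assert.AreEqual(1, items[0].SellIn);
    }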

  • Eric Gunnerson's Compendium

    7 Hills 2014

    • 3 Comments

    The forecast did not look good. In fact, it looked pretty bad.

    It was Sunday of Memorial day weekend, and I was forecast-shopping. That’s what I do when I want to ride and the weather is marginal; I look at the different weather forecasts (Accuweather, wunderground, weather.com, national weather service) to see if I can find one that I like. They said – if I recall correctly – Rain, showers, showers, rain.

    I was registered to ride 7 hills for the nth time (where 5 < N < 10) on Memorial day. To be specific, I was registered to ride the 11 hills “metric century”. Quotes because a kilometer is about 8% shorter on this ride, needing only 58 miles to reach the metric century mark.

    I had tentatively agreed to ride with a few friends, which is not my usual modus operandi; after a few rides where a group ride turned into a single ride, I started doing most rides by myself.

    I rolled out of bed at 6AM on Memorial day, and took a look outside. It was wet but not raining. A look at the radar (the NWS National Mosaic is my favorite) showed that not only was there no rain showing, it looked like it was going to be that way for the next 6 hours or so.

    Normally, my ride prep would be done the night before; I’d have everything that I wanted out on the counter, appropriate clothes chosen, and a couple of premixed bottles in the fridge. Since I expected not to be riding, I had to do all of this in a bit of a hurry. I got packed, grabbed my wallet, keys, phone, and GPS, and headed out.

    I pass the first group parking on Lake Washington Blvd (people always park too far to the south), find a spot, and unload. I roll into the park, get my registration band and route sheet, and find my companions. I’ll be riding with riding friends Joe and Molly, and their friends Bill and Alex. We roll out at 8:20 or so.

    Market street (Hill 1) is quickly dispatched, and we head up Juanita (Hill 2). The first two hills are fairly easy; something like 5-7% gradient max. We regroup at the top of Juanita (well, actually not the top of the hill, but the part where we head back down). My legs have felt pretty good so far, but we are coming to Seminary hill (#3), which is steeper and harder than the other two. I think it’s the second-hardest climb of the ride. It also is a bit misleading; there’s a steep kicker right at the beginning, a flat part, and then it steepens up again for the remainder of the climb.

    I start the climb. I have a secret weapon – my power meter. I know from the intervals that I’ve been doing that I can hold 300 watts for 2 minutes. I also know that I can hold 240 watts for 10 minutes, so I set that as my “do not exceed” level. I pass a few people, pass a few more, and before I know it, I’m at the top. I do have legs today.

    The others filter up soon after. Well, that’s not factually true; Joe and Alex finished quite a bit faster than me, and Molly and Bill filter up soon after. Joe is my benchmark for comparative insanity, so I know that him finishing in front of me just means that things are right with the world.

    We head north to descend; Joe/Molly/Bill have an almost-incident with a right-turning truck. We get on the trail and spin to Norway hill. As we approach the base, Joe is talking with a few friends, and we turn right and the climb starts. The road turns left, and I see a bunch of people on the hill. I start passing people, and strangely, nobody is passing me. I hit the stop sign, keep climbing, and eventually top out. I passed 40 people on the way up, get passed by none. Though in the spirit of full disclosure, I did pass the last 5 as they were getting ready to pull off near the top, and most of these riders are out here for the “7 hills” version of the ride.

    We head south, and turn left on 132nd. The previous course would take us all the way to my favorite intersection – 132nd st and 132nd ave – but this year they instead route us south, and then to a food stop near Evergreen Hospital. Somewhere on the last section, the sun has popped out, and we feel pretty good. I get some sort of energy bar and a pretty tasteless bagelette. After a bit too long waiting, we head out again, and take 116th north. We descend down Brickyard, and turn right, heading back south towards Winery hill.

    And into the headwind. I go into ride leader mode, and settle in with the rest of the group somewhere behind me. After a few minutes, Bill – who is tall and wide like me – passes and pulls for a little bit. Soon enough, we reach the base of Winery. The route that we are taking – through the neighborhood – is a series of climbs and flats. We hit the first one, which is something like 15%, and Joe and Alex ride off. I try to stay around 300 watts on the climbs and recover a bit on the flats. Soon enough, I hit the top, and find that the 7 hills bagpiper is too busy having his picture taken with riders to play. He starts playing as Molly pulls up and we ride off down to the next food stop. The new route has changed this experience; previously you would have to climb north while being demoralized by the riders approaching because they had already finished Winery, and then have the opposite feeling when you come down the same road after Winery. The new route is fine but is missing a bit of the emotional experience of the old one.

    I grab a dark chocolate chip cookie, refill my Nuun bottle and deploy some cheez-its, my wonder ride food.

    We now have a decision to make. We have done 6 hills, and we can either descend down into the Sammamish River Valley, ride south, and climb up hill #7, Old Redmond Road, or we can head east to grab an extra 4 hills before returning for the last climb. We decide to do the full metric and head east. This takes us on 116th to a short but really steep (say, 17%) climb. There’s a route via 124th that is much more gradual, so I’m not sure whether this route is because the organizers don’t know about the other route or it’s a deliberate choice.

    This is one of the downsides of being a ride leader; I know the vast majority of the roads out here and if I’m on an organized ride I’m constantly plotting where we are going versus what the other options are.

    The next climb is Novelty Hill. There really isn’t a lot of novelty involved; it’s a 500’ or so climb with a lot of fast traffic. On the way up, I find myself stuck on “If you’re happy and you know it, clap your hands”, planted by Joe a few minutes before. A few minutes later, it morphs to the surprisingly appropriate “I’ve been through the desert on a horse with no name, it felt good to get out of the rain” (America, 1971).

    We finish, regroup, and head south to Union hill road. There’s a bonus half hill here that isn’t part of the 11 hills, we finish that section, and head north to descend Novelty again, and head up NE Redmond Road (not to be confused with Old Redmond Road, which we will climb later). This is a fairly easy climb but everybody’s legs are a bit tired. Even Joe’s, though his are tired because of the miles that he has put in the past few days. Another hill top, another descent, and we head up education hill on 116th for the second time (re-education hill). That takes us to the last food stop, where I have a fairly pedestrian ham and cheese wrap and make up another bottle of Nuun. Unfortunately, it seems that I chose “moldy fruit” flavor, so I’m not too excited about it, but I choke a bit down.

    We descend, head across the valley with a vicious sidewind which turns into a headwind as we head south. I pull for Molly for a couple of miles, then Molly and Bill and I hit the base of Old Redmond Road at the same time. This is the last hill, and I open it up a bit, passing X people (5 < X < 300,000) on the way up. We crest, regroup, and head down the last descents and the final run on Lake Washington Blvd back into Redmond. I get ahead, wait for the group, Joe goes by, and I find that I have one last sprint in my legs, so I open it up, and catch him.

    Then it’s through to the finish, chocolate milk, and strawberry shortcake.

    Normally at this point, I would talk about stats, but I only have 30 miles of the ride. I *can* say that I got PRs on Seminary, Norway, and Winery hills, so it’s pretty clear that I did have legs.

     
