There are many people i work with here that i admire immensely.  Devs, PMs, QA, UE, Loc, (hell… even management) all include people who i look up to every day.  But within that set of amazing people there’s a special “cream of the crop” tier for those who are sometimes too dazzling to stare at directly.  While it’s a combination of things, what really sets these people apart is their ability to communicate exceptionally well with pretty much anybody.  There are meetings i go to that are full of confusion and frustration.  That is, of course, until one of these people decides to talk.  Somehow, not only have they grasped the complete situation, but they’re also able to articulate it so that everyone ends up on the same page and really gets what the problems are and what we need to do about them.  These people can talk to anyone, from hardcore devs all the way out to regular customers, with ease (even when both are in the same conversation), and they allow all those parties to understand each other where before no headway was being made.  They’re often pretty quiet and attentive, mostly absorbing the situation rather than responding to every minor point they feel must be clarified “at this very moment”.  Then, once they know they really understand it, they lay it all out.  i’ve been in discussions where i’ve put forth my case with depth and passion, and after i’d laid it all out they’ve turned to others and said, “what i think cyrus meant to say was…”  And you know what?  They were right.  i did mean to say that, and i wished i’d said it as concisely and clearly as they had.

 

Sometimes this is quite frustrating for me.  i speak “dev”, and i speak it quite well.  My peers and i can go on at great length about dev subjects and completely understand what’s going on.  However, once i start communicating with the non-“hardcore dev” world, things start breaking down.  i have to try really hard to find ways of conveying information, and i find that my usual approach of just directly attacking the subject isn’t working.  i then have to back-pedal and give context and whatnot, and i find it difficult to know the right balance between explaining the whos, wheres, whats, and whys.

 

So i’ve been putting a lot of effort into getting better in this regard.  As you can guess, one of my primary outlets for working on this has been my blog.  i’ve loved it because it not only allows me to connect with a community of people who then help me do my job better with all the feedback they provide, but it also allows me to converse with all different sorts of readers (although i can tell that it is primarily other devs who read my stuff).  But even with the blog i’ve noticed that i need a lot more work on being able to simply get my points across.  For example, i often include small examples with my posts to help support some other point; the example is purely for demonstration and not the main focus of the post, but readers often end up talking about it instead of the meta-issue i was presenting.  (Although it’s possible that’s because my examples are in color and thus quite eye-catching.)  This tells me that i’m not getting my ideas across clearly enough and that they’re getting clouded by my tangents and explanations.  But, in any event, i want to become better at this, using my blog as one outlet.

 

Another way i’ve been doing this is trying to attend conferences.  Conferences have the nice benefit of throwing me into a situation where i’m conversing with people i know nothing about and who very likely are not “hardcore devs”.  On top of that, you’re commonly multitasking because you’re dealing with a group of people with disparate skillsets, backgrounds, and knowledge.  It’s a lot of fun, really, because you’re rapidly trying to gauge the best way to work with the group and constantly adapting what you’re doing in response to them.  It’s also great fun because in person you can really see a person’s passions, and that’s enormously helpful in informing the choices we make as we proceed after this release.  To help with this i’ve also signed up for things like ASL, writing, and public-speaking classes to really try to stress the different parts of my brain that i use when trying to put my perfectly clear internal abstract representations into words that others will be able to understand efficiently.

 

Finally, one of the other ways i work toward these goals might not be something you would have expected.  In our group we use a system we call “Gauntlet” to process the changes we want to make to our source tree.  After verifying that everything builds and registers correctly, it runs a partial set of tests over all the code.  Why not run all tests?  Well, it’s a balance between getting good coverage to ensure things haven’t broken and allowing checkins to proceed at a productive pace.  The set of tests we have now means a checkin can happen in an hour, rather than taking days otherwise.  As we get closer to shipping we push more and more tests in and make the “gauntlet” much more difficult to pass, in order to make sure the code we are checking in meets the higher and higher bar we are setting for ourselves.  As part of your checkin, you collect all the changed sources or changelists you’ve made, package them up together, write a summary of what you did, get code reviews from your peers, mark off all the bugs fixed, and then send it all off to Gauntlet (a nice little wizard walks you through this, so it’s pretty simple).  People then subscribe to the checkin mail alias to get notified when changes have happened to the code.  i subscribe to about 10 different checkin systems because i’m always fascinated to see what other teams are doing.  These aliases are subscribed to by everybody, not just devs, as the mails end up making many jobs much easier and are a good way to keep track of the progress of the product while you are working on one of your many other tasks.
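meta-step-back: the coverage-versus-pace tradeoff above can be sketched in code, though i should stress this is a purely hypothetical toy — none of these names (run_gauntlet, select_tests, etc.) are Gauntlet’s real API, and the real system is far more involved:

```python
# Hypothetical sketch of a "gauntlet"-style checkin gate: the closer we
# get to shipping, the larger the fraction of the test suite a change
# must pass.  All names here are invented for illustration only.

def select_tests(all_tests, strictness):
    """Pick the first `strictness` fraction of the suite (0.0..1.0)."""
    count = max(1, int(len(all_tests) * strictness))
    return all_tests[:count]

def run_gauntlet(changelist, all_tests, strictness):
    """Return True if the change builds and passes the selected tests."""
    if not changelist["builds"]:
        return False
    selected = select_tests(all_tests, strictness)
    return all(test(changelist) for test in selected)

# Toy suite: each "test" just inspects the changelist dict.
suite = [
    lambda cl: cl["builds"],
    lambda cl: cl["registered"],
    lambda cl: cl["reviewed"],
    lambda cl: cl["bugs_marked"],
]

change = {"builds": True, "registered": True,
          "reviewed": True, "bugs_marked": False}

print(run_gauntlet(change, suite, strictness=0.5))  # early in the cycle: passes
print(run_gauntlet(change, suite, strictness=1.0))  # near ship: full gauntlet fails
```

The point is just the dial: the same change that sails through early in the cycle gets rejected once the bar is raised.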

 

meta-step-back: I did a bit of explanation about Gauntlet to help put you all on the same page, since most of you don't know how our internal team processes work.  However, the intricacies of how we choose the tests Gauntlet runs isn't the focus of this post, and if i were to get responses talking about that then i would feel that my main points were getting obscured by my desire to clarify certain details that you were being introduced to for the first time.

 

So for these checkins i work very hard to make them really well understood.  Rather than just:

 

            “fixed a couple of bugs”

 

i strive to make it so that when a person reads my mail they really understand what it was that i did, why i did it, and why it was the right choice for the product and the customers.  This helps people reading the mail as it’s released and it also helps people when they go look at a change made a year or more ago and are wondering “why on earth did he make that change”.

 

i wanted to give you guys a look at what one of those changelists looks like.  To be upfront, it’s been slightly edited for content.  Clearly i cannot post NDA material, or things about our future plans, and so some things had to be removed.  But what i’ve left here is the verbatim text of what got checked into our source control system.  It’s my hope that you’ll find this useful and that you’ll want to see even more checkin mails in the future.  It’ll help me even more with my desire to get better at communicating, and it will allow transparency between us and the community, and that’s something i’d really like to see.

 

Anyways, let me know how you feel about all of this!!

 

Change 867366 by REDMOND\cyrusn@BUGSAUNT102 on 2004/07/12 13:32:54

 [cyrusn]    Removed raw BYTE* signatures from the language service.

 

 

        First, a bit of history.  The LS used to store information about types (like "IList") and members (like "void Add(object o)") in a raw memory dump form, i.e. as just BYTE*'s that were streamed over whenever necessary to grok them.  There were a couple of reasons this was done.  First, it somewhat simplified reading in data from metadata.  The IMetadataImport2 interface exposes a lot of information from metadata through the use of raw streams of data, so originally it was pretty simple to read the data in raw and keep the same representation internally.  We'd also read over the user's source code and convert it into that same internal representation.  Second, back when we had the projdata file it made things pretty simple to just stream this raw internal structure out to a file and back in again.

 

 

        Now, fast forward to Whidbey.  The C# language underwent some pretty big changes.  We added partial types and generics, both of which made handling these streams particularly difficult.  For example, with generics we now have the ability to have structured types of any depth.  So you could easily have something like:

              IDictionary<IList<int>, IDictionary<string, Nullable<bool>>>

 

        This significantly complicated things and didn't mesh well with the existing stream-based code.  There were also a huge number of bugs dealing with mishandling this stream.  A while back we reduced the surface area of the LS that was exposed to this by removing the projdata file.  However, i was still finding bugs related to corrupting or generating malformed streams, especially when dealing with delegates, generic types, and issues of inheritance.  These bugs were very difficult to fix without causing other problems, and the debugging situation was pretty awful.  When you see

              ".T?Š..........??-A?.A??è??.A.?è...Ð?..?...............??..*./..."

        can you tell which byte in there is the offending one?  i finally felt that enough was enough and that we needed proper strongly typed objects to get this all right.
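meta-step-back: the mail doesn't show the LS's actual data structures (the real code is C++), so here's a little sketch in Python of what "strongly typed objects" buys you.  NamedType and GenericType are invented names for illustration, not our real classes:

```python
# A minimal sketch (Python, not the LS's actual C++) of representing a
# type signature as a small typed tree instead of a raw byte stream.
# Invented names; purely illustrative.

class NamedType:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class GenericType:
    def __init__(self, name, args):
        self.name = name
        self.args = args
    def __repr__(self):
        return f"{self.name}<{', '.join(map(repr, self.args))}>"

# IDictionary<IList<int>, IDictionary<string, Nullable<bool>>>
sig = GenericType("IDictionary", [
    GenericType("IList", [NamedType("int")]),
    GenericType("IDictionary", [
        NamedType("string"),
        GenericType("Nullable", [NamedType("bool")]),
    ]),
])

# With a typed tree, a malformed node fails loudly at that node in the
# debugger, instead of being one mystery byte in an opaque dump.
print(repr(sig))
```

Nesting to any depth falls out of the recursion for free, which is exactly where the byte-stream code struggled.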

 

 

        One of the nice things about this change was that it suddenly made understanding and implementing generic types much easier.  For example, we had a bug with the following code:

 

 

            public class A<Z>
            {
                public class B<Y>
                {
                }

                public B<int> Foo() {}
            }

            public class Program
            {
                void Bar()
                {
                    A<string> a;
                    a.Foo().
                }
            }

 

 

        The issue here is how Foo is bound, specifically how we make sure that we understand its return type.  Even though the user typed it as B<int>, we need to understand it as the fully instantiated type A<string>.B<int>.  However, various incarnations of the stream code would understand it as A<string>.B<Y>, A<int>.B<?> (when we'd corrupt the stream), A<Z>.B<int>, or it just wouldn't understand it at all.
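meta-step-back: the right answer here boils down to substituting the receiver's type arguments through the member's declared return type.  A hedged sketch of that substitution (Python again, invented names; real generic binding handles far more cases):

```python
# Sketch of type-parameter substitution for the A<string>.B<int> example.
# A fully qualified type is a list of (name, [type args]) segments;
# bindings maps a type-parameter name to its instantiated argument.
# All names invented for illustration.

def substitute(segments, bindings):
    """Replace every type parameter in `segments` per `bindings`."""
    return [(name, [bindings.get(a, a) for a in args])
            for name, args in segments]

def show(segments):
    return ".".join(f"{n}<{', '.join(args)}>" for n, args in segments)

# Foo is declared inside A<Z> and returns B<int>; fully qualified,
# that is A<Z>.B<int>.  For a receiver of type A<string>, bind Z := string:
declared = [("A", ["Z"]), ("B", ["int"])]
bound = substitute(declared, {"Z": "string"})

print(show(bound))  # A<string>.B<int>
```

The corrupted-stream answers in the mail (A<string>.B<Y>, A<Z>.B<int>, …) correspond to substituting through only part of the segment list, which is easy to see, and fix, once the type is a real structure.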

 

 

        By switching over to the new code it was suddenly easy to understand what was happening and to figure out what the right algorithm was.  I explained the entire algorithm to Renaud and he agreed that it was the right one.  Verifying that the language service was doing the right thing was then much easier, given that it was now a pretty straightforward implementation of the algorithm without all the muckity muck of the BYTE* streams interfering.

 

 

        This is a fairly large change, and it comes late in the game, but i feel that it's the right one to make.  Our customers will be creating these types of generic structures, and we need to be able to support them properly.  Not only that, but given the ease of being able to corrupt these BYTE*'s (and subsequently possibly corrupt memory) i think that this change is necessary for the stability and robustness of the C# IDE.  If we don't do this then i fully expect that we are going to get QFEs related to our stability and also related to common generic constructs being broken in user code.

 

 

        i also did some perf measurements using one of our large projects (about 30 MB of source code).  Load time was not affected at all (~18 seconds) and neither was the time taken until we had fully parsed and understood all the source (~42 seconds after initial load).  Memory was affected, but not in a way i find terribly upsetting.  The enterprise project took 130 MB to load, and that has now moved to 138 MB (only about a 6% increase).  Note: there are memory-based optimization opportunities available.  However, they were not taken because:

            a) It wasn't clear if the perf was going to be bad.
            b) Why add complexity when this change is already so large?  The optimizations can be done safely at a later point.

        One such optimization would be to reuse these objects when their structure was equivalent.  So, rather than having a new object created every time you used a "string" parameter, we'd share the common implementation.  Similarly, every time we saw a method of the form "public bool Equals(object obj)" we could share that.  If we feel that reducing memory use is important in beta2 i'm confident that we can investigate (and even implement) these optimizations quickly.
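meta-step-back: the sharing optimization described here is classic hash-consing: cache each structurally distinct signature once and hand out the shared instance.  A hypothetical sketch (invented names; the real LS is C++):

```python
# Hypothetical sketch of the sharing optimization described above:
# intern type objects by structure so equivalent signatures share one
# instance (hash-consing).  Names are invented for illustration.

_intern_cache = {}

def make_type(name, args=()):
    """Return the canonical shared instance for this structure."""
    key = (name, tuple(args))
    if key not in _intern_cache:
        _intern_cache[key] = key  # stand-in for a real type object
    return _intern_cache[key]

string_param_1 = make_type("string")
string_param_2 = make_type("string")
equals_sig_1 = make_type("Equals", [make_type("object")])
equals_sig_2 = make_type("Equals", [make_type("object")])

# Structurally equal signatures are now the same object:
print(string_param_1 is string_param_2)  # True
print(equals_sig_1 is equals_sig_2)      # True
```

Since every "string" parameter and every "public bool Equals(object obj)" collapses to one shared instance, the memory cost scales with the number of distinct signatures rather than the number of uses, which is why deferring it was safe: it changes footprint, not behavior.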

 

 

Affected files ...

        (file list removed)

 

---

Edit: I don't want to present any misleading information to you.  Very few of my checkin mails get the sort of attention that i put forth in the above example.  Only when i'm making what i consider to be major changes do you see something like that (although in many cases it is quite a bit longer).  Major changes, of course, do not happen so often, and so these kinds of posts would be on the rarer side.

That said, even with my regular posts i always try to put a lot of depth in them.  I usually follow this form:

 

1)  a simple one-line summary
    a.  If I’m fixing a bug:
        i.  a small example of the bug I fixed or the issue I was changing, usually of the form:

            Imagine you have the following code ...
            Now take the following steps ...
            You would have expected Foo to happen, but instead you get Bar.

    b.  If I’m changing a feature, or adding a new one: why I think it’s the right choice and why I think it’s acceptable to make that sort of change in the part of the product cycle we’re in.  For example, if I am making a change to help stability, do I think it’s worth the chance of new bugs if we’re trying to shut down for RTM?
2)  an explanation of why that happened
3)  how I fixed the code and why I (and my teammates) felt that was the correct fix
4)  whether I spent any time checking for this sort of bug elsewhere
5)  whether this constitutes a change in the expected behavior
6)  how I think this will impact QA, and whether there needs to be a response to it
7)  whether I’ve included any regression tests (this is a new thing I’ve started doing recently.  It’s to help let people know that I’m not just checking in code but also putting in safeguards to catch this stuff immediately so that we won’t regress)

 

This usually constitutes a couple of paragraphs (since i'm not so rigidly formal and these concepts all get talked about at once), and that's usually what i want a checkin to be.

And, of course, there's the occasional:  "Fixes a grammatical error: 'an problem' -> 'a problem'"  for stuff that is absolutely trivial.