Can anybody hear me?
There are many people i work with here that i admire immensely. Devs, PMs, QA, UE, Loc, (hell… even management) all include people who i look up to every day. But within that set of amazing people there’s a special “cream of the crop” area for those who are sometimes too dazzling to stare directly at. While it’s a combination of things, what really sets these people apart is their ability to communicate exceptionally well with pretty much anybody. There are meetings i go to that are full of confusion and frustration. That is, of course, until one of these people decides to talk. Somehow, not only have they grasped the complete situation, but they’re also able to articulate it so that everyone is now on the same page and really gets what the problems are and what we need to do about them. These people can talk to anyone from hardcore devs all the way out to regular customers with ease (even when both are included in the same conversation), and they allow all those parties to understand each other where before no headway was being made. These people are often pretty quiet and attentive, and they seem to be mostly absorbing the situation rather than just responding to any minor point which they feel must be clarified “at this very moment”. Then, once they know that they really understand it, they lay it all out. i’ve been in discussions before where i’ve put forth my case with depth and passion, and after laying it all out they’ve turned to the others and responded, “what i think cyrus meant to say was…” And you know what? They were right. i did mean to say that, and i wanted to say it as concisely and clearly as they had.
Sometimes this is quite frustrating for me. i speak “dev” and i speak “dev” quite well. My peers and i can go on at great length about these dev subjects and completely understand what’s going on. However, once i start communicating with the non-“hardcore dev” world, things start breaking down. i have to really try hard to find ways of conveying information, and i find that my usual way of just directly attacking the subject isn’t working. i then have to back-pedal and give context and whatnot, and i find it difficult to know what the right balance is between explaining the whos, wheres, whats, and whys.
So i’ve been putting a lot of effort into getting better in this regard. As you can guess, one of my primary outlets to work on this has been my blog. i’ve loved it because it not only allows me to connect with a community of people that then help me do my job better with all the feedback they provide, but it also allows me to converse with all different sorts of readers (although i can tell that it is primarily other devs that read my stuff). But even with the blog i’ve noticed that i need a lot more work on being able to simply get my points across. For example, i often include small examples with my posts to help support some other point; however, the example is purely for demonstration and not the main focus of the post, yet readers often end up talking about it instead of the meta-issue that i was presenting. (although it’s possible that it’s because my examples are in color and thus quite eye-catching). This tells me that i’m not conveying my ideas clearly enough and that they’re getting clouded by my tangents and explanations. But, in any event, i want to become better at this, using my blog as one outlet.
Another way i’ve been doing this is by trying to attend conferences. Conferences have the nice benefit of throwing me into situations where i’m conversing with people i know nothing about and who very likely are not “hardcore devs”. You’re also commonly multitasking, because you’re dealing with a group of people with disparate skillsets, backgrounds, and knowledge. It’s a lot of fun, really, because you’re rapidly trying to gauge the best way to work with this group and constantly adapting what you’re doing in response to them. It’s also great fun because in person you can really see a person’s passions, and that’s enormously helpful in shaping the choices we make as we proceed past this release. To help with this i’ve also signed up for things like ASL, writing, and “giving a speech” classes to really try to stress the different parts of my brain that i use when trying to put my perfectly clear internal abstract representations into words that others will be able to understand efficiently.
Finally, one of the other ways i work toward these goals might not be something you would have expected. In our group we use a system that we call “Gauntlet” to process the changes we want to make to our source tree. After verifying that everything builds and registers correctly, it then runs a partial set of tests over all the code. Why not run all tests? Well, it’s a balance between getting good coverage and ensuring that things haven’t broken, while also allowing checkins to proceed at a productive pace. The set of tests we have now means a checkin can happen in an hour, rather than in days otherwise. As we get closer to shipping we push more and more tests in and make the “gauntlet” much more difficult to pass, in order to make sure that the code we are checking in meets the higher and higher bar we are setting for ourselves. As part of your checkin, you collect all the changed sources or changelists you’ve made, package them all up together, write a summary of what you did, get code reviews from your peers, mark off all the bugs fixed, and then send it all off to Gauntlet. (a nice little wizard walks you through this so it’s pretty simple). People then subscribe to the checkin mail alias to get notified when changes have happened to the code. i subscribe to about 10 different checkin systems because i’m always fascinated to see what other teams are doing. These aliases are subscribed to by everybody, not just devs, as the mails end up making many jobs much easier, and it’s a good way to keep track of the progress of the product while you are working on one of your many other tasks.
meta-step-back: I did a bit of explanation about Gauntlet to help put you all on the same page, since most of you don't know how our internal team processes work. However, the intricacies of how we choose the tests Gauntlet runs isn't the focus of this post, and if i were to get responses talking about that then i would feel that my main points were getting obscured by my desire to clarify certain details that you were being introduced to for the first time.
So for these checkins i work very hard to really make them well understood. Rather than just:
“fixed a couple of bugs”
i strive to make it so that when a person reads my mail they really understand what it was that i did, why i did it, and why it was the right choice for the product and the customers. This helps people reading the mail as it’s released and it also helps people when they go look at a change made a year or more ago and are wondering “why on earth did he make that change”.
i wanted to give you guys a look at what one of those changelists looks like. To be upfront, it’s been slightly edited for content. Clearly i cannot post NDA material, or things about our future plans, and so some things had to be removed. But what i’ve left here is the verbatim text of what got checked into our source control system. It’s my hope that you’ll find this useful and that you’ll want to see even more checkin mails in the future. It’ll help me even more with my desire to get better at communicating, and it will allow transparency between us and the community, and that’s something i’d really like to see.
Anyways, let me know how you feel about all of this!!
Change 867366 by REDMOND\cyrusn@BUGSAUNT102 on 2004/07/12 13:32:54
[cyrusn] Removed raw BYTE* signatures from the language service.
First, a bit of history. The LS used to store information
about types (like "IList") and members (like "void
Add(object o)") in a raw memory dump form, i.e. as just
BYTE*'s that we'd stream over whenever necessary to grok
them. There were a couple of reasons that this was done.
First, it somewhat simplified reading in data from metadata.
The IMetadataImport2 interface exposes a lot of information
from metadata through the use of raw streams of data. So
originally it was pretty simple to read the data in raw and
keep the same representation internally. We'd also read
over the user's source code and convert it into that same
internal representation. Second, back when we had the
projdata file it made things pretty simple to just stream
this raw internal structure out to a file and back in again.
Now, fast forward to Whidbey. The C# language underwent
some pretty big changes. We added partial types and
generics, both of which made handling these streams
particularly difficult. For example, with generics we now
have the ability to have structured types of any depth. So
you could easily have something like:
IDictionary<IList<int>, IDictionary<string, Nullable<bool>>>.
This significantly complicated things and didn't mesh well
with the current stream based code. There were also a huge
number of bugs dealing with mishandling this stream. A
while back we reduced the surface area of the LS that was
exposed by this by removing the projdata file. However i
was still finding bugs related to corrupting or generating
malformed streams, especially when dealing with delegates,
generic types, and issues of inheritance. These bugs were
very difficult to fix without causing other problems, and
the debugging situation was pretty awful. When you see
".T?Š..........??-A?.A??è??.A.?è...Ð?..?...............??..*./..."
Can you tell which byte in there is the offending one? i
finally felt that enough was enough and that we needed
proper strongly typed objects to get this all right.
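meta-aside: for blog readers, here's a rough sketch of what i mean by "proper strongly typed objects". This is purely my illustration for this post (the names are made up and it is not the actual LS code): a structured type becomes a real object graph you can inspect and debug, rather than an opaque BYTE* blob you have to decode by hand.

using System;
using System.Text;

// Hypothetical strongly typed representation of a type signature. A type is a
// name plus zero or more type arguments, which can nest to any depth.
class TypeSignature
{
    public readonly string Name;
    public readonly TypeSignature[] TypeArguments;   // empty for non-generic types

    public TypeSignature(string name, params TypeSignature[] typeArguments)
    {
        Name = name;
        TypeArguments = typeArguments;
    }

    public override string ToString()
    {
        if (TypeArguments.Length == 0)
            return Name;

        StringBuilder builder = new StringBuilder(Name).Append('<');
        for (int i = 0; i < TypeArguments.Length; i++)
        {
            if (i > 0) builder.Append(", ");
            builder.Append(TypeArguments[i]);
        }
        return builder.Append('>').ToString();
    }
}

class Example
{
    static void Main()
    {
        // IDictionary<IList<int>, IDictionary<string, Nullable<bool>>> built as a
        // real object graph rather than as an opaque stream of bytes:
        TypeSignature signature = new TypeSignature("IDictionary",
            new TypeSignature("IList", new TypeSignature("int")),
            new TypeSignature("IDictionary",
                new TypeSignature("string"),
                new TypeSignature("Nullable", new TypeSignature("bool"))));

        Console.WriteLine(signature);
        // Prints: IDictionary<IList<int>, IDictionary<string, Nullable<bool>>>
    }
}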
One of the nice things about this change was that it
suddenly made understanding and implementing generic types
much easier. For example, we had a bug with the following
code:
public class A<Z>
{
    public class B<Y>
    {
    }

    public B<int> Foo() {}
}

public class Program
{
    void Bar()
    {
        A<string> a;
        a.Foo().
    }
}
The issue here is how Foo is bound, specifically how we
make sure that we understand its return type. Even though
the user typed it as B<int> we need to understand it as the
fully instantiated type A<string>.B<int>. However, various
incarnations of the stream code would understand it as
A<string>.B<Y>, A<int>.B<?> (when we'd corrupt the stream),
A<Z>.B<int>, or it just wouldn't understand it at all.
By switching over to the new code it was suddenly easy to
understand what was happening and to figure out what the
right algorithm was. I explained the entire algorithm out
to Renaud and he agreed that it was the right one.
Verifying that the language service was doing the right
thing was then much easier given that it was now a pretty
straightforward implementation of the algorithm without all
the muckity muck of the BYTE* streams interfering.
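meta-aside: the heart of the "right algorithm" here is just substitution: take the member's declared type (B<int>, which in its full context is A<Z>.B<int>) and replace the type parameters with the receiver's type arguments (Z becomes string), yielding A<string>.B<int>. Here's a tiny self-contained sketch of that one step, again with made-up names; it is nothing like the real implementation.

using System;
using System.Collections.Generic;

// A minimal model for this aside: a type is a type parameter (no arguments, no
// container) or a named type with type arguments and an optional containing type,
// so nested generics like A<Z>.B<Y> can be represented.
class TypeSym
{
    public readonly TypeSym Container;   // null for top-level types and type parameters
    public readonly string Name;
    public readonly TypeSym[] Args;

    public TypeSym(TypeSym container, string name, params TypeSym[] args)
    {
        Container = container;
        Name = name;
        Args = args;
    }

    // Replace type parameters (looked up by name, e.g. Z -> string) throughout.
    public TypeSym Substitute(IDictionary<string, TypeSym> map)
    {
        TypeSym replacement;
        if (Container == null && Args.Length == 0 && map.TryGetValue(Name, out replacement))
            return replacement;

        TypeSym[] newArgs = new TypeSym[Args.Length];
        for (int i = 0; i < Args.Length; i++)
            newArgs[i] = Args[i].Substitute(map);

        TypeSym newContainer = Container == null ? null : Container.Substitute(map);
        return new TypeSym(newContainer, Name, newArgs);
    }

    public override string ToString()
    {
        string text = Container == null ? Name : Container + "." + Name;
        for (int i = 0; i < Args.Length; i++)
            text += (i == 0 ? "<" : ", ") + Args[i];
        return Args.Length == 0 ? text : text + ">";
    }
}

class BindingExample
{
    static void Main()
    {
        TypeSym z = new TypeSym(null, "Z");
        TypeSym a = new TypeSym(null, "A", z);

        // Foo's declared return type, understood in its full context: A<Z>.B<int>.
        TypeSym declaredReturn = new TypeSym(a, "B", new TypeSym(null, "int"));

        // The receiver is A<string>, so bind Foo with the substitution Z -> string.
        Dictionary<string, TypeSym> map = new Dictionary<string, TypeSym>();
        map["Z"] = new TypeSym(null, "string");

        Console.WriteLine(declaredReturn.Substitute(map));   // prints A<string>.B<int>
    }
}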
This is a fairly large change, and it comes late in the
game, but i feel that it's the right one to make. Our
customers will be creating these types of generic
structures, and we need to be able to support them properly.
Not only that, but given the ease of being able to corrupt
these BYTE*'s (and subsequently possibly corrupt memory) i
think that this change is necessary for the stability and
robustness of the C# IDE. If we don't do this then i
fully expect that we are going to get QFEs related to our
stability and also related to common generic constructs
being broken in user code.
i also did some perf measurements using one of our large
projects (About 30 MB of source code). Load time was not
affected at all (~18 seconds) and neither was the time
taken until we had fully parsed and understood all the
source (~42 seconds after initial load). Memory was
affected, but not in a way i find terribly upsetting. The
enterprise project took 130 MB to load, and that has now
moved to 138 MB (only about a 6% increase). Note: there are
opportunities for memory based optimization that are
available. However, they were not taken because:
a) It wasn't clear if the perf was going to be bad
b) Why add complexity when this change is already so
large. The optimizations can be done at a later
point safely.
One such optimization would be to reuse these objects
when their structure was equivalent. So, rather than
having a new object created every time you used a "string"
parameter, we'd share the common implementation. Similarly,
every time we saw a method of the form "public bool
Equals(object obj)" we could share that. If we feel that
reducing memory use is important in beta2 i'm confident
that we can investigate (and even implement) these
optimizations quickly.
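meta-aside: the object reuse described above is basically interning (a flyweight). Here's a rough sketch of the idea for blog readers, with hypothetical names and nothing more: give signatures structural equality and hand out one shared instance per distinct shape, so the thousandth "public bool Equals(object obj)" costs no additional memory.

using System;
using System.Collections.Generic;

// Hypothetical method-signature object with structural equality, so a shared table
// can hand back the same instance for every occurrence of, say, "bool Equals(object)".
sealed class MethodSig
{
    public readonly string ReturnType;
    public readonly string Name;
    public readonly string[] ParameterTypes;

    public MethodSig(string returnType, string name, params string[] parameterTypes)
    {
        ReturnType = returnType;
        Name = name;
        ParameterTypes = parameterTypes;
    }

    public override bool Equals(object obj)
    {
        MethodSig other = obj as MethodSig;
        if (other == null || ReturnType != other.ReturnType || Name != other.Name
            || ParameterTypes.Length != other.ParameterTypes.Length)
            return false;
        for (int i = 0; i < ParameterTypes.Length; i++)
            if (ParameterTypes[i] != other.ParameterTypes[i])
                return false;
        return true;
    }

    public override int GetHashCode()
    {
        int hash = ReturnType.GetHashCode() ^ Name.GetHashCode();
        foreach (string p in ParameterTypes)
            hash = hash * 31 + p.GetHashCode();
        return hash;
    }
}

// Flyweight table: structurally equal signatures map to one shared instance.
static class SignatureTable
{
    static readonly Dictionary<MethodSig, MethodSig> shared =
        new Dictionary<MethodSig, MethodSig>();

    public static MethodSig Intern(MethodSig sig)
    {
        MethodSig existing;
        if (shared.TryGetValue(sig, out existing))
            return existing;
        shared[sig] = sig;
        return sig;
    }
}

class InternExample
{
    static void Main()
    {
        MethodSig first = SignatureTable.Intern(new MethodSig("bool", "Equals", "object"));
        MethodSig second = SignatureTable.Intern(new MethodSig("bool", "Equals", "object"));
        Console.WriteLine(object.ReferenceEquals(first, second));   // True: one shared object
    }
}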
Affected files ...
(file list removed)
---
Edit: I don't want to present any misleading information to you. Very few of my checkin mails get the sort of attention that i put forth in the above example. Only when i’m making what i consider to be major changes do you see something like that (although in many cases it is quite a bit longer). Major changes, of course, do not happen so often, and so these kinds of mails are on the rarer side.
That said, even with my regular checkin mails i always try to put a lot of depth into them. I usually follow this form:
1) a simple one line summary.
a. If I’m fixing a bug
i. A small example of the bug I fixed or the issue I was changing, usually of the form:
Imagine you have the following code ...
Now take the following steps
You would have expected Foo to happen, but instead you get Bar
b. If I’m changing a feature, or adding a new one, why I think it’s the right choice and why I think it’s acceptable to make that sort of change in the given part of the product cycle we are in. For example, if I am making a change to help stability, do I think it’s worth the chance of bugs if we’re trying to shut down for RTM?
2) an explanation of why that happened
3) how I fixed the code and why I (and my teammates) felt that that was the correct fix
4) information about whether I spent any time checking for this sort of bug elsewhere
5) whether this now constitutes a change in the expected behavior
6) how I think this will impact QA, and whether there needs to be a response to this
7) if I’ve included any regression tests (this is a new thing for me that I’ve started doing recently. It’s to help let people know that I’m not just checking in code but I’m also putting in safeguards to catch this stuff immediately so that we won’t regress)
This usually constitutes a couple of paragraphs (since i'm not so rigidly formal and these concepts all get talked about at once), and that's usually what i want a checkin to be.
And, of course, there's the occasional: "Fixes a grammatical error: 'an problem' -> 'a problem'" for stuff that is absolutely trivial.
Comments
Anonymous
March 27, 2005
Your checkin comments are way excellent Cyrus! If I ever had programmers that wrote complete comments, I'd be in heaven! Usually, most programmers I've encountered would put in little highly-useless comments like "Made changes according to what JimmyJoeBob told me to do" or "Too many changes to explain" or "Fixed PR38567".
Do you get to read a lot of other programmers' change comments? Are they anywhere near as complete or comprehensive?
Is there any chance that .NET 2.0 checkin change summaries can be culled by Brad Abrams for use in his Annotations series of books? At least with your change summaries, there's an awful lot of useful "intention" information that a lot of developers would like to see in the addendums to the official BCL books.
Anonymous
March 27, 2005
Outstanding.
In the past I was the manager of a huge CVS tree server (tens of megabytes of plain text) and the person responsible for backporting new patches to builds already released to our customers (or to different branches). It was a big pain for me to read the default CVS comment "no comment". :-(
This often forced me to read raw source diffs and build an understanding of features I was not responsible for or familiar with in any way, and it wasted too much of my time for no reason.
I did everything possible to solve this problem - I even developed a coding standard and got it approved by management. Most people followed it.
But some people (the ones who always miss deadlines) ignored all of this, arguing that they were too busy writing code - they had no time for comments :-( [True story, no kidding]
If people at MS write CVS commit comments like this one - I feel that the overall quality of the source code will increase.
Only when a person tries to document his source code change does he start to understand what he actually did. Before that - it was simply code that compiled and possibly worked ;-)
Keep up this good direction! And don't worry about the colors in your code samples ;-)
Anonymous
March 27, 2005
Steve: Thanks for the feedback. Feel free to point out this post to anybody if you think it might be a good way of explaining why good checkin mails can be very helpful. The feedback i've gotten at MS has been consistently positive, and i think when the devs you work with get the same response they'll keep it up.
If i didn't have that sort of positive encouragement i wouldn't continue with this.
---
Now, as for the rest of your post.
Yes, there are checkins that go through that i would consider to be fairly useless. Some of them are exactly that: "Fixed VSWhidbey2131312321", which is pointless because as part of the checkin mail you are automatically informed about which bugs were fixed by the change.
I also find them slightly depressing because they don't say why it was that the bug was ever introduced or what steps will be taken to:
a) prevent regressions
b) determine if these same sorts of bugs exist elsewhere
c) prevent this style of bug from occurring in the future.
That said, there are many others who take as much time on their mails as i do, and there are others who have been influenced by our posts and have changed their style in response. It's my hope that in the future this sort of checkin is seen everywhere, from all team members, but it'll take a little bit of time to get there.
Your idea about the .NET 2.0 summaries is excellent, and i definitely think that you should refer Brad to this page and tell him your thoughts on that.
I can do it as well, but i think he'd really like hearing this from you.
Cheers, and thanks once again for your feedback.
Anonymous
March 27, 2005
AT: Thanks! And the only time i will accept a developer saying "i have no time for comments" is if they are really able to write perfectly clear code that's readable by everyone.
This is almost never the case, though. Different people understand different code differently, and being able to maintain code well, no matter the coding style, means that things like comments go a long, long way.
Anonymous
March 27, 2005
The bug report was entertaining, but in the middle of the first paragraph I was already asking myself "would anybody in War bother reading all of this?". My constructive criticism is "bug reports should read like journalism". IMHO they should use the inverted pyramid structure: put all the most important facts up front and the details later in case anyone wants to check the supporting information. For DCRs and the like there's a template that the Windows division uses that serves the same purpose. Not sure how our process differs from yours, though.
Not that I follow my own suggestion, necessarily. My entries in the db are usually 1) a short explanation of the problem, 2) repro steps, and 3) a completely unedited, often far too long and too wordy for its own good, email thread of all the conversation we had about the bug. My part #3 is bad form for anyone trying to read and understand the bug later. I can only blame my own laziness for this shortcoming.
You probably already do this, but I'd also recommend putting some of that explanation in comments in the code. I know we have some old legacy code that's left us scratching our heads about design decisions - cases where we couldn't find old bugs (possibly because of the retirement of the old bug db) or specs to justify what had been coded, and the dev who wrote the code is long gone.
Note: nobody else who commented seems to have had a problem with the bug's layout. That's a sign to me that I'm being harsh or unreasonable. Please take whatever I wrote with a grain of salt.
Anonymous
March 27, 2005
Drew: In my experience War doesn't read the checkin mail. They read the bug contents and they look at the code changes.
However, developers and others are continuously going over old code and old changelists and trying to figure them out. Now, in that case this sort of info wouldn't be that helpful. This is much more helpful to those on the discussion lists receiving this email, who now know that some major architectural change has happened.
And yes, a lot of the explanation is included in both the code and the tests. While i would like to write code that totally describes itself, sometimes that isn't possible since you're working on code that you've inherited and don't want to change majorly for a simple bug fix. In those cases i will often include a good comment (with an example) that shows why it's necessary to have that code there.
Very, very beneficial for someone glancing across that code in the future.
Anonymous
March 30, 2005
But Cyrus, I thought English was your first language.
Anonymous
April 23, 2005
Cyrus,
IMHO I think you could really improve upon this type of commenting in check-ins. One huge improvement would be to place header information in front of every group of thoughts. It would also be beneficial to put this header information at the top of the page so as to quickly inform the user as to what they can expect to find in the subsequent comments. When someone is reading comments they are generally looking for something and the wordiness of the comments you posted would be detrimental to this process.
That said, the level of detail you provide is very informative, and could end up saving someone else (or even you) a great deal of time down the road (not that I need to point this out to you, but wanted to reiterate the point). Personally I have been inspired by your comments and will look towards improving my own comments on check-ins. Currently I am quite guilty of the "Fixes issue 32283 - Description of issue" cop-out.
Just some thoughts. Very good post.
Anonymous
April 23, 2005
jared: I do try to do this:
i.e. "Removed raw BYTE* signatures from the language service."
Normally there is a one-sentence description like: "fixed overload resolution when using varargs of nullable types" that quickly explains the purpose of the checkin, and then i go in depth based on the topic. Given the immense scope of this change, there was a huge amount to discuss. Normally, it's just a couple of paragraphs and an example or so.