A risk-reward scale for evaluating project practices

So you’re seeking to improve things. You tried to implement a “best practice” or twelve … how did that work for you? I am convinced that applying a well-known practice to a problem will not always produce a well-known, positive result, and it seems I am not alone. In a survey of software development professionals conducted last year by William Money and me, 96% agreed or strongly agreed that practices have objectively different effects under varying situations. (Complete survey results can be seen at https://www.projectpracticeportfolio.com/)

In practice this means the actual value of a best practice isn’t the advertised return but a possible return from within a range of returns. Take the example from my last post: adding a new tool to address testing quality and velocity. It is easy to see that when the practice is done well it can provide a significant improvement in overall product quality; on a reward scale of 1 to 10, I’ll call that an 8. When it is done poorly, it could increase cost or inject delays into the schedule; on the risk scale, this time from -1 to -10, I put that at -6. Now we have a range of potential that is 14 points wide, going from negative 6 to positive 8. The actual value is highly dependent on the specific implementation within the context of the team and on the mixed effect of the combination of practices being put in place.
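To make that arithmetic concrete, here is a minimal sketch in Python; the Practice class and its field names are mine, purely for illustration. Each practice carries a worst-case risk score and a best-case reward score, and the width of its potential range is just the distance between them.

    # Minimal sketch; the Practice class and its field names are hypothetical,
    # used here only to illustrate the arithmetic of a risk/reward range.
    from dataclasses import dataclass

    @dataclass
    class Practice:
        name: str
        risk: int    # worst-case outcome on the 0 to -10 risk scale
        reward: int  # best-case outcome on the 0 to 10 reward scale

        def range_width(self) -> int:
            # Distance between the worst and best possible outcomes.
            return self.reward - self.risk

    new_test_tool = Practice("add a new testing tool", risk=-6, reward=8)
    print(new_test_tool.range_width())  # 14 (from -6 up to +8)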

Before we get into the weeds around how we can use these values, I want to propose a simple, consistent way to describe the points along the range.

If any of you know me, you know I'm not actually a fan of standards. In fact, I have been known to violently oppose the idea of compliance where integration will do. I will, however, concede the point that nearly every attempt at integration requires agreed-upon terms, if for no other reason than to limit semantic arguments. Here, then, is my proposal for a complete Risk / Reward scale for evaluating practices:

RISK
----------------------------------
0 no impact
-1 negligible impact, easily resolved
-2 likely to create additional tasks
-3 scope of issue expands
-4 degrades team morale / communication
-5 requires notable response
-6 increases cost or delays schedule
-7 injects current and future issues
-8 degrades solution quality
-9 critical feature failure
-10 prevents delivery

REWARD
----------------------------------
0 no impact
1 improves poor performance to nominal
2 reduces isolated level of effort
3 broadly improves things
4 improves team morale / communication
5 measurable improvement
6 reduces cost or schedule
7 resolves current and future issues
8 significant improvement in overall quality
9 ensures feature delivery
10 ensures total delivery
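For anyone who wants to put the scale to work right away, here is a minimal Python sketch (the table and function names are mine, purely illustrative) that encodes the scale above as lookup tables, so a recorded score always maps back to the same agreed-upon label.

    # Illustrative sketch only; the table and function names are hypothetical.
    RISK_LABELS = {
        0: "no impact",
        -1: "negligible impact, easily resolved",
        -2: "likely to create additional tasks",
        -3: "scope of issue expands",
        -4: "degrades team morale / communication",
        -5: "requires notable response",
        -6: "increases cost or delays schedule",
        -7: "injects current and future issues",
        -8: "degrades solution quality",
        -9: "critical feature failure",
        -10: "prevents delivery",
    }

    REWARD_LABELS = {
        0: "no impact",
        1: "improves poor performance to nominal",
        2: "reduces isolated level of effort",
        3: "broadly improves things",
        4: "improves team morale / communication",
        5: "measurable improvement",
        6: "reduces cost or schedule",
        7: "resolves current and future issues",
        8: "significant improvement in overall quality",
        9: "ensures feature delivery",
        10: "ensures total delivery",
    }

    def describe(score: int) -> str:
        # Map an integer score on the combined -10..10 scale to its label.
        table = RISK_LABELS if score < 0 else REWARD_LABELS
        return table[score]

    print(describe(-6))  # increases cost or delays schedule
    print(describe(8))   # significant improvement in overall quality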

In the next post I'll show how we can group practices and, by using this scale, remove much (but not all) of the guesswork from selecting and monitoring corrective practices.