Micro optimization or just good coding practice?

This is a common topic and I thought I’d write up some thoughts I have on it.  In fact, I was just working with a customer on improving their code reviews and what they should be checking for, and the question arose: “Should performance be targeted during a code review?”  It’s an interesting question.  I’m a big fan of performance testing early and often rather than waiting until the end of a dev cycle, but code reviews, IMO, should focus on logic, maintainability and best practices.  I may be in the minority, and if you look around the web, you’ll see varying opinions on the topic.  For example, one of the PAG articles states:

“Code reviews should be a regular part of your development process. Performance and
scalability code reviews focus on identifying coding techniques and design choices
that could lead to performance and scalability issues. The review goal is to identify
potential performance and scalability issues before the code is deployed. The cost
and effort of fixing performance and scalability flaws at development time is far
less than fixing them later in the product deployment cycle.

Avoid performance code reviews too early in the coding phase because this can restrict
your design options. Also, bear in mind that performance decisions often involve
tradeoffs. For example, it is easy to reduce maintainability and flexibility while
striving to optimize code.”

As I mentioned above, I am a huge proponent of performance analysis and optimization at multiple points throughout a typical product development cycle.  I can say with a fair amount of certainty that if you don’t build performance reviews into your project plan at regular intervals, you will hit a problem (or multiple problems) in production and have to refactor some code.

Circling back to the original question, though: are code reviews the place for performance analysis?  Typically, I’d recommend using them to squash little bits of bad code, but maintainability and code cleanliness should be first and foremost in your mind.  That said, if you see a pattern that you know can be improved, by all means bring it up.  What’s an example of that type of situation?

Let’s take a look at predicates, specifically their usage in the Find method of a List<T>.  If you’re not aware, the Find() method performs a linear search through the items until it finds the first match, then it returns.  This makes it an O(n) operation, where “n” is the number of items in the list.  Basically, this means that the more items you have in the list, the longer a Find() operation can potentially take.  So, if we slam about 10,000 elements into a list:

private static List<Data> LoadList()
{
    List<Data> myList = new List<Data>();

    for (int i = 0; i < 10000; i++)
    {
        myList.Add(new Data() { Id = "Id" + i.ToString(),
                                Value = "Value" + i.ToString() });
    }

    return myList;
}
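As an aside, the Data class itself isn’t shown here.  A minimal sketch that matches the usage above (just two string properties; the real class may well have more) would be:

// Minimal sketch of the Data class assumed by the snippets in this post -
// only the Id and Value properties are needed for these examples.
public class Data
{
    public string Id { get; set; }
    public string Value { get; set; }
}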

Then, if someone wants to return the instance of the Data class that contains an Id of, say, “Id9999”, they might write the following code:

static Data Find1(List<Data> myList, string idToFind)
{
    // ToLower() runs on both strings for every element the predicate visits.
    Data data = myList.Find(s => s.Id.ToLower() == idToFind.ToLower());

    return data;
}

Now, keep in mind that the predicate is executed for each element in the List<T> until it finds the instance you care about.  With that in mind, we would probably want to hoist the “idToFind.ToLower()” call out of the predicate, since that value isn’t changing.  So, you might end up with something like this:

static Data Find2(List<Data> myList, string idToFind)
{
    // Lowercase the search value once, outside the predicate.
    idToFind = idToFind.ToLower();

    Data data = myList.Find(s => s.Id.ToLower() == idToFind);

    return data;
}

Another route you may want to go is to use the string.Equals(…) overload that takes a StringComparison, which avoids creating lowercased copies of the strings altogether.  That would look like:

static Data Find3(List<Data> myList, string idToFind)
{
    // Case-insensitive comparison without allocating lowercased strings.
    Data data = myList.Find(s => string.Equals(s.Id,
                                               idToFind,
                                               StringComparison.InvariantCultureIgnoreCase));

    return data;
}

Fact is, the last method IS the fastest way to perform the operation. 
I can say that without even needing to run it through a profiler.  But if you
don’t believe me…  

FunctionName                                                   Elapsed Inclusive Time
...Find1(System.Collections.Generic.List`1<....Data>,string)   6.34
...Find2(System.Collections.Generic.List`1<....Data>,string)   4.47
...Find3(System.Collections.Generic.List`1<....Data>,string)   3.65
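If you want to sanity-check numbers like these yourself without a profiler, a rough Stopwatch-based harness is enough to show the relative difference.  This is just a sketch: it assumes the LoadList, Find1, Find2 and Find3 methods above are in scope, the iteration count is arbitrary, and “Id9999” is picked so the search scans nearly the whole list.

// Rough timing sketch - requires using System, System.Collections.Generic and System.Diagnostics.
// Assumes LoadList, Find1, Find2 and Find3 from the snippets above are in the same class.
static void TimeFinds()
{
    List<Data> myList = LoadList();
    string idToFind = "Id9999";   // near the end of the list, so Find() scans almost everything
    const int iterations = 1000;  // arbitrary repeat count to make the timings measurable

    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++) { Find1(myList, idToFind); }
    sw.Stop();
    Console.WriteLine("Find1: " + sw.ElapsedMilliseconds + " ms");

    sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++) { Find2(myList, idToFind); }
    sw.Stop();
    Console.WriteLine("Find2: " + sw.ElapsedMilliseconds + " ms");

    sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++) { Find3(myList, idToFind); }
    sw.Stop();
    Console.WriteLine("Find3: " + sw.ElapsedMilliseconds + " ms");
}

The absolute numbers will vary from machine to machine, but the ordering should match the profiler output above.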

That’s something I might put into the category of a micro-optimization AND just good coding practice.  But is this something that should be caught during a code review?  I’d say “yes”, because the reasoning is straightforward and none of the changes really hurts the maintainability or readability of the code.

 

So, I’d tag this as a good coding practice.