Paper Summary #1: Efficient and Precise Dynamic Impact Analysis Using Execute-After Sequences

What's the name of the paper?

Efficient and Precise Dynamic Impact Analysis Using Execute-After Sequences [PDF]

Who wrote it?

Taweesup Apiwattanapong, Alessandro Orso, and Mary Jean Harrold at Georgia Tech.

Background:

Impact analysis is a pretty interesting area. The basic idea is to determine what parts of the source code may be affected by a change to a given portion of the source code. A practical application would be figuring out which tests to run after the source code has been modified, in order to reduce the chance of regressions. It could also serve as a metric for assessing the risk of making a change.

In the past, static analysis techniques (interprocedural data-flow analysis, etc.) have been applied to derive impacted code directly. This doesn't work too well, because the impact sets tend to be 90% of the source code. Not very useful. Dynamic techniques instead monitor which code is actually affected over some sample program executions. Existing techniques include walking execution traces (PathImpact) and using coverage (CoverageImpact). The problem with walking traces is that they get way too large for non-trivial programs. With coverage, the impact set includes any method executed by a test that also executes the changed method. Coverage tables are quite small, but they lack precision.
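To make the coverage approach (and its imprecision) concrete, here's a minimal sketch; the coverage table, test names, and method names are made up for illustration:

```python
# A minimal sketch of coverage-based impact analysis (names are hypothetical).
# Given a per-test coverage table, the impact set of a changed method is the
# union of all methods covered by any test that also covered the change.

def coverage_impact(coverage, changed_method):
    """coverage: dict mapping test name -> set of executed method names."""
    impact = set()
    for test, executed in coverage.items():
        if changed_method in executed:
            impact |= executed  # every method in the test counts as impacted
    return impact

coverage = {
    "test_checkout": {"main", "cart.total", "tax.rate", "log.write"},
    "test_login":    {"main", "auth.check", "log.write"},
}
print(coverage_impact(coverage, "tax.rate"))
# Contains main, cart.total, tax.rate, log.write -- including methods that ran
# *before* the change in the trace, which is why coverage lacks precision.
```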

This paper's contribution:

The authors propose a new type of dynamic analysis based on the execute-after relationship, with storage requirements comparable to coverage-based methods and precision similar to path-based techniques. For two methods, X and Y, X executes after Y if Y calls X, Y returns into X, or Y returns into some function that then calls X. They instrument the program to collect those events, but they only need to track the first and last event for each method. If the first event of Y occurs before the last event of X, we know that X executes after Y.
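Here's a minimal sketch of that first/last-event bookkeeping; the event-recording hook and the simulated call sequence are hypothetical stand-ins for the paper's actual instrumentation:

```python
# A minimal sketch of execute-after tracking (hook names are hypothetical).
# One global clock; for each method we keep only the timestamps of its first
# and last events, so storage is O(#methods) regardless of trace length.

clock = 0
first = {}  # method -> timestamp of its first enter/returned-into event
last = {}   # method -> timestamp of its last enter/returned-into event

def record_event(method):
    """Called at every method entry and whenever control returns into it."""
    global clock
    clock += 1
    first.setdefault(method, clock)  # only the first event sets this
    last[method] = clock             # every event updates this

def impact_set(changed):
    """X executes after `changed` iff X's last event follows `changed`'s first."""
    return {m for m in last if last[m] >= first.get(changed, float("inf"))}

# Simulate main() calling y(), y() returning, then main() calling x():
record_event("main")  # main entered
record_event("y")     # main calls y
record_event("main")  # y returns into main
record_event("x")     # main then calls x
print(impact_set("y"))  # {'main', 'y', 'x'} -- both main and x execute after y
```

The appeal is that storage stays proportional to the number of methods rather than the trace length, which is how the technique gets coverage-like space costs with trace-like precision.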

Discussion:

This is a pretty cool paper. I wonder how you could unify these dynamic measures with other types of static analyses to make a stronger measure of risk. For instance, say your static analysis notices that you are doing something inherently risky, like changing a write to a global variable or modifying a virtual method in a base class that is inherited by many other classes. You run dynamic impact analysis over some set of tests and realize the impact is high. It would seem like those two measures should positively reinforce each other and raise the DANGER DANGER flag.

One assumption in dynamic impact analysis is that you have some set of tests you can run to compute the analysis. How much of the accuracy of this technique depends on the quality of the test coverage? Couldn't you make a really bad mistake if a change impacts a part of your code that your tests don't exercise? Doesn't that raise a chicken-and-egg problem for using these measures to decide which parts of the code to retest?
