Warped Weaving
Aspect-oriented programming uses a technique called 'weaving' to merge together and/or modify portions of functional or procedural code, producing code that is both highly efficient and easy to understand in cases where normal procedural composition would have prohibited a clean unification of these separate concerns. The general idea is that you, the programmer, get to write snippets of code called 'aspects' that do particular jobs by making modifications to the rest of the code tree (usually during compilation). This might sound like a brand new concept, but it is actually something that already happens inside compilers; the only difference here is that now you get to define your own.
And that's really the crux of the problem. On the surface, all of this sounds like a huge benefit: you can write your own style of transformations and/or optimizations of the code tree to achieve ends people have not even imagined yet. Unfortunately, it can quickly go awry when you try to combine multiple aspects together.
A compiler does this kind of 'weaving' during separate phases, often referring to them as rewrites of the tree. Validation, optimization, transformation: these are all steps that may in fact rewrite the semantic program tree before code generation. Whenever a language introduces a concept that is no longer one-to-one with the underlying processor architecture, the compiler must decompose that semantic node into a series of more primitive instructions that the processor can understand.
But let's ignore the CPU for a moment and imagine a perfect world where the language has already been parsed and bound into its purest semantic form, one that can be analyzed, inspected, and modified by other code. We can imagine a set of operations that we might want to apply against this tree: patterns to match and rewrites to occur that still leave the tree in a fully semantic state, swapping some semantic nodes for others; basically a closed semantic set with onto mappings. Even doing this, we still find ourselves in a state where our rewrites exchange some nodes for others, usually exchanging single nodes for a variety of equivalent decomposed ones.
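As a sketch of what such a closed, semantics-preserving rewrite might look like, here is constant folding over a hypothetical tuple-based mini-AST (the node shapes `('add', ...)` and `('const', n)` are invented for illustration, not any particular compiler's representation):

```python
# Hypothetical mini-AST: nested tuples like ('add', lhs, rhs) or ('const', n).
def fold_constants(node):
    """Rewrite ('add', const, const) into a single ('const', n) node.
    The program's meaning is preserved, but the tree's shape changes."""
    if isinstance(node, tuple) and node[0] == 'add':
        lhs, rhs = fold_constants(node[1]), fold_constants(node[2])
        if lhs[0] == 'const' and rhs[0] == 'const':
            return ('const', lhs[1] + rhs[1])
        return ('add', lhs, rhs)
    return node

tree = ('add', ('const', 2), ('add', ('const', 3), ('const', 4)))
print(fold_constants(tree))  # ('const', 9)
```

The output tree computes the same value as the input, yet three nodes have collapsed into one; there is no way to recover which additions originally appeared where.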
This guarantees that after one rewrite occurs, the overall semantics may remain the same, yet the tree itself has changed in physical ways that are nearly impossible to reverse; the original form cannot be reconstructed. That means any rewrite that follows sees a different tree than the first did. Of course, you would have expected this, and actually intended it. However, if the second rewrite is looking for particular nodes and/or patterns now obscured by the operation of the first rewrite, it is unlikely to perform as expected, if at all.
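A minimal sketch of that failure mode, again over an invented tuple-based mini-AST (the `('incr', x)` node and both procedures are hypothetical): one rewrite decomposes a high-level node into primitives, and a later pass that matches on the high-level pattern no longer finds anything.

```python
def desugar_incr(node):
    """First rewrite: decompose ('incr', x) into the more primitive
    ('assign', x, ('add', x, ('const', 1)))."""
    if isinstance(node, tuple):
        if node[0] == 'incr':
            return ('assign', node[1], ('add', node[1], ('const', 1)))
        return tuple(desugar_incr(c) if isinstance(c, tuple) else c for c in node)
    return node

def count_incrs(node):
    """Second pass: matches on the ('incr', ...) pattern."""
    if not isinstance(node, tuple):
        return 0
    hits = 1 if node[0] == 'incr' else 0
    return hits + sum(count_incrs(c) for c in node if isinstance(c, tuple))

tree = ('block', ('incr', 'x'), ('incr', 'y'))
print(count_incrs(tree))                # 2: both patterns are visible
print(count_incrs(desugar_incr(tree)))  # 0: decomposition erased the pattern
```

Nothing here is wrong in isolation; it is only the ordering of the two passes that makes the second one blind.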
This means that in any system expected to perform multiple rewrites, the order of execution of these rewrites is paramount to their operating effectively. You cannot, in general, develop a rewriting procedure that operates in isolation from the rest of the system. You must consider the rewrites that may have occurred before and those that are yet to come. A compiler designer working with only a few built-in rewrites has an immensely difficult time fitting these together. Can you imagine how nearly impossible it would be to allow arbitrary rewriting procedures to be strung together?
Of course, you would have to limit your rewrites to those that make only orthogonal, non-disruptive changes to the tree: adding a node here or there while preserving the physical tree at the same level of node granularity. Maybe that's why, when people talk about AOP, they always point to trivial examples like method call logging. Certainly, that's a rewrite that only adds a few orthogonal instructions.
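A minimal sketch of why logging is the safe example, using a plain Python decorator as a stand-in for a woven aspect (the `logged` and `area` names are invented for illustration): the aspect only adds behavior at the call boundary and never rewrites the body, so it cannot obscure any pattern another rewrite might be looking for.

```python
import functools

def logged(fn):
    """A 'logging aspect' applied as a wrapper: it adds instructions around
    the call without touching the structure of the function body."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"enter {fn.__name__}{args}")
        result = fn(*args, **kwargs)
        print(f"exit  {fn.__name__} -> {result!r}")
        return result
    return wrapper

@logged
def area(w, h):
    return w * h

area(3, 4)  # logs the call and returns 12
```

Because the original body is preserved node-for-node, any number of such orthogonal aspects can be stacked without the ordering problems described above.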
Matt
Comments
A couple of comments:
http://jroller.com/page/rickard/20040728
- Anonymous, July 27, 2004
Maybe that's also why call interception seems to be more widely used to achieve the same effects (albeit less efficiently)?