

Learning to thrive in times of accelerating change

Technorati Tags: continuous learning, systems thinking, adaptive, change, learning

Over the last couple of years I've often heard about the need for a "sense and respond" capability -- to sense what's happening in our businesses and to respond in a timely, effective way.  I believe these capabilities will be increasingly valuable to us over time. While the phrase has become common in some circles, I worry that the way people understand "Sense and Respond" is not well-enough connected to the way we understand Learning.

This troubles me because it seems to me that the capability to continuously sense and respond is most valuable when it is coupled with the capability to continuously learn. Why? Because I think learning can be our most effective method for choosing our "responses" over time to achieve the things that matter to us.

The notion that "sense and respond" is something that we do *over time* is important. Unless we are sensing and responding only once, I think the sense and respond talk is really about an iterative loop of "sensing --> responding --> sensing --> responding --> sensing --> responding --> sensing." This is ok, but I think modestly expanding this mental model to include how we can most effectively choose our "responses" over time leads to a model that may be more helpful for us: a "Respond --> Sense --> Learn" loop (in that order, too).
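To make the shape of that loop concrete, here is a minimal sketch in Python. All of the names here (run_loop, respond, sense, learn) are hypothetical illustrations, not a real API; the point is only the ordering of the three steps and the fact that beliefs are revised on every cycle.

```python
def run_loop(respond, sense, learn, beliefs, steps):
    """Iterate the Respond --> Sense --> Learn cycle, revising beliefs each time."""
    for _ in range(steps):
        action = respond(beliefs)                  # respond, based on current beliefs
        observation = sense(action)                # sense what happened
        beliefs = learn(beliefs, action, observation)  # learn: revise beliefs
    return beliefs

# Trivial illustration: "beliefs" are just a running count of observations.
final = run_loop(
    respond=lambda b: "probe",
    sense=lambda a: 1,
    learn=lambda b, a, o: b + o,
    beliefs=0,
    steps=3,
)
print(final)  # -> 3
```

The only structural claim the sketch makes is the one from the paragraph above: the response is chosen first, sensing follows it, and learning closes the loop before the next response.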

I say this because my hope is that our "responses" in the sense --> respond loop are neither random nor statically defined. Instead, I hope that each decision we make about our next response is informed by our environmental conditions, principles, and, importantly, an appreciative reflection on the previous response and the sensing that followed it in the sense --> respond loop. If this concept seems fuzzy, perhaps it will help to think about an example that compares what I'll naively call "traditional" electronics with machine learning. (One caveat: this is based on my understanding of machine learning principles; I do not pretend to be an expert in this complex subject!)

In the traditional electronics environment, you often have a sensor, a controller, and an actuator. The sensor, of course, "senses." The actuator "responds." What does the controller do? It compares sensed values to pre-defined logic that determines response/no-response decisions, and perhaps even decisions about attributes of the response (how hard, how fast, how long, etc.). Learning is not really a part of the model. The assumption in this model is that the controller already knows everything it needs to know to behave optimally.
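A "traditional" controller of this kind can be sketched in a few lines. Everything here is a hypothetical example (the setpoint, the thresholds, the response names are all invented for illustration); what matters is that the logic is fixed in advance and never changes, no matter what outcomes follow.

```python
SETPOINT = 21.0  # hypothetical desired temperature

def controller(sensed_temp):
    """Pre-defined logic: compare the sensed value to fixed rules.
    No history is kept; the controller is assumed to already know
    everything it needs to know to behave optimally."""
    if sensed_temp < SETPOINT - 0.5:
        return "heat_on"       # respond: actuate the heater
    elif sensed_temp > SETPOINT + 0.5:
        return "heat_off"      # respond: shut the heater off
    return "no_response"       # within tolerance: do nothing

print(controller(19.0))  # -> heat_on
print(controller(23.0))  # -> heat_off
```

Note that the same sensed value always produces the same response, forever; that is exactly the "learning is not really a part of the model" property described above.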

We sometimes follow this model in the real world, too, leaving learning out of the process completely because we believe we already know everything we need to know on a topic. Sometimes this assumption is correct; sometimes it is not. The confounding difficulty is that it can be very hard to know when our knowledge is insufficient, because there is often very poor traceability between our actions and our outcomes. In other words, it's sometimes quite difficult to tell whether results, negative or positive, are due primarily to our own actions or to something else.

Now consider electronics in the context of a "machine learning" example. There are still sensors that "sense," and actuators that "respond." And, of course, there is still a controller that makes decisions by some pre-defined logic. So far, it sounds the same. The key difference is that the logic is not just business rules ("if x>y, do z"). Instead, the logic includes evaluating a series of "sensed value-response-sensed value" triplets, and uses the historical relationship between previous responses and observed values that follow to make decisions about the next response. The beauty of this approach is that as the environment impacting the relationship between the "sensed value-response-sensed value" triplet changes over time, the responses will also change over time. In other words, the machine "learns."

If machines can do it, why can't humans? As you might guess, we can, and in many cases we already do. A prominent example is the scientific method: we formulate hypotheses, test them, and then, based on the results of the tests, confirm or revise our hypotheses -- never definitively "proving" them, always remaining open to the chance that the next trial will require improvement in our hypotheses. (If you've seen vague references to "black swans" recently, this idea of holding beliefs tentatively is the core concept. See the first couple of chapters of Karl Popper's The Logic of Scientific Discovery for more. There's also a new book out, which I've not yet read, that uses the black swan example as a central organizing concept.)

This rigorous commitment to learning is not something that we need just limit to scientific experiments and electronics. It can be used in nearly any context where we need to make a decision, or when what we believe about the world matters. To me this is the essence of what Learning can be in its highest sense: the fundamental process by which we evolve our beliefs about the world.

Without this commitment to rigor in how we perceive and evolve our beliefs, we are doomed to increasingly poor decision-making as change accelerates. Even if you start with perfect decision criteria, the faster the rate of change, the greater the probability that the next decision from an "unrevised" criterion will produce something other than the results we really want to achieve.

Of course, this is just my hypothesis and I welcome any feedback you may want to share!

[Note: This is a revised version of an article I published in a private forum about a year ago.  Seems like a good time to give it a little broader reach.]
