Lewis x Strategy: Is Hindsight 20/20?

“You better have a short memory if you want to get anywhere in this life.” I watched as these words grunted out of the chapped, hardened lips of the former Hell’s Angel sitting across from me. His name was Charles. Young blondes in party skirts paraded past us, clutching the arms of recently met male companions to steady themselves on stiletto heels as they passed through the narrow passageway between our alfresco table and the street. I glanced back at Charles to see him still stone-faced after that last statement; he wouldn’t even pick up the Lone Star he had just begged for until he had surveyed my face long enough to calculate that the words had sunk in. They had, and somehow they seemed more salient to me now, knowing that I could be a few months away from the same homeless fate. His face relaxed, and he chuckled at me as he took a swig of the Lone Star. It was then that I realized two things at once: first, he believed his own words despite where they had landed him; second, I was taking career advice from a homeless man on 6th Street. Well, maybe not taking it, but listening intently. We were outside at Halcyon coffee shop in 30° weather, and I had to put an end to our visit; Charles had a jacket and scarf, while I was shivering over my Americano in a button-down. He had outsmarted me on one level, at least. But could he have been right? Do we over-rely on our perceptions of past events when making decisions that impact our future?

I have been trying to wrap my mind around this issue of hindsight, and I have serious doubts about the old adage that it’s 20/20, an adage that also implies the past should be examined in order to craft a more opportune present and future. So I also have doubts about whether hindsight is worth our time. It seems logical that we should learn from mistakes. But can the past be trusted to teach us what we should learn? Or is it consuming our time and offering a false sense of understanding? How should the strategist deal with the past in order to maximize the future?

Perhaps it depends on the situation. Maybe certain lessons are clear-cut. For example, when learning to ride a bike, you might have fallen off because you didn’t pedal fast enough or your foot slipped off the pedal. Correcting these errors by looking at what happened will obviously help you ride the bike farther next time. But what about complex team environments where what happened is not recent? Think about a quarterly strategy meeting: the team will look at numbers and talk about observations, but how much of what actually contributed to success or failure was lost in translation over the three-month measurement period?

Consider your own experiences—my guess is that most people are dealing with a messier decision set, and a messier past to make sense of than whether their foot slipped off the pedal. Isn’t there a potential for the human mind to unknowingly make false attributions? Can these human inputs ever tell the full story? If they can, is what we’ve learned really creating better alternate futures?

It seems causality is an important principle to consider. When we examine the past strategically, we are examining the factors and strategic decisions that resulted in our current situation. What we are effectively doing is determining causality. Typically, we point to things we believe we have control over: marketing budget, plant utilization, and so on. An increased marketing budget resulted in increased sales, and so forth. But isn’t it possible we are relying on a false causality simply to make sense of things? Did the increased marketing budget increase sales because of the raw dollars put in, or because of the confidence boost the sales team got from knowing that Marketing had their back? The relevance of future plans hinges on this distinction. If the increase was actually caused by the new confidence level, you can explore more cost-effective ways to boost the sales team’s confidence than a Super Bowl ad. Forgive the simple example, but this intangible territory lies behind every human input and is therefore, in any discussion of causality, more fundamental than whatever other metrics you have been toying with.
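The confounding at work here can be sketched in a few lines of code. In this toy Python simulation, the mechanism and every number are my own assumptions, purely for illustration: sales respond only to the sales team’s confidence, and the marketing budget matters only insofar as it raises that confidence.

```python
import random

random.seed(42)

# Toy sketch of the marketing-budget example. All numbers are
# illustrative assumptions, not data: each of 1,000 leads converts
# with probability equal to the sales team's confidence.
def simulate_sales(confidence, n_leads=1000):
    """Count conversions when each lead closes with P(close) = confidence."""
    return sum(random.random() < confidence for _ in range(n_leads))

baseline = simulate_sales(confidence=0.50)     # an ordinary quarter
with_budget = simulate_sales(confidence=0.80)  # ad spend lifts confidence to 0.8
pep_talk = simulate_sales(confidence=0.80)     # a cheaper lift to the same 0.8

print(baseline, with_budget, pep_talk)
```

Looking only at budget and sales, the budget appears causal; but any cheaper intervention that produces the same confidence lift produces the same sales, which is exactly the distinction the future plan hinges on.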

I don’t know that an empirical study could ever deal with the inherent intangibles of this dilemma, but I came across an interesting parallel in C.S. Lewis’s essay, The Efficacy of Prayer. It’s actually very direct, considering he is using human requests as a parallel to requests of the divine in a prayer setting:

We make requests of our fellow creatures as well as of God: we ask for the salt, we ask for a raise in pay, we ask for a friend to feed the cat while we are on our holidays, we ask a woman to marry us. Sometimes we get what we ask for, and sometimes not. But when we do, it is not nearly so easy as one might suppose to prove with scientific certainty a causal connection between the asking and the getting. Your neighbor may be a humane person who would not have let your cat starve even if you had forgotten to make any arrangement. Your employer is never so likely to grant your request for a raise as when he is aware that you could get better money from a rival firm and is quite possibly intending to secure you by a raise in any case. As for the lady who consents to marry you—are you sure she had not intended to do so already? Your proposal, you know, might have been the result, not the cause of her decision. A certain important conversation might never have taken place had she not intended that it should.

Thus in some measure the same doubt that hangs about the causal efficacy of our prayers to God hangs also about our prayers to man. Whatever we get we might have been going to get anyway. But only, as I say, in some measure. Our friend, boss, and wife may tell us that they acted because we asked; and we may know them so well as to feel sure, first that they are saying what they believe to be true, and secondly that they understand their own motives well enough to be right. But notice that when this happens our assurance has not been gained by the methods of science. We do not try the control experiment of refusing the raise or breaking off the engagement and then making our request again on fresh conditions. Our assurance is quite different from scientific knowledge. It is born out of personal relation to the other parties; not from knowing things about them but from knowing them.

Is it possible that in analyzing the past we are losing the same ‘moment’ as in our discussion of Kierkegaard and competition? Maybe, but my message isn’t that performance evaluations or strategic metrics shouldn’t be considered, only that in order to make a future strategy relevant, we must master the intangible territory of human imperfection that actually causes success or failure. With that in mind, I believe the solution lies in examining your company’s approach to knowledge sharing. How efficient is your knowledge market? Does tacit knowledge interact with and get factored into your strategic decisions? Do you know the people around you well enough to understand the messy territory of human beliefs, motivations, impulses, desires, and fears that underpins every strategic event and every subsequent recommendation that impacts your success? Your own colleagues? Your partners? Your competitors?

Here’s a slightly more complex example: Matt Weilert and I are changing the way risk is measured and managed. It is currently a top-down process in which a few managers dictate their top risks and then dictate a process to manage them. Obviously, that hasn’t led to disaster prevention. If it had, millions of gallons of oil wouldn’t have poured into the Gulf a few months ago. Strategy, just as much as risk management, needs to deal with the intricacies and imperfections of humans. In one of Matt’s first test engagements, he met with tremendous success in saving a GM plant money, and in saving it from the mistaken notion that its receiving-line problem was technology driven. The receiving conveyor belt was breaking down daily at 2 p.m. Another consultancy quickly identified a problem with the belt: it was literally breaking under the weight of the daily incoming shipment. A technological fix was quickly implemented. Guess what happened at 2 p.m. the next day? It broke again. Matt fixed it because he started acquiring knowledge about what was actually happening. It was a people problem. The night manager was irked about having to deal with a new software implementation, and to express his dissatisfaction, he scheduled all incoming deliveries for 2 p.m. What cost GM thousands of dollars in lost time, productivity, and consulting fees could have been avoided if anybody in the factory had known the manager well enough to suspect what was really happening.

What about you? Is your hindsight 20/20? Do you have your finger on the pulse of what’s happening at all levels of your organization? Or will you continue on what seems like a viable path, attributing causality to whatever seems logical? Maybe time spent planning for the future would be more effective if we focused on how to efficiently factor in human imperfections, rather than coldly analyzing past events.

Texts Referenced: The Efficacy of Prayer, from The World’s Last Night and Other Essays



One thought on “Lewis x Strategy: Is Hindsight 20/20?”

  1. An excerpt from David Apgar’s ‘Relevance’ discusses the issue further in a business context:

    “The idea that specificity and relevance drive the value of information helps explain how the information revolution’s economics may have undermined it. Technologies like microprocessors and optical fiber have basically cut the cost of specificity. Whether you run a division, compile financial reports, or manage service operations, you’ve been able to get more data faster. The relevance of those data has mattered less and less because increasingly you’ve been able to compensate with volume. And as we’ve grown used to less relevant data, I suspect we’ve formed two bad habits.

    The first bad habit is requirements-based analysis. Instead of devising a specific strategy to meet a financial goal like a sales or profit target, we’re increasingly tempted to enumerate requirements for meeting the goal. Requirements might include minimum levels of staffing, funding, technology functionality, or process performance. Budget planning increasingly follows this course, working backward from profit targets to sales requirements and cost ceilings. And executives tend to use balanced scorecards, one of the most widespread management tools, as menus of common requirements.

    In this book, I liken the necessary conditions for meeting a goal to a list of ingredients. The comparison helps explain the appeal of requirements; it’s as easy to find metrics for them as it is to determine whether you have all the required ingredients for a dish you’re cooking for dinner. But just as you can know all the ingredients for a dish without having a recipe for it, you can meet every conceivable requirement for a goal without knowing how to achieve it. Simply put, requirements are not strategies. And unlike strategies, there’s little to learn when they fail to work. That may be another key to their appeal: you can be right about requirements even if a goal proves elusive. And it leads to a second bad habit: our growing reliance on red herrings.

    Red Herrings are events that appear to confirm your plans but in reality are merely consistent with them. You might think customers will prefer cell phones with internet browsers, for example, and find that they purchase more of a model that includes them. But if that model is also much easier to use than its predecessors, the increase in customer uptake may be a red herring as far as your assumption about browsers is concerned. The new model’s results, in other words, are not necessarily relevant to expectations about cell-phone browsers. As our enterprise planning systems accumulate more and more data of limited relevance to our plans, red herrings are bound to proliferate.

    That’s fine as long as you recognize them. The trouble is that you may think they confirm your strategy. Worse, you may be tempted to build a strategy largely out of requirements and irrelevant metrics–red herrings–to test its more explicit prescriptions. Experience won’t teach you much about such a strategy: events are unlikely to prove the requirements wrong, and the red herrings are unlikely to challenge the explicit prescriptions.

    Our declining ability to learn from experience as we accumulate conflicting data marks the end of the information revolution. Hastening its close is a characteristic style of performance analysis I call the analyze-execute system. It has three tenets: you can reduce any account of performance to a basic set of facts, those facts imply a correct strategy, and so finding the right strategy reduces to collecting enough facts. These tenets contradict the way we formulate and test hypotheses in our conduct of science—scientists rarely just collect facts—and yet we embrace them in business and government.

    Why do we do that? Perhaps it’s not just that we want to be right—which is less likely if you lay out explicit strategies that events might prove wrong and track highly relevant metrics that will tell you as soon as they do. Perhaps it’s that we hope we’ve hit on an objective way of looking at our businesses and the world around us that captures the way they really are. We distrust the filter of our point of view. But the question remains: Why do we want to be right in such a metaphysical way rather than just achieving a goal?

    The philosopher Richard Rorty offered the controversial explanation that we pursue a kind of objectivity beyond mere public evaluation because we fear we’ll be forgotten. He wrote, “The picture of a common human nature oriented towards correspondence to reality as it is in itself comforts us with the thought that even if our civilization is destroyed, even if all memory of our political or intellectual or artistic community is erased, the race is fated to recapture the virtues and the insights and the achievements which were the glory of that community.”

    Rorty may have put his finger on a profound reason that we’re not content with making up business strategies that merely work; we want them to be somehow true for all time and in all languages. There’s a big practical advantage, however, to my proposal that cheap information has tempted us to neglect relevance and led us into some bad habits. It means you may not have to develop a view on the fate of the world to learn from experience despite the information overload our organizations produce. Paying close attention to relevance may be enough.”
