Theories of Action: Disciplines for Measured Change

This guide describes four foundational approaches to organizational learning. They span the strategic, decision-making, process, and interpersonal facets of the organizational system. They are all actionable; that is, they produce knowledge that directly guides action and decision-making. While successful in their own domains, they remain barely known beyond them. Their relative obscurity is unfortunate, as many less effective approaches have become widespread in organizational learning, diluting the field’s practical value.

Beyond their focus on actionable knowledge, these approaches share several fundamental characteristics. They all:

  • Focus on the system and how it shapes individual performance, rather than examining individual or project performance in isolation

  • Use concrete reflective artefacts. Depending on the approach these may be referred to as maps, cases, process behaviour charts, or simulations; all are techniques for tracking the actual in comparison with the ideal, or outcomes in comparison with intentions

  • Are capable of demonstrating, and usually do demonstrate, a gap between what people are intending and what their system is producing

  • Address cognitive biases that are either hard-wired or socially pervasive, which often reveals previously unseen counter-intuitive dynamics in the system

  • Demand client ownership and responsibility for the issues at hand and their resolution; nothing can be done ‘to’ the client system but only ‘with’ the client system

Note that I use examples to make the methods more understandable, but this risks funnelling each approach into narrow use-cases. In practice, they extend far beyond their illustrative case—the Vanguard method applies well beyond call centres, Strategy Dynamics beyond HR planning, Applied Information Economics beyond IT investments, and Action Science beyond interpersonal disputes in social services. Future posts will explore this broader applicability and show how these approaches complement and reinforce each other in practice.

Vanguard Systems Approach

John Seddon has spent the last four decades developing an approach to customer- and client-facing services that has the somewhat unhelpfully generic name of “systems thinking”. Sometimes it is presented as the ‘Vanguard approach’, with reference to his consulting company, Vanguard. It is heavily influenced by W. Edwards Deming’s quality approach and Taiichi Ohno’s Toyota Production System (TPS). But unlike the myriad other derivatives of Deming’s and Ohno’s work, Seddon has convincingly translated the sense, not just the form, of their insights to service and knowledge-based contexts.

The Call Centre Paradox

Consider a familiar scenario: the call centre. An executive whose remit included a call centre service to respond to client queries described their process design as follows:

We begin by splitting the calls along functional lines, prompting the caller to “select 1 for sales, 2 for outages” and so on. This front-end functional splitting via Interactive Voice Response (IVR) reduces average handling time (AHT) by pre-sorting queries into specialist queues. It also allows our staffing models to match forecast call volumes by category. We then set targets for each of these specialist units based on talk time. The aim is to motivate the staff to solve the client’s problems quickly. We have quarterly reviews that analyse call data and reward frontline staff according to their efficiency (time on task divided by calls resolved).

This description is unlikely to surprise most readers, nearly all of whom will have spent time at the other end of such a system. The design appears logical: functional specialisation should improve efficiency. Yet the experience of this kind of system, from the client/customer point of view, is often torturous. This reveals a fundamental problem in how we usually measure organizational performance. As targets are hit and bonuses paid, one would expect the client experience to improve. But the opposite is often the case.

What’s going wrong? Each operator faces a dilemma: they want to help but customers with complex needs, or whose issues span functional boundaries, take more time and reduce the operator’s efficiency metric, upon which bonuses depend. To reach their targets they pass callers to different operators—“that’s really a sales issue, let me transfer you to someone who can help.” From the perspective of the set targets, everything is going well. From the customer’s perspective, their problem remains unresolved. The customer calls again the next day, perhaps visits a physical office, or writes an email. Each contact gets treated as a fresh interaction, even when CRM systems dutifully log the history. The customer’s journey toward resolution becomes a series of disconnected episodes rather than a coherent process.

Beyond Traditional Management Thinking

To industry insiders, this might appear to be a straw man argument. They could correctly point out that modern contact centres have moved toward multi-skill, blended, and intent-based routing because pure functional segmentation is too rigid for variable and changing demand mixes. Explicitly or implicitly, fragmentation is now seen as a design limitation in many cases1. But this would be to miss the point that the logic behind service approaches has changed little and is inherently antagonistic to organizational learning. Take the survey of the call centre literature by Aksin, Armony & Mehrotra (2007)2 as an illustration. While it identifies many of the issues described above, such as ‘callbacks’ (customers calling back because their problem was not fully resolved) and ‘retrials’ (customers having to call again after no or minimal contact), it treats these mainly as variables inside performance/queueing models rather than as prompts for systemic redesign, or through an organizational learning lens. The emphasis is on how to handle incoming contacts (forecasting, staffing, routing, skill configuration, balancing efficiency vs quality proxies); only tangentially does it approach the question of why those contacts exist.

The Vanguard systems thinking approach directly focuses on the theory-in-use of the organization (how work actually works) versus espoused theory (how work’s aims and processes are officially described)3. It shifts from the management point of view to the end user’s perspective. Many traditional problems dissolve because they result from a management ideology that fails to understand the organization as a system serving clients’ end-to-end needs.

Core Principles of the Vanguard Approach

Outside-in Perspective

The approach begins with understanding customer demand and identifying purpose from the customer’s point of view. The process starts with ‘check’: studying demand, measuring capability (the end-to-end time taken to meet customer purpose), and understanding the system conditions that create waste work (see ‘failure demand’ below).

Purpose-Measures-Method

This systemic relationship is crucial. Targets are eschewed: they are imposed externally and create distortions, including cheating4, but more generally they create a de facto purpose that may not align with customer purpose. Instead, measures useful to the workers who control work outcomes serve as the guiding data. Seddon’s phrase is that “when measures are derived from purpose (from the client’s point of view)… method is liberated,” meaning that relevant data, given to workers in control of meeting that purpose, allows for experimentation and innovation.

Value Demand vs. Failure Demand

A key outcome of the check process is distinguishing between ‘value demand’—requests customers place on the work system to achieve their purpose—and ‘failure demand’—demand caused by failure to do something correctly for the customer, forcing re-engagement. Examples include forms that can’t be completed without help, unnecessary duplication of activities, and under-resourced operators who seed further customer contact. This one distinction, of all the Vanguard insights, has a disproportionate effect on managers’ thinking. The volume of clearly identifiable ‘waste work’ is often astonishing.
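During ‘check’, this split can be made concrete simply by tagging a sample of logged contacts and counting. A minimal sketch, with invented contacts and human-made (not automated) classifications:

```python
# Hypothetical sketch: during 'check', each logged contact is judged as value
# demand (customer pursuing their purpose) or failure demand (re-contact
# caused by the system's earlier failure). All data is invented.
contacts = [
    ("new order", "value"),
    ("where is my refund?", "failure"),
    ("price enquiry", "value"),
    ("form rejected, resubmitting", "failure"),
    ("transferred twice, still unresolved", "failure"),
    ("book an appointment", "value"),
]

failure = sum(1 for _, kind in contacts if kind == "failure")
failure_share = failure / len(contacts)
print(f"failure demand: {failure} of {len(contacts)} contacts ({failure_share:.0%})")
```

In real studies the sample is far larger, and the judgment of what counts as failure demand is made against the customer’s purpose, not a keyword.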

Focus on Flow, Not Activity

Traditional call centres exemplify activity-centered approaches, measuring work as activity (calls per hour) against targets (resolutions per call). But from the customer’s perspective, value lies entirely in the end-to-end resolution of their issue. The objective, analogous to the Toyota Production System, is managing the work flow as a whole to achieve resolution as quickly as possible.

Integrate Decision-Making with Work

This necessitates reframing frontline workers’ roles—they become increasingly responsible for the totality of the work. Management shifts to facilitators who “act on the system,” improving the conditions affecting how work gets done. This might mean removing redundant policy steps or providing training so workers can “absorb the variety of demand” from customers, reducing hand-offs and accelerating flow.

Capability Charts: Making System Behaviour Visible

The capability chart—a key Vanguard tool—tracks performance over time. For the call centre case, the y-axis shows ‘hours to resolve client issue’, reflecting what clients actually seek. The x-axis shows the date of initial contact.

For most of the early period, the system operated ‘under control’—resolution times averaged 49.33 hours with predictable variation of ±8.1 hours, within control limits (UCL of 57 hours, LCL of 41 hours). However, starting March 17, resolution times deviated markedly and exceeded these limits.
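Limits like these are derived from the data itself, not set as targets. One standard way to do this is the XmR (individuals and moving range) calculation from statistical process control; a minimal sketch, using synthetic resolution times rather than the case data:

```python
# Sketch of XmR (process behaviour chart) limits. The resolution times
# (hours) below are synthetic, not the figures from the case.
data = [48.0, 52.5, 47.0, 50.5, 46.5, 53.0, 49.5, 51.0, 45.5, 50.0]

mean = sum(data) / len(data)
# Moving ranges: absolute differences between successive points.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR constant: natural process limits sit 2.66 * mR-bar
# either side of the mean.
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

# Points outside the limits signal special causes worth investigating.
out_of_control = [x for x in data if not (lcl <= x <= ucl)]
print(f"mean={mean:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}, signals={out_of_control}")
```

Points inside the limits are treated as routine variation; only points outside them (like the post-March-17 values in the case) warrant investigation.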

Investigation revealed new trainee workers were introduced during this period. Management had anticipated performance impact and deliberately overstaffed by retaining experienced workers alongside new hires. Yet resolution times still deteriorated dramatically.

The root cause lay in overlooked factors. First, management underestimated the time experienced workers need to train novices. Second, the individual incentive system created perverse dynamics: time spent training directly reduced experienced workers’ bonuses. Faced with this trade-off, experienced workers naturally minimized support, leaving novices under-prepared and compromising overall system performance.

INSERT CHART HERE

The Learning Benefits of System Visibility

Capability charts prevent managers from reacting to normal variation—data points within the control limits that represent the expected performance range. Traditional metrics compound problems in three ways: they rarely measure end-to-end performance from the customer’s perspective; they are averaged over arbitrary periods, obscuring patterns; and they reduce complex system behaviour to simplistic comparisons like ‘this month was better/worse than last month’.

This reductionism leads managers to misattribute performance variations to individual workers rather than system conditions. When results improve after a pep talk, managers credit their intervention. When results worsen, they blame worker attitudes. These poorly grounded attributions exemplify ‘superstitious learning’—where managers develop false cause-and-effect beliefs based on coincidental timing and embed this flawed reasoning into organizational practices.

Capability charts break this cycle by making system behaviour visible and distinguishing special causes from normal variation. They embed organizational learning directly into daily operations, eliminating the need for separate training programs that rarely transfer effectively to actual work settings.

Resources
Vanguard’s website used to boast they were the only management consultancy with a fan club—which is actually true and speaks volumes. Publications and detailed client testimonials attest to the magnitude of change possible through this approach in service settings historically difficult to improve: housing benefits allocation, council planning applications, adult social care, policing, healthcare, and many others.

A good place to start is with Seddon’s entertaining books, part description of his approach and part takedown of establishment practices. Two of these are:

Seddon, J. (2005). Freedom from command and control. Productivity Press.
Seddon, J. (2008). Systems thinking in the public sector. Triarchy Press Limited.

Beyond their books, Vanguard offers extensive online resources. Their website includes both free materials and a subscription service with various tiers. It is a very generous sharing of their IP: https://beyondcommandandcontrol.com/subscriptions/

Strategy Dynamics

Strategy Dynamics is a method for developing, evaluating, and managing an organization’s strategy. It builds on System Dynamics, applying quantitative modelling techniques to strategic decision-making. Unlike traditional strategy frameworks that offer static snapshots, Strategy Dynamics creates living models that track resource accumulation and depletion over time—revealing why strategies succeed or fail and what interventions improve performance. Strategy Dynamics is applicable at both the overall business level and business unit level. Additionally, the principles are just as relevant to public service, voluntary, and not-for-profit organizations.

Understanding Resources as Dynamic Stocks and Flows

As an approach to strategy built on system dynamics, it is primarily concerned with the organization’s resources—‘stocks’—their inflows and outflows, and the interactions between stocks that may manifest as feedback effects. It addresses the cognitive limitations humans face in analyzing complex systems—overlooking time delays, being blind to or misunderstanding the consequences of feedback, and assuming linear rather than dynamic relationships between cause and effect.

While Strategy Dynamics shares an interest in organizational objectives and competitive positioning with established frameworks like Porter’s 5-forces and SWOT analysis, its distinctive contribution lies in tracking strategic progress over time. The approach employs simulation models as organizational “digital twins,” enabling leaders to test policies before implementation and observe resource and capability evolution. This transforms strategy from a periodic planning exercise into continuous organizational learning—where decisions are grounded in data, causal relationships are explicit, and environmental feedback shapes strategic adjustments.

The Banking Diversity Case: When Good Intentions Meet System Physics

Consider this example. Warren reports a workshop encounter with a banking executive who described her bank’s strategy for increasing disadvantaged minority representation amongst senior managers from the current 6% to 20%5. The plan was to increase the level modestly, by approximately 3% per annum over a 5-year period. In headcount terms, it would mean lifting the current number of minority senior staff from 60 to 200 out of 1000 senior personnel. It sounds reasonable, but Warren pointed out that the “physics of the system” wouldn’t support this goal. To see why, we need to understand the stocks and the flow rates between and out of them. The following figure shows the three main stocks of disadvantaged minorities: Junior Staff (new hires); Potential Senior Staff (Junior Staff identified as having nascent Senior Staff qualities); and Senior Staff. Junior Staff might get promoted to Potential Senior Staff, leave, or remain as Junior Staff. Potential Senior Staff may also be promoted, typically after four years, leave, or stay as Potential Senior Staff. Senior Staff either stay or leave. Below is a rough schematic of this process.

Staff pipeline schematic, based on Warren 2008 p. 328.

At the time of the proposed strategy, there were 60 minority Senior Staff. Promotion to this stock from Potential Senior Staff was typically running at 25% per annum, or 16 persons. Focusing on the year-one goal of a 3% increase in Senior Staff, we are actually asking for a 50% increase - from 60 Senior Staff to 90 (an additional 30 people, or 3% of the 1000 senior posts) - in one year. This shows that small percentage increases can translate into large relative increases in actual numbers, especially from a small base.

But the situation is actually worse, as this doesn’t account for the minority Senior Staff who leave - attrition was also running at 25%, or 15 persons in year one. So the promotions from Potential Senior Staff to Senior Staff have to find 15 additional promotable people from that stock. That would mean promoting 45 of the 65 in the stock, or 69% - way beyond the current 25% promotion rate and certain to promote those who have insufficient experience, or might otherwise not have been considered.

This echoes down the pipeline, requiring more vigorous hiring and promotion than currently exists, is likely to be desirable, or is even possible in the short to medium term. And this is to say nothing of the hypothetical effect on attrition of minorities promoted to positions beyond their capabilities because of accelerated promotion rates. The firm could find itself in a position not much better, and possibly worse, regarding minority representation among senior managers - results far short of its policy goals.
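The year-one arithmetic above can be written out as a few lines of stock-and-flow bookkeeping (figures as reported in the case):

```python
# Year-one stock-and-flow check for the bank diversity case,
# using the figures reported in the text.
senior = 60            # minority Senior Staff stock
potential = 65         # minority Potential Senior Staff stock
target_senior = 90     # adding 3% of the 1000 senior posts in year one
attrition_rate = 0.25  # fraction of Senior Staff leaving per annum

leavers = round(senior * attrition_rate)                # 15 leave during the year
promotions_needed = (target_senior - senior) + leavers  # 30 net growth + backfill
required_rate = promotions_needed / potential

print(f"promotions needed: {promotions_needed} of {potential} "
      f"({required_rate:.0%}, vs the usual 25%)")
```

The required promotion rate (69%) is the number that the plan’s modest-sounding “3% per annum” quietly implies.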

The following simulation illustrates how this could eventuate and opens to a model the reader can experiment with by adjusting the values of the key metrics (best to start with the ‘story’ button in the bottom left on the model page)6.

<iframe src="https://insightmaker.com/insight/5xPMD9IBnuwRzRwG6JZH2Y/embed?topBar=1&sideBar=1&zoom=1" title="Embedded model" width="800" height="600"></iframe>

Why Strategic Failures Are Systemic, Not Exceptional

The bank promotion case might seem like an edge case, but empirical evidence suggests otherwise. There is a long history of strategic failures stemming from misunderstanding the system dynamics of pipeline scenarios. Consider the U.S. Centers for Disease Control and Prevention’s (CDC) ambitious “Healthy People 2010” initiative. In 2000, they aimed to reduce diagnosed diabetes prevalence by 38% within a decade. However, when the CDC funded a system dynamics analysis, researchers found this target unattainable. Even achieving a dramatic 29% reduction in new diabetes cases would only slow, not reverse, the overall rise in prevalence7. Why? Three systemic factors:

  1. Large existing stocks of undiagnosed and pre-diabetic individuals already in the pipeline
  2. A basic demographic reality - fewer diabetic patients die each year than new cases emerge, causing the total stock to expand
  3. Long delays between implementing lifestyle or medical interventions and seeing reductions in new diagnoses
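The second factor is easy to see in a toy stock-and-flow model. The figures below are illustrative, not the study’s numbers: as long as the inflow of new diagnoses exceeds the outflow from the stock, prevalence keeps rising even after incidence is cut sharply.

```python
# Minimal stock-and-flow sketch: prevalence (the stock) keeps growing
# while inflow (new diagnoses) exceeds outflow (deaths, remission).
# All figures are illustrative, not the CDC study's numbers.
prevalence = 12.0      # millions with diagnosed diabetes
incidence = 1.2        # millions of new diagnoses per year
outflow_rate = 0.06    # fraction of the stock leaving per year

history = [prevalence]
for year in range(10):
    if year == 3:
        incidence *= 0.71  # a 29% cut in new cases, from year 3 on
    prevalence += incidence - outflow_rate * prevalence
    history.append(prevalence)

print(f"start={history[0]:.1f}M, end={history[-1]:.1f}M")
```

Even after the cut, inflow still exceeds outflow, so the stock continues to climb toward a new (higher) equilibrium; the intervention slows the rise without reversing it.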

The Blind Spots of Traditional Strategy Frameworks

Traditional strategic approaches struggle to uncover these ‘time compression diseconomies’ - the principle that resource stocks take time to build or deplete, meaning doubled effort rarely halves the time required. This blind spot exists because traditional strategy frameworks:

  • Focus on infrequent positioning decisions rather than ongoing system dynamics
  • Emphasise relative metrics (like profitability ratios) over absolute performance indicators (like cash flow or customer numbers)
  • Rely on abstract concepts and ambiguous terms that resist precise specification or measurement, making real impact hard to assess8

From Planning Exercises to Continuous Learning

Strategy Dynamics transforms organizational learning from an abstract aspiration into an empirical process. It enables leaders to safely experiment with different policies, surface hidden assumptions, and build shared understanding of how their strategic choices ripple through time by making system behaviour visible and manipulable through simulation. This shifts strategy from periodic planning exercises into a continuous learning cycle where decisions are grounded in data, causal relationships are made explicit, and environmental feedback directly shapes strategic adjustments.

Resources
The key book is: Warren, K. (2008). Strategic management dynamics. John Wiley & Sons. This is also available as a less detailed volume suitable for learning (and teaching) the method: Warren, K. (2010). Strategy dynamics essentials. Strategy Dynamics Limited.

At https://www.strategydynamics.com/ there are digital resources. Kim Warren also offers an excellent series of online courses at: https://www.sdcourses.com/home

Applied Information Economics (AIE)

The previous two approaches, Vanguard systems thinking and Strategy Dynamics, often deal with variables that seem impossible to measure precisely. How do you quantify a recruit’s leadership potential? What’s the relationship between staff training time and customer satisfaction? How does improved customer service translate to future profits? Senior managers frequently grapple with these questions when allocating limited resources or planning for future demand. Yet traditional approaches usually treat these “intangible” factors as unmeasurable, effectively ignoring their impact on organizational performance.

The Standard Planning Process and Its Hidden Flaws

When planning initiatives under uncertainty, organizations typically follow a familiar pattern:

  • A senior executive, often responding to market pressures or organizational goals, proposes a strategic initiative
  • A project manager or team lead is selected to develop and steer the implementation plan
  • Subject matter experts provide input on technical, financial, and operational aspects
  • Spreadsheets are developed using experts’ estimates and basic actuarial calculations, including discounted cash flows, to calculate metrics like Net Present Value (NPV)
  • These models forecast key metrics like profitability, timeline, and costs
  • In more sophisticated cases, Monte Carlo analyses or scenario planning may be conducted to explore outcome ranges
  • The final analysis and recommendations are presented to executive leadership or another formal decision-making body for approval or rejection

This is so routine, and so widespread, that it passes for standard business practice. But it includes some practices that are likely to produce misleading results. The first issue is the reliance on single-point estimates - typically averages of possible ranges. Consider a simple example: when estimating widget production costs, a range of $2 to $6 might be reduced to a $4 average. While this simplifies calculations, it obscures critical uncertainty in the data9. The second issue is the systematic exclusion of ‘intangible’ variables from analysis, ostensibly to avoid subjective estimates. However, this practice effectively assigns these factors zero value - an implicit assumption that often proves dangerously wrong.

Embracing Uncertainty Through Calibrated Estimates

AIE transforms this approach by embracing uncertainty. Instead of relying on single-point estimates, it elicits calibrated10 90% confidence intervals from subject matter experts - ranges within which they believe the true value lies with high probability. These ranges feed into Monte Carlo simulations that model thousands of scenarios, revealing the full spectrum of potential outcomes and their probabilities. For instance, instead of assuming our widget costs exactly $4, we might capture expert uncertainty as a range between $2 and $6. When this data populates our models, we gain insights into possible outcomes. A proposed initiative might show a 50% chance of profits exceeding $5M, but also a 20% risk of losses greater than $2M. This detailed risk profile helps executives identify which variables contribute most to uncertainty, allowing them to focus additional measurement efforts where they’ll have the greatest impact on decision quality.
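A minimal sketch of this substitution, using the widget example’s $2-$6 unit cost as a calibrated 90% interval. The selling price and volume are invented for illustration, and a normal distribution is assumed to fit the interval (a common simplification, not the only option):

```python
# Hedged sketch: replace a point estimate with a calibrated 90% interval
# and run a Monte Carlo. The $2-$6 cost range comes from the text;
# price, volume, and the normal-distribution assumption are illustrative.
import random
import statistics

random.seed(42)

def sample_90ci(low: float, high: float) -> float:
    """Draw from a normal whose 5th/95th percentiles match the interval.
    A 90% interval spans 2 * 1.645 standard deviations."""
    mean = (low + high) / 2
    sigma = (high - low) / (2 * 1.645)
    return random.gauss(mean, sigma)

N = 10_000
price, volume = 7.0, 1_000  # hypothetical, held fixed for clarity
profits = [(price - sample_90ci(2.0, 6.0)) * volume for _ in range(N)]

mean_profit = statistics.mean(profits)
loss_prob = sum(p < 0 for p in profits) / N
print(f"mean profit ~ ${mean_profit:,.0f}, chance of loss ~ {loss_prob:.1%}")
```

Instead of the single number a $4 average would give, the output is a distribution: an expected profit plus an explicit probability of loss, which is exactly the kind of risk profile described above.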

A University IT Investment: When Certainty Meets Reality

By way of example, I created a business plan for a university considering a large IT hardware investment to improve student retention. Management wanted to know if it would be successful. Interviewing subject matter experts, I estimated costs, the number of ‘at-risk’ students we could monitor, and the percentage of monitored at-risk students we could retain (that we were otherwise unlikely to have retained). Happily for those championing the project, the business plan showed we expected to recoup the hardware and installation costs over 4 years and post a $4M surplus. There wasn’t much to question or debate in the figures offered, so not much questioning or debate took place. But if I incorporate the uncertainties the various subject matter experts held about the key variables, express those as ranges within a 90% confidence interval, and run these through 500 Monte Carlo simulations, we get a more nuanced picture. Most scenarios are profitable, with an average NPV of approximately $930k. But there’s a meaningful tail risk — about 1 in 6 runs ends in a loss, and in one extreme case the loss exceeded $3M.

INSERT HISTOGRAM HERE

Analyzing the simulation results also helps identify variables contributing to outcome variance. While formal sensitivity analysis methods exist, even a basic data examination reveals important patterns. The table below compares outcomes when each key variable is at its lowest versus highest 10%. For “percentage of students saved” and “number of students monitored,” the ranges provided by experts have minimal impact - the NPV difference between low and high values is small. However, uncertainty around development costs dramatically affects outcomes: scenarios with costs in the bottom 10% show substantially higher NPV than those in the top 10%. This indicates where additional information gathering could improve decision-making.

| Key Variable | Average NPV if Key Variable in bottom 10% | Average NPV if Key Variable in top 10% |
| --- | --- | --- |
| % Students saved | $879,163 | $1,180,978 |
| No. Students monitored | $860,634 | $881,251 |
| Initial dev costs | $1,689,163 | ($111,857) |
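The decile comparison behind a table like this can be sketched on a toy model (all distributions invented): sort the simulation runs by each input and compare the mean NPV where that input falls in its bottom versus top 10%.

```python
# Toy sensitivity check: NPV depends strongly on dev cost and weakly on
# % students saved. All distributions are invented for illustration.
import random
import statistics

random.seed(7)

runs = []
for _ in range(5_000):
    dev_cost = random.uniform(0.5e6, 2.5e6)  # high-uncertainty input
    pct_saved = random.uniform(0.025, 0.031) # narrow-range input
    npv = pct_saved * 60e6 - dev_cost        # toy value model
    runs.append((dev_cost, pct_saved, npv))

def mean_npv_when(runs, index, bottom, frac=0.10):
    """Mean NPV over runs where input `index` is in its bottom/top decile."""
    ordered = sorted(runs, key=lambda r: r[index])
    k = int(len(ordered) * frac)
    chosen = ordered[:k] if bottom else ordered[-k:]
    return statistics.mean(r[2] for r in chosen)

for name, i in [("dev cost", 0), ("% saved", 1)]:
    lo, hi = mean_npv_when(runs, i, True), mean_npv_when(runs, i, False)
    print(f"{name}: NPV(bottom 10%)={lo:,.0f}  NPV(top 10%)={hi:,.0f}")
```

A large bottom-versus-top gap flags the variable where further measurement would most improve the decision; a small gap means the expert’s uncertainty there barely matters.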

Making the Intangible Measurable

AIE is interested in intangibles because these are often the most valuable predictors of project outcomes. High information value variables (highly uncertain with potentially large impact) are often assumed to be unmeasurable, for example ‘chance of project cancellation’ or ‘low user adoption’. To measure intangibles, the organization needs to:

  • Define the essential decision problem before considering measurement approaches. Instead of starting with “How do we measure X?”, ask “What critical choice are we trying to make?”

  • Clarify the variables involved, including relevant intangibles

  • Employ a “clarification chain” to define ambiguous concepts and determine the meaning of intangible variables. The clarification process is guided by this fundamental principle: if a measurement matters at all, it is because it must have some conceivable effect on decisions and behaviour. This involves asking questions that decrease the abstraction, such as “What do you mean, exactly?” and “Why do you care?”. The principle is: “If it matters at all, it is detectable/observable. If it is detectable, it can be detected as an amount (or range of possible amounts). If it can be detected as a range of possible amounts, it can be measured”.11 For example, IT security, often deemed intangible, was broken down into “reduction in unauthorised intrusions and virus attacks” with impacts like fraud losses or lost productivity, both measurable. This process ensures that vague concepts are translated into concrete, observable consequences with direct, calculable costs

A Framework for Better Organizational Learning

AIE is a method that promotes organizational learning. It allows decisions to be made on a more complete data set, including the level of uncertainty decision makers hold about that data. It provides decision makers and subject matter experts with a language’ to discuss and test assumptions. By providing a structured, quantitative, and empirically grounded approach to uncertainty reduction and decision-making, AIE pushes organizations towards a deeper engagement with the information they use, how they use it, and how the decisions it underpins are monitored and improved.

Resources
The best place to start is with Doug Hubbard’s book, Hubbard, D. W. (2014). How to measure anything: Finding the value of intangibles in business (Third edition). John Wiley & Sons, Inc.

The AIE Academy’s training programs offer exceptional value, providing detailed access to their methodologies and practical tools: https://hubbardresearch.com/training/ To preview their approach, explore their 10-minute tutorial and companion spreadsheet: https://hubbardresearch.com/one-for-one-substitution-model/

Action Science (The Theory of Action)12

Action Science is foundational to organizational learning in many ways. The developers of this theory and practice, Chris Argyris and Donald Schön, were the first to place organizational learning at the heart of organizational theory in their landmark 1978 book Organizational learning: A theory of action perspective. However, it is foundational not just for historical reasons, but because the subject of their work is the most fundamental building block of organizational learning—the reasoning processes and actions of an organization’s members.

A ‘theory of action’ serves as the underlying cognitive blueprint—encompassing norms, strategies, and assumptions—that guides all deliberate human behavior and, by extension, dictates how organizations design and execute their operations and pursue their objectives. However, organizations are complex and multifaceted, with multiple interpretations held by different members about the organization’s purpose, the meaning of its activities, and the value of future initiatives. There are multiple theories of action in play.

Mostly, these can co-exist as people collectively solve problems that have straightforward means-ends solutions, so-called ‘single-loop’ problems. Human theories of action are typically quite good when faced with these issues. But theories of action are more complicated: they are bifurcated into espoused theories—what we consciously believe guides us—and theories-in-use—what our actions actually produce, often unconsciously. When organizational problems trigger competing norms or assumptions contained in our theories of action, or the organization itself has latent inconsistencies in its theories, gaps emerge, and theories-in-use for resolving these are often defensive and anti-learning.

The Double-Loop Learning Challenge

To resolve the incongruities in the system, we need to be able to ‘double-loop’ learn. This means surfacing the hidden assumptions and norms and engaging with the binds produced by incompatible goals. However, where that is likely to trigger interpersonal threat, the preferred strategies are to attempt to decompose these double-loop issues into single-loop ones, control the situation, and prevail. Of course, others with competing senses of organizational purpose and effectiveness are doing the same. These defensive theories-in-use interlock in dyads, leading to schisms and polarisation between groups and cementing win-lose dynamics. This, in turn, creates a sense of hopelessness that fundamental issues can be addressed productively. It is in this way that the organization becomes “…a medium for translating incompatible requirements into interpersonal and intergroup conflict.”

A Case Study in Defensive Dynamics

‘Benjamin’ led a social services unit undergoing a reorganization aimed at breaking down discipline silos to create a learning culture. When an enthusiastic staff member proposed changes, Benjamin faced a dilemma. Publicly, he was supportive, saying “Look, change is good but we’ve got to manage change; if change is too fast people get a bit lost. I think it’s excellent that you’ve given me these ideas, I think they’re great; I want you to keep doing them. So just for me it’s more of a timing issue so let’s just put that on hold.”

Privately, however, Benjamin held deeper concerns. He believed the staff member’s proposals reflected the “medicalised model” of her previous mental health team—an approach he saw as bureaucratic and form-driven. He contrasted this with his preferred “community model”, which was more holistic. Faced with this dilemma, Benjamin adopted four strategies:

  • Keep the dilemma private (that is, do not share it with the staff member or team)
  • Unilaterally gatekeep change using hidden rules…
  • …While expressing approval of ideas he had doubts about
  • Cover-up these strategies, including covering-up the cover-up.

None of this was conscious, and all of it contradicted Benjamin’s espoused theories about open communication and effective leadership. The underpinning logic was:

If I share the reasoning about my doubts and inquire into her reasoning, I will:

  • Dim the enthusiasm of my staff member
  • Potentially embarrass her, given how mistaken her world view is
  • Create conflict that will be detrimental to team harmony

Because this logic is hidden, it is essentially self-sealing. It is therefore unsurprising that Benjamin had not considered that this approach could produce exactly the outcomes he feared:

  • The staff member’s enthusiasm will be undermined as none of her ‘great’ ideas are ever implemented
  • If the staff member then deduces that she was being patronised, she may feel embarrassed
  • Other staff who are aligned with the ‘medicalised model’ will share their stories of exclusion, heightening the potential for disharmony in the team

Furthermore, following this defensive approach makes it very difficult for Benjamin and the staff member to ever discuss why the staff member’s ideas are not being adopted. This joins a list of undiscussable issues that preclude organizational learning.

The Pattern of Organizational Anti-Learning

When defensive theories-in-use are triggered, the assumptions fuelling dilemmas remain undiscussed. As Argyris has noted, “errors tend to be uncorrectable whenever their correction entails double-loop learning; that is, when norms central to organizational theory-in-use would have to be questioned and changed.”

Predictably, members of the “medicalised model” camp also held negative views of the “community model” group, whom they saw as “impractical hippies.” Each camp developed hardened positions about the other without testing their assumptions. Under these conditions, the group with the most power became the gatekeeper, ensuring little idea-sharing across paradigms. Conversation by conversation, they re-established the very discipline silos the reorganization aimed to eliminate.

The Cascade Effect Across Organizational Levels

When difficult and potentially threatening issues arise, they trigger defensive theories-in-use. Each interlocutor tries to control the situation to the advantage of their chosen beliefs, or at least stymie the advances of competing worldviews. Each sees the other, and the other’s erroneous beliefs, as the problem. Neither tests the validity of their assumptions, but rather retreats into like-minded groups where they can develop strategies for achieving their ends or securing more power.

Given this, the organization as a whole cannot learn about some of its most important issues—such as problems of meaning and purpose or disparities between strategy and actual impact. As time goes on, the gap between the organization’s mission statements and policies (its espoused theories) and the actual practices it sponsors becomes more acute and visible to its members. This gap itself is an open secret, one of many that bind the organization’s members to a culture where learning about straightforward issues is permissible but learning about apparent schisms is taboo.

These levels of anti-learning norms and actions reinforce each other. The more the organization displays, through its actions, an inability to surface and address potentially threatening issues, the less individual members are able to do so, as they fear censure and embarrassment. Attempts to surface contradictions or latent problems in the underlying conditions are systematically undermined, leaving the organization’s members feeling that the more things change, the more they stay the same.
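The reinforcing loop just described (less surfacing of threatening issues breeds more defensiveness, which suppresses surfacing further) can be sketched as a toy simulation, in the spirit of the system dynamics models mentioned elsewhere in this guide. Every name, parameter, and rate below is an illustrative assumption, not a measured quantity from any real organization.

```python
# Toy model of the reinforcing anti-learning loop described above.
# All parameters are illustrative assumptions, not empirical values.

def simulate(periods=20, openness=0.8, defensiveness=0.2,
             threat_rate=0.3, gain=0.5):
    """Each period, a share of issues is potentially threatening.
    How many get surfaced depends on current defensiveness; threatening
    issues left undiscussed raise defensiveness, which suppresses
    surfacing in later periods."""
    surfaced_history = []
    for _ in range(periods):
        surfaced = openness * (1 - defensiveness)    # fear of censure suppresses surfacing
        unaddressed = threat_rate * (1 - surfaced)   # threatening issues left undiscussable
        defensiveness = min(1.0, defensiveness + gain * unaddressed)
        surfaced_history.append(surfaced)
    return surfaced_history

trajectory = simulate()
```

Run as-is, the share of issues surfaced declines every period and eventually collapses: a modest initial defensiveness, left untested, is enough to lock in the “more things change, the more they stay the same” pattern.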

INSERT FIGURE

The Path Forward: Learning Non-Defensive Norms

Any resolution requires operating from non-defensive norms. One of the most surprising aspects of this field is the discovery that these must be learned. They do not come naturally. Because of that, describing solutions is simplistic and can even be misleading. One of the ‘tricks’ of our defensive theories-in-use is to inculcate the belief that we can, perhaps easily, be non-defensive and support deep organizational learning around potentially threatening issues. This is not the experience of those who have worked in the field, some for over 40 years now.

Change at this fundamental level is challenging and requires time. However, it is also true that the learning can focus directly on the most pressing issues; indeed, it should focus on those. The sponsorship of leadership is vital because non-defensive norms can sound very out of place and can appear threatening in the business-as-usual organizational milieu. Any unilateral attempt to surface the hidden games of the organization would indeed be, as people fear, personally risky.

Nevertheless, there are techniques and practices that can help people get from here to there. Moving beyond defensive routines may initially feel uncomfortable, but remaining trapped in them undermines both individual potential—eroding confidence and self-image—and the organization’s ability to learn about its most important issues.

Resources
I run occasional workshops in Australia for those interested in exploring the theory of action, or even just exploring an issue of concern using that theory’s lens. Subscribing to my newsletter on this site is one way to get notified, or you can reach out directly:

For several decades, Action Design has offered an annual workshop: https://actiondesign.com/services/workshop/productive-conversations-online

Two of the founders of Action Design were, with Argyris, co-authors of the seminal book ‘Action Science’. This comprehensive work, available online, provides both the philosophical foundation of Action Science and detailed case studies demonstrating its application.

  1. Legros, B., Jouini, O., & Dallery, Y. (2015). A Flexible Architecture for Call Centers with Skill-Based Routing. International Journal of Production Economics, 159, 192–207. https://doi.org/10.1016/j.ijpe.2014.09.025
  2. Aksin, Z., Armony, M., & Mehrotra, V. (2007). The Modern Call Center: A Multi‐Disciplinary Perspective on Operations Management Research. Production and Operations Management, 16(6), 665–688. https://doi.org/10.1111/j.1937-5956.2007.tb00288.x
  3. This distinction is more fully explored in the section on Action Science.
  4. Even in modern call centres, cheating is assumed, and so approaches to mitigating its effects are promoted. (Rumburg, J. (2018, September 12). The Link Between Customer Satisfaction and First Contact Resolution. ICMI Call Center and Contact Center Resources. https://www.icmi.com/resources/2018/the-link-between-customer-satisfaction-and-first-contact-resolution)
  5. Warren, K. (2008). Strategic management dynamics. John Wiley & Sons, pp. 327-329
  6. This opens an InsightMaker model. Insight Maker has features for sharing model insights, but I use StochSD for my actual modelling work. StochSD is system dynamics software for creating stochastic models. The Applied Information Economics section explains why this is important.
  7. Jones, A., Homer, J., Murphy, D., Essien, J., Milstein, B., & Seville, D. (2006). Understanding Diabetes Population Dynamics Through Simulation Modeling and Experimentation. American Journal of Public Health, 96(3), 488–494.
  8. Warren, K. (2012). The trouble with strategy. Strategy Dynamics Limited.
  9. Sam Savage uses the phrase “Plans based on average assumptions are wrong on average.” See Savage, S. L. (2012). The flaw of averages: Why we underestimate risk in the face of uncertainty. John Wiley & Sons, Inc.
  10. A ‘calibrated’ expert is one whose subjective probability assessments have been measurably improved to align with real-world outcomes. Through specific training exercises, experts learn to correct for common cognitive biases, such as overconfidence. Consequently, when they state they are 90% certain about a range, they have demonstrated through testing that they will be correct 90% of the time.
  11. Hubbard, D. W. (2014). How to measure anything: Finding the value of intangibles in business (Third edition). John Wiley & Sons, Inc. p.39
  12. Strictly speaking, ‘action science’ and ‘the theory of action’ don’t signify exactly the same thing. But they have been used interchangeably for so long that the distinction is probably, for most purposes, a bit arcane.
