Analytical decision-making models can fail…so what can your organisation do about it?

18 May 2020

Managers rely on analytical models to inform difficult decisions: where to build that next hospital; how to allocate resources during the next budgetary cycle; whether performance is likely to meet targets; what the fall-out might be from a proposed or feared change; and whether that business case really warrants funding.

In applications such as these (and a thousand others) business leaders expect models to provide reliable answers to business-critical questions within tight timescales. Should a model fail, especially at short notice, the repercussions for the host organisation can be traumatic.

How do decision-making analytical models fail?

Models can fail in a remarkable number of ways, especially when captured in software: at one extreme, the model may simply refuse to run; at the other, it may quietly generate corrupted results without any obvious sign of a problem.

Countless authors have dealt with the software side of these problems but far subtler modes of failure are possible, manifesting themselves in symptoms including:

  • Models generating counter-intuitive results that no-one can explain
  • The hidden and silent collapse of assumptions underpinning a model
  • Difficulties extending models
  • Analysts distancing themselves from models

Why do analytical models fail?

Model failures can usually be traced back to one of a few broad categories of causes:

  • The model is a poor representation of the real world
  • The model itself is fine but its software implementation is wrong
  • The model has been used incorrectly or beyond its scope
  • The model and its shortcomings are not fully understood
  • The model has failed to keep up with changes in the real world

Without documentation, models are at risk of becoming orphaned

One frequent contributor to model failure is inadequate documentation; another is staff turnover. When the two occur together, the result can be an orphaned model, i.e. an undocumented model that has been abandoned by its developers.

Despite incessant cajoling and warnings, many analysts still fail to write comprehensive documentation for their models. In the short term, gaps in the documentation can be papered over by verbal briefings from the team that developed the model. However, as the original team members move on and the model is gradually extended or updated, detailed knowledge of the overall model drains away.

The result is a maintenance team whose collective knowledge of the orphaned analytical model leaves much to be desired.

When you need it most, it can fail

On the day that the orphaned analytical model fails, it transpires that no-one is quite sure how it performs certain calculations; no-one knows what the model has assumed about condition A or constraint B; no-one knows how to alter the model in order to answer the customer’s latest query; no-one’s quite sure of the knock-on effects that altering one part of the model might have on all others; and there’s a distinct absence of analysts offering to help out.

Preventing orphaned analytical models

It’s far better to prevent a model from becoming orphaned in the first place than to deal with the consequences later. Here are three obvious – yet oft-ignored – preventative measures:

  • Ensure that the delivery of any new model includes a set of documentation that is sufficient to understand its structure and its operation
  • Operate a change control system that includes the automatic updating of documentation throughout a model’s life
  • Implement a system for “analyst succession” that includes full briefing, paired working and hand-over of documentation for the model
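The second measure above – change control that keeps documentation in step with the model – can be partly automated. As a minimal sketch (the directory names `model/` and `docs/` are hypothetical and would need adapting to your own repository layout), a simple check can flag any change set that alters model files without touching the documentation:

```python
# A minimal sketch of an automated documentation check for a model's change
# control system. The directory names ("model/", "docs/") are hypothetical;
# adapt them to your own repository layout.

def docs_need_updating(changed_files, model_dir="model/", docs_dir="docs/"):
    """Return True when a change set touches model files but no documentation."""
    touches_model = any(f.startswith(model_dir) for f in changed_files)
    touches_docs = any(f.startswith(docs_dir) for f in changed_files)
    return touches_model and not touches_docs

# Example: a commit that alters the model without refreshing its docs.
print(docs_need_updating(["model/forecast.py"]))                      # True
print(docs_need_updating(["model/forecast.py", "docs/forecast.md"]))  # False
```

In practice, a check like this could be wired into a git pre-commit hook or CI pipeline, feeding it the output of `git diff --cached --name-only` and blocking the commit until the documentation has been refreshed alongside the model.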

Closing thoughts

As business leaders become ever more reliant upon analytical models to inform major organisational decisions, they need to be well prepared to deal with models that fail. If you rely on decision-making models, then solid knowledge management and succession planning from the outset are essential, as are robust model testing and quality assurance processes.

For more on what to do if your model fails and how to prevent it, read my next blog posts in this series, out within the next couple of weeks.
