Safety Performance Indicators: A False Dichotomy?
Everyone knows that when it comes to safety performance indicators, you want them to be leading rather than lagging. We usually start with lagging indicators because they are easier to spot, but we are pushed to find, track, measure and analyse their much superior cousins, the leading indicators. But is it that simple? Of course not...
The Fundamentals
The ICAO Safety Management Manual could be considered the bible on this subject and, as such, requires a bit of deciphering. After it discusses qualitative versus quantitative indicators and the need to use rates, it jumps into the discussion we are most interested in - lagging versus leading indicators.
And I am going to try to boil down this page or so of discussion into a couple of bullet points:
Lagging indicators are things that have happened and are, typically, either high probability/low severity events (incidents) or low probability/high severity events (accidents).
Leading indicators are things that you do. Processes, activities or inputs that the aviation service provider undertakes to manage safety.
I have a few problems with these definitions.
My Concerns
My first issue is that the two categories are described in completely different terms. The first thing I think of when I contrast lagging and leading is time but it is not clear how time factors into these definitions.
Now, I am sure that most of us can see beyond my simplification and the slightly more descriptive language in the SMM. We note that the activities measured by leading indicators are often undertaken before something bad has happened and that lagging indicators often measure the rate at which bad things happen.
But I would like to see the pivot point discussed a bit more. I’m still not sure there is an actual pivot point though. I have argued before that aviation is hard to fit into the “energy release” model of Workplace Health & Safety (WHS) systems but I just can’t shake the feeling that we have a gap between our two categories of indicators.
My second issue is that many of the leading indicators I have seen suggested require 100% achievement to ensure regulatory compliance. Setting aerodrome serviceability inspections as an indicator would be pretty boring, since you have a schedule to keep according to the regulations and/or your aerodrome manual. I acknowledge that you might have to do extra inspections following an event, but the goal would still be 100%, and falling short is an instantaneous breach into the red zone.
My third issue is gaming of the system. One of the suggested indicators in the SMM is the “frequency of bird scaring activities”. I guess the theory here is that if scaring activity goes up, there must be an increased hazard that requires attention. But what does a “unit” of scaring look like, and is more better or worse? Would airfield staff feel compelled to get a good “score” on this indicator if it were tracked?
This is not a question of integrity but more about incentives and subjectivity/objectivity. I once managed an airport with a growing bird strike problem. In response, we set out to establish the scale of our problem with bird counting. The numbers went through the roof!
The airfield team wanted to do a good job, so they counted every bird they saw, every time they saw it - every hour, every 30 minutes, every ten minutes, and so on. We needed more structure around our bird counting program, and every indicator of this type needs a similarly rigid structure in order to yield valid results.
The Health Analogy
I’ve been pondering the indicator gap for some time, but I recently stumbled upon an analogy that I think is instructive - personal health. An individual’s health is a lot like safety at an airport. Most of us do things that we hope will make us healthy - good diet, exercise, sleep, relaxation, etc. We have good data that this will help make or keep us healthy, but we can’t directly measure how healthy we are.
We know we’re not healthy if we are sick, tired, getting injured, etc. And, of course, we are definitely not healthy if we are dead.
However, doctors have a range of tools to help them and us assess our health. While they often start with questions on lifestyle and activities, they also have a whole bag of tricks to measure physiological processes that can highlight hazardous conditions that increase the risk of illness, disease or death. These can be simple non-invasive techniques such as taking a pulse, measuring blood pressure or conducting an ECG. And they can go a little further and order blood draws, x-rays, CT scans, MRIs, biopsies and more.
Do we have any equivalents in airport safety?
Better Traditional Indicators
I think we already have some and could easily develop more. The first one that comes to mind, I have already mentioned - wildlife abundance.
When it is the result of a robust and structured program conducted over a number of years, wildlife abundance could provide a little advance warning of hazardous conditions. If you are considering this SPI, also factor in seasonality and the risk posed by each species. I guess the ideal trend would be year-on-year reductions for the equivalent period, and trends in the opposite direction should trigger an examination of environmental attractants and the effectiveness of deterrent measures.
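To make that a bit more concrete, here is a minimal Python sketch of one way a risk-weighted abundance index could be built. The species names, risk weights and record format are invented for illustration; they are not drawn from the SMM or any standard.

```python
from collections import defaultdict

# Hypothetical risk weights per species (higher = greater strike consequence).
SPECIES_RISK = {"ibis": 3.0, "galah": 2.0, "plover": 1.5, "swallow": 0.5}

def monthly_index(records):
    """Collapse structured survey records into a risk-weighted index per (year, month).

    `records` is an iterable of (year, month, species, birds_observed) tuples
    collected under a standardised counting protocol.
    """
    index = defaultdict(float)
    for year, month, species, count in records:
        index[(year, month)] += count * SPECIES_RISK.get(species, 1.0)
    return index

def year_on_year_change(index, year, month):
    """Compare this month's index against the same month last year, so that
    seasonality is built into the comparison."""
    current = index.get((year, month))
    previous = index.get((year - 1, month))
    if current is None or previous is None:
        return None  # not enough history yet
    return (current - previous) / previous  # positive = trending the wrong way

records = [(2023, 9, "ibis", 12), (2024, 9, "ibis", 18), (2024, 9, "plover", 4)]
print(year_on_year_change(monthly_index(records), 2024, 9))  # ~0.67, up roughly 67% on last September
```

Comparing against the equivalent month of the previous year is what keeps seasonality out of the trend; the rest is just disciplined counting.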
Think of this SPI like testing for cholesterol. Cholesterol is something we have to live with. Not all cholesterol is bad and even high numbers of the bad cholesterol doesn’t mean that you will have a heart attack. It is an indicator of the risk of a heart attack and a heart attack is definitely an indicator of poor health.
Variability Indicators
Most SPIs involve counting something and then making judgements on how many (either directly or as a rate) you have had, completed and so on. The judgement call is usually about whether you’ve had too many of something or not done enough of something else.
I think there is a real opportunity to establish SPIs that look at variability. With a Lean Six Sigma view of the overall operation of an airport, we would want to see processes and activities with little waste and little variability. Remember, variation is the enemy of efficiency and perhaps, by extension, of safety.
I acknowledge that an airport is not an assembly line and that safety is not efficiency. My use of this philosophy is not to demand strict adherence to a procedure or timing but to suggest that certain activities should fall, naturally, into a consistent pattern as frontline workers turn procedures into actions.
So, what could aerodrome serviceability inspections look like as an SPI? I mentioned above that I think counting them is not very productive, but what about how long they take? Over a period of time, wouldn’t it be reasonable to expect the time to complete an inspection to fall into a set pattern with an identifiable average and standard deviation?
Once a reasonable average and standard deviation were established, the time taken to complete inspections could become a weekly indicator of whether runway safety factors are as expected. Significant increases might indicate increased FOD, surface deformations or obstacle complexity. Reductions in time could indicate short-cuts, training issues or even a reasonable efficiency gain. As variations are identified, the average and standard deviation can be adjusted to suit evolving situations.
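As a minimal sketch of the arithmetic (the durations and the two-standard-deviation threshold below are assumptions for illustration, not a recommended control limit), baselining inspection times and flagging the unusual ones might look like this:

```python
import statistics

def baseline(durations_minutes):
    """Establish the expected pattern from a history of routine inspection durations."""
    return statistics.mean(durations_minutes), statistics.stdev(durations_minutes)

def flag_unusual(weekly_durations, mean, stdev, k=2.0):
    """Return durations sitting more than k standard deviations from the baseline
    mean - a prompt to ask why, not a verdict on anyone's performance."""
    return [d for d in weekly_durations if abs(d - mean) > k * stdev]

# Hypothetical history (minutes per serviceability inspection), then one week of interest.
history = [42, 45, 44, 47, 43, 46, 44, 45, 48, 43, 44, 46]
mean, stdev = baseline(history)
print(f"baseline {mean:.1f} ± {stdev:.1f} min, flagged: {flag_unusual([44, 59, 45], mean, stdev)}")
```

Whether the flag fires on individual inspections or on the weekly average is a design choice; either way, the baseline should be recalculated as the operation evolves.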
I think a variability approach could also be used in the context of airside driving speeds - not for the purpose of issuing infringements, but to watch speeds generally. A monitoring system set up on a well-used road could be used to establish an average and standard deviation. Changes in speeds over time could point to increased time pressure on the ground handlers (maybe coupled with On-Time Performance), equipment availability issues or, when multiple points are measured, changes in driving routes and developing hotspots.
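The same baselining idea carries across to multiple measurement points. In this sketch the point names, readings and shift threshold are all invented; it simply reports where recent mean speeds have drifted from their baseline, in either direction.

```python
import statistics

def speed_shifts(baseline_speeds, recent_speeds, threshold_kmh=3.0):
    """Report measurement points whose recent mean speed has moved by more than
    `threshold_kmh` from the baseline mean - up or down, both are worth a look."""
    shifts = {}
    for point, base in baseline_speeds.items():
        recent = recent_speeds.get(point)
        if not recent:
            continue
        delta = statistics.mean(recent) - statistics.mean(base)
        if abs(delta) > threshold_kmh:
            shifts[point] = round(delta, 1)
    return shifts

# Hypothetical readings (km/h) from two airside road monitoring points.
baseline_speeds = {"perimeter_rd": [28, 30, 29, 31, 30], "apron_east": [22, 21, 23, 22, 22]}
recent_speeds = {"perimeter_rd": [34, 35, 33], "apron_east": [22, 23, 21]}
print(speed_shifts(baseline_speeds, recent_speeds))  # {'perimeter_rd': 4.4}
```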
Fertile Ground
As airport managers, we would all love more data, especially safety data, on which to base our decisions. And we all recognise the limitations of traditional lagging indicators and, I hope, the limitations of leading indicators based on safety activities, like audits completed or training delivered (which should, in any case, happen before anyone performs the actual job).
There are plenty of things happening on an airport and they can be measured. You just need to think about what aspect you want to measure and how you want to make judgements on the data that comes in. There are a lot of options out there. I would love to hear some interesting ideas in the comments below.