How a data analyst started thinking about individual patients
I experienced a moment of “Flowopoly epiphany” twelve days ago.
It was a Sunday afternoon. Two days before a conference that was due to be held at the Scottish Exhibition and Conference Centre in Glasgow. As part of the 2014 NHS Scotland Event, we were due to “play” Flowopoly in a room with 100 people in it. On 10 sets of tables. With about 5,000 cards. On pre-printed vinyl boards that had cost the Scottish Government goodness knows how much money to produce. Oh, and with 140 red hotels that we’d managed to procure from the actual company that make the actual hotels for the manufacturers of actual Monopoly.
The stakes were high, so I decided to do a final “rehearsal” of the 12-hour period we’d chosen for the “game”. I wanted to be absolutely sure that the cards we’d chosen were in the right order, that there weren't any missing cards and that we didn't have any superfluous cards.
I cut a somewhat sad and lonely figure in my office in Leith that Sunday afternoon. I’d commandeered the big table in the meeting room, I’d laid out all the cards on the table as they’d need to appear at the start of the “game”, I’d started to read out loud the patient-by-patient transactions, and I was walking round the table moving the cards. Oh yes, and there was a stopwatch ticking away: I wanted to make sure the whole thing could be done in 25 minutes.
It was the first time I’d played Flowopoly on my own. So I was in a more “reflective” frame of mind than I might otherwise have been. That might explain what happened next.
It is 10:01 Flowopoly Time. The time of the first of the day’s four-hour breaches. A 68-year-old man called Ross Williams (not his real name) has been in the Emergency Department (ED) since 06:01 and at 10:01 he breaches the four-hour target.
I look at his card to see what is supposed to happen to him next. Is he a patient who is just going to go home from the ED? Or is he destined to flow onwards into the hospital? And the card tells me that he will eventually move onwards into the Acute Medical Unit (AMU) at 10:24.
I look at how full the AMU is at 10:01 (the time of the breach). There is one empty bed. I am slightly puzzled why Ross Williams doesn’t get this bed (although I am conscious that there are lots of reasons why this might be the case). But then I look two transactions ahead and notice that at 10:15 an 86-year-old woman called June MacLennan (not her real name) is due to be admitted directly to the AMU. Directly. So she’s a GP-referred AMU admission. And she is perhaps somehow taking precedence over Ross Williams. And I wonder why that is. And I’ve started to take notes…
Actually, the specific causes and effects of what was going on between 10:00 and 10:30 on Tuesday 1st October don’t really matter.
What matters is that a data analyst has suddenly been drawn into the fine detail of a day. What I want to call the “little” detail. Except that the word “little” belittles the significant things that were happening to Ross Williams and June MacLennan.
But I was reaching for the word “little” because it’s the opposite of “big”. As in “big data”. Over the last few months I’ve been getting steadily more interested in the extent to which these “little details”, these “little realities” like the ones that occurred at 10:01 and 10:15, are the realities that matter most. Not just to the patients who flow through the unscheduled care system, but also to the clinicians who inhabit it. Whereas for data analysts and for many managers, these “little realities” are pretty much invisible for most of the time, so busy are they trying to make sense of the world using “big data”.
The “little realities” are invisible because data analysts have a “default setting” that means we always want to aggregate and summarise. We take all the individual “little realities” (which we tend to call “raw data”) and we can’t help but want to summarise them into one abstract number. For example: Anytown General Hospital achieved 95.4% compliance with the four-hour target in October 2013.
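To make that aggregation step concrete, here is a minimal sketch of it in Python, using five invented attendances (the names and the waits are illustrative assumptions, not real data):

```python
# Five invented ED attendances: minutes from arrival to admission
# or discharge. Ross Williams's wait (06:01 to 10:24 = 263 minutes)
# is in there, but the summary statistic erases him.
waits = {
    "Ross Williams": 263,   # breaches the 240-minute (four-hour) target
    "Patient B": 45,
    "Patient C": 180,
    "Patient D": 225,
    "Patient E": 90,
}

within_target = sum(1 for m in waits.values() if m <= 240)
compliance = 100 * within_target / len(waits)
print(f"Four-hour compliance: {compliance:.1f}%")  # prints 80.0%
```

Five individual experiences go in; one abstract percentage comes out.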
And — deep down — I think most clinicians hate this. Just as they hate a description of there not being a bed in AMU for Ross Williams at 10:01 being statistically abstracted to “Average bed occupancy for AMU in September was 78%.”
These big numbers have neither relevance nor resonance for most clinicians. These big numbers don’t let you “see” Ross Williams lying on a trolley in A&E, waiting for a bed that ought to be available but isn't. I mean, Ross Williams is actually in there, he’s in both of those numbers, but he’s just a tiny little anonymous part of the four-hour compliance and bed occupancy calculations.
(If this were a conference presentation, this is the point when someone would project a slide with the quote: “Statistics are human beings with the tears wiped off.”)
Worse still, because so many NHS managers are also clinicians (or ex-clinicians), many managers also share this distrust of “big” data. Data simply doesn't “capture” the “little realities” they witness daily.
And this disconnect between the “little realities” like what happened at 10:01 and the “big data” descriptions like 78% occupancy — this disconnect is precisely why we haven’t yet found a way of using data to help us solve our demand and supply problems in unscheduled care. The problem isn't the data, or the expertise, or even the complexity of the problem. I think it’s to do with widely divergent cultural attitudes towards data. Clinicians see the world in terms of “little data.” Whereas number-crunchers (and therefore the managers who are fed by their information) see the world through the lens of “big data”.
Hence my moment of epiphany: a number-cruncher suddenly confronted with — and becoming obsessed by — a “little reality”.
As a relevant aside, I was googling the term “big data” at the weekend and I came across this article on the Simply Statistics blog.
Interestingly, and slightly controversially, it offers a different slant on the definition of “big data”. It suggests that the “big data” revolution occurred in the years following 1980 when computers replaced filing cabinets and card indexes as the main way of storing and accessing data. The “big” in “big data” doesn’t refer to the quantity of data; instead it refers to the percentage of the population using the data.
And as I looked at the timeline on that hand-drawn graph on the blog, I reflected on my very first experiences of “the unscheduled care problem” at the Royal Infirmary of Edinburgh in the autumn of 1988. No beds. Trolley waits. Outliers everywhere. Delayed discharges. Basically, everything then was as it is now. No beds. Trolley waits. Outliers everywhere. Delayed discharges.
So: plus ça change. Except that in 1988 we had minimal amounts of data to help us solve the problem. Whereas now — 26 years later — we have loads of data. But we still haven’t found a way of using the data to even make sense of the unscheduled care problem, let alone fix it.
In the area of unscheduled care, there are two things that we have manifestly failed to do with data over the last quarter of a century.
The first failing is to do with the way we use data to communicate with clinicians. We have failed to use data to effectively reflect back to clinicians the “little realities” that they experience every day. We haven’t found a way of describing their status quo. And as a result, data is now a debased currency amongst a lot of clinicians. The “big data” revolution might as well have never happened. Because data has had so little to say to them.
The second failing is to do with the way managers use data. I worry that managers have not yet worked out how to use “big data” to help them with supply and demand arithmetic. Too few managers “get”, for example, that a unit’s average bed occupancy is determined by just two things: how many patients are admitted and how long they stay.
And there is no getting away from that arithmetic.
It’s set in stone.
And if you have done some behind-the-scenes work to ascertain that actually 78% bed occupancy for a 24-bed AMU is too high (because it results in — say — 750 occasions per year when the AMU is full and there is a patient in the Emergency Department needing to be admitted to that bed), then managers need to know that and they need to make decisions — probably about length of stay, but also probably about capacity — that will enable that AMU to operate at an occupancy level that is compatible with better four-hour performance.
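That behind-the-scenes work can be sketched with a toy simulation that connects the abstract occupancy figure to the count of “no bed” moments. Every number here is an illustrative assumption (24 beds, roughly six arrivals a day, a three-day average length of stay, which together imply about 78% occupancy), not real AMU data:

```python
import heapq
import random

def simulate_amu(beds=24, arrivals_per_day=6.24, mean_los_days=3.0,
                 days=365, seed=1):
    """Toy discrete-event model of an Acute Medical Unit.

    Patients arrive at random (a Poisson process) and stay for a random
    (exponentially distributed) length of time. Each arrival that finds
    every bed occupied is one "no bed for Ross Williams at 10:01" moment.
    """
    rng = random.Random(seed)
    bed_free = [0.0] * beds     # the time at which each bed next becomes free
    heapq.heapify(bed_free)
    t, full_at_arrival, occupied_bed_days = 0.0, 0, 0.0
    while t < days:
        t += rng.expovariate(arrivals_per_day)      # next arrival
        if bed_free[0] > t:     # every bed occupied: the patient must wait
            full_at_arrival += 1
        start = max(t, bed_free[0])                 # admit when a bed frees
        los = rng.expovariate(1.0 / mean_los_days)
        occupied_bed_days += los
        heapq.heapreplace(bed_free, start + los)    # that bed is busy again
    occupancy = occupied_bed_days / (beds * days)
    return full_at_arrival, occupancy

full, occ = simulate_amu()
print(f"Average occupancy: {occ:.0%}; "
      f"arrivals finding the unit full: {full} per year")
```

Run it with different seeds and the occupancy hovers around the safe-sounding 78%, while the number of patients arriving to find the unit full runs into the hundreds per year. That is the translation step between the one abstract number and the “little realities”.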
But in that last paragraph there was that difficult step. That disconnect between 750 “little realities” and one abstract number: 78% bed occupancy.
And this disconnect is cultural. Deeply embedded. The reason people don’t “get” data isn’t that they’re stupid or innumerate; it’s that they have different ways of looking at the world, and data rarely represents their view of the world in a way that’s credible, let alone enlightening.
The analysts and managers do have to understand the demand and capacity arithmetic in order to make the decisions that enable them to re-design their unscheduled care systems. But they also have to understand the arithmetic in a way that connects it to the “little realities” that comprise the clinicians’ view of the world. The arithmetic must be in context.
I think my moment of epiphany twelve days ago has taught me that you can only get the NHS to make use of summarised, aggregated data when that data’s connection to individual patients is made utterly explicit.
And in the last quarter of a century, it seems to me as if nobody’s really been paying much attention to making that connection explicit.
|© Kurtosis 2014. All Rights Reserved.|