CGM: Is “More” always better? Abbott’s Libres. More Data – part 1

Abbott – More data

With Abbott’s Libre, we have more data. Five times as much in fact, as the data set is every minute instead of every five minutes. Does this mean that we can gain more insight into what’s happening and make better decisions? Let’s take a look.

This will be split into two separate articles: this first one looks at the data itself and the implications of using it, and a second will review the effects of having five times as much data to handle.

The data

Here we can see the blue line showing the minute-by-minute data, while the orange crosses are the five-minute values (the equivalent of what you might get with a Dexcom). This is the data over the full 15 days of the Libre 2 Plus sensor, provided by Juggluco. It's worth noting that this isn't a sensor I'd have wanted to run an AID from: throughout its entire life, it produced results that were significantly lower than blood and Dexcom readings.

Given the volume of data, it's pretty hard to see any patterns here. One thing that does stand out, however, is the number of peaks in the one-minute data that aren't shared with the five-minute data, which suggests that the five-minute sampling already acts as a low-level smoothing mechanism.
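To make that concrete, here's a minimal Python sketch with made-up values (the real series would come from Juggluco's export), showing why sampling every fifth reading already hides short-lived peaks:

```python
# Made-up one-minute glucose values in mmol/L, one reading per minute.
one_min = [6.0, 6.1, 6.4, 6.2, 6.1,   # minutes 0-4
           6.2, 6.3, 6.2, 6.5, 6.3]   # minutes 5-9

# Taking every fifth reading gives the "five-minute" series, roughly
# what a Dexcom-style feed would provide.
five_min = one_min[::5]               # -> [6.0, 6.2]

# Peaks that live entirely between two five-minute samples (e.g. the
# 6.4 at minute 2, or the 6.5 at minute 8) never appear in the
# five-minute series, so the coarser sampling already acts as a crude
# low-pass filter.
print("five-minute samples:", five_min)
```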

To try and get a better view, I’ve zoomed into a 24 hour period, shown below:

[Figure: 24-hour zoomed view of the one-minute data with the five-minute points overlaid; circled areas highlight the differences discussed below]

In the areas circled, it's possible to see how movement between the red five-minute dots is much lower than in the minute-by-minute data. This is particularly pronounced around the second peak in the orange circle and in the cluster of red dots on the right-hand side of the green circle.

There are a few other areas on the graph that show another characteristic of the difference between one-minute and five-minute data, as provided by Juggluco: the number of upticks in the one-minute data that occur between five-minute dots, even where the five-minute trend reflects the overall direction of travel. This is shown very clearly in the centre of the image below.

[Figure: one-minute upticks appearing between five-minute points during an overall fall]

In the centre, there are two phenomena that show both the pros and cons of the one-minute data.

  1. The initial uptick (around 0.4 mmol/l) would result in a single larger dosing decision with the five-minute data, compared with several smaller dosing decisions with the one-minute data. The one-minute data would also disable any high TBRs earlier, as the subsequent drop is detected up to four minutes sooner.
  2. Subsequent to the initial drop, there's a rise in the one-minute data that isn't present in the five-minute data and isn't consistent with the overall trend. This results in a reasonably significant jump that would manifest as an unexpected deviation for oref1 and would be likely to result in additional (unnecessary) insulin being delivered, as sketched below.
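As a rough illustration of that second point, the sketch below (invented values, and not oref1's actual deviation logic) flags a one-minute uptick that runs against the overall falling trend:

```python
# Made-up one-minute values in mmol/L: falling overall, with a blip mid-window.
readings = [8.0, 7.8, 7.6, 7.9, 7.5, 7.3]

# Overall direction of travel across the window (akin to a five-minute trend).
overall_trend = readings[-1] - readings[0]   # negative -> falling

for i in range(1, len(readings)):
    delta = readings[i] - readings[i - 1]
    # A positive one-minute delta while the overall trend is negative is
    # the kind of blip a per-minute dosing loop could misread as a genuine
    # rise and answer with unnecessary insulin.
    if overall_trend < 0 and delta > 0:
        print(f"minute {i}: +{delta:.1f} mmol/L against a falling trend")
```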

Unfortunately, due to the different timeframes that are shown and the resultant differences in the way that deltas are expressed, it’s not really possible to use the VEM method for evaluating the difference in variation between the two sets of data. Instead, I’ve taken a different approach.

Given the phenomenon shown above, I've looked at the aggregate absolute delta per five-minute period for the one-minute readings, compared with the delta between consecutive five-minute readings. This highlights where the total variation in the one-minute readings is larger than that displayed by the difference between five-minute readings.
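As a small Python sketch of that calculation, using invented one-minute values for a single five-minute window:

```python
# Made-up one-minute values in mmol/L for one five-minute period.
window = [6.0, 6.3, 6.1, 6.4, 6.2, 6.3]

# Total variation inside the window: sum of |delta| for each minute.
aggregate_abs_delta = sum(abs(window[i + 1] - window[i])
                          for i in range(len(window) - 1))

# Variation seen by a five-minute feed: just the end-to-start change.
five_min_delta = abs(window[-1] - window[0])

# Where the aggregate is much larger, the one-minute data is moving
# around far more than the five-minute series would ever reveal.
print(round(aggregate_abs_delta, 2), round(five_min_delta, 2))  # 1.1 vs 0.3
```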

[Figure: aggregate absolute one-minute delta per five-minute period compared with the five-minute delta, over one day]

What this highlights is that there are frequent periods throughout the day where there is more variability in the one-minute data within a five-minute period than in the difference between five-minute readings. In this short, day-long sample, these appear to be clustered around steeper rises and falls.

Takeaways

From a data-only perspective, the Libre 2 Plus data I captured using Juggluco shows more variability per five-minute period in the minute-by-minute readings than in the equivalent five-minute data. As we've already argued, this is useful for capturing changes early, as long as they are consistent with the trend. Where they produce something that is anomalous to the trend, they present something of a risk.

When levels are climbing, a sudden drop followed by a recovery of the climb would result in a short reduction in delivered insulin. This is unlikely to result in any major risk.

When levels are falling, a sudden climb followed by a recovery of the drop could easily result in additional insulin being dosed that isn't necessary.

What we don’t know is how commercial systems are using this data. It’s almost certain that they will have some form of smoothing algorithm in place, but equally, they are probably not making decisions on each minute, given the average insulin action time. 

In open-source systems, it's possible to have the system read and use the data on a minute-by-minute basis. If this is being done, it really should be combined with one of the smoothing algorithms available in Trio and AndroidAPS. Alternatively, either of those systems can be set to limit super micro bolusing (SMB) to once every 5 minutes, which would be likely to reduce the risk of too much insulin via SMB, whilst basal could be adjusted up and down in line with the variance shown. This would lower the risk of over-delivery.
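As an illustration only (this is a generic trailing moving average, not the smoothing actually shipped in Trio or AndroidAPS), something along these lines could be applied to one-minute readings before the loop acts on them:

```python
def smooth(readings, window=5):
    """Replace each point with the mean of the last `window` readings
    (fewer at the start of the series)."""
    out = []
    for i in range(len(readings)):
        chunk = readings[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Made-up one-minute values in mmol/L; isolated blips get pulled back
# towards the underlying trend once averaged.
raw = [7.0, 7.1, 6.8, 7.3, 7.0, 7.1]
print([round(v, 2) for v in smooth(raw)])
```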

Ultimately, whilst one minute data can be helpful, it does need careful handling to ensure that the data being used is showing a valid phenomenon. It’s always possible that the mechanism used to capture this data (Juggluco) isn’t replicating exactly what Abbott anticipated when they built the Libre software. 

So is “more data” better? I think the evidence presented here suggests that it has pros and cons, but it’s not always better.
