I think, after six months or so, that I’m learning to get the best out of the Libre and to recognise the patterns of behaviour that come with the sensors. But what do I mean?
Throughout my time using it, there have been periods when the interstitial readings and the blood readings seem to diverge violently, and others when they are so close as to be almost identical. If I’d spent more time analysing the data in Excel, it would probably have taken less time to reach any conclusions. But I finally have some, and they seem to hold across the majority of the sensors I’ve had.
When does the sensor read much too high or much too low?
I find that after a prolonged period with an inclined (North East) or declined (South East) arrow (by prolonged I mean more than an hour), the scan will generally be around 20% higher or lower than a blood reading taken at about the same time; a blood value of 6 mmol/l, say, could show up on the scanner as anything from roughly 4.8 to 7.2 mmol/l. Typically this happens when you are heading high or heading low.
I’ve also found that once you are above 9 mmol/l you are heading into wild-west territory: the readings become increasingly unreliable as your blood glucose level gets higher. I don’t have a metric for the relationship, but it is something that clearly happens.
In addition, as someone reminded me, level of hydration seems to make a difference to sensor readings. In a state of lower hydration, the sensors read higher. This is a phenomenon I have observed, usually first thing in the morning. This would make sense, given that when dehydrated, interstitial fluid is likely to be more concentrated.
Is it ever any good?
If you have a flat arrow, the scans are pretty accurate; I’ve found they tend to stay within around 7.5% of the blood tests. If you can run between 4 and 9 mmol/l with a flat arrow, then I consider the results to be typically quite good.
What about the fast-moving levels that the manual warned me about?
When the arrows are vertical I revert to blood tests until they even out. Sometimes the scan will give you a half-decent value, and at other times it won’t be able to produce a reading at all because of the rate of change. You need to take these readings with a pinch of salt and test.
Are there any other things you’ve noticed?
Sometimes a slow-entry hypo (what I like to think of as a basal hypo) can confuse the system: I’ll have hypo feelings and hypo blood levels, but the sensor won’t pick it up. Even then the readings can be within the 15% tolerance that Abbott quote, but it is odd.
Another thing I’ve found makes a noticeable difference to the way it works on my arms is the level of fleshiness at the point of attachment. Nearer the top of the arm, firmly on the underside, has been the most consistent placement.
But with all these caveats, how can it be useful?
Very easily. Once you understand the parameters within which it works best, it makes a huge amount of sense and is very easy to use. There’s a reason the bolus and correction calculators only operate from the blood meter: corrections tend only to be needed at extreme blood glucose levels, and Abbott have identified that the sensor’s performance tails off at those levels.
It’s still important to know that you are way too high, and how quickly you got there, so that you can work out whether it was too low a bolus, too little basal, etc. But once you are in that position, you either fingerprick to check exactly where you are before correcting, or you play it by ear. This is where the trend data and pattern spotting are incredibly useful.
If you are within the 4-9 mmol/l band, it becomes very easy to sugar surf, and I’ve found I’m confident using the readings in this region without reverting to fingerpricking.
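Pulling these rules of thumb together, here is a minimal sketch in Python of the decision logic I effectively apply in my head. It is purely illustrative: the arrow names, thresholds and percentages are my own observations from the sections above, not anything Abbott publish, and it is obviously not medical advice.

```python
# Illustrative only: a rough encoding of my personal rules of thumb for
# deciding whether to act on a Libre scan or confirm with a fingerprick.
# Arrow names and thresholds are my own observations, not Abbott's figures.

def assess_scan(reading_mmol, arrow):
    """Return (decision, note) for a scan value and its trend arrow.

    arrow is one of: 'flat', 'rising', 'falling', 'vertical_up', 'vertical_down'.
    """
    if arrow in ("vertical_up", "vertical_down"):
        return ("fingerprick", "Rapid change: the scan may be unreliable or unreadable.")

    if reading_mmol < 4.0:
        return ("fingerprick", "Possible hypo: slow-entry hypos can be missed, so confirm.")

    if reading_mmol > 9.0:
        return ("fingerprick", "Above 9 mmol/l the readings get increasingly unreliable.")

    if arrow == "flat":
        # Flat arrow in the 4-9 band: typically within ~7.5% of blood for me.
        return ("trust_scan", "Flat arrow in the 4-9 mmol/l band: usually close to blood.")

    # Inclined or declined arrow: expect the scan to sit roughly 20% away from
    # blood, so the band below is only a rough approximation.
    low, high = reading_mmol * 0.8, reading_mmol * 1.2
    return ("confirm", f"Trending: blood is probably somewhere around {low:.1f}-{high:.1f} mmol/l.")


if __name__ == "__main__":
    print(assess_scan(6.2, "flat"))     # trust the scan
    print(assess_scan(7.5, "rising"))   # treat as roughly 6.0-9.0 and confirm
    print(assess_scan(10.4, "flat"))    # above 9 mmol/l: fingerprick
```

In practice the "confirm" case just means glancing at the trend graph and deciding whether a fingerprick is worth it, which is exactly the sugar-surfing judgement described above.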
The conclusion I’ve reached is that the kind of data the Libre provides makes it much easier to keep glucose levels in a controlled range, and when you are in that range, the tagline “Why prick when you can scan?” is a valid statement!