In this part of the “Six of the Best” coverage, we look at the 15/15 data from this test, and the percentage of readings where the CGM didn’t show a low value when finger sticks and hypo symptoms suggested it should have.
The data in this piece is much more open to challenge, for two reasons. First, the number of points with values out of range was comparatively small (11 below range for Dexcom and 13 for the others). Second, CGMs become less accurate when out of range, and so do blood testing devices, so there is a greater margin for error on both sides of the comparison.
15/15 test
What is the 15/15 test?
What you’re looking for is the percentage of datapoints where the CGM reading is within 15mg/dl of the blood reading when glucose levels are below 70mg/dl (3.9mmol/l), and within 15% of the blood reading at all other times.
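To make the criterion concrete, here’s a minimal sketch of the 15/15 check as described above. The thresholds (70mg/dl cutoff, 15mg/dl and 15% bands) come from the text; the function names and sample readings are illustrative, not the actual analysis behind the table.

```python
def within_15_15(cgm, blood):
    """True if a CGM reading agrees with the blood reading under the
    15/15 rule: within 15 mg/dl when blood glucose is below 70 mg/dl,
    within 15% of the blood value otherwise."""
    if blood < 70:
        return abs(cgm - blood) <= 15
    return abs(cgm - blood) <= 0.15 * blood

def pct_15_15(pairs):
    """Percentage of (cgm, blood) pairs meeting the 15/15 criterion."""
    hits = sum(within_15_15(cgm, blood) for cgm, blood in pairs)
    return 100.0 * hits / len(pairs)

# Hypothetical paired readings in mg/dl: (CGM, fingerstick)
readings = [(62, 55), (140, 150), (95, 120), (180, 170)]
print(pct_15_15(readings))  # 3 of 4 pairs agree -> 75.0
```

The per-table percentages in this piece are this calculation applied separately to the low, in-range, and high subsets of each sensor’s paired readings.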
This data indicates how accurate CGM systems are under the stress of lower or higher glucose levels. Here it gives an indication of how well each system performs at these levels, but the number of datapoints is very low, which should be borne in mind when reviewing the results.
The table above shows fairly dramatic differences between the systems. The G6 and ONE have much higher percentages of datapoints within the boundaries laid out when low or high than any of the other sensors. What’s perhaps slightly surprising is that during “in range” periods, the ONE performed much more closely to two of the newcomers than we might expect. Overall, though, it highlights how differently the systems perform at lower and higher levels.
Percentage of readings >3.9/70 when blood reading is <3.9/70
As we can see from the previous section, none of the newcomers perform particularly well in the hypo range. This section looks at the percentage of results that were not showing as hypo when I had hypo symptoms and glucose levels were registering below 3.9mmol/l (70mg/dl) on the fingerprick.
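The metric here can be sketched in the same style: of the paired readings where the fingerstick was below 3.9mmol/l (70mg/dl), what share had a CGM reading at or above that threshold? The function and sample data below are hypothetical illustrations of that calculation, not the actual dataset.

```python
def pct_missed_hypos(pairs, threshold=70):
    """Of the (cgm, blood) pairs where blood glucose is below the
    threshold (mg/dl), return the percentage where the CGM did not
    also read below it, i.e. the hypo went unreported."""
    lows = [(cgm, blood) for cgm, blood in pairs if blood < threshold]
    if not lows:
        return 0.0
    missed = sum(cgm >= threshold for cgm, blood in lows)
    return 100.0 * missed / len(lows)

# Hypothetical paired readings in mg/dl: (CGM, fingerstick)
readings = [(75, 62), (66, 58), (72, 68), (60, 55)]
print(pct_missed_hypos(readings))  # 2 of 4 lows missed -> 50.0
```

Note that this metric can’t distinguish a sensor that genuinely fails to detect lows from one whose readings simply lag the blood value, which is exactly the ambiguity discussed below.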
As I’ve already mentioned, there is a much greater margin for error with these numbers. However, what they appear to show is that either some sensors weren’t as good at picking up low glucose levels, or that the lag on some sensors was much greater than on others. Either way, the data in the table speaks for itself.
Any conclusions from this?
As I’ve mentioned, due to the greater risk of systematic error in this data, it wouldn’t be fair to state conclusive outcomes. Indicatively, though, these results feel about right as a user, and back up the error grids and MARDf data. They also tend to highlight the issues with some of the accuracy studies, where the majority of participants perhaps didn’t experience glucose levels that varied as widely as many people with T1D do.
Realistically, this adds to the dataset informing the choices people make about which system they want. I’m personally not surprised by the outcome, and think it backs up my preference of sensor.