Imagine, for a moment, that you’re walking down the street and all of a sudden an alert appears in the corner of your vision saying, “I think you may need 10g of carbs; your glucose levels are falling a little more quickly than you’d like”. So you have a couple of jelly babies.
A little later, you arrive at a cafe for lunch with a friend. The club sandwich appears on your plate, and you look at it briefly; then an alert appears, again in the corner of your vision, saying, “Inject yourself, your dose is ready”. You do. About an hour later, a similar alert appears as the protein and fat content of your meal is taken into account, and an additional bolus to cover it is suggested.
It sounds far-fetched, and yet the technology to achieve a lot of this is starting to become available.
For further details, read on…
Food recognition
Whilst sitting in the audience of the Diabetes UK Cymru Type 1 Technology conference, watching @Nerdabetic (or Kamil, as we like to call him) discuss Foodvisor, I thought that maybe, just maybe, the future isn’t so far away.
Firstly, if you want to look at food and see what its nutritional value might be, try http://www.foodvisor.io. It takes a photo of food, then works out what it is and the portion size, before advising you on nutrition data and glycaemic index.
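As a rough illustration of the kind of pipeline involved, here’s a minimal sketch of photo-in, nutrition-out. The function, class and figures below are my own invention for the sake of the example, not Foodvisor’s actual API:

```python
# Hypothetical sketch of a photo-to-nutrition pipeline (not Foodvisor's actual API).
from dataclasses import dataclass

@dataclass
class FoodEstimate:
    name: str             # e.g. "club sandwich"
    portion_g: float      # estimated portion size in grams
    carbs_g: float        # estimated carbohydrate content
    protein_g: float
    fat_g: float
    glycaemic_index: int

def recognise_food(photo_path: str) -> FoodEstimate:
    """Stand-in for an image-recognition model that identifies the food
    and estimates portion size from a single photo."""
    # In reality this would run a trained model over the image;
    # here we simply hard-code a plausible guess.
    return FoodEstimate("club sandwich", portion_g=280,
                        carbs_g=45, protein_g=28, fat_g=22,
                        glycaemic_index=60)

if __name__ == "__main__":
    meal = recognise_food("lunch.jpg")
    print(f"{meal.name}: ~{meal.carbs_g}g carbs, GI {meal.glycaemic_index}")
```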
While it’s not yet perfect, accuracy will hopefully improve as user feedback is applied to update the database, and even now, what’s there is not too bad.
Insulin dosing
But that’s only food. What about giving you that advice on insulin? That’s where we move to https://quintech.io/.
Quin is a start-up with a very interesting aim:
We use science, engineering and design to help people who take insulin make the best possible decisions.
We are in awe of what millions of experts who take insulin are doing to keep themselves going every day.
Along the way, we turn their knowledge into new science to help others who take insulin and advance research into insulin-treated diabetes.
The devices we carry with us every day are a powerful platform for creating, managing and formalising self-care experience.
By combining them with large measures of empathy and ingenuity, we can create new insights that will inspire new approaches to treatment and research. Together.
Interpreting that, you might think it’s just another app. However, they capture user data in their app using all sorts of sensors, and are able to make recommendations as to what to bolus and when. Signing up right now is under NDA, but they have a CE mark, are undertaking ongoing research and development, and expect to start clinical trials early in 2020. They put user-centricity at the core of their design and apply machine learning to it, meaning they can provide personalised advice about how you live your life with diabetes, improving what you do and when you do it.
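To make that concrete, and this is purely my speculation about the approach rather than anything Quin have published, a personalised recommendation engine might start from the standard meal-plus-correction calculation and let a learned model tune the parameters per person and per context. All of the names and numbers below are illustrative assumptions:

```python
# Speculative sketch of context-aware bolus advice (not Quin's actual method).
from dataclasses import dataclass

@dataclass
class Context:
    glucose_mmol: float       # current CGM reading
    carbs_g: float            # carbs about to be eaten (e.g. from food recognition)
    insulin_on_board: float   # units still active from earlier doses
    carb_ratio: float         # grams of carbohydrate covered per unit
    correction_factor: float  # mmol/L drop per unit
    target_mmol: float = 5.5

def suggest_bolus(ctx: Context) -> float:
    """Classic meal + correction calculation; a learned model could adjust the
    carb ratio and correction factor per person, time of day or activity level."""
    meal_dose = ctx.carbs_g / ctx.carb_ratio
    correction = (ctx.glucose_mmol - ctx.target_mmol) / ctx.correction_factor
    dose = meal_dose + correction - ctx.insulin_on_board
    return max(round(dose * 2) / 2, 0.0)   # round to 0.5u, never suggest a negative dose

print(suggest_bolus(Context(7.8, 45, 0.5, 10, 2.5)))   # -> 5.0 units
```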
Insulin delivery
I’ve previously talked about the Pendiq 2.0, which has a two-way Bluetooth connection that provides the ability to send dose information to the pen, so you can just press go.
Now imagine something like Quin identifying your location, detecting that you’re sat down, and hearing you order food, then recommending a small pre-bolus; once it hears the waiter deliver the meal, it checks the food content of what’s been presented via Foodvisor and sends the dosing advice to your pen. Somehow it sounds far-fetched, and yet….
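As a sketch of the final hop in that chain, sending a recommended dose to a Bluetooth pen might look something like the following. The device address, characteristic UUID and payload encoding are invented for illustration (the Pendiq’s actual protocol isn’t public as far as I know), and this assumes the `bleak` Python BLE library:

```python
# Illustrative only: pushing a suggested dose to a Bluetooth insulin pen.
# The address, characteristic UUID and payload encoding are made up;
# a real pen would have its own (likely proprietary) protocol.
import asyncio
from bleak import BleakClient

PEN_ADDRESS = "00:11:22:33:44:55"                        # hypothetical pen MAC address
DOSE_CHAR_UUID = "0000aaaa-0000-1000-8000-00805f9b34fb"  # hypothetical dose characteristic

async def send_dose(units: float) -> None:
    """Encode the dose in 0.1u steps and write it to the pen, which would
    then display it and wait for the user to press go."""
    payload = int(units * 10).to_bytes(2, "little")
    async with BleakClient(PEN_ADDRESS) as client:
        await client.write_gatt_char(DOSE_CHAR_UUID, payload, response=True)

asyncio.run(send_dose(5.0))
```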
Display/Interaction
We’ve discussed Foodvisor identifying your meal, but how does it see the meal in the first place? And how does the software give you the advice that you need carbs or that your pen is ready? That’s where you probably need “smartglasses”, something like these from Vuzix, or Focals, by North.
While this type of technology is still at an early stage, and both of these solutions are a little clunky (as were the Google Glass products), they’d provide an interesting model for interaction. Of course, you could also use something like Amazon’s Echo Buds to talk to the system and have it talk to you, instead of receiving visual cues via smartglasses…
Could this work with artificial pancreas systems?
There’s no reason why not. In fact, it’s probably easier to get there with those. Using GPS and some form of machine learning (like Quin), you could identify by time and location that you may be about to eat and invoke “eating soon” mode. With either the microphones or the glasses’ camera supplying meal data via Foodvisor, meal content could be sent to your APS without you having to enter anything manually. Add OpenAPS’s UAM and SMB features and a faster-acting insulin like Fiasp, and you’d effectively have an even more capable artificial pancreas that knew your location, recognised your food and acted accordingly.
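For the DIY APS side, one plausible route would be to push the recognised carbs and an “eating soon” temporary target as treatments to a Nightscout instance, which many OpenAPS rigs already read from. The URL, secret and field values below are assumptions for illustration, and the exact endpoint, units and authentication details should be checked against the Nightscout documentation and your own setup:

```python
# Sketch: announcing a recognised meal to a DIY APS via Nightscout treatments.
# Assumes a Nightscout URL and API secret; verify field names and units
# (targets here are in mg/dL) against your own configuration.
import hashlib
from datetime import datetime, timezone

import requests

NIGHTSCOUT_URL = "https://my-nightscout.example.com"   # hypothetical instance
API_SECRET = "my-api-secret"                           # hypothetical secret

def post_treatment(treatment: dict) -> None:
    headers = {"api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest()}
    r = requests.post(f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
                      json=treatment, headers=headers, timeout=10)
    r.raise_for_status()

now = datetime.now(timezone.utc).isoformat()

# "Eating soon" style temporary target, triggered by the time/location prediction.
post_treatment({"eventType": "Temporary Target", "reason": "Eating Soon",
                "targetTop": 80, "targetBottom": 80, "duration": 60,
                "created_at": now})

# Carb entry taken from the food-recognition estimate, with no manual input needed.
post_treatment({"eventType": "Carb Correction", "carbs": 45, "created_at": now})
```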
I’m sure there are currently developers in the WeAreNotWaiting world that could write the code to do a lot of this integration already.
Do you think this could really happen?
All I’ve done is take a number of things that are available either in development or in the wild right now, and consider what the art of the possible might be. While what’s described here might take a little while, I think we’ll see some of these tools made available through people with diabetes doing the work themselves, and I’m fairly sure we’ll see some of this within a couple of years. Given a cure always takes at least five, I think I’d put my money on this!
This is really fascinating. We would, however, need a faster-acting insulin than Fiasp if we really want to prevent glucose from rising when we eat. The other thing is that these devices will probably also be so expensive that “normal” diabetics could not afford them (those outside the U.S.A. and without insurance). (An artificial pancreas should not be called that if it only regulates blood sugar outside meal times.)
Given that people are currently successfully using Fiasp with OpenAPS and not having to pre-bolus, and similarly bolusing with their meal when using MDI, I’m not sure it’s essential.
It’s also worth noting that DIY APS systems are relatively cheap, and regulate glucose levels both with meals and outside meals currently.
There is a challenge in that the AI (a machine that learns) will initially develop biases towards the usage patterns of early adopters. These are likely to be the more technology-focussed T1 community, and this could in fact make it harder for new, less tech-savvy users to adopt it in isolation from other tech.
In short, YDMV; will the machine’s?
From what I can tell of the literature that Quin have put forward (and it’s quite limited), I think they’re recruiting a wide range of users, and I also get the impression they understand this.
I would hope that others would.
I so want to do looping, but am not tech savvy.
I’ve read the online info and follow the FB pages, but just can’t do it myself. I worry that I will be left behind because I’m in a part of Australia with no other loopers, no build workshops, and no money to travel interstate to a workshop. More tech sounds great, but it’s out of reach already.