We live in a time of change. Just as the steam engine ushered in the Industrial Revolution, the spread of computers has once again changed the way we relate to our environment. Much of the most radical change we are witnessing today is due to Artificial Intelligence (AI), capable not only of surprising us with fun applications on our smartphones but also of bringing actors back to life for the latest film in our favorite saga. Yet precisely now, when our lives have been disrupted by COVID-19, it is inevitable to ask why AI cannot solve more medical problems than it currently does. Below are the three main obstacles that eHealth must overcome to exploit medical AI in the future.
Lack of data
Every person is characterized by a great diversity of data directly related to their health. At PERSIST we handle some of it: physical activity, sleep, nutrition, mood and, of course, all the data accumulated in patients' electronic health records throughout their lives (treatments, operations, laboratory tests, etc.). Although this may already seem like a large amount of data, there is still plenty of other clinically relevant information (e.g. DNA, RNA, microbiome). So why do we say that we lack data? The problem is that these data are not grouped and made available to researchers, and even when they are, they are not properly processed and annotated so that an AI algorithm can interpret them. It is as if we needed our own vehicle and someone gave us a high-end car, but instead of handing us the key, they gave us only a map of the places where the parts are hidden (the laboratory tests department, the image storage, the ‘smartwatch’ cloud, …) and, after recovering them, we had to clean and fine-tune them one by one (interpret the text, mark the images with carcinomas, …).
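To make this concrete, here is a minimal, purely illustrative Python sketch (the patient identifiers and field names are invented for the example) of what "recovering the parts" can look like in practice: fragmented sources are merged on a common patient identifier, and the gaps that remain still need cleaning and expert annotation before any AI algorithm can learn from them.

```python
import pandas as pd

# Hypothetical, simplified example: three fragmented sources for the same patients.
ehr = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "diagnosis": ["breast cancer", "colon cancer", "breast cancer"],
})
wearable = pd.DataFrame({
    "patient_id": [1, 3],
    "avg_daily_steps": [4200, 7800],
})
labs = pd.DataFrame({
    "patient_id": [2, 3],
    "hemoglobin_g_dl": [11.9, 13.4],
})

# Grouping the data is only the first step ...
merged = (
    ehr.merge(wearable, on="patient_id", how="left")
       .merge(labs, on="patient_id", how="left")
)

# ... the gaps left after merging (NaN values) still need cleaning and
# expert annotation before a model can be trained on them.
print(merged)
print(merged.isna().sum())
```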
Failures in the generalization and robustness of the algorithms
Sometimes the resulting algorithms work correctly in test environments, but when used in real situations, with new medical data, they do not achieve the desired performance. We are facing either a lack of robustness or a misinterpretation of the desired outcome. The second problem is relatively easy to solve by involving people representative of the end users during the requirements-specification and development phases. Precisely for this reason, PERSIST follows a methodology of close collaboration between patients, clinicians and technicians during all phases of the project, keeping these three actors aligned so that the results satisfy the real needs identified.
The first problem mentioned, the lack of robustness, is more complex. Let’s go back to the history of our high-end car in parts that, by now, we have already managed to assemble and also test successfully in a closed race track. All the results encourage us to put the car to circulate in our city and when we do, we confirm that its performance is excellent. However, during our first trip up the hill, the car hit a muddy road showing that it could obviously be improved. The problem was due to the fact that neither during its design nor during the tests the possibility that the pavement was unpaved and full of water and mud was considered. Likewise, with AI it often happens that the actual use of our machines reveals deficiencies that are masked in test environments, hence the urgent need to carry out tests in real environments, so that these failures can be detected and addressed in time. With this objective in mind, PERSIST has carried out clinical usability and validation studies with the participation of 160 cancer survivors in 4 different European hospitals, so that the project not only travels along a well-paved road but also adventure through 4 different wild tracks.
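The following hedged sketch, using only synthetic data, illustrates the kind of robustness failure described above: a model developed on data from one "hospital" performs well there, but its accuracy collapses when a second site records one of the variables on a different scale, a shift the model never saw during development.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Hospital A": the closed race track where the model is developed.
X_a = rng.normal(size=(500, 5))
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)

# "Hospital B": the same underlying relationship, but the first variable is
# recorded in different units (the muddy road the car never saw).
X_b = rng.normal(size=(500, 5))
y_b = (X_b[:, 0] + X_b[:, 1] > 0).astype(int)
X_b_shifted = X_b.copy()
X_b_shifted[:, 0] = X_b[:, 0] * 10 + 50

model = LogisticRegression().fit(X_a, y_a)

print("Accuracy at hospital A:", accuracy_score(y_a, model.predict(X_a)))
print("Accuracy at hospital B:", accuracy_score(y_b, model.predict(X_b_shifted)))
```

The numbers are synthetic, but the pattern is the one that matters: excellent performance in the development environment tells us little until the model has been exposed to data collected under real, varied conditions.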
Inability to move from proof of concept to a real system
Experience shows that a large number of models remain in the proof-of-concept phase, some of them are verified in real environments, and only a small proportion are finally deployed in hospital systems. One of the greatest difficulties in reaching this last phase is the high level of reliability and intelligibility that must be demanded of tools that intervene, directly or indirectly, in patients’ health. The users of these systems are healthcare professionals who need to be absolutely sure that the AI model will not fail, because human lives may depend on it. This raises the quality requirements well above those needed in more permissive environments, such as industrial ones. Furthermore, the responsibility that falls on end users leads them to distrust the technology simply because they do not understand exactly how it was built. Even if we have proven to be painstaking mechanics and engineers who have thoroughly assembled and tested the car from its parts, how many of our friends and family would dare to risk their lives by riding in it? Likewise, AI developers for eHealth should strive to dispel any doubts hanging over their models, explaining in advance, and in as much detail as possible, how a given conclusion or suggestion was reached.
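As one hedged example of this kind of transparency (using a public toy dataset in place of real clinical data, and a deliberately simple model), the sketch below breaks a prediction down into the variables that contributed most to it, which is the sort of explanation a clinician could be shown alongside a suggestion.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative only: a public toy dataset stands in for real clinical data.
data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# One simple form of explanation: which variables pushed this prediction most?
patient = data.data[0]
scaled = model.named_steps["standardscaler"].transform(patient.reshape(1, -1))[0]
coefs = model.named_steps["logisticregression"].coef_[0]
contributions = scaled * coefs

top = np.argsort(np.abs(contributions))[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: contribution {contributions[i]:+.2f}")
```

Richer explanation techniques exist, but even a simple decomposition like this makes a model's reasoning easier to question, and questioning is exactly what healthcare professionals need to be able to do before they trust it.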
So, in light of the current situation and the difficulties that AI for eHealth must overcome, what does the future hold? Given the very nature of the health field, progress will likely come more slowly than in other fields; even so, it would defy all logic to waste the opportunities that AI can offer the health system. For this reason, clinicians and technicians should get down to work and collaborate on projects like PERSIST if we really want better medical tools in the future.