Standardizing machine learning in the MedTech sector – an expert’s insights


How do you standardize something that develops at an exceptionally fast pace? AI and machine learning (ML) in the healthcare sector have been a hot topic for several years, and there is as much concern as there is excitement over the advancing technology. Innokas Medical’s CTO is a member of AG SNAIG (Software Network and Artificial Intelligence Advisory Group), an international group that monitors and analyses available information from outside sources and advises IEC Technical Committee 62 on artificial intelligence (AI) and connected topics. He was interviewed about the current state of affairs.

 

This blog post has been edited and updated for relevance and accuracy on May 30, 2024.

 

“There are no specific AI/ML standards in effect at present in HealthTech,” says Innokas Medical’s CTO Antti Kaltiainen when asked about the situation with machine learning. “For now, there is draft guidance on healthcare AI/ML as well as regulation from other sectors. Risk management standards related to AI/ML have also been published. Additionally, as machine learning based solutions are software, they fall under the software standards; in practice, they should comply with the current state of the art in software, namely IEC 62304 and IEC 82304-1 and the other related standards and guidance in place,” he continues.

 

Even so, work on specific regulations for machine learning is underway all over the world. The European Council formally adopted the EU AI Act on 21 May 2024. The AI Act text is expected to be published in the Official Journal in June and will enter into force 20 days after that.

 

Transition periods ranging from 6 to 36 months still apply to different areas under the AI Act. The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It places general-purpose AI (GPAI) models, such as GPTs, under strict obligations, and it divides applications into four risk categories. The first category is unacceptable risk and the second is high risk, which is where most medical systems seem to fall. “The primary purpose is to ensure that the use of AI is both responsible and ethical, but it has also been said to have contradictions with some existing healthcare standardization,” Antti specifies.

 

The role of advisory groups comprised of experts

  

Due to the ongoing implementation of regulations and standards, expert groups have been established to advise the standard-setting bodies. These experts bring knowledge and expertise from their respective fields and offer valuable guidance to the standardization process. “In practice, we study papers and international publications on the topic and discuss how AI risks are managed in the healthcare sector at the moment,” Antti says, describing the work of the Software Network and Artificial Intelligence Advisory Group he belongs to.

 

AI regulatory challenges   

 

When asked about the notable challenges the group faces, Antti underlines that the biggest one at the moment is that machine learning develops significantly faster than studies about it can be conducted and published. Additionally, some private companies release no information on how their models work, and some of the information that is provided is said to be so complex that anyone would have trouble understanding it. “Monitoring and regulation are always behind the development, but that is especially true in this case,” says Antti.

 

A specific example of a challenge is the prevalence of “black box” machine learning software: software that gives an answer without showing how it arrived at that answer. “One way to approach black box AI would be explainable AI, especially in the healthcare sector. Practically, it would mean that the software needs to show how it reached a conclusion, as well as the circumstances under which it would have given a different answer,” Antti elaborates. Bias derived from different local or global populations is another challenge specific to the domain. A minimal sketch of what such an explanation could look like in code follows below.
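To make the idea concrete, here is a minimal, hypothetical sketch of the two behaviours Antti describes: reporting which inputs drove a prediction, and probing when the answer would flip. The model, the feature names, and the data are illustrative assumptions, not any specific medical device software.

```python
# A minimal, illustrative sketch of "explainable AI" in the sense described
# above: a model that reports which inputs drove its answer and when the
# answer would have been different. All names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # synthetic "patient" features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic label

model = LogisticRegression().fit(X, y)
features = ["marker_a", "marker_b", "marker_c"]  # hypothetical feature names

x = X[0]
proba = model.predict_proba([x])[0, 1]
print(f"predicted risk: {proba:.2f}")

# "How it reached the conclusion": per-feature contribution to the logit.
for name, w, v in zip(features, model.coef_[0], x):
    print(f"{name}: contribution {w * v:+.2f}")

# "When it would have answered differently": a simple counterfactual probe.
for i, name in enumerate(features):
    x_cf = x.copy()
    x_cf[i] = -x_cf[i]                           # perturb one feature
    flipped = model.predict([x_cf])[0] != model.predict([x])[0]
    print(f"flipping {name} changes the decision: {flipped}")
```

For a linear model these contributions are exact; for deep networks, dedicated explainability techniques would be needed, which is part of why this remains an open challenge.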

 

AI concerns and compliance   

 

One of the biggest concerns is that AI learns independently without human intervention, for example by continuously updating its database. Antti points out that if software that utilizes machine learning is qualified as a medical device (software), any changes to it need to go through thorough verification and validation, and very likely a notified body inspection. “Generally, medical software and equipment are ‘frozen,’ so to speak. In practice, the machine does not learn by itself even if the database it pulls from is updated. This presents a different challenge and concern, though, as especially old software needs to have its data updated to avoid becoming obsolete. This has also been a major topic of discussion in standardization.”

The FDA has also published a draft framework on how to cope with changes in a safe way. The FDA’s “Predetermined Change Control Plan” is officially in draft state but practically in use for the US market, and the guidance will likely influence global regulation in this specific area. It covers how models can and shall be updated and what shall be documented and taken into account.
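As a rough illustration of what “freezing” a deployed model can mean in practice, the hypothetical sketch below pins the exact model artifact by cryptographic hash, so that any change made outside the change-control process is detected before the model is loaded. The file name and workflow are assumptions made for illustration only, not a prescribed implementation.

```python
# A minimal sketch of one way to keep a deployed model "frozen": record a
# cryptographic hash of the approved artifact at verification time and
# refuse to load any file that does not match. Paths are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the model file so any silent change is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# At release: record the hash of the verified and validated artifact.
model_path = Path("model_v1.bin")
model_path.write_bytes(b"trained-weights-placeholder")  # stand-in artifact
approved_hash = sha256_of(model_path)

# In the device software: check before loading; a mismatch means the model
# changed outside the change-control process and must not be used.
if sha256_of(model_path) != approved_hash:
    raise RuntimeError("Model artifact does not match the approved version")
print("Model verified against approved hash; safe to load.")
```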

 

AI/ML applications in healthcare at present

 

Right now, probably the most widespread ML application area in the healthcare sector is image recognition, which can be used as a decision support tool for medical personnel. “At this point it is generally agreed that machine learning solutions should not be given authority over major decisions. They should only present their findings so that an expert can make the final decision; it’s like a digital colleague,” Antti concludes. Time will tell how machine learning solutions will develop from this point on. After all, a lot has already happened in a very short time. The sketch below illustrates this human-in-the-loop pattern.
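To illustrate the “digital colleague” pattern, here is a minimal, hypothetical sketch in which the software only surfaces findings with confidence scores, while the final decision is explicitly reserved for a clinician. The finding labels and the review threshold are invented for the example.

```python
# A minimal sketch of ML as decision support: the model proposes findings,
# a human expert confirms or rejects them. Labels and scores are invented.
from dataclasses import dataclass

@dataclass
class Finding:
    label: str         # what the model thinks it sees
    confidence: float  # model confidence in [0, 1]

def triage(findings: list[Finding], review_threshold: float = 0.5) -> list[Finding]:
    """Return findings worth showing to the clinician; decide nothing."""
    return [f for f in findings if f.confidence >= review_threshold]

model_output = [
    Finding("possible nodule, upper left lobe", 0.87),
    Finding("possible imaging artifact", 0.32),
]

for finding in triage(model_output):
    # The software only presents its findings; the expert makes the call.
    decision = input(f"{finding.label} (confidence {finding.confidence:.2f}) "
                     "- confirm? [y/n] ")
    print("confirmed by clinician" if decision.lower() == "y" else "rejected")
```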

 

If you would like to meet with Innokas’ AI/ML experts, you can request a free meeting through the contact form.


Contact us

 

Editing: Linda Kellberg (Senior QA&RA Specialist)