Building responsible AI – Experts’ insights

While artificial intelligence is a highly anticipated tool for healthcare, its widespread adoption calls for experimentation, responsible development, and strict regulation to a degree that is less often discussed. In spring 2024, DigiFinland published potential AI use cases for health and social services in Finland. Microsoft organized a hackathon based on these, where Innokas developed a top-three demo application for home care. The demo showcased AI's potential to streamline work, collect data, and improve patient care. DigiFinland's new report, commissioned by the Ministry of Social Affairs and Health, investigated AI use in Finnish welfare regions, offering practical advice and explaining the regulations that apply to different AI applications. Now, Innokas CTO Antti Kaltiainen, also a member of the Software Network and Artificial Intelligence Advisory Group (AG SNAIG), elaborates on the practicalities of responsible AI.

Regulations and risk management

The EU AI Act came into force on 1 August 2024, establishing new standards for the use of AI across Europe. It classifies AI solutions into four risk levels, with medical solutions in healthcare falling under the highest permitted level. High-risk applications require strict monitoring and reporting to ensure patient safety. A key aspect of regulating these applications is mandatory human oversight: in healthcare, an AI system's operations must be continuously evaluated. This ensures that the technology supports the work of professionals and adds value for patients without introducing unnecessary risks.

Data and its challenges

The reliability of AI depends heavily on the quality of its training data. Biases in the data can lead to erroneous decisions if the data isn't neutral or doesn't reflect real-world situations. Another challenge is hallucination, where AI generates results containing incorrect or misleading information. This is one of the primary problems of AI systems that rely on large training databases, as generative AI bases its responses on statistical patterns and prioritizes generating answers over ensuring their accuracy. The security of patient data is also a critical concern. AI applications must ensure that personal data remains protected and doesn't become vulnerable to misuse. Strong, up-to-date information security practices are essential for the responsible use of AI.

Transparency and explainability

The transparency and explainability of AI solutions are particularly important in healthcare. Explainability means that the AI can show the reasoning behind its decisions, which is especially critical when it produces recommendations or diagnoses for patients. Healthcare professionals, such as doctors and nurses, need to understand how the AI reaches its decisions in order to critically evaluate the results it provides. This helps prevent over-reliance, where healthcare personnel might, over time, begin to trust the system's answers blindly.

Expertise in AI development

Safe and efficient artificial intelligence in healthcare requires multidisciplinary collaboration. The development process demands expertise in health technology and the related regulations to ensure that solutions meet the strict requirements of the industry. Compliance is not just a formality but a crucial aspect of responsible development in healthcare. Additionally, health technology experts play a key role in highlighting ethical and practical considerations that may be overlooked during technological development, ensuring that the AI effectively supports the needs of both healthcare professionals and patients.

Responsible artificial intelligence in the future

AI has the potential to advance healthcare by improving processes, increasing efficiency, and enhancing patient care. However, this potential can only come to fruition through responsible development, rigorous regulation, and thorough experimentation. When developed in collaboration with health technology experts and guided by transparent, secure practices, AI solutions can offer significant benefits to both healthcare professionals and patients. Through responsible implementation, AI becomes not just a tool but a key to improved healthcare.

If responsible AI is especially relevant for you, consider signing up for our responsible AI workshop on implementing AI solutions in your product development project! The workshop will take place at the Health Valley Event in the Netherlands, where Innokas experts will be present to showcase proven best practices.

You can also contact us for more insights, and we will get back to you shortly. Innokas has been in the HealthTech scene for over 30 years, and we leverage that experience to ensure AI innovations stay safe and compliant. Get in touch through the link below.

Contact us

Interviewee

Antti Kaltiainen

CTO

antti.kaltiainen@innokas.eu