Trustworthiness of AI: the key to future medical use
AI is slowly being integrated into medical practice around the world. But how can we bridge the gap between the systems developed by research and industry and their actual use in ophthalmic practice? A team of UvA researchers studying the use of AI in healthcare, and specifically in ophthalmology, believes the key lies in the trustworthiness of the AI, as well as in involving all relevant stakeholders at every stage of the development process.
In ophthalmology, only a small number of AI systems are currently regulated, and even those are seldom used. Despite these systems achieving performance close to, or even superior to, that of experts, there is a critical gap between their development and their integration into ophthalmic practice. The UvA research team studied the barriers preventing the use of these systems and how those barriers can be brought down. They concluded that the main challenge in achieving widespread use in actual medical practice is ensuring trustworthiness. To become trustworthy, the systems need to satisfy certain key requirements: they need to be reliable, robust and sustainable over time.
The paper, already available in an open access version, will soon appear in Progress in Retinal and Eye Research. The full study, “Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice”, can be found here.
CWI spin-off company DuckDB Labs helped create the startup MotherDuck, which aims to connect DuckDB to the cloud. MotherDuck boasts some big names: its CEO is Jordan Tigani, a founding engineer of Google’s BigQuery, Google’s fully managed data analytics platform. A large part of the $47.5 million in funding comes from Andreessen Horowitz, a prominent venture capital firm specializing in technology startups.
Breaking ground as the first international conference on Hybrid Human-Artificial Intelligence, HHAI22 held its first-ever in-person meeting in Amsterdam in the summer of 2022, establishing the beginnings of an international research community. ADS contributed to the conference by hosting a Meetup on the topic of Hybrid Intelligence.
There are legal rules and ethical frameworks, but little or no practical guidance on responsible design. In her inaugural lecture “System error, please restart”, Nanda Piersma discusses what ‘responsible’ means and how we can (further) develop IT systems in such a way that they earn our trust.