Quantinuum’s Post

Our team is tackling the “interpretability problem”, helping us build safer AI.

While it is well known that current widely used AI models lack interpretability, meaning we cannot understand why they make the decisions they do, until recently we have lacked a rigorous way of defining and assessing interpretability in machine learning. This is proving to be critical for the ethical deployment of AI in fields like medicine and finance.

A small team within Quantinuum led by Stephen Clark, and including Ilyas Khan, KSG, has made a meaningful step forward in solving this problem with the publication of a new paper that shows how to properly and accurately determine the interpretability of various AI models.

Read more on our blog: https://lnkd.in/gqs2pfWb
The scientific paper is available here: https://lnkd.in/g7Gp4XF2
Access our hardware: https://lnkd.in/eV6b6nWw
