Our team is tackling the “interpretability problem”, helping us build safer AI. While it is well known that current widely used AI models lack interpretability, meaning we can’t understand why they make the decisions they do, until recently we have lacked a rigorous way of defining and assessing interpretability in machine learning. This is critical for the ethical deployment of AI in fields like medicine and finance. A small team within Quantinuum, led by Stephen Clark and including Ilyas Khan, KSG, has made a meaningful step toward solving this problem with the publication of a new paper that shows how to properly and accurately determine the interpretability of various AI models. Read more on our blog: https://lnkd.in/gqs2pfWb The scientific paper is available here: https://lnkd.in/g7Gp4XF2 Access our hardware: https://lnkd.in/eV6b6nWw
Quantinuum’s Post
More Relevant Posts
-
The crossover between quantum computing and AI is a hot topic at the moment. Most of the attention focuses on which performance metrics will receive a quantum boost, with a lot of interest in areas such as speed, energy consumption, and intriguing abilities such as reasoning and learning from sparse data. This paper looks at the whole topic from a foundational perspective and provides insights into the most important aspect of all: interpretability, which leads to explainability. If we can't understand AI, we can't necessarily control it. The paper applies compositional interpretability and string diagrams (which are widely used in quantum computing) to establish the interpretability, or otherwise, of the main families of AI models used industrially today – from transformers and other neural-network methods, to causal models, conceptual space models, and (good old-fashioned) rule-based models. For anyone working in AI, this is an important paper.
-
Shopify & Webflow Developer | Crafting Websites for maximum conversion rates and increased sales | Certified Webflow Expert | Dynamic Websites for Success 🔥| Shopify Store Developer
Now crafting task-specific LLMs is as simple as describing your dream model
This is amazing 🤯 This new "gpt-llm-trainer" can train a task-specific LLM from a single sentence. You just describe the model you want, and a chain of AI systems will generate a dataset and train a model for you. Open source: https://lnkd.in/gdbvBvrD ↓ Are you technical? Check out https://AlphaSignal.ai to get a weekly summary of the latest research and breakthroughs in AI. Read by 100,000+ engineers and researchers.
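For intuition, the pipeline the post describes (one-sentence task description -> LLM-generated dataset -> fine-tuned model) can be sketched roughly as below. The LLM call is stubbed out, and every name here is illustrative, not gpt-llm-trainer's actual API:

```python
import json

def generate_example(task_description: str, i: int) -> dict:
    # Stand-in for an LLM call: in the real project, a GPT-class model
    # invents a training pair for the task from the one-sentence description.
    return {
        "prompt": f"[{task_description}] example input {i}",
        "completion": f"example output {i}",
    }

def build_dataset(task_description: str, n_examples: int) -> str:
    """Return a JSONL string of prompt/completion pairs, a common
    fine-tuning format; this string would then feed a training job."""
    rows = [generate_example(task_description, i) for i in range(n_examples)]
    return "\n".join(json.dumps(r) for r in rows)

dataset = build_dataset("Summarize legal contracts in plain English", 3)
print(dataset.count("\n") + 1)  # 3 lines, one JSON object per line
```

The only user input is the task description; everything downstream (examples, formatting) is machine-generated, which is the appeal of the approach.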
-
Creative Technologist, CIO, CAIO, CTO, COO, FOOMO, Senior Pre & Post Sales Engineer & Architect in XR, CV, AI, ML, OTT, OVP, SaaS, EdTech, Sports, Video, Games, Entertainment, Immersive & Volumetric.
Single sentence LLM training: https://lnkd.in/e_2jAJ7u
-
Assistant Professor at Universidad Pontificia Comillas ICADE | PhD on Computer and Telecommunications Engineering
Responsible AI needs two key ingredients: fairness and explainability. Only with these two features can we check whether AI goals are aligned with our vision. Check out this article, which classifies the main fairness and explainability techniques out there. https://lnkd.in/dE_xJPNv
-
Can small LLMs (SLMs) rival big ones? This is one of the questions in @nathanbenaich's "State of AI Report". The answer: yes. "In a still largely exploratory work, Microsoft researchers showed that when small language models (SLMs) are trained with very specialized and curated datasets, they can rival models which are 50x larger." "They also find that these models' neurons are more interpretable." Tons of useful research and insights in this report.
State of AI Report 2023
stateof.ai
-
The AI-powered LLM trainer is here! Training models is hard: you have to collect a dataset, clean it, get it into the right format, select a model, write the training code, and train it. And that's the best-case scenario. The goal of this project is to explore an experimental new pipeline for training a high-performing task-specific model. We try to abstract away all the complexity, so it's as easy as possible to go from idea -> performant, fully-trained model. https://lnkd.in/dFA7wKGG
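Two of the manual steps listed above, cleaning the dataset and splitting it for training, can be sketched in a few lines; this is a minimal illustration of what gets abstracted away, with names invented for the example and not taken from the project's codebase:

```python
def clean(rows):
    """Drop empty and duplicate (prompt, completion) pairs."""
    seen, out = set(), []
    for prompt, completion in rows:
        key = (prompt.strip(), completion.strip())
        if all(key) and key not in seen:
            seen.add(key)
            out.append(key)
    return out

def split(rows, train_frac=0.8):
    """Hold out the tail of the dataset for evaluation."""
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]

raw = [
    ("What is 2+2?", "4"),
    ("What is 2+2?", "4"),       # duplicate, dropped by clean()
    ("", "orphan answer"),       # empty prompt, dropped by clean()
    ("Capital of France?", "Paris"),
]
train, held_out = split(clean(raw))
```

Real pipelines add more (deduplication by similarity, format validation, stratified splits), but the shape of the work is the same: raw pairs in, a clean train/eval split out.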
-
Westlaw's Generative AI is finally here! 🤩 Introducing AI-Assisted Research on Westlaw Precision! AI-Assisted Research is a generative AI skill that provides relevant answers to research questions with links to trusted Westlaw authority, so you can make more well-informed decisions and complete the remainder of your research with increased efficiency. Click here to learn more - https://lnkd.in/gfTCCykb