Qi Framework
by: Stephen Wormald, Matheus Kunzler Maldaner, Kristian D. O'Connor, Olivia P. Dizon-Paradis, Damon Woodard
Description
- The Qi-Framework is like a periodic table for explainable AI (XAI): it organizes the fundamental building blocks of explanation methods into a structured system, allowing researchers and practitioners to systematically understand, compare, and refine those methods. Just as chemistry advanced by classifying elements based on their atomic properties, the Qi-Framework brings order to the fragmented landscape of XAI by breaking explanations down into core sub-components. This structured approach makes it easier to identify gaps and redundancies in existing methods and accelerates the development of new, more effective explanation techniques. Instead of treating explainability as an art guided by intuition, the framework shifts it toward a more rigorous and scientific discipline.
- This shift has implications far beyond research labs. Consider AI-driven decision-making in areas like medicine and finance, where explainability is not just a feature but a requirement for trust and accountability. Without a standardized way to assess explanation methods, selecting the right XAI approach is like trying to diagnose an illness without a medical textbook: inconsistent, unreliable, and prone to misinterpretation. The Qi-Framework provides this missing structure, giving developers, regulators, and users a clear and consistent way to evaluate AI explanations. In doing so, it helps AI systems earn public trust and ensures they can be audited, improved, and adapted to real-world challenges. In a future where AI decisions increasingly shape human lives, the ability to understand and justify those decisions will be as important as making them; the Qi-Framework paves the way for that future.
Publications
Wormald, Stephen; Maldaner, Matheus Kunzler; O’Connor, Kristian D.; Dizon-Paradis, Olivia P.; Woodard, Damon L.
Abstracting General Syntax for XAI after Decomposing Explanation Sub-Components (Journal Article)
In: Springer Science and Business Media LLC, 2024.
@article{Wormald2024Abstracting,
title = {Abstracting General Syntax for XAI after Decomposing Explanation Sub-Components},
author = {Stephen Wormald and Matheus Kunzler Maldaner and Kristian D. O’Connor and Olivia P. Dizon-Paradis and Damon L. Woodard},
url = {http://dx.doi.org/10.21203/rs.3.rs-4824427/v1},
doi = {10.21203/rs.3.rs-4824427/v1},
year = {2024},
date = {2024-08-01},
publisher = {Springer Science and Business Media LLC},
abstract = {Policy makers, healthcare providers, and defense contractors need to understand many types of machine learning model behaviors. While eXplainable Artificial Intelligence (XAI) provides tools for interpreting these behaviors, few frameworks, surveys, and taxonomies produce succinct yet general notation to help researchers and practitioners describe their explainability needs and quantify whether these needs are met. Such quantified comparisons could help individuals rank XAI methods by their relevance to use-cases, select explanations best suited for individual users, and evaluate what explanations are most useful for describing model behaviors. This paper collects, decomposes, and abstracts subcomponents of common XAI methods to identify a mathematically grounded syntax that applies generally to describing modern and future explanation types while remaining useful for discovering novel XAI methods. The resulting syntax, introduced as the Qi-Framework, generally defines explanation types in terms of the information being explained, their utility to inspectors, and the methods and information used to produce explanations. Just as programming languages define syntax to structure, simplify, and standardize software development, so too the Qi-Framework acts as a common language to help researchers and practitioners select, compare, and discover XAI methods. Derivative works may extend and implement the Qi-Framework to develop a more rigorous science for interpretable machine learning and inspire collaborative competition across XAI research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Policy makers, healthcare providers, and defense contractors need to understand many types of machine learning model behaviors. While eXplainable Artificial Intelligence (XAI) provides tools for interpreting these behaviors, few frameworks, surveys, and taxonomies produce succinct yet general notation to help researchers and practitioners describe their explainability needs and quantify whether these needs are met. Such quantified comparisons could help individuals rank XAI methods by their relevance to use-cases, select explanations best suited for individual users, and evaluate what explanations are most useful for describing model behaviors. This paper collects, decomposes, and abstracts subcomponents of common XAI methods to identify a mathematically grounded syntax that applies generally to describing modern and future explanation types while remaining useful for discovering novel XAI methods. The resulting syntax, introduced as the Qi-Framework, generally defines explanation types in terms of the information being explained, their utility to inspectors, and the methods and information used to produce explanations. Just as programming languages define syntax to structure, simplify, and standardize software development, so too the Qi-Framework acts as a common language to help researchers and practitioners select, compare, and discover XAI methods. Derivative works may extend and implement the Qi-Framework to develop a more rigorous science for interpretable machine learning and inspire collaborative competition across XAI research.
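For readers who prefer code to prose, the sketch below illustrates the decomposition the abstract describes: an explanation type characterized by the information being explained, its utility to inspectors, and the method and information used to produce it. This is a minimal, hypothetical Python illustration only; the class name ExplanationType and its fields (target, inspector_utility, method, inputs) are assumptions made for this sketch, not the Qi-Framework's actual mathematical notation, which is defined in the paper.

# Hypothetical sketch: encoding the sub-components of an explanation type,
# following the high-level decomposition in the abstract. All names below
# are illustrative assumptions, not the paper's notation.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExplanationType:
    target: str                   # the information being explained
    inspector_utility: str        # what the explanation is useful for, and to whom
    method: str                   # the procedure that produces the explanation
    inputs: tuple[str, ...] = ()  # information consumed to produce the explanation

# Two familiar XAI methods expressed in this shared vocabulary, which makes
# them comparable along each sub-component rather than as opaque wholes:
saliency = ExplanationType(
    target="per-feature relevance for a single prediction",
    inspector_utility="helps a model developer debug one decision",
    method="input-gradient saliency",
    inputs=("model gradients", "one input sample"),
)
lime_style = ExplanationType(
    target="per-feature relevance for a single prediction",
    inspector_utility="helps a non-expert inspect one decision",
    method="local surrogate model fit to perturbed samples",
    inputs=("model predictions", "perturbed input samples"),
)

# The two methods explain the same information but produce it differently:
print(saliency.target == lime_style.target)  # True
print(saliency.method == lime_style.method)  # False

Ranking or selecting among methods, as the abstract suggests, would then amount to comparing these sub-components against the explainability needs of a given use case.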