Our 1st Breakthrough: Explainable AI (XAI) in Dynamic Systems
Decodea’s proprietary algorithms mimic high cognitive functions to reveal the WHY behind AI recommendation systems.
Modelling how a human would explain AI recommendations
Explanations are situation-specific, not statistical
Deduction chains and causal relationships for concrete situations
Explanations are independent of the AI system at hand
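To make the ideas above concrete, here is a minimal, purely illustrative sketch of a situation-specific explanation built as a deduction chain, where the underlying AI model is treated as a black box. All names (`Fact`, `explain`, the chess example) are hypothetical and are not Decodea's actual API or method:

```python
# Hypothetical sketch: a model-agnostic, situation-specific explanation.
# The explainer never inspects the AI model; it builds a deduction chain
# from concrete facts about the current situation.
from dataclasses import dataclass, field

@dataclass
class Fact:
    claim: str  # a concrete observation about this specific situation
    because: list = field(default_factory=list)  # premises it follows from

def explain(fact, depth=0):
    """Render a deduction chain as indented lines, premises first."""
    lines = []
    for premise in fact.because:
        lines.extend(explain(premise, depth + 1))
    lines.append("  " * depth + fact.claim)
    return lines

# Example situation: explaining why a black-box chess engine recommends a move.
pin = Fact("The knight on f6 is pinned to the king")
overload = Fact("The e7 pawn is the knight's only defender")
recommendation = Fact("Therefore Bxf6 wins material", because=[pin, overload])

print("\n".join(explain(recommendation)))
```

The point of the sketch is the structure, not the chess: the explanation is derived from the concrete position (the pin, the overloaded defender), not from aggregate model statistics, and nothing in it depends on the internals of the engine being explained.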
“72% of executives report that their organizations seek to gain customer trust and confidence by being transparent in their AI-based decisions and actions.”
Accenture
Fintech
Biomed
Markov processes
We develop our algorithms through chess because it is an ideal AI research playground for investigating and modelling human reasoning in complex situations. Our debut product is DecodeChess, the first AI chess tutor that combines the merits of a chess program and a human master.
“AI models are increasingly deployed to augment and replace human decision making. However, in some scenarios, businesses must justify how these models arrive at their decisions. To build trust with users and stakeholders, application leaders must make these models more interpretable and explainable.”
Gartner, 2019