Eoin M. Kenny


ekenny (at) mit (dot) edu
Postdoctoral Associate
CSAIL, MIT
Twitter
Google Scholar

About

Hello, I am a Postdoctoral Associate at MIT in CSAIL with a joint appointment at AeroAstro. I believe we must shift away from our tendency to deploy “black-box” AI and instead make AI trustworthy when the stakes demand it. To this end, I am working on fundamental advances in Explainable AI (XAI), safety, and fairness, and then applying them in real domains to show they are both understandable and useful, drawing on expertise in computer science and cognitive psychology.

My strongest contributions to XAI have been fundamental advances that enable (1) actionable explanations which optimize positive outcomes for users via semifactuals (see papers below), and (2) the first general-purpose algorithm to make any deep reinforcement learning system inherently interpretable without sacrificing performance (PW-Net; see below). I have demonstrated the utility of these advances in finance and self-driving car domains, both of which require Trustworthy AI as a non-negotiable component.

In my future research, I plan to further push the boundaries of Trustworthy AI by developing algorithms to (1) debug deep learning, (2) expand contrastive explanation, and (3) make AI regulatable.

Upcoming

Education

Selected Publications (from Google Scholar)

These selected papers best represent my current research vision. At present, I am working on deploying interpretable self-driving cars for error debugging, and transparent LLMs for human-AI collaboration.

In the future, I am also interested in further developing semifactual explanation. Counterfactuals and causality have been shown to be crucial to explanation, but I believe semifactuals will play a major role going forward too.


The Utility of "Even if" Semifactual Explanation to Optimize Positive Outcomes

TL;DR: We show that semifactuals are more useful than counterfactuals when a user receives a positive outcome from an AI system.

Eoin M. Kenny, Weipeng Huang

[NeurIPS 2023]


Towards Interpretable Deep Reinforcement Learning with Human-Friendly Prototypes

TL;DR: We build the first inherently interpretable, general, well-performing deep reinforcement learning algorithm.

Eoin M. Kenny, Mycal Tucker, and Julie A. Shah

[ICLR 2023] * Spotlight Presentation (top 25% of accepted papers)


On generating plausible counterfactual and semi-factual explanations for deep learning

TL;DR: We introduce the AI world to semi-factuals, and show a plausible way to generate them (and counterfactuals) using a framework called PIECE.

Eoin M. Kenny and Mark T. Keane

[AAAI 2022]


Bayesian Case-Exclusion and Explainable AI (XAI) for Sustainable Farming

TL;DR: We show how to accurately predict grass growth and offer "good" explanations to Irish dairy farmers.

Eoin M. Kenny, Elodie Ruelle, Anne Geoghegan, Mohammed Temraz, Mark T. Keane

[IJCAI 2020] Sister Conference Best Paper Track * Best Paper Award at ICCBR 2019.


Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies

TL;DR: We find that nearest-neighbor, exemplar-based explanations lead people to view classification errors as being “less incorrect”; moreover, they do not affect trust.

Eoin M. Kenny, Courtney Ford, Molly Quinn, Mark T. Keane

[Artificial Intelligence 2021]


Media

National AI Awards Ireland

TL;DR: I won the award for the best application of AI in a student project for my work on Explainable AI in Smart Agriculture.


International Conference on Case-Based Reasoning

TL;DR: I won the best paper award at ICCBR 2019, which was themed on Explainable AI, for my paper on Smart Agriculture.

Invited Talks (not exhaustive)

Academic Services