Eoin M. Kenny


eoin.kenny (at) jpmorgan (dot) com
Senior Associate AI Researcher
Explainable AI, J.P. Morgan
X
Google Scholar

About

Hello, I am a Senior AI Researcher at J.P. Morgan, London. Prior to this, I completed my Ph.D. at University College Dublin with Mark Keane and a postdoc at MIT with Julie Shah. My fundamental research interest is designing AI systems that can communicate with humans in an understandable and useful way, thus paving the way for a safe and fair future for all.

The contributions I am most proud of are in semifactual explanation and interpretable autonomous agents.

In my future research, I plan to build on existing work to teach humans black-box knowledge from AI, so we may learn from its excellent pattern-recognition skills. I also want to further demonstrate how explainable AI is essential to the deployment of advanced systems in sensitive applications like autonomous driving.

Upcoming

Work Experience

Education

Selected Publications (from Google Scholar)

These selected papers best represent my current research vision. Currently, I am working on the deployment of interpretable self-driving cars for error debugging, and on transparent LLMs for human-AI collaboration.

In the future, I am also interested in further developing semifactual explanation. Counterfactuals and causality have been shown to be crucial to explanation, but I believe semifactuals will play a huge role in the future too.


The Utility of "Even if" Semifactual Explanation to Optimize Positive Outcomes

TL;DR: We show that semifactuals are more useful than counterfactuals when a user gets a positive outcome from an AI system.

Eoin M. Kenny, Weipeng Huang

[NeurIPS 2023]


Towards Interpretable Deep Reinforcement Learning with Human-Friendly Prototypes

TL;DR: We build the first inherently interpretable, general, well-performing deep reinforcement learning algorithm.

Eoin M. Kenny, Mycal Tucker, and Julie A. Shah

[ICLR 2023] * Spotlight Presentation (top 25% of accepted papers)


On generating plausible counterfactual and semi-factual explanations for deep learning

TL;DR: We introduce the AI world to semi-factuals, and show a plausible way to generate them (and counterfactuals) using a framework called PIECE.

Eoin M. Kenny and Mark T. Keane

[AAAI 2022]


Bayesian Case-Exclusion and Explainable AI (XAI) for Sustainable Farming

TL;DR: We show how to accurately predict grass growth and offer "good" explanations to Irish dairy farmers.

Eoin M Kenny, Elodie Ruelle, Anne Geoghegan, Mohammed Temraz, Mark T Keane

[IJCAI 2020] Sister Conference Best Paper Track * Best Paper Award at ICCBR 2019.


Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies

TL;DR: We find that nearest-neighbor, example-based explanations lead people to view classification errors as "less incorrect"; moreover, they do not affect trust.

Eoin M Kenny, Courtney Ford, Molly Quinn, Mark T Keane

[Artificial Intelligence 2021]


Media

National AI Awards Ireland

TL;DR: I won the award for the best application of AI in a student project for my work on Explainable AI in Smart Agriculture.


International Conference on Case-Based Reasoning

TL;DR: I won the best paper award at ICCBR 2019, which was themed on Explainable AI, for my paper on Smart Agriculture.

Invited Talks (not exhaustive)

Academic Services