About
Hello, I am a Postdoctoral Associate at MIT in CSAIL with a joint appointment in AeroAstro. I believe we must shift from our tendency to deploy “black-box” AI toward making AI trustworthy where that is paramount. To this end, I work on fundamental advances in Explainable AI (XAI), safety, and fairness, and then apply them in real domains to prove they are both understandable and useful, drawing on expertise in computer science and cognitive psychology.
My strongest contributions to XAI are fundamental advances that enable (1) actionable explanations that optimize positive outcomes for users via semifactuals (see papers below), and (2) the first general-purpose algorithm to make any deep reinforcement learning system inherently interpretable without sacrificing performance (PW-Net; see below). I have demonstrated the utility of these advances in finance and self-driving car domains, both of which require Trustworthy AI as a non-negotiable component.
In my future research, I plan to further push the boundaries of Trustworthy AI by developing algorithms to (1) debug deep learning, (2) expand contrastive explanation, and (3) make AI regulatable.
Upcoming
- Nov 28th: I will give a talk as part of a panel at Huazhong Agricultural University in Wuhan on how ML and semifactuals can help combat climate change in Smart Agriculture.
- Dec 8th: I am going to NeurIPS 2023 to present three papers (two posters and one paper at the "Regulatable ML" workshop); please get in touch if you would like to connect.
Education
- Ph.D. in Computer Science, UCD, 2022
- M.S. in Computer Science, UCD, 2019
- M.A. in Musicology & Performance, Maynooth University, 2013
- BMus, Maynooth University, 2010
Selected Publications (from Google Scholar)
These selected papers best represent my current research vision. Currently, I am working on deploying interpretable self-driving cars for error debugging, and transparent LLMs for human-AI collaboration.
In the future, I am also interested in further developing semifactual explanation. Counterfactuals and causality have been shown to be crucial to explanation, but I believe semifactuals will play a major role too.
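To make the distinction concrete, here is a minimal toy sketch (my own illustration, not code from any of the papers below) contrasting a counterfactual with a semifactual explanation for a hypothetical one-feature loan model that approves applicants with income at or above a threshold:

```python
def approve(income: float) -> bool:
    """Hypothetical loan classifier: approve when income >= 50 (arbitrary units)."""
    return income >= 50


applicant_income = 60
assert approve(applicant_income)  # the user received a positive outcome

# Counterfactual ("if only"): a change that FLIPS the decision.
# "If your income had been 45, you would have been refused."
counterfactual_income = 45
assert not approve(counterfactual_income)

# Semifactual ("even if"): a change that KEEPS the decision.
# "Even if your income had been 52, you would still have been approved."
semifactual_income = 52
assert approve(semifactual_income)
```

The point of the "even if" framing is that, after a positive outcome, a semifactual can show users how much slack they have (e.g., how far income could fall while still being approved), which a decision-flipping counterfactual cannot.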
The Utility of "Even if" Semifactual Explanation to Optimize Positive Outcomes
TL;DR: We show that semifactuals are more useful than counterfactuals when a user gets a positive outcome from an AI system.
Eoin M. Kenny, Weipeng Huang
Towards Interpretable Deep Reinforcement Learning with Human-Friendly Prototypes
TL;DR: We build the first inherently interpretable, general, well-performing deep reinforcement learning algorithm.
Eoin M. Kenny, Mycal Tucker, and Julie A. Shah
[ICLR 2023] * Spotlight Presentation (top 25% of accepted papers)
On generating plausible counterfactual and semi-factual explanations for deep learning
TL;DR: We introduce the AI world to semi-factuals, and show a plausible way to generate them (and counterfactuals) using a framework called PIECE.
Eoin M. Kenny and Mark T. Keane
Bayesian Case-Exclusion and Explainable AI (XAI) for Sustainable Farming
TL;DR: We show how to accurately predict grass growth and offer "good" explanations to Irish dairy farmers.
Eoin M Kenny, Elodie Ruelle, Anne Geoghegan, Mohammed Temraz, Mark T Keane
[IJCAI 2020] Sister Conference Best Paper Track * Best Paper Award at ICCBR 2019.
Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies
TL;DR: We find that nearest-neighbor, example-based explanations lead people to view classification errors as “less incorrect”; moreover, they do not affect trust.
Eoin M Kenny, Courtney Ford, Molly Quinn, Mark T Keane
Media
National AI Awards Ireland
TL;DR: I won the award for best application of AI in a student project for my work on Explainable AI in Smart Agriculture.
International Conference on Case-Based Reasoning
TL;DR: I won the best paper award at ICCBR 2019, which was themed around Explainable AI, for my paper on Smart Agriculture.
Invited Talks (not exhaustive)
- Jun 2023, ML Labs at UCD: One way to do a Ph.D. (and postdoc)
- Dec 2022, Navy Centre for Applied Research in Artificial Intelligence: Interpretable Deep Reinforcement Learning
- Mar 2022, Imperial College London: Explaining Black-box algorithms
- Feb 2022, Robert Gordon University: On the utility of explanation-by-example
- Nov 2019, KBC Data Science Bootcamp: Explaining Artificial Intelligence Black-Box Systems
Academic Services
- Reviewer, NeurIPS 2021-2023
- Reviewer, ICLR 2023
- Reviewer, AAAI 2021
- Reviewer, ICML 2022-2023
- Reviewer, Artificial Intelligence Journal 2021-2022