Opportunity 5

“Evaluation of Explainability Tools”


Despite the growing body of research surrounding FAccT (fairness, accountability, and transparency), machine learning models remain mostly black boxes. Understanding the reasons behind a model's predictions or decisions is, however, essential for assessing how much to trust it. For this reason, researchers have proposed several explanation techniques that explain the predictions of any classifier in an interpretable and faithful manner. The goal of this project is to evaluate some of these AI explanation techniques using a user-based evaluation approach.


The researcher will develop a platform that integrates, in the background, implementations of 2-3 explainable AI tools (e.g. LIME, Anchor, LORE). Image classification datasets (available on Kaggle, GitHub, etc.), annotated with ground-truth labels, will be uploaded to the platform.
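For illustration only, the sketch below (in Python) shows how one of the candidate tools, LIME, could be invoked behind such a platform to produce the visual explanation shown to users. It assumes the lime and scikit-image packages and uses a pretrained Keras MobileNetV2 classifier as a stand-in for the platform's own model; all names and parameter values are illustrative assumptions, not part of the project specification.

  import numpy as np
  import tensorflow as tf
  from lime import lime_image
  from skimage.segmentation import mark_boundaries

  # Stand-in classifier: any image classifier exposing batch predictions works;
  # a pretrained MobileNetV2 replaces the platform's own ML/DL model here.
  model = tf.keras.applications.MobileNetV2(weights="imagenet")

  def predict_fn(images: np.ndarray) -> np.ndarray:
      # LIME passes batches of perturbed images (N, 224, 224, 3) in [0, 255];
      # apply the model's preprocessing and return class probabilities.
      x = tf.keras.applications.mobilenet_v2.preprocess_input(images.astype("float32"))
      return model.predict(x)

  # In the platform this would be an image from the uploaded, ground-truth
  # annotated dataset; a random image keeps the sketch self-contained.
  image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

  explainer = lime_image.LimeImageExplainer()
  explanation = explainer.explain_instance(
      image, predict_fn,
      top_labels=1,      # explain only the predicted label shown to the user
      num_samples=1000,  # perturbed samples used to fit the local surrogate
  )

  # Highlight the superpixels supporting the predicted class; this overlay is
  # the kind of explanation the platform would display next to the prediction.
  img, mask = explanation.get_image_and_mask(
      explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
  )
  overlay = mark_boundaries(img / 255.0, mask)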


The users of the platform will be presented with an image, the predicted class label (the output of an ML or DL classifier), and an explanation from each of the implemented explainability tools. Given different scenarios, they will be asked to state the following (a sketch of how their responses might be recorded appears after the list):

  1. Whether they prefer to be presented with only the predicted output, or with the predicted output and an explanation, for the specific context.
  2. Which explanation type they prefer for the specific context.
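Purely as an illustration (this structure is an assumption, not part of the project description), each user response to the two questions above could be logged in a simple record such as the following, so that preferences can later be analysed per scenario and per explanation tool:

  from dataclasses import dataclass, asdict
  import json

  @dataclass
  class Judgement:
      user_id: str             # anonymised participant identifier
      image_id: str            # dataset image shown to the participant
      scenario: str            # context presented for the judgement
      predicted_label: str     # classifier output shown with the image
      wants_explanation: bool  # question 1: prediction only vs. prediction + explanation
      preferred_tool: str      # question 2: e.g. "LIME", "Anchor", "LORE"

  # Hypothetical example record; all values are placeholders.
  record = Judgement(user_id="u001", image_id="img_042", scenario="everyday use",
                     predicted_label="dog", wants_explanation=True,
                     preferred_tool="LIME")
  print(json.dumps(asdict(record)))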


Guidelines:

  • The project should result in a research paper and/or a demonstration.
  • If travelling is allowed, CyCAT will support (via STSE) the researcher's travel costs to Cyprus to present the research results at a CyCAT meeting. If travelling is not allowed due to COVID-19 restrictions, the collaboration and the presentation of the results will be carried out remotely.