“Explainability management for data annotation”
The main idea behind the project is that biases introduced during data annotation can be reduced if annotators are asked to explain each tag as they provide it and to commit to that explanation. The more we involve the user (engaging the slow-thinking, System 2 part of their brain) and the more we hold them accountable to their past choices, the less biased their annotations are.
Given this idea, the goal of the project is to develop a tool to validate the hypothesis. The tool will have four different setups, depending on what the user is asked to do with a set of images:
- annotate the images,
- annotate the images while giving explanations,
- annotate based on a provided list of tags while giving explanations,
- annotate based on a provided list of tags and their explanations while giving explanations.
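The four setups above differ only in which inputs the tool shows or collects. As an illustration, a minimal sketch of how the conditions might be encoded in the tool is given below; all names (`Setup`, `required_inputs`, the setup labels) are hypothetical and not part of the project specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Setup:
    """One experimental condition of the annotation tool (illustrative)."""
    name: str
    provide_tag_list: bool      # annotator picks from a fixed list of tags
    provide_explanations: bool  # explanations for the tags are shown
    require_explanation: bool   # annotator must justify each tag

# The four setups, in the order listed above.
SETUPS = [
    Setup("annotate", False, False, False),
    Setup("annotate-and-explain", False, False, True),
    Setup("tag-list-and-explain", True, False, True),
    Setup("tag-list-with-explanations-and-explain", True, True, True),
]

def required_inputs(setup: Setup) -> list[str]:
    """List the inputs the tool must display or collect in a given setup."""
    inputs = ["image", "tag"]
    if setup.provide_tag_list:
        inputs.append("tag_list")
    if setup.provide_explanations:
        inputs.append("tag_explanations")
    if setup.require_explanation:
        inputs.append("explanation")
    return inputs
```

Encoding the conditions declaratively like this would let a single annotation interface serve all four setups, toggling UI elements per condition rather than maintaining four separate tools.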
- The project should result in a concrete deliverable, e.g., a research paper, a white paper, or a demonstration.
- If travel is allowed, CyCAT will support (via STSE) the travel costs of the researcher to Cyprus for presenting the research results at a CyCAT meeting. If travel is not allowed, due to the COVID-19 restrictions, the collaboration and the presentation of the results will be done remotely.