Dr. Styliani Kleanthous (CyCAT) gave a talk at the Information School, University of Sheffield.


The abstract for the talk: Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media, enabling functionality that users take for granted. Recently, image analysis algorithms have become widely available as Cognitive Services. This practice is proving to be a boon to the development of applications where user modeling, personalization, and adaptation are required. But while tagging APIs offer developers an inexpensive and convenient means to add functionality to their creations, most are opaque and proprietary, and numerous social and ethical issues surround their use in contexts where people can be harmed. In this talk, I will discuss recent work in analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for gender and racial biases when tagging images depicting people [1]. I will present our techniques for discrimination discovery in this domain, as well as our work on understanding user perceptions of fairness [2]. Finally, I will explore the sources of such biases by comparing human and machine descriptions of the same images of people [3].
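
To make the audit setup concrete, below is a minimal, purely illustrative Python sketch. The `get_tags` wrapper and the `tag_rate_by_group` helper are hypothetical names standing in for whichever vendor SDK is used; the actual methodology is described in [1]. The idea is simply to compare, per demographic group, how often a service returns a given tag for images depicting people.

    # Minimal sketch of a cross-platform tagging audit (hypothetical helpers).
    def get_tags(service: str, image_path: str) -> set[str]:
        # Stand-in for a call to a proprietary tagging API
        # (Clarifai, Google Vision, Amazon Rekognition, ...).
        # A real audit would call the vendor SDK here and normalize labels.
        raise NotImplementedError("wrap the vendor SDK of your choice")

    def tag_rate_by_group(images: dict[str, list[str]],
                          service: str, tag: str) -> dict[str, float]:
        # For each demographic group, compute the fraction of its images
        # that the service labels with `tag`.
        rates = {}
        for group, paths in images.items():
            hits = sum(tag in get_tags(service, p) for p in paths)
            rates[group] = hits / len(paths)
        return rates

    # Example: a large gap in how often "professional" is returned for two
    # groups is a signal worth investigating, not proof of bias on its own.
    # rates = tag_rate_by_group({"group_a": [...], "group_b": [...]},
    #                           service="rekognition", tag="professional")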


[1] Kyriakou, K., Barlas, P., Kleanthous, S., & Otterbacher, J. (2019, July). Fairness in Proprietary Image Tagging Algorithms: A Cross-Platform Audit on People Images. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 13, No. 01, pp. 313-322).

[2] Barlas, P., Kleanthous, S., Kyriakou, K., & Otterbacher, J. (2019, June). What Makes an Image Tagger Fair? In Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization (pp. 95-103). ACM.

[3] Otterbacher, J., Barlas, P., Kleanthous, S., & Kyriakou, K. (2019). How Do We Talk About Other People? Group (Un)Fairness in Natural Language Image Descriptions. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP).




