Fairness in Algorithmic and Crowd-Generated Descriptions of People Images


Dr. Jahna Otterbacher has been invited to give a keynote talk at FAT/MM: Fairness, Accountability and Transparency in Multimedia on 25 October.

The abstract for the talk:

Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media. With the rise of the Algorithm Economy, image analysis algorithms are increasingly being commercialized as Cognitive Services. This practice is proving to be a boon to the development of applications where user modeling, personalization, and adaptation are required. From e-stores, where image recognition is used to curate a “personal style” for a given shopper based on previously viewed items, to dating apps, which can now act as visual matchmakers, the technology has gained increasing influence in our digital interactions and experiences. However, proprietary image tagging services are black boxes, and there are numerous social and ethical issues surrounding their use in contexts where people can be harmed. In this talk, I will discuss recent work in analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for their gender and racial biases when tagging images depicting people. I will present our techniques for discrimination discovery in this domain, as well as our work on understanding user perceptions of fairness. Finally, I will explore the sources of such biases by comparing human versus machine descriptions of the same people images.
