Research
Publications
Pınar Barlas, Maximilian Krahn, Styliani Kleanthous, Kyriakos Kyriakou, Jahna Otterbacher. Shifting our Awareness, Taking Back Tags: Temporal Changes in Computer Vision Services' Social Behaviors. Inproceedings (forthcoming), International AAAI Conference on Web and Social Media (ICWSM 2022), 2022. Tags: Algorithmic Bias.
Monica Lestari Paramita, Kalia Orphanou, Evgenia Christoforou, Jahna Otterbacher, Frank Hopfgartner. Do you see what I see? Images of the COVID-19 pandemic through the lens of Google. Journal Article, Information Processing & Management, 2021. Link: https://www.sciencedirect.com/science/article/pii/S0306457321001424. Tags: Algorithmic Bias, Artificial Intelligence.
Abstract: During times of crisis, information access is crucial. Given the opaque processes behind modern search engines, it is important to understand the extent to which the “picture” of the Covid-19 pandemic accessed by users differs. We explore variations in what users “see” concerning the pandemic through Google image search, using a two-step approach. First, we crowdsource a search task to users in four regions of Europe, asking them to help us create a photo documentary of Covid-19 by providing image search queries. Analysing the queries, we find five common themes describing information needs. Next, we study three sources of variation – users’ information needs, their geo-locations and query languages – and analyse their influences on the similarity of results. We find that users see the pandemic differently depending on where they live, as evidenced by the 46% similarity across results. When users expressed a given query in different languages, there was no overlap for most of the results. Our analysis suggests that localisation plays a major role in the (dis)similarity of results, and provides evidence of the diverse “picture” of the pandemic seen through Google.
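The kind of comparison described above (overlap between image-search result sets retrieved for the same query in different locations or languages) can be illustrated with a minimal sketch using Jaccard similarity over hypothetical result identifiers; the paper's actual similarity measure and data are not reproduced here.

```python
# Illustrative sketch only: compares hypothetical image-search result sets
# for the same query issued from two regions. Identifiers are made up;
# the paper's actual metric and data are not reproduced here.

def jaccard(results_a: set, results_b: set) -> float:
    """Jaccard similarity: |intersection| / |union| of two result sets."""
    if not results_a and not results_b:
        return 1.0
    return len(results_a & results_b) / len(results_a | results_b)

# Hypothetical top-5 image identifiers returned for the same query in two regions.
results_region_1 = {"img_01", "img_02", "img_03", "img_07", "img_09"}
results_region_2 = {"img_02", "img_05", "img_07", "img_11", "img_12"}

print(f"Overlap between regions: {jaccard(results_region_1, results_region_2):.0%}")
```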
Styliani Kleanthous, Jahna Otterbacher, Jo Bates, Fausto Giunchiglia, Frank Hopfgartner, Tsvi Kuflik, Kalia Orphanou, Monica L. Paramita, Michael Rovatsos, Avital Shulner-Tal. Report on the CyCAT winter school on fairness, accountability, transparency and ethics (FATE) in AI. Inproceedings, ACM SIGIR Forum 55(1), pp. 1–9, Association for Computing Machinery, 2021. ISSN: 0163-5840. DOI: 10.1145/3476415.3476419. Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency.
Abstract: The first FATE Winter School, organized by the Cyprus Center for Algorithmic Transparency (CyCAT), provided a forum for both students and senior researchers to examine the complex topic of Fairness, Accountability, Transparency and Ethics (FATE). Through a program that included two invited keynotes, as well as sessions led by CyCAT partners across Europe and Israel, participants were exposed to a range of approaches to FATE in a holistic manner. During the Winter School, the team also organized a hands-on activity to evaluate a tool-based intervention in which participants interacted with eight prototypes of bias-aware search engines. Finally, participants were invited to join one of four collaborative projects coordinated by CyCAT, thus furthering common understanding and interdisciplinary collaboration on this emerging topic.
Maria Kasinidou, Styliani Kleanthous, Jahna Otterbacher. ‘Expected Most of the Results, but Some Others... Surprised Me’: Personality Inference in Image Tagging Services. Inproceedings, in: Fogli, Daniela; Tetteroo, Daniel; Barricelli, Barbara Rita; Borsci, Simone; Markopoulos, Panos; Papadopoulos, George A. (Eds.): International Symposium on End User Development (IS-EUD '21), pp. 187–195, Springer, 2021. ISBN: 978-3-030-79840-6. Link: https://link.springer.com/chapter/10.1007/978-3-030-79840-6_12. Tags: Algorithmic Bias.
Abstract: Image tagging APIs, offered as Cognitive Services in the movement to democratize AI, have become popular in applications that need to provide a personalized user experience. Developers can easily incorporate these services into their applications; however, little is known concerning their behavior under specific circumstances. We consider how two such services behave when predicting elements of the Big-Five personality traits from users’ profile images. We found that personality traits are not equally represented in the APIs’ output tags, with tags focusing mostly on Extraversion. The inaccurate personality prediction and the lack of vocabulary for the equal representation of all personality traits could result in unreliable implicit user modeling, resulting in sub-optimal – or even undesirable – user experience in the application.
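As an illustration of the kind of vocabulary analysis described above, the sketch below counts how many of a tagging service's output tags map onto each Big-Five trait. The trait lexicon and the tag list are invented for illustration; they are not the paper's instrument or data.

```python
from collections import Counter

# Toy Big-Five lexicon (illustrative assumption, not the paper's instrument).
TRAIT_LEXICON = {
    "Extraversion":      {"smile", "party", "fun", "social", "confident"},
    "Agreeableness":     {"friendly", "kind", "warm"},
    "Conscientiousness": {"tidy", "organized", "formal"},
    "Neuroticism":       {"worried", "anxious", "sad"},
    "Openness":          {"art", "creative", "travel"},
}

def trait_coverage(tags):
    """Count how many returned tags fall under each Big-Five trait."""
    counts = Counter({trait: 0 for trait in TRAIT_LEXICON})
    for tag in tags:
        for trait, vocab in TRAIT_LEXICON.items():
            if tag.lower() in vocab:
                counts[trait] += 1
    return counts

# Hypothetical tags returned by an image tagging API for one profile image.
api_tags = ["smile", "fun", "portrait", "confident", "outdoor"]
print(trait_coverage(api_tags))  # Extraversion dominates in this made-up example
```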
Evgenia Christoforou, Pınar Barlas, Jahna Otterbacher. It’s About Time: A View of Crowdsourced Data Before and During the Pandemic. Proceedings, CHI '21, Association for Computing Machinery, 2021. ISBN: 9781450380966. DOI: 10.1145/3411764.3445317. Link: https://dl.acm.org/doi/abs/10.1145/3411764.3445317. Tags: Algorithmic Bias.
Abstract: Data attained through crowdsourcing have an essential role in the development of computer vision algorithms. Crowdsourced data might include reporting biases, since crowdworkers usually describe what is “worth saying” in addition to images’ content. We explore how the unprecedented events of 2020, including the unrest surrounding racial discrimination and the COVID-19 pandemic, might be reflected in responses to an open-ended annotation task on people images, originally executed in 2018 and replicated in 2020. Analyzing themes of Identity and Health conveyed in workers’ tags, we find evidence that supports the potential for temporal sensitivity in crowdsourced data. The 2020 data exhibit more race-marking of images depicting non-Whites, as well as an increase in tags describing Weight. We relate our findings to the emerging research on crowdworkers’ moods. Furthermore, we discuss the implications of (and suggestions for) designing tasks on proprietary platforms, having demonstrated the possibility for additional, unexpected variation in crowdsourced data due to significant events.
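A minimal sketch of the kind of temporal comparison described above: testing whether the proportion of tags falling under a given theme (e.g. race-marking) differs between the 2018 and 2020 collections. The counts are invented, and scipy's chi-square test stands in for whatever analysis the paper actually used.

```python
# Illustrative only: invented counts, not the paper's data or exact test.
from scipy.stats import chi2_contingency

# Rows: annotation round; columns: [tags with race-marking, tags without].
contingency = [
    [120, 880],   # hypothetical 2018 round
    [210, 790],   # hypothetical 2020 replication
]

chi2, p_value, dof, _ = chi2_contingency(contingency)
rate_2018 = contingency[0][0] / sum(contingency[0])
rate_2020 = contingency[1][0] / sum(contingency[1])
print(f"race-marking rate: 2018={rate_2018:.1%}, 2020={rate_2020:.1%}, p={p_value:.3g}")
```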
Kalia Orphanou, Jahna Otterbacher, Styliani Kleanthous, Khuyagbaatar Batsuren, Fausto Giunchiglia, Veronika Bogina, Avital Shulner Tal, Tsvi Kuflik. Mitigating Bias in Algorithmic Systems: A Fish-Eye View of Problems and Solutions Across Domains. Journal Article, arXiv preprint arXiv:2103.16953, 2021. Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency.
Abstract: Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences. Given the complexity of the problem and the involvement of multiple stakeholders, including developers, end-users and third parties, there is a need to understand the landscape of the sources of bias and the solutions being proposed to address them. This survey provides a 'fish-eye view', examining approaches across four areas of research. The literature describes three steps toward a comprehensive treatment: bias detection, fairness management and explainability management, and underscores the need to work from within the system as well as from the perspective of stakeholders in the broader context.
Maria Kasinidou, Styliani Kleanthous, Pınar Barlas, Jahna Otterbacher. I agree with the decision, but they didn't deserve this: Future Developers' Perception of Fairness in Algorithmic Decisions. Proceedings, FAccT '21, Association for Computing Machinery, 2021. ISBN: 9781450383097. DOI: 10.1145/3442188.3445931. Link: https://dl.acm.org/doi/abs/10.1145/3442188.3445931. Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency.
Abstract: While professionals are increasingly relying on algorithmic systems for making decisions, on some occasions algorithmic decisions may be perceived as biased or unjust. Prior work has looked into the perception of algorithmic decision-making from the user's point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making. Participants (N=99) were asked to rate their agreement with statements regarding six constructs related to facets of fairness and justice in algorithmic decision-making in three separate scenarios. Two of the three scenarios were independent of each other, while the third scenario presented three different outcomes of the same algorithmic system, demonstrating perception changes triggered by different outputs. Quantitative analysis indicates that (a) 'agreeing' with a decision does not mean the person 'deserves the outcome', (b) perceiving the factors used in the decision-making as 'appropriate' does not make the decision of the system 'fair', and (c) perceiving a system's decision as 'not fair' affects the participants' 'trust' in the system. In addition, participants found proportional distribution of benefits fairer than other approaches. Qualitative analysis provides further insights into the information that participants find essential for judging and understanding an algorithmic decision-making system's fairness. Finally, the level of academic education plays a role in the perception of fairness and justice in algorithmic decision-making.
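To illustrate the kind of quantitative aggregation described above (not the study's actual code or data), the sketch below summarizes hypothetical Likert ratings by construct and scenario with pandas; the scenario and construct names are placeholders.

```python
# Illustrative sketch with invented ratings; not the study's data or analysis code.
import pandas as pd

ratings = pd.DataFrame([
    # participant, scenario, construct, agreement on a 1-5 Likert scale
    {"pid": 1, "scenario": "loan",   "construct": "fairness", "rating": 2},
    {"pid": 1, "scenario": "loan",   "construct": "trust",    "rating": 3},
    {"pid": 2, "scenario": "loan",   "construct": "fairness", "rating": 4},
    {"pid": 2, "scenario": "hiring", "construct": "trust",    "rating": 2},
    {"pid": 3, "scenario": "hiring", "construct": "fairness", "rating": 1},
])

# Mean agreement per construct within each scenario.
summary = ratings.groupby(["scenario", "construct"])["rating"].agg(["mean", "count"])
print(summary)
```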
Pınar Barlas, Kyriakos Kyriakou, Olivia Guest, Styliani Kleanthous, Jahna Otterbacher. To "See" is to Stereotype: Image Tagging Algorithms, Gender Recognition, and the Accuracy-Fairness Trade-off. Proceedings, CSCW3 '20, 2020. DOI: 10.1145/3432931. Link: https://dl.acm.org/doi/abs/10.1145/3432931. Tags: Algorithmic Bias, Artificial Intelligence.
Abstract: Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code publicly available. Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy-fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword.
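Controlled stimuli of the kind described above (person images imposed on stereotyped backgrounds) can be approximated with a few lines of Pillow; the file names below are placeholders, and the paper's released code, not this sketch, is the authoritative procedure.

```python
# Illustrative compositing of a person cutout onto a background with Pillow.
# File names are placeholders; see the paper's released code for the actual pipeline.
from PIL import Image

background = Image.open("background_kitchen.jpg").convert("RGBA")
person = Image.open("person_cutout.png").convert("RGBA")  # transparent background

# Scale the person to roughly half the background height, keeping aspect ratio.
scale = (background.height // 2) / person.height
person = person.resize((int(person.width * scale), int(person.height * scale)))

# Paste the person centred horizontally, aligned to the bottom of the scene.
x = (background.width - person.width) // 2
y = background.height - person.height
background.paste(person, (x, y), mask=person)  # alpha channel used as paste mask

background.convert("RGB").save("stimulus_kitchen.jpg")
```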
Jo Bates, Paul Clough, Robert Jaeschke, Jahna Otterbacher, Kris Unsworth. Social and cultural biases in information, algorithms, and systems. Journal Article, Online Information Review, 2020. Link: https://eprints.whiterose.ac.uk/158750/. Tags: Algorithmic Bias.
Jahna Otterbacher. Fairness in Algorithmic and Crowd-Generated Descriptions of People Images. Inproceedings, Proceedings of the 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia, 2019. ISBN: 9781450369152. DOI: 10.1145/3347447.3352693. Link: https://dl.acm.org/doi/abs/10.1145/3347447.3352693. Tags: Algorithmic Bias, Algorithmic Fairness.
Abstract: Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media. With the rise of the “Algorithm Economy,” image analysis algorithms are increasingly being commercialized as Cognitive Services. This practice is proving to be a boon to the development of applications where user modeling, personalization and adaptation are required. From e-stores, where image recognition is used to curate a “personal style” for a given shopper based on previously viewed items, to dating apps, which can now act as “visual matchmakers,” the technology has gained increasing influence in our digital interactions and experiences. However, proprietary image tagging services are black boxes and there are numerous social and ethical issues surrounding their use in contexts where people can be harmed. In this talk, I will discuss recent work in analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for their gender and racial biases when tagging images depicting people. I will present our techniques for discrimination discovery in this domain [2], as well as our work on understanding user perceptions of fairness [1]. Finally, I will explore the sources of such biases, by comparing human versus machine descriptions of the same people images [3].
Maria Matsangidou, Jahna Otterbacher. What Is Beautiful Continues to Be Good. Inproceedings, IFIP Conference on Human-Computer Interaction, 2019. Link: https://link.springer.com/chapter/10.1007/978-3-030-29390-1_14. Tags: Algorithmic Bias.
Abstract: Image recognition algorithms that automatically tag or moderate content are crucial in many applications but are increasingly opaque. Given transparency concerns, we focus on understanding how algorithms tag people images and their inferences on attractiveness. Theoretically, attractiveness has an evolutionary basis, guiding mating behaviors, although it also drives social behaviors. We test image-tagging APIs as to whether they encode biases surrounding attractiveness. We use the Chicago Face Database, containing images of diverse individuals, along with subjective norming data and objective facial measurements. The algorithms encode biases surrounding attractiveness, perpetuating the stereotype that “what is beautiful is good.” Furthermore, women are often misinterpreted as men. We discuss the algorithms’ reductionist nature, and their potential to infringe on users’ autonomy and well-being, as well as the ethical and legal considerations for developers. Future services should monitor algorithms’ behaviors given their prevalence in the information ecosystem and influence on media.
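One way to probe for the kind of bias described above is to correlate a face's normed attractiveness rating with how positively a tagging service describes it. The sketch below does this with invented numbers and numpy's correlation coefficient; it is not the paper's analysis.

```python
# Illustrative sketch with invented values; not the paper's data or method.
import numpy as np

# Normed attractiveness ratings for five hypothetical faces (e.g. on a 1-7 scale).
attractiveness = np.array([2.1, 3.4, 4.0, 5.2, 6.3])

# Share of an API's returned tags that carry positive valence, for the same faces.
positive_tag_share = np.array([0.10, 0.15, 0.30, 0.45, 0.55])

r = np.corrcoef(attractiveness, positive_tag_share)[0, 1]
print(f"Pearson r between attractiveness and positive tagging: {r:.2f}")
```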
Klimis S. Ntalianis, Andreas Kener, Jahna Otterbacher. Feelings’ Rating and Detection of Similar Locations, Based on Volunteered Crowdsensing and Crowdsourcing. Journal Article, IEEE Access, 2019. DOI: 10.1109/ACCESS.2019.2926812. Link: https://ieeexplore.ieee.org/document/8755832. Tags: Algorithmic Bias, crowdsourcing.
Abstract: In this paper, an innovative geographical locations' rating scheme is presented, based on crowdsensing and crowdsourcing. People sense their surrounding space and submit evaluations through (a) a smartphone application and (b) a prototype website, both implemented with state-of-the-art technologies. Evaluations are pairs of feeling/state and strength, where six different feelings/states and five strength levels are considered. In addition, the detection of similar locations is proposed by maximizing a cross-correlation criterion through a genetic algorithm approach. Technical details of the overall system are provided so that interested readers can replicate its components. The experimental results on real-world data, which also include comparisons with Google Maps Rating and Tripadvisor, illustrate the merits and limitations of each technology. Finally, the paper concludes by uncovering and discussing interesting issues for future research.
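The similar-location criterion is described above only at a high level (a cross-correlation maximized by a genetic algorithm). As a rough illustration under that assumption, the sketch below compares two locations' aggregated feeling-strength profiles with a normalized correlation; the feeling names and values are invented.

```python
# Rough illustration only: feeling names, values and the similarity measure are
# assumptions; the paper optimizes its own cross-correlation criterion with a GA.
import numpy as np

FEELINGS = ["joy", "calm", "surprise", "fear", "sadness", "anger"]  # six states (names assumed)

def feeling_profile(evaluations):
    """Average strength (1-5) reported for each feeling at one location."""
    totals = np.zeros(len(FEELINGS))
    counts = np.zeros(len(FEELINGS))
    for feeling, strength in evaluations:
        i = FEELINGS.index(feeling)
        totals[i] += strength
        counts[i] += 1
    return totals / np.maximum(counts, 1)

loc_a = feeling_profile([("joy", 5), ("calm", 4), ("joy", 4), ("fear", 1)])
loc_b = feeling_profile([("joy", 4), ("calm", 5), ("surprise", 2)])

# Normalized cross-correlation (cosine of the two profiles) as a similarity score.
similarity = np.dot(loc_a, loc_b) / (np.linalg.norm(loc_a) * np.linalg.norm(loc_b))
print(f"Location similarity: {similarity:.2f}")
```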
Avital Shulner Tal, Khuyagbaatar Batsuren, Veronika Bogina, Fausto Giunchiglia, Alan Hartman, Styliani Kleanthous-Loizou, Tsvi Kuflik, Jahna Otterbacher. "End to End" - Towards a Framework for Reducing Biases and Promoting Transparency of Algorithmic Systems. Workshop, 14th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP 2019), ACM, 2019. Link: https://www.cycat.io/wp-content/uploads/2019/07/1570543680.pdf. Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency.
Abstract: Algorithms play an increasing role in our everyday lives. Recently, the harmful potential of biased algorithms has been recognized by researchers and practitioners, and there is a growing interest in ensuring the fairness and transparency of algorithmic systems. However, so far there is no agreed-upon solution, and not even an agreed terminology. The proposed research defines the problem space, the solution space and a prototype of a comprehensive framework for detecting and reducing biases in algorithmic systems.
Jahna Otterbacher, Ioannis Katakis, Pantelis Agathangelou. Linguistic Bias in Crowdsourced Biographies: A Cross-lingual Examination. Book Chapter, 2019. Link: https://www.worldscientific.com/doi/abs/10.1142/9789813274884_0012. Tags: Algorithmic Bias.
Abstract: Biographies make up a significant portion of Wikipedia entries and are a source of information and inspiration for the public. We examine a threat to their objectivity: linguistic biases, which are pervasive in human communication. Linguistic bias, the systematic asymmetry in the language used to describe people as a function of their social groups, plays a role in the perpetuation of stereotypes. Theory predicts that we describe people who are expected – because they are members of our own in-groups or are stereotype-congruent – with more abstract, subjective language, as compared to others. Abstract language has the power to sway our impressions of others as it implies stability over time. Extending our monolingual work, we consider biographies of intellectuals at the English- and Greek-language Wikipedias. We use our recently introduced sentiment analysis tool, DidaxTo, which extracts domain-specific opinion words to build lexicons of subjective words in each language and for each gender, and compare the extent to which abstract language is used. Contrary to expectation, we find evidence of gender-based linguistic bias, with women being described more abstractly as compared to men. However, this is limited to English-language biographies. We discuss the implications of using DidaxTo to monitor linguistic bias in texts produced via crowdsourcing.
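As an illustration of measuring linguistic abstraction of the kind discussed above (not DidaxTo itself, which is the authors' tool), the sketch below scores a description by the share of its matched words that come from a toy "abstract" (adjective/state) lexicon versus a "concrete" (action verb) lexicon; both lexicons are invented placeholders.

```python
# Toy abstraction score; the lexicons are invented placeholders, not DidaxTo output.
ABSTRACT_WORDS = {"brilliant", "influential", "kind", "creative", "ambitious"}   # adjectives / states
CONCRETE_WORDS = {"wrote", "founded", "taught", "published", "discovered"}       # action verbs

def abstraction_score(text: str) -> float:
    """Share of matched words that are abstract; 1.0 = fully abstract language."""
    tokens = [t.strip(".,;").lower() for t in text.split()]
    abstract = sum(t in ABSTRACT_WORDS for t in tokens)
    concrete = sum(t in CONCRETE_WORDS for t in tokens)
    matched = abstract + concrete
    return abstract / matched if matched else 0.0

bio_a = "She was a brilliant and influential scholar."
bio_b = "He wrote three monographs and founded a journal."
print(abstraction_score(bio_a), abstraction_score(bio_b))  # higher score = more abstract phrasing
```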