Research
Publications
Pınar Barlas, Maximilian Krahn, Styliani Kleanthous, Kyriakos Kyriakou, Jahna Otterbacher. Shifting our Awareness, Taking Back Tags: Temporal Changes in Computer Vision Services' Social Behaviors. Inproceedings, International AAAI Conference on Web and Social Media (ICWSM 2022), Forthcoming. Tags: Algorithmic Bias.
Paul D Clough, Jahna Otterbacher. Democratizing AI: From Theory to Practice. Chapter in The Handbook of Research on Artificial Intelligence, Innovation, and Entrepreneurship, Edward Elgar Publishing, London, Forthcoming. Tags: Artificial Intelligence.
Abstract: We are witnessing a movement towards Democratizing AI, with a wide variety of tools, platforms and data sources becoming accessible to more people. This movement is said to be fueling innovation, extending the capabilities of individuals and organizations by making the creation and application of AI easier. However, beyond the hype, there is a need to understand what this trend means for various stakeholders. Through the lens of socio-political democracy, this chapter examines the democratization of AI. We find that the present state of the “AI Democracy” maps onto only one of three elements of a democracy. Current efforts focus primarily on providing people with the tools and technical infrastructure needed to participate in AI, but not on protecting their freedoms and access to social benefits, which are the other core elements of democracy. We discuss the possibilities for realizing a broader AI democracy, along with the anticipated challenges.
Judy Kay, Tsvi Kuflik, Michael Rovatsos. Toward a Transparency by Design Framework (day 4). Journal Article, Dagstuhl Reports, Vol. 11, Issue 5, ISSN 2192-5283, p. 18, 2021. Link: https://drops.dagstuhl.de/opus/volltexte/2021/15566/pdf/dagrep-v011-i005-complete.pdf#page=20. Tags: Algorithmic Transparency.
Abstract: During the fourth and final day, the results of the first three days were discussed and summarised into a joint document that is intended to form a basis for a joint paper. The structure of the document followed the results of the discussion of the topics and the order of the discussion in the first three days, organised into the following chapters: 1. Why transparency?
Kalia Orphanou, Evgenia Christoforou, Jahna Otterbacher, Monica Lestari Paramita, Frank Hopfgartner. Preserving the memory of the first wave of COVID-19 pandemic: Crowdsourcing a collection of image search queries. Inproceedings, 2021. Link: https://eprints.whiterose.ac.uk/180974/. Tags: Artificial Intelligence.
Abstract: The unprecedented events of the COVID-19 pandemic have generated an enormous amount of information and populated the Web with new content relevant to the pandemic and its implications. Visual information such as images has been shown to be crucial in the context of scientific communication. Images are often interpreted as being closer to the truth as compared to other forms of communication, because of their physical representation of an event such as the COVID-19 pandemic. In this work, we ask crowdworkers across four regions of Europe that were severely affected by the first wave of the pandemic to provide us with image search queries related to the COVID-19 pandemic. The goal of this study is to understand the similarities/differences of the aspects that are most important to users across different locations regarding the first wave of the COVID-19 pandemic. Through a content analysis of their queries, we discovered five common themes of concern to all, although the frequency of use differed across regions.
Styliani Kleanthous, Maria Kasinidou, Pınar Barlas, Jahna Otterbacher. Perception of fairness in algorithmic decisions: Future developers' perspective. Journal Article, Patterns, 2021. Link: https://www.sciencedirect.com/science/article/pii/S2666389921002476. Tags: Accountability, Algorithmic Fairness, Algorithmic Transparency, Artificial Intelligence.
Abstract: Fairness, accountability, transparency, and ethics (FATE) in algorithmic systems is gaining a lot of attention lately. With the continuous advancement of machine learning and artificial intelligence, research and tech companies are coming across incidents where algorithmic systems are making non-objective decisions that may reproduce and/or amplify social stereotypes and inequalities. There is a great effort by the research community on developing frameworks of fairness and algorithmic models to alleviate biases; however, we first need to understand how people perceive the complex construct of algorithmic fairness. In this work, we investigate how young and future developers perceive these concepts. Our results can inform future research on (1) understanding perceptions of algorithmic FATE, (2) highlighting the needs for systematic training and education on FATE, and (3) raising awareness among young developers on the potential impact that the systems they are developing have in society.
Kyriakos Kyriakou, Pınar Barlas, Styliani Kleanthous, Evgenia Christoforou, Jahna Otterbacher. Crowdsourcing Human Oversight on Image Tagging Algorithms: An initial study of image diversity. Inproceedings, The Ninth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2021), 2021. Link: https://www.humancomputation.com/assets/wips_demos/HCOMP_2021_paper_104.pdf. Tags: Crowdsourcing.
Abstract: Various stakeholders have called for human oversight of algorithmic processes, as a means to mitigate the possibility for automated discrimination and other social harms. This is even more crucial in light of the democratization of AI, where data and algorithms, such as Cognitive Services, are deployed into various applications and socio-cultural contexts. Inspired by previous work proposing human-in-the-loop governance mechanisms, we run a feasibility study involving image tagging services. Specifically, we ask whether micro-task crowdsourcing can be an effective means for collecting a diverse pool of data for evaluating fairness in a hypothetical scenario of analyzing professional profile photos in a later phase. In this work-in-progress paper, we present our proposed oversight approach and framework for analyzing the diversity of the images provided. Given the subjectivity of fairness judgements, we first aimed to recruit a diverse crowd from three distinct regions. This study lays the groundwork for expanding the approach, to offer developers a means to evaluate Cognitive Services before and/or during deployment.
Monica Lestari Paramita, Kalia Orphanou, Evgenia Christoforou, Jahna Otterbacher, Frank Hopfgartner. Do you see what I see? Images of the COVID-19 pandemic through the lens of Google. Journal Article, Information Processing & Management, 2021. Link: https://www.sciencedirect.com/science/article/pii/S0306457321001424. Tags: Algorithmic Bias, Artificial Intelligence.
Abstract: During times of crisis, information access is crucial. Given the opaque processes behind modern search engines, it is important to understand the extent to which the “picture” of the Covid-19 pandemic accessed by users differs. We explore variations in what users “see” concerning the pandemic through Google image search, using a two-step approach. First, we crowdsource a search task to users in four regions of Europe, asking them to help us create a photo documentary of Covid-19 by providing image search queries. Analysing the queries, we find five common themes describing information needs. Next, we study three sources of variation – users’ information needs, their geo-locations and query languages – and analyse their influences on the similarity of results. We find that users see the pandemic differently depending on where they live, as evidenced by the 46% similarity across results. When users expressed a given query in different languages, there was no overlap for most of the results. Our analysis suggests that localisation plays a major role in the (dis)similarity of results, and provides evidence of the diverse “picture” of the pandemic seen through Google.
Styliani Kleanthous, Jahna Otterbacher, Jo Bates, Fausto Giunchiglia, Frank Hopfgartner, Tsvi Kuflik, Kalia Orphanou, Monica L Paramita, Michael Rovatsos, Avital Shulner-Tal. Report on the CyCAT winter school on fairness, accountability, transparency and ethics (FATE) in AI. Inproceedings, ACM SIGIR Forum, Vol. 55, No. 1, pp. 1–9, Association for Computing Machinery, 2021, ISSN: 0163-5840. Link: https://doi.org/10.1145/3476415.3476419. Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency.
Abstract: The first FATE Winter School, organized by the Cyprus Center for Algorithmic Transparency (CyCAT), provided a forum for both students as well as senior researchers to examine the complex topic of Fairness, Accountability, Transparency and Ethics (FATE). Through a program that included two invited keynotes, as well as sessions led by CyCAT partners across Europe and Israel, participants were exposed to a range of approaches on FATE, in a holistic manner. During the Winter School, the team also organized a hands-on activity to evaluate a tool-based intervention where participants interacted with eight prototypes of bias-aware search engines. Finally, participants were invited to join one of four collaborative projects coordinated by CyCAT, thus furthering common understanding and interdisciplinary collaboration on this emerging topic.
Maria Kasinidou, Styliani Kleanthous, Jahna Otterbacher. ‘Expected Most of the Results, but Some Others... Surprised Me’: Personality Inference in Image Tagging Services. Inproceedings, in Fogli, Daniela; Tetteroo, Daniel; Barricelli, Barbara Rita; Borsci, Simone; Markopoulos, Panos; Papadopoulos, George A. (Eds.): International Symposium on End User Development (IS-EUD '21), pp. 187–195, Springer, 2021, ISBN: 978-3-030-79840-6. Link: https://link.springer.com/chapter/10.1007/978-3-030-79840-6_12. Tags: Algorithmic Bias.
Abstract: Image tagging APIs, offered as Cognitive Services in the movement to democratize AI, have become popular in applications that need to provide a personalized user experience. Developers can easily incorporate these services into their applications; however, little is known concerning their behavior under specific circumstances. We consider how two such services behave when predicting elements of the Big-Five personality traits from users’ profile images. We found that personality traits are not equally represented in the APIs’ output tags, with tags focusing mostly on Extraversion. The inaccurate personality prediction and the lack of vocabulary for the equal representation of all personality traits, could result in unreliable implicit user modeling, resulting in sub-optimal – or even undesirable – user experience in the application.
Maria Kasinidou, Styliani Kleanthous, Kalia Orphanou, Jahna Otterbacher. Educating Computer Science Students about Algorithmic Fairness, Accountability, Transparency and Ethics. Proceedings, ITiCSE '21, Association for Computing Machinery, 2021, ISBN: 9781450382144. Link: https://dl.acm.org/doi/abs/10.1145/3430665.3456311. Tags: Algorithmic Fairness, Algorithmic Transparency.
Abstract: Professionals are increasingly relying on algorithmic systems for decision making; however, algorithmic decisions are occasionally perceived as biased or unjust. Prior work has provided evidence that education can make a difference in the perception of young developers on algorithmic fairness. In this paper, we investigate computer science students' perception of FATE in algorithmic decision-making and whether their views on FATE can be changed by attending a seminar on FATE topics. Participants attended a seminar on FATE in algorithmic decision-making and were asked to respond to two online questionnaires to measure their pre- and post-seminar perception of FATE. Results show that a short seminar can make a difference in the understanding, perception and attitude of the students towards FATE in algorithmic decision support. CS curricula need to be updated and include FATE topics if we want algorithmic decision support systems to be just for all.
Fausto Giunchiglia, Styliani Kleanthous, Jahna Otterbacher, Tim Draws. Transparency Paths - Documenting the Diversity of User Perceptions. Inproceedings, Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '21), Association for Computing Machinery, 2021, ISBN: 9781450383677. Link: https://dl.acm.org/doi/abs/10.1145/3450614.3463292. Tags: Algorithmic Transparency, Diversity.
Abstract: We are living in an era of global digital platforms, eco-systems of algorithmic processes that serve users worldwide. However, the increasing exposure to diversity online – of information and users – has led to important considerations of bias. A given platform, such as the Google search engine, may demonstrate behaviors that deviate from what users expect, or what they consider fair, relative to their own context and experiences. In this exploratory work, we put forward the notion of transparency paths, a process by which we document our position, choices, and perceptions when developing and/or using algorithmic platforms. We conducted a self-reflection exercise with seven researchers, who collected and analyzed two sets of images; one depicting an everyday activity, “washing hands,” and a second depicting the concept of “home.” Participants had to document their process and choices, and in the end, compare their work to others. Finally, participants were asked to reflect on the definitions of bias and diversity. The exercise revealed the range of perspectives and approaches taken, underscoring the need for future work that will refine the transparency paths methodology.
Bamshad Mobasher, Styliani Kleanthous, Bettina Berendt, Jahna Otterbacher, Tsvi Kuflik, Avital Shulner Tal. FairUMAP 2021: The 4th Workshop on Fairness in User Modeling, Adaptation and Personalization. Workshop, Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, 2021, ISBN: 978-1-4503-8367-7. Link: https://dl.acm.org/doi/fullHtml/10.1145/3450614.3461454. Tags: Algorithmic Fairness.
Abstract: User modeling and personalized recommendations, often enabled by data-rich machine learning, are key enabling technologies that allow intelligent systems to learn from users, adapting their output to users’ needs and preferences. These techniques have become an essential part of systems that help users find relevant content in today’s highly complex, information-rich environments. However, there has been a growing recognition that they raise novel ethical, policy, and legal challenges. It has become apparent that a single-minded focus on the user preferences has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, openness to diversity, and other social welfare considerations, are not captured by typical metrics, based on which data-driven personalized models are optimized.
Pınar Barlas, Kyriakos Kyriakou, Styliani Kleanthous, Jahna Otterbacher. Person, Human, Neither: The Dehumanization Potential of Automated Image Tagging. Proceedings, AIES '21, 2021, ISBN: 9781450384735. Link: https://dl.acm.org/doi/abs/10.1145/3461702.3462567. Tags: Artificial Intelligence.
Abstract: Following the literature on dehumanization via technology, we audit six proprietary image tagging algorithms (ITAs) for their potential to perpetuate dehumanization. We examine the ITAs' outputs on a controlled dataset of images depicting a diverse group of people for tags that indicate the presence of a human in the image. Through an analysis of the (mis)use of these tags, we find that there are some individuals whose 'humanness' is not recognized by an ITA, and that these individuals are often from marginalized social groups. Finally, we compare these findings with the use of the 'face' tag, which can be used for surveillance, revealing that people's faces are often recognized by an ITA even when their 'humanness' is not. Overall, we highlight the subtle ways in which ITAs may inflict widespread, disparate harm, and emphasize the importance of considering the social context of the resulting application.
Evgenia Christoforou, Pınar Barlas, Jahna Otterbacher. It’s About Time: A View of Crowdsourced Data Before and During the Pandemic. Proceedings, CHI '21, Association for Computing Machinery, 2021, ISBN: 9781450380966. Link: https://dl.acm.org/doi/abs/10.1145/3411764.3445317. Tags: Algorithmic Bias.
Abstract: Data attained through crowdsourcing have an essential role in the development of computer vision algorithms. Crowdsourced data might include reporting biases, since crowdworkers usually describe what is “worth saying” in addition to images’ content. We explore how the unprecedented events of 2020, including the unrest surrounding racial discrimination, and the COVID-19 pandemic, might be reflected in responses to an open-ended annotation task on people images, originally executed in 2018 and replicated in 2020. Analyzing themes of Identity and Health conveyed in workers’ tags, we find evidence that supports the potential for temporal sensitivity in crowdsourced data. The 2020 data exhibit more race-marking of images depicting non-Whites, as well as an increase in tags describing Weight. We relate our findings to the emerging research on crowdworkers’ moods. Furthermore, we discuss the implications of (and suggestions for) designing tasks on proprietary platforms, having demonstrated the possibility for additional, unexpected variation in crowdsourced data due to significant events.
Veronika Bogina, Alan Hartman, Tsvi Kuflik, Avital Shulner-Tal. Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics. Journal Article, International Journal of Artificial Intelligence in Education, 2021. Link: https://link.springer.com/article/10.1007/s40593-021-00248-0. Tags: Accountability, Algorithmic Fairness, Algorithmic Transparency, Education.
Abstract: This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources; we also survey the existing materials and past experiences in such education, noting the scarcity of suitable material on aspects of fairness in particular. We use an example of a college admission platform to illustrate our ideas. We conclude with recommendations for further work in the area and report on the first steps taken towards achieving this goal in the framework of an academic graduate seminar course, a graduate summer school, an embedded lecture in a software engineering course, and a workshop for high school teachers.
Alison Marie Smith-Renner, Styliani Kleanthous Loizou, Jonathan Dodge, Casey Dugan, Min Kyung Lee, Brian Y Lim, Tsvi Kuflik, Advait Sarkar, Avital Shulner-Tal, Simone Stumpf. TExSS: Transparency and Explanations in Smart Systems. Workshop, 26th International Conference on Intelligent User Interfaces, 2021, ISBN: 9781450380188. Link: https://dl.acm.org/doi/abs/10.1145/3397482.3450705. Tags: Algorithmic Transparency, Explainability.
Abstract: Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system’s inter-workings, such as awareness, data provenance, and validation.
Fausto Giunchiglia, Jahna Otterbacher, Styliani Kleanthous, Khuyagbaatar Batsuren, Veronika Bogina, Tsvi Kuflik, Avital Shulner Tal. Towards Algorithmic Transparency: A Diversity Perspective. Journal Article, arXiv preprint arXiv:2104.05658, 2021. Link: https://arxiv.org/abs/2104.05658. Tags: Algorithmic Transparency, Diversity.
Abstract: As the role of algorithmic systems and processes increases in society, so does the risk of bias, which can result in discrimination against individuals and social groups. Research on algorithmic bias has exploded in recent years, highlighting both the problems of bias, and the potential solutions, in terms of algorithmic transparency (AT). Transparency is important for facilitating fairness management as well as explainability in algorithms; however, the concept of diversity, and its relationship to bias and transparency, has been largely left out of the discussion. We reflect on the relationship between diversity and bias, arguing that diversity drives the need for transparency. Using a perspective-taking lens, which takes diversity as a given, we propose a conceptual framework to characterize the problem and solution spaces of AT, to aid its application in algorithmic systems. Example cases from three research domains are described using our framework.
Kalia Orphanou, Jahna Otterbacher, Styliani Kleanthous, Khuyagbaatar Batsuren, Fausto Giunchiglia, Veronika Bogina, Avital Shulner Tal, Tsvi Kuflik. Mitigating Bias in Algorithmic Systems: A Fish-Eye View of Problems and Solutions Across Domains. Journal Article, arXiv preprint arXiv:2103.16953, 2021. Link: https://arxiv.org/abs/2103.16953. Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency.
Abstract: Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences. Given the complexity of the problem and the involvement of multiple stakeholders, including developers, end-users and third-parties, there is a need to understand the landscape of the sources of bias, and the solutions being proposed to address them. This survey provides a 'fish-eye view', examining approaches across four areas of research. The literature describes three steps toward a comprehensive treatment: bias detection, fairness management and explainability management, and underscores the need to work from within the system as well as from the perspective of stakeholders in the broader context.
Maria Kasinidou, Styliani Kleanthous, Pınar Barlas, Jahna Otterbacher. I agree with the decision, but they didn't deserve this: Future Developers' Perception of Fairness in Algorithmic Decisions. Proceedings, FAccT '21, Association for Computing Machinery, 2021, ISBN: 9781450383097. Link: https://dl.acm.org/doi/abs/10.1145/3442188.3445931. Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency.
Abstract: While professionals are increasingly relying on algorithmic systems for making a decision, on some occasions, algorithmic decisions may be perceived as biased or not just. Prior work has looked into the perception of algorithmic decision-making from the user's point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making. Participants (N=99) were asked to rate their agreement with statements regarding six constructs that are related to facets of fairness and justice in algorithmic decision-making in three separate scenarios. Two of the three scenarios were independent of each other, while the third scenario presented three different outcomes of the same algorithmic system, demonstrating perception changes triggered by different outputs. Quantitative analysis indicates that a) 'agreeing' with a decision does not mean the person 'deserves the outcome', b) perceiving the factors used in the decision-making as 'appropriate' does not make the decision of the system 'fair', and c) perceiving a system's decision as 'not fair' affects the participants' 'trust' in the system. In addition, participants found proportional distribution of benefits more fair than other approaches. Qualitative analysis provides further insights into the information the participants find essential to judge and understand an algorithmic decision-making system's fairness. Finally, the level of academic education plays a role in the perception of fairness and justice in algorithmic decision-making.
Kyriakos Kyriakou, Pınar Barlas, Styliani Kleanthous, Jahna Otterbacher. OpenTag: Understanding Human Perceptions of Image Tagging Algorithms. Conference, HCOMP-20, 2020. Link: http://www.cycat.io/hcomp_2020_paper_76-2/. Tags: Artificial Intelligence.
Abstract: Image Tagging Algorithms (ITAs) are extensively used in our information ecosystem, from facilitating the retrieval of images in social platforms to learning about users and their preferences. However, audits performed on ITAs have demonstrated that their behaviors often exhibit social biases, especially when analyzing images depicting people. We present OpenTag, a platform that fuses the auditing process with a crowdsourcing approach. Users can upload an image, which is then analyzed by various ITAs, resulting in multiple sets of descriptive tags. With OpenTag, the user can observe and compare the output of multiple ITAs simultaneously, while researchers can study the manner in which users perceive this output. Finally, using the collected data, further audits can be performed on ITAs.
Evgenia Christoforou, Pınar Barlas, Jahna Otterbacher. Crowdwork as a Snapshot in Time: Image Annotation Tasks during a Pandemic. Conference, HCOMP-20, 2020. Link: http://www.cycat.io/hcomp_2020_paper_79/. Tags: Artificial Intelligence.
Abstract: While crowdsourcing provides a convenient solution for tapping into human intelligence, a concern is the bias inherent in the data collected. Events related to the COVID-19 pandemic had an impact on people globally, and crowdworkers were no exception. Given the evidence concerning mood and stress on work, we explore how temporal events might affect crowdsourced data. We replicated an image annotation task conducted in 2018, in which workers describe people images. We expected 2020 annotations to contain more references to health, as compared to 2018 data. Overall, we find no evidence that health-related tags were used more often in 2020, but instead we find a significant increase in the use of tags related to weight (e.g., fat, chubby, overweight). This result, coupled with the “stay at home” act in effect in 2020, illustrate how crowdwork is impacted by temporal events.
Barlas, Pınar ; Kyriakou, Kyriakos ; Guest, Olivia ; Kleanthous, Styliani ; Otterbacher, Jahna 2020. Abstract | Links | BibTeX | Tags: Algorithmic Bias, Artificial Intelligence @proceedings{Barlas2020b, title = {To "See" is to Stereotype: Image Tagging Algorithms, Gender Recognition, and the Accuracy-Fairness Trade-off}, author = {Barlas, Pınar and Kyriakou, Kyriakos and Guest, Olivia and Kleanthous, Styliani and Otterbacher, Jahna}, url = {https://dl.acm.org/doi/abs/10.1145/3432931}, doi = {10.1145/3432931}, year = {2020}, date = {2020-10-17}, series = {CSCW3 20}, abstract = {Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment, to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images, imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code publicly available. Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy--fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword.}, keywords = {Algorithmic Bias, Artificial Intelligence}, pubstate = {published}, tppubtype = {proceedings} } Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment, to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images, imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code publicly available. 
Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy--fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword. |
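For readers who want a feel for the audit design summarised in the entry above, the sketch below is ours, not the authors' released code: each person image is tagged with and without a stereotyped background, and the returned gender tags are compared across the two conditions. The tag_image function, the tag vocabulary and the file pairing are illustrative assumptions only.

```python
# Minimal sketch of a background-controlled gender-inference audit (illustrative only).
from collections import Counter

GENDER_TAGS = {"man", "male", "boy", "woman", "female", "girl"}  # assumed vocabulary


def tag_image(path):
    """Hypothetical stand-in for a call to the proprietary tagger under audit."""
    raise NotImplementedError("Wire this to the tagging service being audited.")


def gender_tags(tags):
    """Keep only the tags that express a gender inference."""
    return {t.lower() for t in tags} & GENDER_TAGS


def audit(image_pairs):
    """image_pairs: list of (plain_image, composited_image) paths showing the
    same person without and with a gender-stereotyped background."""
    outcome = Counter()
    for plain, composited in image_pairs:
        before = gender_tags(tag_image(plain))
        after = gender_tags(tag_image(composited))
        if before and not after:
            outcome["gender_dropped"] += 1      # inference hindered by background
        elif before != after:
            outcome["gender_changed"] += 1
        else:
            outcome["unchanged"] += 1
    return outcome
```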
Xavier Alameda-Pineda, Miriam Redi, Jahna Otterbacher, Nicu Sebe, Shih-Fu Chang FATE/MM 20: 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in MultiMedia Workshop Proceedings of the 28th ACM International Conference on Multimedia, 2020, ISBN: 9781450379885. Abstract | Links | BibTeX | Tags: Accountability, Algorithmic Fairness, Algorithmic Transparency, Ethics @workshop{Alameda-Pineda2020, title = {FATE/MM 20: 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in MultiMedia}, author = {Xavier Alameda-Pineda, Miriam Redi, Jahna Otterbacher, Nicu Sebe, Shih-Fu Chang}, url = {https://dl.acm.org/doi/abs/10.1145/3394171.3421896}, doi = {10.1145/3394171.3421896}, isbn = {9781450379885}, year = {2020}, date = {2020-10-12}, booktitle = {Proceedings of the 28th ACM International Conference on Multimedia}, abstract = {The series of FAT/FAccT events aims at bringing together researchers and practitioners interested in fairness, accountability, transparency and ethics of computational methods. The FATE/MM workshop focuses on addressing these issues in the Multimedia field. Multimedia computing technologies operate today at an unprecedented scale, with a growing community of scientists interested in multimedia models, tools and applications. Such continued growth has great implications not only for the scientific community, but also for the society as a whole. Typical risks of large-scale computational models include model bias and algorithmic discrimination. These risks become particularly prominent in the multimedia field, which historically has been focusing on user-centered technologies. To ensure a healthy and constructive development of the best multimedia technologies, this workshop offers a space to discuss how to develop ethical, fair, unbiased, representative, and transparent multimedia models, bringing together researchers from different areas to present computational solutions to these issues.}, keywords = {Accountability, Algorithmic Fairness, Algorithmic Transparency, Ethics}, pubstate = {published}, tppubtype = {workshop} } The series of FAT/FAccT events aims at bringing together researchers and practitioners interested in fairness, accountability, transparency and ethics of computational methods. The FATE/MM workshop focuses on addressing these issues in the Multimedia field. Multimedia computing technologies operate today at an unprecedented scale, with a growing community of scientists interested in multimedia models, tools and applications. Such continued growth has great implications not only for the scientific community, but also for the society as a whole. Typical risks of large-scale computational models include model bias and algorithmic discrimination. These risks become particularly prominent in the multimedia field, which historically has been focusing on user-centered technologies. To ensure a healthy and constructive development of the best multimedia technologies, this workshop offers a space to discuss how to develop ethical, fair, unbiased, representative, and transparent multimedia models, bringing together researchers from different areas to present computational solutions to these issues. |
Veronika Bogina Julia Sheidin, Tsvi Kuflik Shlomo Berkovsky Visualizing Program Genres' Temporal-Based Similarity in Linear TV Recommendations Inproceedings Proceedings of the International Conference on Advanced Visual Interfaces, 2020, ISBN: 9781450375351. Abstract | Links | BibTeX | Tags: Recommender Systems @inproceedings{Bogina2020, title = {Visualizing Program Genres' Temporal-Based Similarity in Linear TV Recommendations}, author = {Veronika Bogina, Julia Sheidin, Tsvi Kuflik, Shlomo Berkovsky}, url = {https://dl.acm.org/doi/abs/10.1145/3399715.3399813}, doi = {10.1145/3399715.3399813}, isbn = {9781450375351}, year = {2020}, date = {2020-09-28}, booktitle = {Proceedings of the International Conference on Advanced Visual Interfaces}, journal = {Electronic Commerce Research and Applications}, abstract = {There is an increasing evidence that data visualization is an important and useful tool for quick understanding and filtering of large amounts of data. In this paper, we contribute to this body of work with a study that compares chord and ranked list for presentation of a temporal TV program genre similarity in next-program recommendations. We consider genre similarity based on the similarity of temporal viewing patterns. We discover that chord presentation allows users to see the whole picture and improves their ability to choose items beyond the ranked list of top similar items. We believe that similarity visualization may be useful for the provision of both the recommendations and their explanations to the end users.}, keywords = {Recommender Systems}, pubstate = {published}, tppubtype = {inproceedings} } There is an increasing evidence that data visualization is an important and useful tool for quick understanding and filtering of large amounts of data. In this paper, we contribute to this body of work with a study that compares chord and ranked list for presentation of a temporal TV program genre similarity in next-program recommendations. We consider genre similarity based on the similarity of temporal viewing patterns. We discover that chord presentation allows users to see the whole picture and improves their ability to choose items beyond the ranked list of top similar items. We believe that similarity visualization may be useful for the provision of both the recommendations and their explanations to the end users. |
Louis Nisiotis, Styliani Kleanthous Lessons Learned Using a Virtual World to Support Collaborative Learning in the Classroom Journal Article Journal of Universal Computer Science, 2020. Abstract | Links | BibTeX | Tags: Collaborative Learning, Education, Virtual Environments @article{Nisiotis2020, title = {Lessons Learned Using a Virtual World to Support Collaborative Learning in the Classroom}, author = {Louis Nisiotis, Styliani Kleanthous}, doi = {https://www.jucs.org/jucs_26_8/lessons_learned_using_a/jucs_26_08_0858_0879_nisiotis.pdf}, year = {2020}, date = {2020-08-28}, journal = {Journal of Universal Computer Science}, abstract = {Using technology in education is crucial to support learning, and Virtual Worlds (VWs) are one of the technologies used by many educators to support their teaching objectives. VWs enable students to connect, synchronously interact, and participate in immersive learning activities. Such a VW has been developed at Sheffield Hallam University (UK), and is used to support the teaching of a specific module, as well as for conducting empirical research around the topics of Transactive Memory Systems (TMS) and Student Engagement. TMS is a phenomenon representing the collective awareness of a group's specialisation, coordination, and credibility, and it has been studied with interesting results. This paper presents the lessons learned while using the VW over the past few years at a higher education institution to support collaborative learning within working groups. A review of these empirical findings is presented, together with the results of a follow-up study conducted to further investigate TMS and Student Engagement, as well as students' perceived Motivation to use a VW for learning, and their Learning Outcomes. The findings of this study corroborate and extend previous results, suggesting that a VW is an effective tool to support collaborative learning activities, allowing students to engage in the learning process, motivating them to participate in activities, and contributing to their overall learning experience.}, keywords = {Collaborative Learning, Education, Virtual Environments}, pubstate = {published}, tppubtype = {article} } Using technology in education is crucial to support learning, and Virtual Worlds (VWs) are one of the technologies used by many educators to support their teaching objectives. VWs enable students to connect, synchronously interact, and participate in immersive learning activities. Such a VW has been developed at Sheffield Hallam University (UK), and is used to support the teaching of a specific module, as well as for conducting empirical research around the topics of Transactive Memory Systems (TMS) and Student Engagement. TMS is a phenomenon representing the collective awareness of a group's specialisation, coordination, and credibility, and it has been studied with interesting results. This paper presents the lessons learned while using the VW over the past few years at a higher education institution to support collaborative learning within working groups. A review of these empirical findings is presented, together with the results of a follow-up study conducted to further investigate TMS and Student Engagement, as well as students' perceived Motivation to use a VW for learning, and their Learning Outcomes. 
The findings of this study corroborate and extend previous results, suggesting that a VW is an effective tool to support collaborative learning activities, allowing students to engage in the learning process, motivating them to participate in activities, and contributing to their overall learning experience. |
Bamshad Mobasher Styliani Kleanthous, Bettina Berendt Michael Ekstrand Jahna Otterbacher Avital Shulner Tal FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization Workshop Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, 2020, ISBN: 9781450368612. Abstract | Links | BibTeX | Tags: Adaptation, Algorithmic Fairness, Personalization @workshop{Mobasher2020, title = {FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization}, author = {Bamshad Mobasher, Styliani Kleanthous, Bettina Berendt, Michael Ekstrand, Jahna Otterbacher, Avital Shulner Tal}, url = {https://dl.acm.org/doi/abs/10.1145/3340631.3398671}, doi = {10.1145/3340631.3398671}, isbn = {9781450368612}, year = {2020}, date = {2020-07-14}, booktitle = {Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization}, abstract = {The 3rd FairUMAP workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness and transparency in algorithmic systems on the other hand.}, keywords = {Adaptation, Algorithmic Fairness, Personalization}, pubstate = {published}, tppubtype = {workshop} } The 3rd FairUMAP workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness and transparency in algorithmic systems on the other hand. |
Barlas, Pınar ; Kyriakou, Kyriakos ; Chrysanthou, Antrea ; Kleanthous, Styliani ; Otterbacher, Jahna OPIAS: Over-Personalization in Information Access Systems Inproceedings Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, pp. 103–104, 2020, ISBN: 9781450379502. Links | BibTeX | Tags: Artificial Intelligence @inproceedings{Barlas2020, title = {OPIAS: Over-Personalization in Information Access Systems}, author = {Barlas, Pınar and Kyriakou, Kyriakos and Chrysanthou, Antrea and Kleanthous, Styliani and Otterbacher, Jahna}, url = {https://dl.acm.org/doi/abs/10.1145/3386392.3397607}, doi = {10.1145/3386392.3397607}, isbn = {9781450379502}, year = {2020}, date = {2020-07-12}, booktitle = {Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization}, pages = {103–104}, series = {UMAP '20 Adjunct}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {inproceedings} } |
Kyriakou, Kyriakos ; Kleanthous, Styliani ; Otterbacher, Jahna ; Papadopoulos, George A Emotion-based Stereotypes in Image Analysis Services Inproceedings Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, pp. 252–259, 2020, ISBN: 9781450379502. Abstract | Links | BibTeX | Tags: Artificial Intelligence @inproceedings{Kyriakou2020, title = {Emotion-based Stereotypes in Image Analysis Services}, author = {Kyriakou, Kyriakos and Kleanthous, Styliani and Otterbacher, Jahna and Papadopoulos, George A.}, url = {https://dl.acm.org/doi/abs/10.1145/3386392.3399567}, doi = {10.1145/3386392.3399567}, isbn = {9781450379502}, year = {2020}, date = {2020-07-12}, booktitle = {Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization}, pages = {252–259}, series = {UMAP '20 Adjunct}, abstract = {Vision-based cognitive services (CogS) have become crucial in a wide range of applications, from real-time security and social networks to smartphone applications. Many services focus on analyzing people images. When it comes to facial analysis, these services can be misleading or even inaccurate, raising ethical concerns such as the amplification of social stereotypes. We analyzed popular Image Tagging CogS that infer emotion from a person's face, considering whether they perpetuate racial and gender stereotypes concerning emotion. By comparing both CogS and Human-generated descriptions on a set of controlled images, we highlight the need for transparency and fairness in CogS. In particular, we document evidence that CogS may actually be more likely than crowdworkers to perpetuate the stereotype of the "angry black man" and often attribute black race individuals with "emotions of hostility".}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {inproceedings} } Vision-based cognitive services (CogS) have become crucial in a wide range of applications, from real-time security and social networks to smartphone applications. Many services focus on analyzing people images. When it comes to facial analysis, these services can be misleading or even inaccurate, raising ethical concerns such as the amplification of social stereotypes. We analyzed popular Image Tagging CogS that infer emotion from a person's face, considering whether they perpetuate racial and gender stereotypes concerning emotion. By comparing both CogS and Human-generated descriptions on a set of controlled images, we highlight the need for transparency and fairness in CogS. In particular, we document evidence that CogS may actually be more likely than crowdworkers to perpetuate the stereotype of the "angry black man" and often attribute black race individuals with "emotions of hostility". |
Jo Bates Paul Clough, Robert Jaeschke Jahna Otterbacher Kris Unsworth Social and cultural biases in information, algorithms, and systems Journal Article Online Information Review, 2020. Links | BibTeX | Tags: Algorithmic Bias @article{Bates2020, title = {Social and cultural biases in information, algorithms, and systems}, author = {Jo Bates, Paul Clough, Robert Jaeschke, Jahna Otterbacher, Kris Unsworth}, url = {https://eprints.whiterose.ac.uk/158750/}, year = {2020}, date = {2020-03-19}, journal = {Online Information Review}, keywords = {Algorithmic Bias}, pubstate = {published}, tppubtype = {article} } |
Chrysanthou, Antrea ; Barlas, Pınar ; Kyriakou, Kyriakos ; Kleanthous, Styliani ; Otterbacher, Jahna Bursting the Bubble: Tool for Awareness and Research about Overpersonalization in Information Access Systems Inproceedings Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, pp. 112–113, 2020, ISBN: 9781450375139. Abstract | Links | BibTeX | Tags: Artificial Intelligence @inproceedings{chrysanthou2020bursting, title = {Bursting the Bubble: Tool for Awareness and Research about Overpersonalization in Information Access Systems}, author = {Chrysanthou, Antrea and Barlas, Pınar and Kyriakou, Kyriakos and Kleanthous, Styliani and Otterbacher, Jahna}, url = {https://dl.acm.org/doi/abs/10.1145/3379336.3381863}, doi = {10.1145/3379336.3381863}, isbn = {9781450375139}, year = {2020}, date = {2020-03-17}, booktitle = {Proceedings of the 25th International Conference on Intelligent User Interfaces Companion}, pages = {112–113}, series = {IUI '20}, abstract = {Modern information access systems extensively use personalization, automatically filtering and/or ranking content based on the user profile, to guide users to the most relevant material. However, this can also lead to unwanted effects such as the "filter bubble." We present an interactive demonstration system, designed as an educational and research tool, which imitates a search engine, personalizing the search results returned for a query based on the user's characteristics. The system can be tailored to suit any type of audience and context, as well as enabling the collection of responses and interaction data.}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {inproceedings} } Modern information access systems extensively use personalization, automatically filtering and/or ranking content based on the user profile, to guide users to the most relevant material. However, this can also lead to unwanted effects such as the "filter bubble." We present an interactive demonstration system, designed as an educational and research tool, which imitates a search engine, personalizing the search results returned for a query based on the user's characteristics. The system can be tailored to suit any type of audience and context, as well as enabling the collection of responses and interaction data. |
Antrea Chrysanthou Styliani Kleanthous, Elena Matsi Interacting in mixed reality: exploring behavioral differences between children and adults Inproceedings Proceedings of the 25th International Conference on Intelligent User Interfaces, 2020, ISBN: 9781450371186. Abstract | Links | BibTeX | Tags: mixed reality @inproceedings{Chrysanthou2020bb, title = {Interacting in mixed reality: exploring behavioral differences between children and adults}, author = {Antrea Chrysanthou, Styliani Kleanthous, Elena Matsi}, url = {https://dl.acm.org/doi/abs/10.1145/3377325.3377532}, doi = {10.1145/3377325.3377532}, isbn = {9781450371186}, year = {2020}, date = {2020-03-17}, booktitle = {Proceedings of the 25th International Conference on Intelligent User Interfaces}, abstract = {With the development of intelligent interfaces and emerging technologies children and adult users are provided with exciting interaction approaches in several applications. Holographic applications were until recently only available to few people, mostly experts and researchers. In this work, we are investigating the differences between children and adult users towards their interaction behavior in mixed reality, when they were asked to perform a task. Analysis of the results demonstrates that children can be more efficient during their interaction in these environments while adults are more confident and their experience and knowledge is an advantage in achieving a task.}, keywords = {mixed reality}, pubstate = {published}, tppubtype = {inproceedings} } With the development of intelligent interfaces and emerging technologies children and adult users are provided with exciting interaction approaches in several applications. Holographic applications were until recently only available to few people, mostly experts and researchers. In this work, we are investigating the differences between children and adult users towards their interaction behavior in mixed reality, when they were asked to perform a task. Analysis of the results demonstrates that children can be more efficient during their interaction in these environments while adults are more confident and their experience and knowledge is an advantage in achieving a task. |
Alison Smith-Renner Styliani Kleanthous, Brian Lim Tsvi Kuflik Simone Stumpf Jahna Otterbacher Advait Sarkar Casey Dugan Avital Shulner ExSS-ATEC: Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies 2020 Workshop Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020, ISBN: 9781450375139. Abstract | Links | BibTeX | Tags: Algorithmic Transparency, Explainability @workshop{Smith-Renner2020, title = {ExSS-ATEC: Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies 2020}, author = {Alison Smith-Renner, Styliani Kleanthous, Brian Lim, Tsvi Kuflik, Simone Stumpf, Jahna Otterbacher, Advait Sarkar, Casey Dugan, Avital Shulner}, url = {https://dl.acm.org/doi/abs/10.1145/3379336.3379361}, doi = {10.1145/3379336.3379361}, isbn = {9781450375139}, year = {2020}, date = {2020-03-17}, booktitle = {Proceedings of the 25th International Conference on Intelligent User Interfaces Companion}, abstract = {Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system's inter-workings, such as awareness, data provenance, and validation.}, keywords = {Algorithmic Transparency, Explainability}, pubstate = {published}, tppubtype = {workshop} } Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system's inter-workings, such as awareness, data provenance, and validation. |
Otterbacher, Jahna ; Barlas, Pınar ; Kleanthous, Styliani ; Kyriakou, Kyriakos How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions Inproceedings Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, pp. 106-114, 2019. Abstract | Links | BibTeX | Tags: Artificial Intelligence @inproceedings{otterbacher2019we, title = {How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions}, author = {Otterbacher, Jahna and Barlas, Pınar and Kleanthous, Styliani and Kyriakou, Kyriakos}, url = {https://ojs.aaai.org/index.php/HCOMP/article/view/5267}, year = {2019}, date = {2019-10-28}, booktitle = {Proceedings of the AAAI Conference on Human Computation and Crowdsourcing}, volume = {7}, number = {1}, pages = {106-114}, series = {HCOMP-19}, abstract = {Crowdsourcing plays a key role in developing algorithms for image recognition or captioning. Major datasets, such as MS COCO or Flickr30K, have been built by eliciting natural language descriptions of images from workers. Yet such elicitation tasks are susceptible to human biases, including stereotyping people depicted in images. Given the growing concerns surrounding discrimination in algorithms, as well as in the data used to train them, it is necessary to take a critical look at this practice. We conduct experiments at Figure Eight using a controlled set of people images. Men and women of various races are positioned in the same manner, wearing a grey t-shirt. We prompt workers for 10 descriptive labels, and consider them using the human-centric approach, which assumes reporting bias. We find that “what’s worth saying” about these uniform images often differs as a function of the gender and race of the depicted person, violating the notion of group fairness. Although this diversity in natural language people descriptions is expected and often beneficial, it could result in automated disparate impact if not managed properly.}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {inproceedings} } Crowdsourcing plays a key role in developing algorithms for image recognition or captioning. Major datasets, such as MS COCO or Flickr30K, have been built by eliciting natural language descriptions of images from workers. Yet such elicitation tasks are susceptible to human biases, including stereotyping people depicted in images. Given the growing concerns surrounding discrimination in algorithms, as well as in the data used to train them, it is necessary to take a critical look at this practice. We conduct experiments at Figure Eight using a controlled set of people images. Men and women of various races are positioned in the same manner, wearing a grey t-shirt. We prompt workers for 10 descriptive labels, and consider them using the human-centric approach, which assumes reporting bias. We find that “what’s worth saying” about these uniform images often differs as a function of the gender and race of the depicted person, violating the notion of group fairness. Although this diversity in natural language people descriptions is expected and often beneficial, it could result in automated disparate impact if not managed properly. |
Otterbacher, Jahna Fairness in Algorithmic and Crowd-Generated Descriptions of People Images Inproceedings Proceedings of the 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia, 2019, ISBN: 9781450369152. Abstract | Links | BibTeX | Tags: Algorithmic Bias, Algorithmic Fairness @inproceedings{Otterbacher2019b, title = {Fairness in Algorithmic and Crowd-Generated Descriptions of People Images}, author = {Jahna Otterbacher}, url = {https://dl.acm.org/doi/abs/10.1145/3347447.3352693}, doi = {10.1145/3347447.3352693}, isbn = {9781450369152}, year = {2019}, date = {2019-10-15}, booktitle = {Proceedings of the 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia}, abstract = {Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media. With the rise of the “Algorithm Economy," image analysis algorithms are increasingly being commercialized as Cognitive Services. This practice is proving to be a boon to the development of applications where user modeling, personalization and adaptation are required. From e-stores, where image recognition is used to curate a “personal style" for a given shopper based on previously viewed items, to dating apps, which can now act as “visual matchmakers," the technology has gained increasing influence in our digital interactions and experiences. However, proprietary image tagging services are black boxes and there are numerous social and ethical issues surrounding their use in contexts where people can be harmed. In this talk, I will discuss recent work in analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for their gender and racial biases when tagging images depicting people. I will present our techniques for discrimination discovery in this domain [2], as well as our work on understanding user perceptions of fairness [1]. Finally, I will explore the sources of such biases, by comparing human versus machine descriptions of the same people images [3].}, keywords = {Algorithmic Bias, Algorithmic Fairness}, pubstate = {published}, tppubtype = {inproceedings} } Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media. With the rise of the “Algorithm Economy," image analysis algorithms are increasingly being commercialized as Cognitive Services. This practice is proving to be a boon to the development of applications where user modeling, personalization and adaptation are required. From e-stores, where image recognition is used to curate a “personal style" for a given shopper based on previously viewed items, to dating apps, which can now act as “visual matchmakers," the technology has gained increasing influence in our digital interactions and experiences. However, proprietary image tagging services are black boxes and there are numerous social and ethical issues surrounding their use in contexts where people can be harmed. In this talk, I will discuss recent work in analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for their gender and racial biases when tagging images depicting people. 
I will present our techniques for discrimination discovery in this domain [2], as well as our work on understanding user perceptions of fairness [1]. Finally, I will explore the sources of such biases, by comparing human versus machine descriptions of the same people images [3]. |
Maria Matsangidou, Jahna Otterbacher What Is Beautiful Continues to Be Good Inproceedings IFIP Conference on Human-Computer Interaction, 2019. Abstract | Links | BibTeX | Tags: Algorithmic Bias @inproceedings{Matsangidou2019, title = {What Is Beautiful Continues to Be Good}, author = {Maria Matsangidou, Jahna Otterbacher}, url = {https://link.springer.com/chapter/10.1007/978-3-030-29390-1_14}, year = {2019}, date = {2019-09-02}, booktitle = {IFIP Conference on Human-Computer Interaction}, abstract = {Image recognition algorithms that automatically tag or moderate content are crucial in many applications but are increasingly opaque. Given transparency concerns, we focus on understanding how algorithms tag people images and their inferences on attractiveness. Theoretically, attractiveness has an evolutionary basis, guiding mating behaviors, although it also drives social behaviors. We test image-tagging APIs as to whether they encode biases surrounding attractiveness. We use the Chicago Face Database, containing images of diverse individuals, along with subjective norming data and objective facial measurements. The algorithms encode biases surrounding attractiveness, perpetuating the stereotype that “what is beautiful is good.” Furthermore, women are often misinterpreted as men. We discuss the algorithms’ reductionist nature, and their potential to infringe on users’ autonomy and well-being, as well as the ethical and legal considerations for developers. Future services should monitor algorithms’ behaviors given their prevalence in the information ecosystem and influence on media.}, keywords = {Algorithmic Bias}, pubstate = {published}, tppubtype = {inproceedings} } Image recognition algorithms that automatically tag or moderate content are crucial in many applications but are increasingly opaque. Given transparency concerns, we focus on understanding how algorithms tag people images and their inferences on attractiveness. Theoretically, attractiveness has an evolutionary basis, guiding mating behaviors, although it also drives social behaviors. We test image-tagging APIs as to whether they encode biases surrounding attractiveness. We use the Chicago Face Database, containing images of diverse individuals, along with subjective norming data and objective facial measurements. The algorithms encode biases surrounding attractiveness, perpetuating the stereotype that “what is beautiful is good.” Furthermore, women are often misinterpreted as men. We discuss the algorithms’ reductionist nature, and their potential to infringe on users’ autonomy and well-being, as well as the ethical and legal considerations for developers. Future services should monitor algorithms’ behaviors given their prevalence in the information ecosystem and influence on media. |
Batsuren Khuyagbaatar Ganbold Amarsanaa, Chagnaa Altangerel Giunchiglia Fausto Building the mongolian wordnet Inproceedings Proceedings of the 10th global WordNet conference, 2019. Abstract | Links | BibTeX | Tags: Artificial Intelligence @inproceedings{Khuyagbaatar2019, title = {Building the mongolian wordnet}, author = {Batsuren Khuyagbaatar, Ganbold Amarsanaa, Chagnaa Altangerel, Giunchiglia Fausto}, url = {https://aclanthology.org/2019.gwc-1.30}, year = {2019}, date = {2019-07-08}, booktitle = {Proceedings of the 10th global WordNet conference}, abstract = {This paper presents the Mongolian Wordnet (MOW), and a general methodology of how to construct it from various sources e.g. lexical resources and expert translations. As of today, the MOW contains 23,665 synsets, 26,875 words, 2,979 glosses, and 213 examples. The manual evaluation of the resource estimated its quality at 96.4%.}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {inproceedings} } This paper presents the Mongolian Wordnet (MOW), and a general methodology of how to construct it from various sources e.g. lexical resources and expert translations. As of today, the MOW contains 23,665 synsets, 26,875 words, 2,979 glosses, and 213 examples. The manual evaluation of the resource estimated its quality at 96.4%. |
Klimis S. Ntalianis Andreas Kener, Jahna Otterbacher Feelings’ Rating and Detection of Similar Locations, Based on Volunteered Crowdsensing and Crowdsourcing Journal Article IEEE Access, 2019. Abstract | Links | BibTeX | Tags: Algorithmic Bias, crowdsourcing @article{Ntalianis2019, title = {Feelings’ Rating and Detection of Similar Locations, Based on Volunteered Crowdsensing and Crowdsourcing}, author = {Klimis S. Ntalianis, Andreas Kener, Jahna Otterbacher}, url = {https://ieeexplore.ieee.org/document/8755832}, doi = {10.1109/ACCESS.2019.2926812}, year = {2019}, date = {2019-07-04}, journal = {IEEE Access}, abstract = {In this paper, an innovative geographical locations' rating scheme is presented, which is based on crowdsensing and crowdsourcing. People sense their surrounding space and submit evaluations through: (a) a smartphone application, and (b) a prototype website. Both have been implemented using the state-of-the-art technologies. Evaluations are pairs of feeling/state and strength, where six different feelings/states and five strength levels are considered. In addition, the detection of similar locations is proposed by maximizing a cross-correlation criterion through a genetic algorithm approach. Technical details of the overall system are provided so that the interested readers can replicate its components. The experimental results on real-world data, which also include comparisons with Google Maps Rating and Tripadvisor, illustrate the merits and limitations of each technology. Finally, the paper is concluded by uncovering and discussing interesting issues for future research.}, keywords = {Algorithmic Bias, crowdsourcing}, pubstate = {published}, tppubtype = {article} } In this paper, an innovative geographical locations' rating scheme is presented, which is based on crowdsensing and crowdsourcing. People sense their surrounding space and submit evaluations through: (a) a smartphone application, and (b) a prototype website. Both have been implemented using the state-of-the-art technologies. Evaluations are pairs of feeling/state and strength, where six different feelings/states and five strength levels are considered. In addition, the detection of similar locations is proposed by maximizing a cross-correlation criterion through a genetic algorithm approach. Technical details of the overall system are provided so that the interested readers can replicate its components. The experimental results on real-world data, which also include comparisons with Google Maps Rating and Tripadvisor, illustrate the merits and limitations of each technology. Finally, the paper is concluded by uncovering and discussing interesting issues for future research. |
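As a rough illustration of the similarity idea in the entry above, the fragment below is a sketch of ours, not the authors' implementation: it computes a normalized cross-correlation between two locations' crowd-sensed feeling profiles and finds the most similar pair by exhaustive search, whereas the paper maximizes the criterion with a genetic algorithm. The feeling names and data structures are assumptions.

```python
# Sketch: cross-correlation similarity between locations' feeling profiles.
# Each profile is a 6-element vector of mean strengths (1-5 scale), one per feeling,
# following the six feelings / five strength levels described in the abstract.
import math
from itertools import combinations

FEELINGS = ["joy", "calm", "surprise", "sadness", "fear", "anger"]  # illustrative names


def normalized_cross_correlation(a, b):
    """Pearson-style normalized cross-correlation of two equal-length vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0


def most_similar_pair(profiles):
    """profiles: dict mapping location name -> feeling-strength vector.
    Naive exhaustive search; the paper instead uses a genetic algorithm."""
    return max(
        combinations(profiles, 2),
        key=lambda pair: normalized_cross_correlation(profiles[pair[0]], profiles[pair[1]]),
    )
```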
Louis Nisiotis, Styliani Kleanthous The Relationship Between Students' Engagement and the Development of Transactive Memory Systems in MUVE: An Experience Report Inproceedings Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education, 2019, ISBN: 9781450368957. Abstract | Links | BibTeX | Tags: Education, Virtual Environments @inproceedings{Nisiotis2019, title = {The Relationship Between Students' Engagement and the Development of Transactive Memory Systems in MUVE: An Experience Report}, author = {Louis Nisiotis, Styliani Kleanthous}, url = {https://dl.acm.org/doi/abs/10.1145/3304221.3319743}, doi = {10.1145/3304221.3319743}, isbn = {9781450368957}, year = {2019}, date = {2019-07-02}, booktitle = {Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education}, abstract = {The use of educational Multi-User Virtual Environments that provide synchronous interaction, interactive and social learning experiences has the potential to increase student engagement. Due to increased social and cognitive presence, the use of such environments can result in greater student engagement when compared to traditional asynchronous learning environments. In this work, we hypothesized that students' engagement in collaborative learning activities will increase if Transactive Memory System constructs are present. Thus, we employed the theory of TMS that emphasizes the importance of Specialization, Coordination and Credibility between members in a team. The results show that there is a significant correlation between the development of TMS and students' engagement.}, keywords = {Education, Virtual Environments}, pubstate = {published}, tppubtype = {inproceedings} } The use of educational Multi-User Virtual Environments that provide synchronous interaction, interactive and social learning experiences has the potential to increase student engagement. Due to increased social and cognitive presence, the use of such environments can result in greater student engagement when compared to traditional asynchronous learning environments. In this work, we hypothesized that students' engagement in collaborative learning activities will increase if Transactive Memory System constructs are present. Thus, we employed the theory of TMS that emphasizes the importance of Specialization, Coordination and Credibility between members in a team. The results show that there is a significant correlation between the development of TMS and students' engagement. |
Kyriakou, Kyriakos; Barlas, Pınar; Kleanthous, Styliani; Otterbacher, Jahna Fairness in Proprietary Image Tagging Algorithms: A Cross-Platform Audit on People Images Conference ICWSM 2019 AAAI, 2019, ISSN: 2334-0770. Abstract | Links | BibTeX | Tags: Artificial Intelligence @conference{KyriakouICWSM2019, title = {Fairness in Proprietary Image Tagging Algorithms: A Cross-Platform Audit on People Images}, author = {Kyriakos Kyriakou and Pınar Barlas and Styliani Kleanthous and Jahna Otterbacher}, url = {http://www.cycat.io/wp-content/uploads/2019/05/ICWSM_tagging_b_eye_as_v4-2.pdf}, issn = {2334-0770}, year = {2019}, date = {2019-06-15}, publisher = {AAAI}, series = {ICWSM 2019}, abstract = {There are increasing expectations that algorithms should behave in a manner that is socially just. We consider the case of image tagging APIs and their interpretations of people images. Image taggers have become indispensable in our information ecosystem, facilitating new modes of visual communication and sharing. Recently, they have become widely available as Cognitive Services. But while tagging APIs offer developers an inexpensive and convenient means to add functionality to their creations, most are opaque and proprietary. Through a cross-platform comparison of six taggers, we show that behaviors differ significantly. While some offer more interpretation on images, they may exhibit less fairness toward the depicted persons, by misuse of gender-related tags and/or making judgments on a person’s physical appearance. We also discuss the difficulties of studying fairness in situations where algorithmic systems cannot be benchmarked against a ground truth.}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {conference} } There are increasing expectations that algorithms should behave in a manner that is socially just. We consider the case of image tagging APIs and their interpretations of people images. Image taggers have become indispensable in our information ecosystem, facilitating new modes of visual communication and sharing. Recently, they have become widely available as Cognitive Services. But while tagging APIs offer developers an inexpensive and convenient means to add functionality to their creations, most are opaque and proprietary. Through a cross-platform comparison of six taggers, we show that behaviors differ significantly. While some offer more interpretation on images, they may exhibit less fairness toward the depicted persons, by misuse of gender-related tags and/or making judgments on a person’s physical appearance. We also discuss the difficulties of studying fairness in situations where algorithmic systems cannot be benchmarked against a ground truth. |
Barlas, Pınar; Kyriakou, Kyriakos; Kleanthous, Styliani; Otterbacher, Jahna Social B(eye)as: Human and Machine Descriptions of People Images Conference ICWSM 2019 AAAI, 2019, ISSN: 2334-0770. Abstract | Links | BibTeX | Tags: Artificial Intelligence @conference{BarlasICWSM2019, title = {Social B(eye)as: Human and Machine Descriptions of People Images}, author = {Pınar Barlas and Kyriakos Kyriakou and Styliani Kleanthous and Jahna Otterbacher}, url = {http://www.cycat.io/wp-content/uploads/2019/05/ICWSM_dataset_CAMERAREADY-2.pdf}, issn = {2334-0770}, year = {2019}, date = {2019-06-15}, publisher = {AAAI}, series = {ICWSM 2019}, abstract = {Image analysis algorithms have become an indispensable tool in our information ecosystem, facilitating new forms of visual communication and information sharing. At the same time, they enable large-scale socio-technical research which would otherwise be difficult to carry out. However, their outputs may exhibit social bias, especially when analyzing people images. Since most algorithms are proprietary and opaque, we propose a method of auditing their outputs for social biases. To be able to compare how algorithms interpret a controlled set of people images, we collected descriptions across six image tagging algorithms. In order to compare these results to human behavior, we also collected descriptions on the same images from crowdworkers in two anglophone regions. The dataset we present consists of tags from these eight taggers, along with a typology of concepts, and a python script to calculate vector scores for each image and tagger. Using our methodology, researchers can see the behaviors of the image tagging algorithms and compare them to those of crowdworkers. Beyond computer vision auditing, the dataset of human- and machine-produced tags, the typology, and the vectorization method can be used to explore a range of research questions related to both algorithmic and human behaviors.}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {conference} } Image analysis algorithms have become an indispensable tool in our information ecosystem, facilitating new forms of visual communication and information sharing. At the same time, they enable large-scale socio-technical research which would otherwise be difficult to carry out. However, their outputs may exhibit social bias, especially when analyzing people images. Since most algorithms are proprietary and opaque, we propose a method of auditing their outputs for social biases. To be able to compare how algorithms interpret a controlled set of people images, we collected descriptions across six image tagging algorithms. In order to compare these results to human behavior, we also collected descriptions on the same images from crowdworkers in two anglophone regions. The dataset we present consists of tags from these eight taggers, along with a typology of concepts, and a python script to calculate vector scores for each image and tagger. Using our methodology, researchers can see the behaviors of the image tagging algorithms and compare them to those of crowdworkers. Beyond computer vision auditing, the dataset of human- and machine-produced tags, the typology, and the vectorization method can be used to explore a range of research questions related to both algorithmic and human behaviors. |
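The entry above mentions a typology of concepts and a Python script for turning each tagger's output into per-image vector scores. The released script is not reproduced here; the sketch below is only a guess at the general shape of such a computation, and the typology categories, vocabularies and example tags are invented for illustration.

```python
# Sketch: mapping free-form tags onto a concept typology and computing a vector score.
# The typology below is illustrative, not the one released with the dataset.
TYPOLOGY = {
    "appearance": {"beautiful", "attractive", "hair", "smile"},
    "demographics": {"man", "woman", "young", "old"},
    "emotion": {"happy", "sad", "angry", "calm"},
    "objects": {"shirt", "wall", "glasses", "background"},
}


def vector_score(tags):
    """Return the share of an image's tags falling into each typology category."""
    tags = [t.lower() for t in tags]
    total = len(tags) or 1
    return {category: sum(t in vocab for t in tags) / total
            for category, vocab in TYPOLOGY.items()}


# Example: compare a crowdworker's and a tagger's descriptions of the same image.
human_vector = vector_score(["woman", "smile", "shirt", "happy"])
machine_vector = vector_score(["person", "beautiful", "hair", "glasses"])
```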
Barlas, Pınar; Kyriakou, Kyriakos; Kleanthous, Styliani; Otterbacher, Jahna What Makes an Image Tagger Fair? - Proprietary Auto-tagging and Interpretations on People Images Conference UMAP 2019 ACM, 2019. Abstract | Links | BibTeX | Tags: Artificial Intelligence @conference{BarlasUMAP2019, title = {What Makes an Image Tagger Fair? - Proprietary Auto-tagging and Interpretations on People Images}, author = {Pınar Barlas and Kyriakos Kyriakou and Styliani Kleanthous and Jahna Otterbacher}, url = {http://www.cycat.io/wp-content/uploads/2019/05/Barlas-et-al.-2019-What-Makes-an-Image-Tagger-Fair-Proprietary-Auto-tagging-and-Interpretations-on-People-Images-1.pdf}, doi = {10.1145/3320435.3320442}, year = {2019}, date = {2019-06-13}, publisher = {ACM}, series = {UMAP 2019}, abstract = {Image analysis algorithms have been a boon to personalization in digital systems and are now widely available via easy-to-use APIs. However, it is important to ensure that they behave fairly in applications that involve processing images of people, such as dating apps. We conduct an experiment to shed light on the factors influencing the perception of “fairness." Participants are shown a photo along with two descriptions (human- and algorithm-generated). They are then asked to indicate which is “more fair" in the context of a dating site, and explain their reasoning. We vary a number of factors, including the gender, race and attractiveness of the person in the photo. While participants generally found human-generated tags to be more fair, API tags were judged as being more fair in one setting - where the image depicted an “attractive," white individual. In their explanations, participants often mention accuracy, as well as the objectivity/subjectivity of the tags in the description. We relate our work to the ongoing conversation about fairness in opaque tools like image tagging APIs, and their potential to result in harm.}, keywords = {Artificial Intelligence}, pubstate = {published}, tppubtype = {conference} } Image analysis algorithms have been a boon to personalization in digital systems and are now widely available via easy-to-use APIs. However, it is important to ensure that they behave fairly in applications that involve processing images of people, such as dating apps. We conduct an experiment to shed light on the factors influencing the perception of “fairness." Participants are shown a photo along with two descriptions (human- and algorithm-generated). They are then asked to indicate which is “more fair" in the context of a dating site, and explain their reasoning. We vary a number of factors, including the gender, race and attractiveness of the person in the photo. While participants generally found human-generated tags to be more fair, API tags were judged as being more fair in one setting - where the image depicted an “attractive," white individual. In their explanations, participants often mention accuracy, as well as the objectivity/subjectivity of the tags in the description. We relate our work to the ongoing conversation about fairness in opaque tools like image tagging APIs, and their potential to result in harm. |
Kleanthous, Styliani; Otterbacher, Jahna Shaping the Reaction: Community Characteristics and Emotional Tone of Citizen Responses to Robotics Videos at TED versus YouTube Workshop HAPPIE 2019 ACM, 2019. Abstract | Links | BibTeX | Tags: Information Retrieval, Information Studies, Information Systems @workshop{KleanthousHAPPIE2019, title = {Shaping the Reaction: Community Characteristics and Emotional Tone of Citizen Responses to Robotics Videos at TED versus YouTube}, author = {Styliani Kleanthous and Jahna Otterbacher}, url = {http://www.cycat.io/wp-content/uploads/2019/05/happ03-kleanthous.pdf}, year = {2019}, date = {2019-06-09}, publisher = {ACM}, series = {HAPPIE 2019}, abstract = {When modelling for the social we need to consider more than one medium. Little is known as to how platform community characteristics shape the discussion and how communicators could best engage each community, taking into consideration these characteristics. We consider comments on TED videos featuring roboticists, shared at TED.com and YouTube. We find evidence of different social norms and importantly, approaches to comment writing. The emotional tone is more positive at TED; however, there is little emotional escalation in either platform. The study highlights the importance of considering the community characteristics of a medium, when communicating with the public in a case study of emerging technologies.}, keywords = {Information Retrieval, Information Studies, Information Systems}, pubstate = {published}, tppubtype = {workshop} } When modelling for the social we need to consider more than one medium. Little is known as to how platform community characteristics shape the discussion and how communicators could best engage each community, taking into consideration these characteristics. We consider comments on TED videos featuring roboticists, shared at TED.com and YouTube. We find evidence of different social norms and importantly, approaches to comment writing. The emotional tone is more positive at TED; however, there is little emotional escalation in either platform. The study highlights the importance of considering the community characteristics of a medium, when communicating with the public in a case study of emerging technologies. |
Tal, Avital Shulner; Batsuren, Khuyagbaatar; Bogina, Veronika; Giunchiglia, Fausto; Hartman, Alan; Kleanthous-Loizou, Styliani; Kuflik, Tsvi; Otterbacher, Jahna "End to End" - Towards a Framework for Reducing Biases and Promoting Transparency of Algorithmic Systems Workshop 14th International Workshop On Semantic And Social Media Adaptation And Personalization, SMAP 2019 ACM, 2019. Abstract | Links | BibTeX | Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency @workshop{endtoend2019, title = {"End to End" - Towards a Framework for Reducing Biases and Promoting Transparency of Algorithmic Systems}, author = {Avital Shulner Tal and Khuyagbaatar Batsuren and Veronika Bogina and Fausto Giunchiglia and Alan Hartman and Styliani Kleanthous-Loizou and Tsvi Kuflik and Jahna Otterbacher}, url = {http://www.cycat.io/wp-content/uploads/2019/07/1570543680.pdf}, year = {2019}, date = {2019-06-09}, booktitle = {14th International Workshop On Semantic And Social Media Adaptation And Personalization}, publisher = {ACM}, series = {SMAP 2019}, abstract = {Algorithms play an increasing role in our everyday lives. Recently, the harmful potential of biased algorithms has been recognized by researchers and practitioners. We have also witnessed a growing interest in ensuring the fairness and transparency of algorithmic systems. However, so far there is no agreed upon solution and not even an agreed terminology. The proposed research defines the problem space, the solution space and a prototype of a comprehensive framework for detecting and reducing biases in algorithmic systems.}, keywords = {Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency}, pubstate = {published}, tppubtype = {workshop} } Algorithms play an increasing role in our everyday lives. Recently, the harmful potential of biased algorithms has been recognized by researchers and practitioners. We have also witnessed a growing interest in ensuring the fairness and transparency of algorithmic systems. However, so far there is no agreed upon solution and not even an agreed terminology. The proposed research defines the problem space, the solution space and a prototype of a comprehensive framework for detecting and reducing biases in algorithmic systems. |
Bettina Berendt Veronika Bogina, Robin Burke Michael Ekstrand Alan Hartman Styliani Kleanthous Tsvi Kuflik Bamshad Mobasher Jahna Otterbacher FairUMAP 2019 Chairs' Welcome Overview Workshop Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, 2019, ISBN: 9781450367110. Abstract | Links | BibTeX | Tags: Algorithmic Fairness @workshop{Berendt2019, title = {FairUMAP 2019 Chairs' Welcome Overview}, author = {Bettina Berendt, Veronika Bogina, Robin Burke, Michael Ekstrand, Alan Hartman, Styliani Kleanthous, Tsvi Kuflik, Bamshad Mobasher, Jahna Otterbacher}, url = {https://dl.acm.org/doi/abs/10.1145/3314183.3323842}, doi = {10.1145/3314183.3323842}, isbn = {9781450367110}, year = {2019}, date = {2019-06-06}, booktitle = {Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization}, abstract = {It is our great pleasure to welcome you to the Second FairUMAP workshop at UMAP 2019. This full-day workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on one hand, and bias, fairness and transparency in algorithmic systems on the other hand. The workshop was motivated by the observation that these two fields increasingly impact one another. Personalization has become a ubiquitous and essential part of systems that help users find relevant information in today's highly complex, information-rich online environments. Machine learning techniques applied to big data, as done by recommender systems, and user modeling in general, are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users' needs and preferences. However, there has been a growing recognition that these underlying technologies raise novel ethical, legal, and policy challenges. It has become apparent that a single-minded focus on user characteristics has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, and other social welfare considerations are not captured by typical metrics based on which data-driven personalized models are optimized. Indeed, widely-used personalization systems in popular sites such as Facebook, Google News and YouTube have been heavily criticized for personalizing information delivery too heavily at the cost of these other objectives.}, keywords = {Algorithmic Fairness}, pubstate = {published}, tppubtype = {workshop} } It is our great pleasure to welcome you to the Second FairUMAP workshop at UMAP 2019. This full-day workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on one hand, and bias, fairness and transparency in algorithmic systems on the other hand. The workshop was motivated by the observation that these two fields increasingly impact one another. Personalization has become a ubiquitous and essential part of systems that help users find relevant information in today's highly complex, information-rich online environments. Machine learning techniques applied to big data, as done by recommender systems, and user modeling in general, are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users' needs and preferences. However, there has been a growing recognition that these underlying technologies raise novel ethical, legal, and policy challenges. 
It has become apparent that a single-minded focus on user characteristics has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, and other social welfare considerations are not captured by typical metrics based on which data-driven personalized models are optimized. Indeed, widely-used personalization systems in popular sites such as Facebook, Google News and YouTube have been heavily criticized for personalizing information delivery too heavily at the cost of these other objectives. |
Styliani Kleanthous, Elena Matsi Analyzing user's task-driven interaction in mixed reality Inproceedings Proceedings of the 24th International Conference on Intelligent User Interfaces, 2019, ISBN: 9781450362726. Abstract | Links | BibTeX | Tags: mixed reality @inproceedings{Kleanthous2019, title = {Analyzing user's task-driven interaction in mixed reality}, author = {Styliani Kleanthous, Elena Matsi}, url = {https://dl.acm.org/doi/abs/10.1145/3301275.3302286}, doi = {10.1145/3301275.3302286}, isbn = {9781450362726}, year = {2019}, date = {2019-03-17}, booktitle = {Proceedings of the 24th International Conference on Intelligent User Interfaces}, abstract = {Mixed reality (MR) provides exciting interaction approaches in several applications. The user experience of interacting in these visually rich environments depends highly on the way the user perceives, processes, and comprehends visual information. In this work we investigate the differences between Field Dependent and Field Independent users in their interaction behavior in an MR environment when they were asked to perform a specific task. A study was conducted using the Microsoft HoloLens device, in which participants interacted with a popular HoloLens application, modified by the authors to log user interaction data in real time. Analysis of the results demonstrates the differences in the visual processing of information, especially in visually complex environments, and their impact on the user's interaction behavior.}, keywords = {mixed reality}, pubstate = {published}, tppubtype = {inproceedings} } Mixed reality (MR) provides exciting interaction approaches in several applications. The user experience of interacting in these visually rich environments depends highly on the way the user perceives, processes, and comprehends visual information. In this work we investigate the differences between Field Dependent and Field Independent users in their interaction behavior in an MR environment when they were asked to perform a specific task. A study was conducted using the Microsoft HoloLens device, in which participants interacted with a popular HoloLens application, modified by the authors to log user interaction data in real time. Analysis of the results demonstrates the differences in the visual processing of information, especially in visually complex environments, and their impact on the user's interaction behavior. |
Styliani Kleanthous Tsvi Kuflik, Jahna Otterbacher Alan Hartman Casey Dugan Veronika Bogina Intelligent user interfaces for algorithmic transparency in emerging technologies Workshop Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion, 2019, ISBN: 9781450366731. Abstract | Links | BibTeX | Tags: Algorithmic Transparency @workshop{Kleanthous2019b, title = {Intelligent user interfaces for algorithmic transparency in emerging technologies}, author = {Styliani Kleanthous, Tsvi Kuflik, Jahna Otterbacher, Alan Hartman, Casey Dugan, Veronika Bogina}, url = {https://dl.acm.org/doi/abs/10.1145/3308557.3313125}, doi = {10.1145/3308557.3313125}, isbn = {9781450366731}, year = {2019}, date = {2019-03-16}, booktitle = {Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion}, abstract = {The workshop focus is on Algorithmic Transparency (AT) in Emerging Technologies. Naturally, the user interface is where and how the Algorithmic Transparency should occur and the challenge we aim at is how intelligent user interfaces can make a system transparent to its users.}, keywords = {Algorithmic Transparency}, pubstate = {published}, tppubtype = {workshop} } The workshop focus is on Algorithmic Transparency (AT) in Emerging Technologies. Naturally, the user interface is where and how the Algorithmic Transparency should occur and the challenge we aim at is how intelligent user interfaces can make a system transparent to its users. |
Batsuren, Khuyagbaatar; Bella, Gabor; Giunchiglia, Fausto CogNet: A Large-Scale Cognate Database Inproceedings Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3136–3145, Florence, Italy, 2019. Abstract | Links | BibTeX | Tags: @inproceedings{batsuren-etal-2019-cognet, title = {CogNet: A Large-Scale Cognate Database}, author = {Khuyagbaatar Batsuren and Gabor Bella and Fausto Giunchiglia}, url = {https://www.aclweb.org/anthology/P19-1302}, year = {2019}, date = {2019-01-01}, booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, pages = {3136--3145}, address = {Florence, Italy}, abstract = {This paper introduces CogNet, a new, large-scale lexical database that provides cognates (words of common origin and meaning) across languages. The database currently contains 3.1 million cognate pairs across 338 languages using 35 writing systems. The paper also describes the automated method by which cognates were computed from publicly available wordnets, with an accuracy evaluated at 94%. Finally, it presents statistics about the cognate data and some initial insights into it, hinting at a possible future exploitation of the resource by various fields of linguistics.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } This paper introduces CogNet, a new, large-scale lexical database that provides cognates (words of common origin and meaning) across languages. The database currently contains 3.1 million cognate pairs across 338 languages using 35 writing systems. The paper also describes the automated method by which cognates were computed from publicly available wordnets, with an accuracy evaluated at 94%. Finally, it presents statistics about the cognate data and some initial insights into it, hinting at a possible future exploitation of the resource by various fields of linguistics. |
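As an illustrative aside on the abstract above: the sketch below shows, in Python, the general idea of pairing words that share a cross-lingual concept (as in aligned wordnet synsets) and keeping pairs whose surface forms look similar. It is a minimal sketch under stated assumptions, not the authors' CogNet pipeline; the toy synset data, the SequenceMatcher-based similarity, and the 0.5 threshold are all made up for illustration.

# Hedged sketch: simplified cognate-candidate detection across wordnets.
# NOT the CogNet authors' actual method; data and threshold are assumptions.

from difflib import SequenceMatcher

# Hypothetical toy input: words grouped by a language-independent concept
# identifier (e.g., a wordnet synset aligned across languages).
synsets = {
    "concept:night": {"en": "night", "de": "Nacht", "it": "notte", "tr": "gece"},
    "concept:star":  {"en": "star",  "de": "Stern", "it": "stella", "tr": "yildiz"},
}

def orthographic_similarity(a: str, b: str) -> float:
    """Crude surface similarity in [0, 1]; a real system would also use
    transliteration and sound-correspondence models."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_cognates(synsets: dict, threshold: float = 0.5):
    """Yield (concept, lang1, word1, lang2, word2, score) for word pairs that
    share a concept and are similar enough on the surface."""
    for concept, words in synsets.items():
        langs = sorted(words)
        for i, l1 in enumerate(langs):
            for l2 in langs[i + 1:]:
                score = orthographic_similarity(words[l1], words[l2])
                if score >= threshold:
                    yield concept, l1, words[l1], l2, words[l2], round(score, 2)

if __name__ == "__main__":
    for pair in candidate_cognates(synsets):
        print(pair)

Running the sketch prints pairs such as ("concept:night", "de", "Nacht", "en", "night", 0.6), i.e., meaning-equivalent words whose spellings overlap; dissimilar pairs like "night"/"gece" are filtered out by the threshold.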
Jahna Otterbacher Ioannis Katakis, Pantelis Agathangelou Linguistic Bias in Crowdsourced Biographies: A Cross-lingual Examination Book Chapter 2019. Abstract | Links | BibTeX | Tags: Algorithmic Bias @inbook{Otterbacher2019c, title = {Linguistic Bias in Crowdsourced Biographies: A Cross-lingual Examination}, author = {Jahna Otterbacher, Ioannis Katakis, Pantelis Agathangelou}, url = {https://www.worldscientific.com/doi/abs/10.1142/9789813274884_0012}, year = {2019}, date = {2019-01-01}, abstract = {Biographies make up a significant portion of Wikipedia entries and are a source of information and inspiration for the public. We examine a threat to their objectivity, linguistic biases, which are pervasive in human communication. Linguistic bias, the systematic asymmetry in the language used to describe people as a function of their social groups, plays a role in the perpetuation of stereotypes. Theory predicts that we describe people who are expected – because they are members of our own in-groups or are stereotype-congruent – with more abstract, subjective language, as compared to others. Abstract language has the power to sway our impressions of others as it implies stability over time. Extending our monolingual work, we consider biographies of intellectuals at the English- and Greek-language Wikipedias. We use our recently introduced sentiment analysis tool, DidaxTo, which extracts domain-specific opinion words to build lexicons of subjective words in each language and for each gender, and compare the extent to which abstract language is used. Contrary to expectation, we find evidence of gender-based linguistic bias, with women being described more abstractly as compared to men. However, this is limited to English-language biographies. We discuss the implications of using DidaxTo to monitor linguistic bias in texts produced via crowdsourcing.}, keywords = {Algorithmic Bias}, pubstate = {published}, tppubtype = {inbook} } Biographies make up a significant portion of Wikipedia entries and are a source of information and inspiration for the public. We examine a threat to their objectivity, linguistic biases, which are pervasive in human communication. Linguistic bias, the systematic asymmetry in the language used to describe people as a function of their social groups, plays a role in the perpetuation of stereotypes. Theory predicts that we describe people who are expected – because they are members of our own in-groups or are stereotype-congruent – with more abstract, subjective language, as compared to others. Abstract language has the power to sway our impressions of others as it implies stability over time. Extending our monolingual work, we consider biographies of intellectuals at the English- and Greek-language Wikipedias. We use our recently introduced sentiment analysis tool, DidaxTo, which extracts domain-specific opinion words to build lexicons of subjective words in each language and for each gender, and compare the extent to which abstract language is used. Contrary to expectation, we find evidence of gender-based linguistic bias, with women being described more abstractly as compared to men. However, this is limited to English-language biographies. We discuss the implications of using DidaxTo to monitor linguistic bias in texts produced via crowdsourcing. |
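For a rough sense of the kind of comparison the abstract above describes, the Python sketch below contrasts how often biographies of different groups use words from a subjective/abstract lexicon. It is only a minimal sketch under stated assumptions: the tiny lexicon, toy texts, and token-ratio measure are placeholders and do not reproduce the authors' DidaxTo-based lexicon construction or analysis.

# Hedged sketch: compare the share of abstract/subjective words across groups.
# Placeholder lexicon and texts; NOT the DidaxTo method from the chapter.

import re
from statistics import mean

# Hypothetical lexicon of abstract/subjective descriptors (illustrative only).
ABSTRACT_LEXICON = {"brilliant", "visionary", "influential", "controversial"}

def abstract_ratio(text: str, lexicon: set) -> float:
    """Share of tokens that fall in the abstract/subjective lexicon."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in lexicon for t in tokens) / len(tokens)

# Toy corpus keyed by the biography subject's gender (made-up sentences).
biographies = {
    "women": ["She was a brilliant and visionary mathematician."],
    "men":   ["He taught physics and published three textbooks."],
}

for group, texts in biographies.items():
    ratios = [abstract_ratio(t, ABSTRACT_LEXICON) for t in texts]
    print(group, round(mean(ratios), 3))

On this toy data the "women" group shows a higher abstract-word ratio than the "men" group, which is the direction of asymmetry the chapter reports for English-language biographies; a real analysis would of course use per-language, per-gender lexicons and a large corpus.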