Research
Publications
Styliani Kleanthous, Maria Kasinidou, Pınar Barlas, Jahna Otterbacher
Perception of fairness in algorithmic decisions: Future developers' perspective
Journal Article: Patterns, 2021.
URL: https://www.sciencedirect.com/science/article/pii/S2666389921002476
Tags: Accountability, Algorithmic Fairness, Algorithmic Transparency, Artificial Intelligence
Fairness, accountability, transparency, and ethics (FATE) in algorithmic systems has been gaining considerable attention. With the continuous advancement of machine learning and artificial intelligence, research and tech companies are encountering incidents where algorithmic systems make non-objective decisions that may reproduce and/or amplify social stereotypes and inequalities. The research community has devoted great effort to developing fairness frameworks and algorithmic models that alleviate biases; however, we first need to understand how people perceive the complex construct of algorithmic fairness. In this work, we investigate how young and future developers perceive these concepts. Our results can inform future research by (1) deepening understanding of perceptions of algorithmic FATE, (2) highlighting the need for systematic training and education on FATE, and (3) raising awareness among young developers of the potential impact that the systems they develop have on society.
Styliani Kleanthous, Jahna Otterbacher, Jo Bates, Fausto Giunchiglia, Frank Hopfgartner, Tsvi Kuflik, Kalia Orphanou, Monica L. Paramita, Michael Rovatsos, Avital Shulner-Tal
Report on the CyCAT winter school on fairness, accountability, transparency and ethics (FATE) in AI
Inproceedings: ACM SIGIR Forum, vol. 55, no. 1, pp. 1-9, Association for Computing Machinery, 2021, ISSN: 0163-5840.
DOI: 10.1145/3476415.3476419
Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency
The first FATE Winter School, organized by the Cyprus Center for Algorithmic Transparency (CyCAT), provided a forum for both students and senior researchers to examine the complex topic of Fairness, Accountability, Transparency and Ethics (FATE). Through a program that included two invited keynotes, as well as sessions led by CyCAT partners across Europe and Israel, participants were exposed to a range of approaches to FATE in a holistic manner. During the Winter School, the team also organized a hands-on activity to evaluate a tool-based intervention in which participants interacted with eight prototypes of bias-aware search engines. Finally, participants were invited to join one of four collaborative projects coordinated by CyCAT, thus furthering common understanding and interdisciplinary collaboration on this emerging topic.
Maria Kasinidou, Styliani Kleanthous, Kalia Orphanou, Jahna Otterbacher
Educating Computer Science Students about Algorithmic Fairness, Accountability, Transparency and Ethics
Proceedings: ITiCSE '21, Association for Computing Machinery, 2021, ISBN: 9781450382144.
DOI: 10.1145/3430665.3456311
Tags: Algorithmic Fairness, Algorithmic Transparency
Professionals are increasingly relying on algorithmic systems for decision making; however, algorithmic decisions are occasionally perceived as biased or unjust. Prior work has provided evidence that education can make a difference in how young developers perceive algorithmic fairness. In this paper, we investigate computer science students' perception of FATE in algorithmic decision-making and whether their views on FATE can be changed by attending a seminar on FATE topics. Participants attended a seminar on FATE in algorithmic decision-making and were asked to respond to two online questionnaires measuring their pre- and post-seminar perceptions of FATE. Results show that a short seminar can make a difference in students' understanding of, perception of, and attitude towards FATE in algorithmic decision support. CS curricula need to be updated to include FATE topics if we want algorithmic decision support systems to be just for all.
Bamshad Mobasher, Styliani Kleanthous, Bettina Berendt, Jahna Otterbacher, Tsvi Kuflik, Avital Shulner Tal
FairUMAP 2021: The 4th Workshop on Fairness in User Modeling, Adaptation and Personalization
Workshop: Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, 2021, ISBN: 978-1-4503-8367-7.
DOI: 10.1145/3450614.3461454
Tags: Algorithmic Fairness
User modeling and personalized recommendations, often enabled by data-rich machine learning, are key enabling technologies that allow intelligent systems to learn from users, adapting their output to users' needs and preferences. These techniques have become an essential part of systems that help users find relevant content in today's highly complex, information-rich environments. However, there has been a growing recognition that they raise novel ethical, policy, and legal challenges. It has become apparent that a single-minded focus on user preferences has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, openness to diversity, and other social welfare considerations are not captured by the typical metrics based on which data-driven personalized models are optimized.
Veronika Bogina, Alan Hartman, Tsvi Kuflik, Avital Shulner-Tal
Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics
Journal Article: International Journal of Artificial Intelligence in Education, 2021.
DOI: 10.1007/s40593-021-00248-0
Tags: Accountability, Algorithmic Fairness, Algorithmic Transparency, Education
This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine Learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources; we also survey the existing materials and past experiences in such education, noting the scarcity of suitable material on aspects of fairness in particular. We use an example of a college admission platform to illustrate our ideas. We conclude with recommendations for further work in the area and report on the first steps taken towards achieving this goal in the framework of an academic graduate seminar course, a graduate summer school, an embedded lecture in a software engineering course, and a workshop for high school teachers.
Kalia Orphanou, Jahna Otterbacher, Styliani Kleanthous, Khuyagbaatar Batsuren, Fausto Giunchiglia, Veronika Bogina, Avital Shulner Tal, Tsvi Kuflik
Mitigating Bias in Algorithmic Systems: A Fish-Eye View of Problems and Solutions Across Domains
Journal Article: arXiv preprint arXiv:2103.16953, 2021.
URL: https://arxiv.org/abs/2103.16953
Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency
Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences. Given the complexity of the problem and the involvement of multiple stakeholders, including developers, end users and third parties, there is a need to understand the landscape of the sources of bias and the solutions being proposed to address them. This survey provides a 'fish-eye view', examining approaches across four areas of research. The literature describes three steps toward a comprehensive treatment: bias detection, fairness management and explainability management, and underscores the need to work from within the system as well as from the perspective of stakeholders in the broader context.
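The three steps this survey identifies are described at the level of concepts rather than code. As a purely hypothetical illustration of the first step, bias detection, the sketch below computes the demographic parity difference, one common group-fairness metric; the survey does not prescribe this particular metric, and the data here are invented.

```python
# Hypothetical illustration of the "bias detection" step named in the survey:
# demographic parity difference, a common group-fairness metric for binary
# decisions. Shown only as an example; not a metric the survey mandates.

def demographic_parity_difference(decisions, groups):
    """Return |P(decision=1 | group=a) - P(decision=1 | group=b)|."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)  # positive-decision rate per group
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

# Invented example: loan approvals (1 = approved) for groups "x" and "y".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.50
```

A value near zero would indicate that both groups receive positive decisions at similar rates; larger values flag a disparity worth investigating.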
Maria Kasinidou, Styliani Kleanthous, Pınar Barlas, Jahna Otterbacher
I agree with the decision, but they didn't deserve this: Future Developers' Perception of Fairness in Algorithmic Decisions
Proceedings: FAccT '21, Association for Computing Machinery, 2021, ISBN: 9781450383097.
DOI: 10.1145/3442188.3445931
Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency
While professionals are increasingly relying on algorithmic systems for making decisions, on some occasions algorithmic decisions may be perceived as biased or unjust. Prior work has looked into the perception of algorithmic decision-making from the user's point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making. Participants (N=99) were asked to rate their agreement with statements regarding six constructs related to facets of fairness and justice in algorithmic decision-making in three separate scenarios. Two of the three scenarios were independent of each other, while the third scenario presented three different outcomes of the same algorithmic system, demonstrating perception changes triggered by different outputs. Quantitative analysis indicates that (a) 'agreeing' with a decision does not mean the person 'deserves the outcome', (b) perceiving the factors used in the decision-making as 'appropriate' does not make the decision of the system 'fair', and (c) perceiving a system's decision as 'not fair' affects the participants' 'trust' in the system. In addition, participants found proportional distribution of benefits fairer than other approaches. Qualitative analysis provides further insights into the information the participants find essential to judge and understand an algorithmic decision-making system's fairness. Finally, the level of academic education plays a role in the perception of fairness and justice in algorithmic decision-making.
Xavier Alameda-Pineda, Miriam Redi, Jahna Otterbacher, Nicu Sebe, Shih-Fu Chang
FATE/MM 20: 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in MultiMedia
Workshop: Proceedings of the 28th ACM International Conference on Multimedia, 2020, ISBN: 9781450379885.
DOI: 10.1145/3394171.3421896
Tags: Accountability, Algorithmic Fairness, Algorithmic Transparency, Ethics
The series of FAT/FAccT events aims to bring together researchers and practitioners interested in fairness, accountability, transparency and ethics of computational methods. The FATE/MM workshop focuses on addressing these issues in the multimedia field. Multimedia computing technologies operate today at an unprecedented scale, with a growing community of scientists interested in multimedia models, tools and applications. Such continued growth has great implications not only for the scientific community but also for society as a whole. Typical risks of large-scale computational models include model bias and algorithmic discrimination. These risks become particularly prominent in the multimedia field, which has historically focused on user-centered technologies. To ensure a healthy and constructive development of the best multimedia technologies, this workshop offers a space to discuss how to develop ethical, fair, unbiased, representative, and transparent multimedia models, bringing together researchers from different areas to present computational solutions to these issues.
Bamshad Mobasher, Styliani Kleanthous, Bettina Berendt, Michael Ekstrand, Jahna Otterbacher, Avital Shulner Tal
FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization
Workshop: Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, 2020, ISBN: 9781450368612.
DOI: 10.1145/3340631.3398671
Tags: Adaptation, Algorithmic Fairness, Personalization
The 3rd FairUMAP workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness and transparency in algorithmic systems on the other hand.
Jahna Otterbacher
Fairness in Algorithmic and Crowd-Generated Descriptions of People Images
Inproceedings: Proceedings of the 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia, 2019, ISBN: 9781450369152.
DOI: 10.1145/3347447.3352693
Tags: Algorithmic Bias, Algorithmic Fairness
Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media. With the rise of the "Algorithm Economy," image analysis algorithms are increasingly being commercialized as Cognitive Services. This practice is proving to be a boon to the development of applications where user modeling, personalization and adaptation are required. From e-stores, where image recognition is used to curate a "personal style" for a given shopper based on previously viewed items, to dating apps, which can now act as "visual matchmakers," the technology has gained increasing influence in our digital interactions and experiences. However, proprietary image tagging services are black boxes, and there are numerous social and ethical issues surrounding their use in contexts where people can be harmed. In this talk, I will discuss recent work in analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for their gender and racial biases when tagging images depicting people. I will present our techniques for discrimination discovery in this domain [2], as well as our work on understanding user perceptions of fairness [1]. Finally, I will explore the sources of such biases by comparing human versus machine descriptions of the same people images [3].
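The talk abstract names the proprietary tagging services but does not spell out an audit procedure. As a loose, minimal sketch of this kind of audit (not the authors' actual method), the following assumes the Google Cloud Vision Python client with credentials set in GOOGLE_APPLICATION_CREDENTIALS, plus two hypothetical local folders of portrait photos grouped by perceived gender; the disparity measure is likewise an assumption for illustration only.

```python
# Minimal, illustrative audit sketch: tag two groups of people images with
# the Google Cloud Vision API and surface labels applied disproportionately
# to one group. Folder names and the disparity measure are hypothetical.
from collections import Counter
from pathlib import Path

from google.cloud import vision


def tag_folder(client: vision.ImageAnnotatorClient, folder: Path) -> Counter:
    """Count the labels assigned across all images in one folder."""
    counts: Counter = Counter()
    for path in folder.glob("*.jpg"):
        image = vision.Image(content=path.read_bytes())
        response = client.label_detection(image=image)
        counts.update(label.description.lower()
                      for label in response.label_annotations)
    return counts


def main() -> None:
    client = vision.ImageAnnotatorClient()
    # Hypothetical folders of portrait photos, grouped by perceived gender.
    groups = {name: tag_folder(client, Path(name)) for name in ("women", "men")}
    # Normalize to per-image rates so unequal group sizes don't skew results.
    sizes = {name: max(1, len(list(Path(name).glob("*.jpg")))) for name in groups}
    all_labels = set().union(*groups.values())
    # Rank labels by the gap in how often each group receives them.
    disparities = sorted(
        (abs(groups["women"][l] / sizes["women"] - groups["men"][l] / sizes["men"]), l)
        for l in all_labels
    )
    print("Labels with the largest group disparity:")
    for gap, label in disparities[-10:]:
        print(f"  {label}: {gap:.2f}")


if __name__ == "__main__":
    main()
```

Labels that cluster heavily on one group (e.g., appearance- or role-related tags) would be candidates for the kind of discrimination-discovery analysis the talk describes.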
Avital Shulner Tal, Khuyagbaatar Batsuren, Veronika Bogina, Fausto Giunchiglia, Alan Hartman, Styliani Kleanthous-Loizou, Tsvi Kuflik, Jahna Otterbacher
"End to End" - Towards a Framework for Reducing Biases and Promoting Transparency of Algorithmic Systems
Workshop: 14th International Workshop On Semantic And Social Media Adaptation And Personalization (SMAP 2019), ACM, 2019.
URL: https://www.cycat.io/wp-content/uploads/2019/07/1570543680.pdf
Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency
Algorithms play an increasing role in our everyday lives. Recently, the harmful potential of biased algorithms has been recognized by researchers and practitioners. We have also witnessed a growing interest in ensuring the fairness and transparency of algorithmic systems. However, so far there is no agreed-upon solution, nor even an agreed terminology. The proposed research defines the problem space, the solution space, and a prototype of a comprehensive framework for detecting and reducing biases in algorithmic systems.
Bettina Berendt, Veronika Bogina, Robin Burke, Michael Ekstrand, Alan Hartman, Styliani Kleanthous, Tsvi Kuflik, Bamshad Mobasher, Jahna Otterbacher
FairUMAP 2019 Chairs' Welcome Overview
Workshop: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, 2019, ISBN: 9781450367110.
DOI: 10.1145/3314183.3323842
Tags: Algorithmic Fairness
It is our great pleasure to welcome you to the Second FairUMAP workshop at UMAP 2019. This full-day workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on one hand, and bias, fairness and transparency in algorithmic systems on the other hand. The workshop was motivated by the observation that these two fields increasingly impact one another. Personalization has become a ubiquitous and essential part of systems that help users find relevant information in today's highly complex, information-rich online environments. Machine learning techniques applied to big data, as done by recommender systems and user modeling in general, are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users' needs and preferences. However, there has been a growing recognition that these underlying technologies raise novel ethical, legal, and policy challenges. It has become apparent that a single-minded focus on user characteristics has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, and other social welfare considerations are not captured by the typical metrics based on which data-driven personalized models are optimized. Indeed, widely used personalization systems in popular sites such as Facebook, Google News and YouTube have been heavily criticized for personalizing information delivery too heavily at the cost of these other objectives.