Publications
@article{Kay2021,
  title     = {Toward a Transparency by Design Framework (day 4)},
  author    = {Judy Kay and Tsvi Kuflik and Michael Rovatsos},
  url       = {https://drops.dagstuhl.de/opus/volltexte/2021/15566/pdf/dagrep-v011-i005-complete.pdf#page=20},
  year      = {2021},
  date      = {2021-12-01},
  journal   = {Dagstuhl Reports},
  volume    = {11},
  number    = {5},
  issn      = {2192-5283},
  pages     = {18},
  abstract  = {During the fourth and final day, the results of the first three days were discussed and summarised into a joint document that is intended to form a basis for a joint paper. The structure of the document followed the results of the discussion of the topics and the order of the discussion in the first three days, organised into the following chapters: 1. Why transparency?},
  keywords  = {Algorithmic Transparency},
  pubstate  = {published},
  tppubtype = {article}
}
@article{Kleanthous2021,
  title     = {Perception of fairness in algorithmic decisions: Future developers' perspective},
  author    = {Styliani Kleanthous and Maria Kasinidou and Pınar Barlas and Jahna Otterbacher},
  url       = {https://www.sciencedirect.com/science/article/pii/S2666389921002476},
  year      = {2021},
  date      = {2021-11-03},
  journal   = {Patterns},
  abstract  = {Fairness, accountability, transparency, and ethics (FATE) in algorithmic systems is gaining a lot of attention lately. With the continuous advancement of machine learning and artificial intelligence, research and tech companies are coming across incidents where algorithmic systems are making non-objective decisions that may reproduce and/or amplify social stereotypes and inequalities. There is a great effort by the research community on developing frameworks of fairness and algorithmic models to alleviate biases; however, we first need to understand how people perceive the complex construct of algorithmic fairness. In this work, we investigate how young and future developers perceive these concepts. Our results can inform future research on (1) understanding perceptions of algorithmic FATE, (2) highlighting the needs for systematic training and education on FATE, and (3) raising awareness among young developers on the potential impact that the systems they are developing have in society.},
  keywords  = {Accountability, Algorithmic Fairness, Algorithmic Transparency, Artificial Intelligence},
  pubstate  = {published},
  tppubtype = {article}
}
@inproceedings{Kleanthous2021b,
  title     = {Report on the CyCAT winter school on fairness, accountability, transparency and ethics (FATE) in AI},
  author    = {Styliani Kleanthous and Jahna Otterbacher and Jo Bates and Fausto Giunchiglia and Frank Hopfgartner and Tsvi Kuflik and Kalia Orphanou and Monica L Paramita and Michael Rovatsos and Avital Shulner-Tal},
  url       = {https://doi.org/10.1145/3476415.3476419},
  doi       = {10.1145/3476415.3476419},
  issn      = {0163-5840},
  year      = {2021},
  date      = {2021-07-16},
  booktitle = {ACM SIGIR Forum},
  volume    = {55},
  number    = {1},
  pages     = {1--9},
  publisher = {Association for Computing Machinery},
  organization = {ACM New York, NY, USA},
  abstract  = {The first FATE Winter School, organized by the Cyprus Center for Algorithmic Transparency (CyCAT) provided a forum for both students as well as senior researchers to examine the complex topic of Fairness, Accountability, Transparency and Ethics (FATE). Through a program that included two invited keynotes, as well as sessions led by CyCAT partners across Europe and Israel, participants were exposed to a range of approaches on FATE, in a holistic manner. During the Winter School, the team also organized a hands-on activity to evaluate a tool-based intervention where participants interacted with eight prototypes of bias-aware search engines. Finally, participants were invited to join one of four collaborative projects coordinated by CyCAT, thus furthering common understanding and interdisciplinary collaboration on this emerging topic.},
  keywords  = {Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@proceedings{Kasinidou2021b,
  title     = {Educating Computer Science Students about Algorithmic Fairness, Accountability, Transparency and Ethics},
  author    = {Maria Kasinidou and Styliani Kleanthous and Kalia Orphanou and Jahna Otterbacher},
  url       = {https://dl.acm.org/doi/abs/10.1145/3430665.3456311},
  doi       = {10.1145/3430665.3456311},
  isbn      = {9781450382144},
  year      = {2021},
  date      = {2021-06-26},
  publisher = {Association for Computing Machinery},
  series    = {ITiCSE '21},
  abstract  = {Professionals are increasingly relying on algorithmic systems for decision making; however, algorithmic decisions are occasionally perceived as biased or not just. Prior work has provided evidence that education can make a difference on the perception of young developers on algorithmic fairness. In this paper, we investigate computer science students' perception of FATE in algorithmic decision-making and whether their views on FATE can be changed by attending a seminar on FATE topics. Participants attended a seminar on FATE in algorithmic decision-making and they were asked to respond to two online questionnaires to measure their pre- and post-seminar perception on FATE. Results show that a short seminar can make a difference in understanding and perception as well as the attitude of the students towards FATE in algorithmic decision support. CS curricula need to be updated and include FATE topics if we want algorithmic decision support systems to be just for all.},
  keywords  = {Algorithmic Fairness, Algorithmic Transparency},
  pubstate  = {published},
  tppubtype = {proceedings}
}
@inproceedings{Giunchiglia2021,
  title     = {Transparency Paths - Documenting the Diversity of User Perceptions},
  author    = {Fausto Giunchiglia and Styliani Kleanthous and Jahna Otterbacher and Tim Draws},
  url       = {https://dl.acm.org/doi/abs/10.1145/3450614.3463292},
  doi       = {10.1145/3450614.3463292},
  isbn      = {9781450383677},
  year      = {2021},
  date      = {2021-06-21},
  booktitle = {Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization},
  publisher = {Association for Computing Machinery},
  series    = {UMAP '21},
  abstract  = {We are living in an era of global digital platforms, eco-systems of algorithmic processes that serve users worldwide. However, the increasing exposure to diversity online – of information and users – has led to important considerations of bias. A given platform, such as the Google search engine, may demonstrate behaviors that deviate from what users expect, or what they consider fair, relative to their own context and experiences. In this exploratory work, we put forward the notion of transparency paths, a process by which we document our position, choices, and perceptions when developing and/or using algorithmic platforms. We conducted a self-reflection exercise with seven researchers, who collected and analyzed two sets of images; one depicting an everyday activity, “washing hands,” and a second depicting the concept of “home.” Participants had to document their process and choices, and in the end, compare their work to others. Finally, participants were asked to reflect on the definitions of bias and diversity. The exercise revealed the range of perspectives and approaches taken, underscoring the need for future work that will refine the transparency paths methodology.},
  keywords  = {Algorithmic Transparency, Diversity},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@article{Bogina2021,
  title     = {Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics},
  author    = {Veronika Bogina and Alan Hartman and Tsvi Kuflik and Avital Shulner-Tal},
  url       = {https://link.springer.com/article/10.1007/s40593-021-00248-0},
  doi       = {10.1007/s40593-021-00248-0},
  year      = {2021},
  date      = {2021-04-21},
  journal   = {International Journal of Artificial Intelligence in Education},
  abstract  = {This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources; we also survey the existing materials and past experiences in such education, noting the scarcity of suitable material on aspects of fairness in particular. We use an example of a college admission platform to illustrate our ideas. We conclude with recommendations for further work in the area and report on the first steps taken towards achieving this goal in the framework of an academic graduate seminar course, a graduate summer school, an embedded lecture in a software engineering course, and a workshop for high school teachers.},
  keywords  = {Accountability, Algorithmic Fairness, Algorithmic Transparency, Education},
  pubstate  = {published},
  tppubtype = {article}
}
@workshop{Smith-Renner2021,
  title     = {TExSS: Transparency and Explanations in Smart Systems},
  author    = {Alison Marie Smith-Renner and Styliani Kleanthous Loizou and Jonathan Dodge and Casey Dugan and Min Kyung Lee and Brian Y Lim and Tsvi Kuflik and Advait Sarkar and Avital Shulner-Tal and Simone Stumpf},
  url       = {https://dl.acm.org/doi/abs/10.1145/3397482.3450705},
  doi       = {10.1145/3397482.3450705},
  isbn      = {9781450380188},
  year      = {2021},
  date      = {2021-04-14},
  booktitle = {26th International Conference on Intelligent User Interfaces},
  abstract  = {Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system’s inter-workings, such as awareness, data provenance, and validation.},
  keywords  = {Algorithmic Transparency, Explainability},
  pubstate  = {published},
  tppubtype = {workshop}
}
@article{Giunchiglia2021b,
  title     = {Towards Algorithmic Transparency: A Diversity Perspective},
  author    = {Fausto Giunchiglia and Jahna Otterbacher and Styliani Kleanthous and Khuyagbaatar Batsuren and Veronika Bogina and Tsvi Kuflik and Avital Shulner Tal},
  url       = {https://arxiv.org/abs/2104.05658},
  year      = {2021},
  date      = {2021-04-12},
  journal   = {arXiv preprint arXiv:2104.05658},
  abstract  = {As the role of algorithmic systems and processes increases in society, so does the risk of bias, which can result in discrimination against individuals and social groups. Research on algorithmic bias has exploded in recent years, highlighting both the problems of bias, and the potential solutions, in terms of algorithmic transparency (AT). Transparency is important for facilitating fairness management as well as explainability in algorithms; however, the concept of diversity, and its relationship to bias and transparency, has been largely left out of the discussion. We reflect on the relationship between diversity and bias, arguing that diversity drives the need for transparency. Using a perspective-taking lens, which takes diversity as a given, we propose a conceptual framework to characterize the problem and solution spaces of AT, to aid its application in algorithmic systems. Example cases from three research domains are described using our framework.},
  keywords  = {Algorithmic Transparency, Diversity},
  pubstate  = {published},
  tppubtype = {article}
}
@article{Orphanou2021b,
  title     = {Mitigating Bias in Algorithmic Systems: A Fish-Eye View of Problems and Solutions Across Domains},
  author    = {Kalia Orphanou and Jahna Otterbacher and Styliani Kleanthous and Khuyagbaatar Batsuren and Fausto Giunchiglia and Veronika Bogina and Avital Shulner Tal and Tsvi Kuflik},
  url       = {https://arxiv.org/abs/2103.16953},
  year      = {2021},
  date      = {2021-03-31},
  journal   = {arXiv preprint arXiv:2103.16953},
  abstract  = {Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences. Given the complexity of the problem and the involvement of multiple stakeholders, including developers, end-users and third-parties, there is a need to understand the landscape of the sources of bias, and the solutions being proposed to address them. This survey provides a 'fish-eye view', examining approaches across four areas of research. The literature describes three steps toward a comprehensive treatment: bias detection, fairness management and explainability management, and underscores the need to work from within the system as well as from the perspective of stakeholders in the broader context.},
  keywords  = {Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency},
  pubstate  = {published},
  tppubtype = {article}
}
@proceedings{Kasinidou2021,
  title     = {I agree with the decision, but they didn't deserve this: Future Developers' Perception of Fairness in Algorithmic Decisions},
  author    = {Maria Kasinidou and Styliani Kleanthous and Pınar Barlas and Jahna Otterbacher},
  url       = {https://dl.acm.org/doi/abs/10.1145/3442188.3445931},
  doi       = {10.1145/3442188.3445931},
  isbn      = {9781450383097},
  year      = {2021},
  date      = {2021-03-08},
  publisher = {Association for Computing Machinery},
  series    = {FAccT '21},
  abstract  = {While professionals are increasingly relying on algorithmic systems for making a decision, on some occasions, algorithmic decisions may be perceived as biased or not just. Prior work has looked into the perception of algorithmic decision-making from the user's point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making. Participants (N=99) were asked to rate their agreement with statements regarding six constructs that are related to facets of fairness and justice in algorithmic decision-making in three separate scenarios. Two of the three scenarios were independent of each other, while the third scenario presented three different outcomes of the same algorithmic system, demonstrating perception changes triggered by different outputs. Quantitative analysis indicates that a) 'agreeing' with a decision does not mean the person 'deserves the outcome', b) perceiving the factors used in the decision-making as 'appropriate' does not make the decision of the system 'fair' and c) perceiving a system's decision as 'not fair' is affecting the participants' 'trust' in the system. In addition, participants found proportional distribution of benefits more fair than other approaches. Qualitative analysis provides further insights into the information the participants find essential to judge and understand an algorithmic decision-making system's fairness. Finally, the level of academic education has a role to play in the perception of fairness and justice in algorithmic decision-making.},
  keywords  = {Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency},
  pubstate  = {published},
  tppubtype = {proceedings}
}
@workshop{Alameda-Pineda2020,
  title     = {FATE/MM 20: 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in MultiMedia},
  author    = {Xavier Alameda-Pineda and Miriam Redi and Jahna Otterbacher and Nicu Sebe and Shih-Fu Chang},
  url       = {https://dl.acm.org/doi/abs/10.1145/3394171.3421896},
  doi       = {10.1145/3394171.3421896},
  isbn      = {9781450379885},
  year      = {2020},
  date      = {2020-10-12},
  booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
  abstract  = {The series of FAT/FAccT events aim at bringing together researchers and practitioners interested in fairness, accountability, transparency and ethics of computational methods. The FATE/MM workshop focuses on addressing these issues in the Multimedia field. Multimedia computing technologies operate today at an unprecedented scale, with a growing community of scientists interested in multimedia models, tools and applications. Such continued growth has great implications not only for the scientific community, but also for the society as a whole. Typical risks of large-scale computational models include model bias and algorithmic discrimination. These risks become particularly prominent in the multimedia field, which historically has been focusing on user-centered technologies. To ensure a healthy and constructive development of the best multimedia technologies, this workshop offers a space to discuss how to develop ethical, fair, unbiased, representative, and transparent multimedia models, bringing together researchers from different areas to present computational solutions to these issues.},
  keywords  = {Accountability, Algorithmic Fairness, Algorithmic Transparency, Ethics},
  pubstate  = {published},
  tppubtype = {workshop}
}
Alison Smith-Renner Styliani Kleanthous, Brian Lim Tsvi Kuflik Simone Stumpf Jahna Otterbacher Advait Sarkar Casey Dugan Avital Shulner ExSS-ATEC: Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies 2020 Workshop Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020, ISBN: 9781450375139. Abstract | Links | BibTeX | Tags: Algorithmic Transparency, Explainability @workshop{Smith-Renner2020, title = {ExSS-ATEC: Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies 2020}, author = {Alison Smith-Renner, Styliani Kleanthous, Brian Lim, Tsvi Kuflik, Simone Stumpf, Jahna Otterbacher, Advait Sarkar, Casey Dugan, Avital Shulner}, url = {https://dl.acm.org/doi/abs/10.1145/3379336.3379361}, doi = {10.1145/3379336.3379361}, isbn = {9781450375139}, year = {2020}, date = {2020-03-17}, booktitle = {Proceedings of the 25th International Conference on Intelligent User Interfaces Companion}, abstract = {Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. 
In addition, our goal is to focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system's inner workings, such as awareness, data provenance, and validation.}, keywords = {Algorithmic Transparency, Explainability}, pubstate = {published}, tppubtype = {workshop} } Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources in order to support human decision-making and/or take direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest as means to provide more effective system training, better reliability and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system's inner workings, such as awareness, data provenance, and validation. |
Tal, Avital Shulner; Batsuren, Khuyagbaatar; Bogina, Veronika; Giunchiglia, Fausto; Hartman, Alan; Kleanthous-Loizou, Styliani; Kuflik, Tsvi; Otterbacher, Jahna "End to End" - Towards a Framework for Reducing Biases and Promoting Transparency of Algorithmic Systems Workshop 14th International Workshop On Semantic And Social Media Adaptation And Personalization, SMAP 2019 ACM, 2019. Abstract | Links | BibTeX | Tags: Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency @workshop{endtoend2019, title = {"End to End" - Towards a Framework for Reducing Biases and Promoting Transparency of Algorithmic Systems}, author = {Avital Shulner Tal and Khuyagbaatar Batsuren and Veronika Bogina and Fausto Giunchiglia and Alan Hartman and Styliani Kleanthous-Loizou and Tsvi Kuflik and Jahna Otterbacher}, url = {https://www.cycat.io/wp-content/uploads/2019/07/1570543680.pdf}, year = {2019}, date = {2019-06-09}, booktitle = {14th International Workshop On Semantic And Social Media Adaptation And Personalization}, publisher = {ACM}, series = {SMAP 2019}, abstract = {Algorithms play an increasing role in our everyday lives. Recently, the harmful potential of biased algorithms has been recognized by researchers and practitioners. We have also witnessed a growing interest in ensuring the fairness and transparency of algorithmic systems. However, so far there is no agreed-upon solution, nor even an agreed-upon terminology. The proposed research defines the problem space and solution space, and presents a prototype of a comprehensive framework for detecting and reducing biases in algorithmic systems.}, keywords = {Algorithmic Bias, Algorithmic Fairness, Algorithmic Transparency}, pubstate = {published}, tppubtype = {workshop} } Algorithms play an increasing role in our everyday lives. Recently, the harmful potential of biased algorithms has been recognized by researchers and practitioners. We have also witnessed a growing interest in ensuring the fairness and transparency of algorithmic systems. However, so far there is no agreed-upon solution, nor even an agreed-upon terminology.
The proposed research defines the problem space and solution space, and presents a prototype of a comprehensive framework for detecting and reducing biases in algorithmic systems. |
Styliani Kleanthous, Tsvi Kuflik, Jahna Otterbacher, Alan Hartman, Casey Dugan, Veronika Bogina Intelligent user interfaces for algorithmic transparency in emerging technologies Workshop Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion, 2019, ISBN: 9781450366731. Abstract | Links | BibTeX | Tags: Algorithmic Transparency @workshop{Kleanthous2019b, title = {Intelligent user interfaces for algorithmic transparency in emerging technologies}, author = {Styliani Kleanthous, Tsvi Kuflik, Jahna Otterbacher, Alan Hartman, Casey Dugan, Veronika Bogina}, url = {https://dl.acm.org/doi/abs/10.1145/3308557.3313125}, doi = {10.1145/3308557.3313125}, isbn = {9781450366731}, year = {2019}, date = {2019-03-16}, booktitle = {Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion}, abstract = {The workshop focuses on Algorithmic Transparency (AT) in emerging technologies. Naturally, the user interface is where and how algorithmic transparency should occur, and the challenge we address is how intelligent user interfaces can make a system transparent to its users.}, keywords = {Algorithmic Transparency}, pubstate = {published}, tppubtype = {workshop} } The workshop focuses on Algorithmic Transparency (AT) in emerging technologies. Naturally, the user interface is where and how algorithmic transparency should occur, and the challenge we address is how intelligent user interfaces can make a system transparent to its users. |