Research
Publications
@article{Kleanthous2021,
  title     = {Perception of fairness in algorithmic decisions: Future developers' perspective},
  author    = {Styliani Kleanthous and Maria Kasinidou and Pınar Barlas and Jahna Otterbacher},
  url       = {https://www.sciencedirect.com/science/article/pii/S2666389921002476},
  year      = {2021},
  date      = {2021-11-03},
  journal   = {Patterns},
  abstract  = {Fairness, accountability, transparency, and ethics (FATE) in algorithmic systems is gaining a lot of attention lately. With the continuous advancement of machine learning and artificial intelligence, research and tech companies are coming across incidents where algorithmic systems are making non-objective decisions that may reproduce and/or amplify social stereotypes and inequalities. There is a great effort by the research community on developing frameworks of fairness and algorithmic models to alleviate biases; however, we first need to understand how people perceive the complex construct of algorithmic fairness. In this work, we investigate how young and future developers perceive these concepts. Our results can inform future research on (1) understanding perceptions of algorithmic FATE, (2) highlighting the needs for systematic training and education on FATE, and (3) raising awareness among young developers on the potential impact that the systems they are developing have in society.},
  keywords  = {Accountability, Algorithmic Fairness, Algorithmic Transparency, Artificial Intelligence},
  pubstate  = {published},
  tppubtype = {article}
}
@article{Bogina2021,
  title     = {Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics},
  author    = {Veronika Bogina and Alan Hartman and Tsvi Kuflik and Avital Shulner-Tal},
  url       = {https://link.springer.com/article/10.1007/s40593-021-00248-0},
  doi       = {10.1007/s40593-021-00248-0},
  year      = {2021},
  date      = {2021-04-21},
  journal   = {International Journal of Artificial Intelligence in Education},
  abstract  = {This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources; we also survey the existing materials and past experiences in such education, noting the scarcity of suitable material on aspects of fairness in particular. We use an example of a college admission platform to illustrate our ideas. We conclude with recommendations for further work in the area and report on the first steps taken towards achieving this goal in the framework of an academic graduate seminar course, a graduate summer school, an embedded lecture in a software engineering course, and a workshop for high school teachers.},
  keywords  = {Accountability, Algorithmic Fairness, Algorithmic Transparency, Education},
  pubstate  = {published},
  tppubtype = {article}
}
@workshop{Alameda-Pineda2020,
  title     = {FATE/MM 20: 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in MultiMedia},
  author    = {Xavier Alameda-Pineda and Miriam Redi and Jahna Otterbacher and Nicu Sebe and Shih-Fu Chang},
  url       = {https://dl.acm.org/doi/abs/10.1145/3394171.3421896},
  doi       = {10.1145/3394171.3421896},
  isbn      = {9781450379885},
  year      = {2020},
  date      = {2020-10-12},
  booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
  abstract  = {The series of FAT/FAccT events aim at bringing together researchers and practitioners interested in fairness, accountability, transparency and ethics of computational methods. The FATE/MM workshop focuses on addressing these issues in the Multimedia field. Multimedia computing technologies operate today at an unprecedented scale, with a growing community of scientists interested in multimedia models, tools and applications. Such continued growth has great implications not only for the scientific community, but also for the society as a whole. Typical risks of large-scale computational models include model bias and algorithmic discrimination. These risks become particularly prominent in the multimedia field, which historically has been focusing on user-centered technologies. To ensure a healthy and constructive development of the best multimedia technologies, this workshop offers a space to discuss how to develop ethical, fair, unbiased, representative, and transparent multimedia models, bringing together researchers from different areas to present computational solutions to these issues.},
  keywords  = {Accountability, Algorithmic Fairness, Algorithmic Transparency, Ethics},
  pubstate  = {published},
  tppubtype = {workshop}
}