Research
Publications
Alison Marie Smith-Renner, Styliani Kleanthous Loizou, Jonathan Dodge, Casey Dugan, Min Kyung Lee, Brian Y. Lim, Tsvi Kuflik, Advait Sarkar, Avital Shulner-Tal, Simone Stumpf: TExSS: Transparency and Explanations in Smart Systems. Workshop at the 26th International Conference on Intelligent User Interfaces, 2021, ISBN: 9781450380188.

@workshop{Smith-Renner2021,
  title     = {TExSS: Transparency and Explanations in Smart Systems},
  author    = {Alison Marie Smith-Renner and Styliani Kleanthous Loizou and Jonathan Dodge and Casey Dugan and Min Kyung Lee and Brian Y Lim and Tsvi Kuflik and Advait Sarkar and Avital Shulner-Tal and Simone Stumpf},
  url       = {https://dl.acm.org/doi/abs/10.1145/3397482.3450705},
  doi       = {10.1145/3397482.3450705},
  isbn      = {9781450380188},
  year      = {2021},
  date      = {2021-04-14},
  booktitle = {26th International Conference on Intelligent User Interfaces},
  abstract  = {Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system’s inter-workings, such as awareness, data provenance, and validation.},
  keywords  = {Algorithmic Transparency, Explainability},
  pubstate  = {published},
  tppubtype = {workshop}
}
Alison Smith-Renner, Styliani Kleanthous, Brian Lim, Tsvi Kuflik, Simone Stumpf, Jahna Otterbacher, Advait Sarkar, Casey Dugan, Avital Shulner: ExSS-ATEC: Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies 2020. Workshop, Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020, ISBN: 9781450375139.

@workshop{Smith-Renner2020,
  title     = {ExSS-ATEC: Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies 2020},
  author    = {Alison Smith-Renner and Styliani Kleanthous and Brian Lim and Tsvi Kuflik and Simone Stumpf and Jahna Otterbacher and Advait Sarkar and Casey Dugan and Avital Shulner},
  url       = {https://dl.acm.org/doi/abs/10.1145/3379336.3379361},
  doi       = {10.1145/3379336.3379361},
  isbn      = {9781450375139},
  year      = {2020},
  date      = {2020-03-17},
  booktitle = {Proceedings of the 25th International Conference on Intelligent User Interfaces Companion},
  abstract  = {Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system's inter-workings, such as awareness, data provenance, and validation.},
  keywords  = {Algorithmic Transparency, Explainability},
  pubstate  = {published},
  tppubtype = {workshop}
}