Research Methodology

According to the Association for Computing Machinery (ACM), in its Statement on Algorithmic Transparency and Accountability [1], many algorithmic processes are opaque. For instance, it is often difficult to interpret results from models induced by new machine learning techniques such as deep learning [2]. In addition, there are social and economic challenges to achieving algorithmic transparency, such as the need for developers and owners of such processes to protect trade secrets, as well as concerns about user privacy.

Given these considerations, as well as the rapid changes taking place in the information landscape, such as the worldwide consolidation of information services, the first assumption upon which CyCAT is structured is that users require tools (both technical and educational) to help them become aware of algorithmic processes in the systems they use.

CyCAT is inspired by the ACM Statement’s principles for promoting algorithmic transparency and accountability. Three of the seven principles describe approaches to combating algorithmic biases that researchers can apply even without access to a given system’s inner workings:

  • Awareness: Researchers can raise stakeholders’ awareness of the potential for biases and social harms that could result from developing and using a given analytic system.
  • Data provenance: Researchers can facilitate the exploration of potential biases introduced by the human and automated data-gathering processes used to create training data for algorithmic systems.
  • Validation and testing of outputs: Researchers can develop rigorous techniques for testing the models and assumptions used in analytic systems, evaluating their potential for social and discriminatory harm (a minimal sketch of such a test follows this list).
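
As one illustration of the third principle, the sketch below (hypothetical data and function names, not a CyCAT deliverable) computes a simple demographic-parity gap from a system's observed outputs; a large gap between groups flags a potential discriminatory harm that would warrant closer investigation.

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """Fraction of favourable outcomes observed for each group.

    `records` is a list of (group, outcome) pairs collected from a
    system's outputs; `outcome` is True when the result is favourable.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += bool(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = positive_rate_by_group(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical outcomes observed for two user groups.
observed = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(positive_rate_by_group(observed))  # {'A': 0.666..., 'B': 0.333...}
print(demographic_parity_gap(observed))  # 0.333...
```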

The above principles are also the basis of the second assumption upon which CyCAT is structured: solutions for promoting transparency, even for opaque processes, can be developed by third parties, based on observing and analyzing system inputs and outputs, as well as user behavior.
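
To make this assumption concrete, the following minimal sketch shows what such third-party, black-box probing might look like, assuming only that the audited service can be wrapped in a callable (`query_system` and all other names here are hypothetical): issue inputs that differ in a single attribute and quantify how far the outputs diverge.

```python
import itertools

def paired_probe(query_system, base_query, variants):
    """Black-box probe of an opaque system: issue queries that differ
    only in one attribute and record the corresponding outputs.

    `query_system` is any callable wrapping the system under audit
    (e.g. an HTTP client); it is a placeholder, not a real API.
    """
    return {label: query_system(base_query.format(attribute=value))
            for label, value in variants.items()}

def pairwise_differences(results, distance):
    """Compare every pair of outputs with a caller-supplied distance metric."""
    return {(a, b): distance(results[a], results[b])
            for a, b in itertools.combinations(results, 2)}

# Usage with a stand-in system and a simple set-overlap (Jaccard) distance.
def fake_system(query):               # stand-in for the real black box
    return set(query.lower().split())

outputs = paired_probe(fake_system,
                       "top news for {attribute} readers",
                       {"group_a": "young", "group_b": "older"})
jaccard_distance = lambda x, y: 1 - len(x & y) / len(x | y)
print(pairwise_differences(outputs, jaccard_distance))
```

Because the audited system is treated purely as a black box, the same probe can be pointed at a search engine, a recommender, or any other service without access to its internals, which is exactly what the second assumption requires.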

CyCAT therefore focuses on understanding the nature and impact of human biases in information systems, and on developing tools, techniques, and training to promote algorithmic transparency and media-related algorithmic literacy.