Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2019)


Personalization has become a ubiquitous and essential part of systems that help users find relevant information in today's complex, information-rich online environments. Machine learning, recommender systems, and user modeling are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users' needs and preferences. However, there is growing recognition that these underlying technologies raise novel ethical, legal, and policy challenges. It has become apparent that a single-minded focus on user preferences obscures other important and beneficial outcomes such systems must deliver. System properties such as fairness, transparency, balance, openness to diversity, and other social welfare considerations are not captured by the metrics against which data-driven personalized models are typically optimized. Indeed, widely used personalization systems on popular sites such as Facebook, Google News, and YouTube have been heavily criticized for personalizing information delivery too aggressively at the cost of these other objectives.

Bias, fairness, and transparency in machine learning have attracted considerable recent research interest. However, more work is needed to expand and extend this work to algorithmic and modeling approaches in which personalization and user modeling are of primary importance. In particular, it is essential to address these challenges from the standpoint of understanding stereotypes in users' behaviors and their influence on individual and group decisions.

The 2nd Workshop on Fairness in User Modeling, Adaptation, and Personalization aims to bring together experts from academia and industry to discuss ethical, social, and legal concerns related to personalization and user modeling with the goal of exploring a variety of mechanisms and modeling approaches that help mitigate bias and achieve fairness in personalized systems.

Topics of interest include, but are not limited to, the following:

  • Bias and discrimination in user modeling, personalization, and recommendation
  • Computational techniques and algorithms for fairness-aware personalization
  • Definitions, metrics, and criteria for optimizing and evaluating fairness-related aspects of personalized systems
  • Data preprocessing and transformation methods to address bias in training data
  • User modeling approaches that take fairness and bias into account
  • User studies and other empirical studies to evaluate the impact of personalization on fairness, balance, diversity, and other social welfare criteria
  • Balancing needs of multiple stakeholders in recommender systems and other personalized systems
  • ‘Filter bubble’ or ‘balkanization’ effects of personalization
  • Transparent and accurate explanations for recommendations and other personalization outcomes

We solicit research papers reporting original results as well as position papers proposing novel, ground-breaking ideas pertaining to the workshop topics. See the Submission page for more details.



Important Dates

  • Submission deadline: March 13, 2019 (23:59 American Samoa Zone – UTC-11) (closed)
  • Notification of acceptance: March 26, 2019
  • Camera-ready due: April 3, 2019 (23:59 American Samoa Zone – UTC-11) (closed)

9 June 2019, 9:00 am (GMT+3)

UMAP 2019, Larnaca, Cyprus


Speakers

Dr. Peter Brusilovsky (invited speaker)
Dr. Nava Tintarev (invited speaker)