Course: INFOMHCML
Human centered machine learning
Course information
Course code: INFOMHCML
Credits (EC): 7.5
Course goals

After following this course, students can:

  • implement methods to measure and improve the explainability and fairness of machine learning (ML) models;
  • formulate and conduct a research project on human-centered ML and report on its outcome;
  • reflect on their own activities as researchers and be aware of social and ethical responsibilities concerning applications of ML;
  • read and critically assess state-of-the-art literature on human-centered ML;
  • explain and apply theoretical concepts, models and algorithms related to explainability and fairness of ML models;
  • present their own research in both written and spoken English to diverse audiences (e.g., audiences with various backgrounds and interests).

Assessment
Your grade will be determined as follows:

  • Written paper discussion (20% of the final mark)
  • Exam (40% of the final mark)
  • Project (40% of the final mark)
  • Programming assignments (pass/fail)
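The weighting above can be sketched as follows (the component grades are hypothetical; the sketch assumes the pass/fail programming assignments must be passed to receive a final grade):

```python
# Grade weights as listed above; component grades are made-up examples.
WEIGHTS = {"paper_discussion": 0.20, "exam": 0.40, "project": 0.40}

def final_grade(grades, assignments_passed):
    """Weighted final mark on a 1-10 scale; None if the pass/fail part fails."""
    if not assignments_passed:
        return None
    return sum(WEIGHTS[k] * grades[k] for k in WEIGHTS)

print(final_grade({"paper_discussion": 8.0, "exam": 7.0, "project": 7.5}, True))
# 0.2*8.0 + 0.4*7.0 + 0.4*7.5 = 1.6 + 2.8 + 3.0 = 7.4
```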

These are the repair options:

  • Paper discussion: it is not possible to repair the paper discussion.
  • Exam: we offer a retake, but only if 4.0 <= exam grade < 5.5.
  • Project: it is not possible to repair the project. 
  • Programming assignments: it is not possible to repair the programming assignments. They need to be submitted by the provided deadlines. Each week of delay leads to a reduction of 0.2 points (out of 10) on the final grade.

Prerequisites
The course requires familiarity with machine learning (including neural networks) and proficiency in Python.
It is recommended that students have completed at least one course on machine learning, such as INFOMPR Pattern Recognition or INFOMAML Advanced Machine Learning.
We expect students to already have experience with developing and evaluating machine learning systems. 

Content
The impact of machine learning (ML) systems on our society has been increasing rapidly, ranging from systems that influence the content that we see online (e.g., ranking algorithms, advertising algorithms) to systems that enhance or even replace human decision making (e.g. in hiring processes). However, machine learning systems often perpetuate or even amplify societal biases—biases we are often not even aware of.
What’s more, most machine learning systems are not transparent, which hampers their practical uptake and makes it challenging to know when to trust (or not trust) the output of these systems.


This course will familiarize students with a growing set of concepts and techniques to develop and assess machine learning systems, so that these systems are fair and interpretable, and can be used in responsible ways.
More specifically, we will discuss methods to measure the fairness of ML systems and to make ML systems fairer. The course also covers different approaches to creating and evaluating interpretable or explainable ML models.
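As a minimal illustration of one such fairness measure (the function name, data, and group labels here are made up for the example; the course covers more metrics and more principled tooling), demographic parity compares positive-prediction rates across groups:

```python
# Hedged sketch: demographic parity difference for a binary sensitive
# attribute with groups "A" and "B" (toy data, not from the course).
def demographic_parity_diff(y_pred, group):
    """|P(y_hat=1 | group A) - P(y_hat=1 | group B)|."""
    a = [p for p, g in zip(y_pred, group) if g == "A"]
    b = [p for p, g in zip(y_pred, group) if g == "B"]
    return abs(sum(a) / len(a) - sum(b) / len(b))

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_diff(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; larger values indicate a larger disparity.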

The course will cover examples from various areas of AI. Given the expertise of the lecturers, we will also zoom in on specific examples from natural language processing and multimodal affective computing research.
Our discussion will also be informed by relevant literature from the social sciences. An interest in these areas is therefore desirable.


Course form
There will be lectures and practical exercises. Students will also provide written reflections on academic articles. The course also contains a group project.

Study material
Most of the reading material consists of academic articles. A few candidate readings are:
  • Ribeiro et al., "Why Should I Trust You? Explaining the Predictions of Any Classifier", KDD 2016
  • Lundberg and Lee, "A Unified Approach to Interpreting Model Predictions", NeurIPS 2017
  • Solon Barocas, Moritz Hardt, Arvind Narayanan, "Fairness and Machine Learning: Limitations and Opportunities", https://fairmlbook.org/
  • Buolamwini and Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", Proceedings of Machine Learning Research 2018
  • Bolukbasi et al., "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings", NeurIPS 2016
  • Li et al., "Beyond Saliency: Understanding Convolutional Neural Networks from Saliency Prediction on Layer-wise Relevance Propagation", Image and Vision Computing 2019
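The readings by Ribeiro et al. and Lundberg and Lee both concern local explanations of black-box predictions. As a much simpler illustration of the underlying idea (this is a feature-ablation sketch with a made-up toy model, not the LIME or SHAP algorithms themselves):

```python
# Hedged sketch: feature-ablation importance for a toy black-box model.
# Real methods (LIME, SHAP) are more principled; this only shows the
# core intuition of probing a model by altering its inputs.
def black_box(x):
    # Toy "model": a fixed linear score over three features.
    return 3.0 * x[0] + 1.0 * x[1] - 2.0 * x[2]

def ablation_importance(f, x, baseline=0.0):
    """Score drop when each feature is replaced by a baseline value."""
    full = f(x)
    return [full - f(x[:i] + [baseline] + x[i + 1:]) for i in range(len(x))]

print(ablation_importance(black_box, [1.0, 1.0, 1.0]))  # [3.0, 1.0, -2.0]
```

For this linear toy model the ablation scores recover the model's own coefficients; for real nonlinear models the scores depend on the input and the baseline, which is part of what makes explanation methods an active research topic.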
A selection of the course material of 2021-2022 can be found here:
https://github.com/dongpng/human-centered-ml (slides, syllabus, programming assignments).
Course material for 2022-2023 may change, although this should give a good impression of the type of content and activities.

If you have questions about the course, please contact the course coordinator, Dr. Nguyen.
