Course: INFOMHCML
INFOMHCML
Human centered machine learning
Course information
Course code: INFOMHCML
Credits (EC): 7.5
Course goals

This course will familiarize students with a growing set of concepts and techniques to develop and assess machine learning systems, so that these systems are fair and interpretable, and can be used in responsible ways.
More specifically, we will discuss methods to measure the fairness of machine learning systems (ranging from fairness metrics to the creation of challenging evaluation datasets) and machine learning techniques to reduce biases in ML systems (e.g., data augmentation, representation learning).
The course also covers different approaches to creating and evaluating interpretable or explainable ML approaches (e.g. post-hoc local explanations, influence functions, counterfactual explanations).
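
As a small illustration of the first of these topics, the sketch below computes one common fairness metric, the demographic parity difference, using plain Python and NumPy. The function name, variable names, and the binary 0/1 encoding of the protected group are illustrative assumptions, not part of the course material:

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        # y_pred: binary predictions (0 or 1) of a classifier.
        # group:  binary protected attribute (0 or 1) per example.
        # Demographic parity compares the positive-prediction rates of the
        # two groups; 0 means both groups receive positive predictions
        # at the same rate.
        rate_group_0 = y_pred[group == 0].mean()
        rate_group_1 = y_pred[group == 1].mean()
        return abs(rate_group_0 - rate_group_1)

    # Toy example: both groups get positive predictions at a rate of 2/3,
    # so the difference is 0.0.
    y_pred = np.array([1, 0, 1, 1, 0, 1])
    group = np.array([0, 0, 0, 1, 1, 1])
    print(demographic_parity_difference(y_pred, group))  # 0.0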

Assessment
Your grade will be determined as follows:

  • Paper presentation and discussion: 20% (paper presentation) and pass or fail (written paper discussion)
  • Midterm 40%
  • Project 40%
  • Programming assignments (pass or fail).

We offer a few repair options:

  • Midterm: we will offer a retake, but only if 4.0 <= midterm grade < 5.5.
  • Paper presentations: it is not possible to repair the paper presentations.
  • Project: It is not possible to repair the project. However, if you pass all other components, you only need to redo the project when you retake the course.
  • Programming assignments: it is not possible to repair the programming assignments. They need to be submitted by the provided deadlines. Each week of delay leads to a deduction of 0.2 points (out of 10) from the final grade.
Prerequisites
The course requires familiarity with machine learning (including neural networks) and proficiency in Python.
It is recommended that students have completed at least one course on machine learning, such as INFOMPR Pattern Recognition or INFOMAML Advanced Machine Learning.
We expect students to already have experience with developing and evaluating machine learning systems. 
Content
The impact of machine learning (ML) systems on our society has been increasing rapidly, ranging from systems that influence the content that we see online (e.g., ranking algorithms, advertising algorithms) to systems that enhance or even replace human decision making (e.g. in hiring processes). However, machine learning systems often perpetuate or even amplify societal biases—biases we are often not even aware of.
What’s more, most machine learning systems are not transparent, which hampers their practical uptake and makes it challenging to know when to trust (or not trust) the output of these systems.

The course will cover examples from various areas of AI. Given the expertise of the lecturers, we will also zoom in on specific examples from natural language processing and multimodal affective computing research. Our discussion will also be informed by relevant literature from the social sciences. An interest in these areas is therefore desirable.

Course form
There will be lectures and practical exercises. Students are also expected to discuss and present academic articles. The course also contains a group project.

Study material
Most of the material we will read consists of academic articles. A few candidate readings are:
  • Ribeiro et al., "Why Should I Trust You? Explaining the Predictions of Any Classifier", KDD 2016
  • Lundberg and Lee, "A Unified Approach to Interpreting Model Predictions", NeurIPS 2017
  • Barocas, Hardt, and Narayanan, "Fairness and Machine Learning: Limitations and Opportunities", https://fairmlbook.org/
  • Buolamwini and Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", Proceedings of Machine Learning Research 2018
  • Bolukbasi et al., "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings", NeurIPS 2016
  • Li et al., "Beyond Saliency: Understanding Convolutional Neural Networks from Saliency Prediction on Layer-wise Relevance Propagation", Image and Vision Computing 2019
A selection of the course material of 2020-2021 can be found here:
https://github.com/dongpng/human-centered-ml (slides, syllabus, programming assignments).
Course material for 2021-2022 may change, although this should give a good impression of the type of content and activities.

If you have questions about the course, please contact the course coordinator, Dr. Nguyen.
Dr. Nguyen will be on leave from January to April 2022. If you have any questions during that time, please contact Dr. Kaya and cc Dr. Nguyen.
 