09_exercise_A5
21.06.22
In this exercise session, we will review and discuss the fourth assignment. After that, we will get familiar with a tool for explainable AI in the fifth assignment.
- 10:15 - 10:20: Welcome and arrival
- 10:20 - 11:00: Assignment 4 peer-review
- 11:00 - 11:30: Assignment 4 discussion
- 11:35 - 11:40: Assignment 5 introduction
- 11:40 - 11:45: Goodbye and outlook
Slides for the ninth exercise.
🕐 Due: Tue, 2022-07-05 10:00 AM
LIME was introduced as a model-agnostic local explanation method. It is based on the work in this paper by Ribeiro et al.1 It will be our starting point into the area of trying to explain the behavior of "less interpretable" machine learning models.
Create your group subfolder in hcds-summer-2022/assignments/A5_Explanations/ with your group name. Create a Jupyter notebook file called A5_Explanations.ipynb inside your group folder. You will use this notebook to work on the assignment's tasks and to document your steps. In the end, your group's final solution should be contained within the notebook.
❗ Don't forget to submit your group's final commit hash to the Exercise Programming Assignment 5 Whiteboard assignment!
We will use the tutorial "Basic usage, two class. We explain random forest classifiers." provided in the LIME repository on GitHub as an introduction to LIME.
- Get familiar with the LIME repository on GitHub and read its README.
- Follow the installation instructions in the README file.
- Get the tutorial notebook running on your machines.
- Step through the notebook and understand what is happening in each line. ✏️ Please add markdown cells to your notebook to document your understanding of what is happening. You can also write down questions that come to mind while stepping through the notebook, or include links or references you used to get a deeper understanding of the notebook.
- Look at explanations for at least three different documents of the given newsgroup dataset (currently document number 83 is used).
- ✏️ Add a new markdown cell and please answer the following questions:
- What did you learn about the model?
- How well do you think the classifier works and why?
- How useful is LIME for a non-data-scientist (e.g. non-ML-experts or designers) and why?
- What questions is LIME able to answer and why?
- Would you incorporate tools like LIME into your data science practice and how?
Now that you (hopefully) have an understanding of LIME's basic functionality, it is time to go back to the South German Credit dataset2 you already know from the previous assignments.
- Train multiple "less interpretable" classifiers on the South German Credit data.
- Make use of LIME to generate local explanations for predictions on the same entries over all your classifiers.
- ✏️ Write down your insights.
- Is there a difference between the classifiers?
- Were the explanations helpful to you?
- Do you think that LIME helps you better understand the models' behaviors?
- Are you more confident in the models than before?
[Optional] Try LIME out with different datasets and models you find interesting and check out the different LIME examples.
Assigned groups will review each other's submissions from the last assignment.
You can find the submission of your peers here: /hcds-summer-2022/assignments/A4_Fairness/
After you have checked that the other group has actively done the assignment:
- Add the other group to the done.txt file in the submission folder.
- Sign with your group name, e.g. group 1 (reviewed by group 6).
The assigned groups are:
Group_08 and Group_03,
Group_HJR and Group_02,
Group_04 and Group_01
We are interested in your feedback in order to improve this course. We will read all of your feedback and evaluate it. What you share may have a direct impact on the rest of the course, or future iterations of it.
Add a file called feedback.txt to your group folder.
Write down your feedback on the lecture, the exercises, or the assignments in the text file. Please also write down roughly how much time you needed for the assignment. You may also write about your insights, what you found interesting, or questions that you have.
- A5_Explanations.ipynb
- feedback.txt
- commit hash
1 M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” arXiv:1602.04938 [cs, stat], Aug. 2016, Accessed: Jun. 16, 2022. [Online]. Available: http://arxiv.org/abs/1602.04938
2 Ulrike Grömping (2019). South German Credit Data Set, UCI Machine Learning Repository.
Content is available under Attribution-Share Alike 3.0 Unported unless otherwise noted.