- Level: Professional
- Duration: 30 hours
- Course by: University of Glasgow
About
This course introduces the concepts of interpretability and explainability in machine learning applications. The learner will understand the difference between global, local, model-agnostic and model-specific explanations. State-of-the-art explainability methods such as Permutation Feature Importance (PFI), Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are explained and applied to time-series classification. Subsequently, model-specific explanations such as Class Activation Mapping (CAM) and Gradient-weighted CAM are explained and implemented. The learner will understand axiomatic attributions and why they are important. Finally, attention mechanisms are incorporated after recurrent layers, and the attention weights are visualised to produce local explanations of the model.

Modules
Welcome
1 Video
- Welcome video - Explainable Deep Learning Models for Healthcare
Explainability in Machine Learning Models for Healthcare
2 Videos
- Interpretability vs Explainability
- 'Explainability' in Healthcare Applications
1 Reading
- The importance of explainable prediction models in healthcare
Taxonomy of Explainability Methods
2 Videos
- Taxonomy of Explainability Methods
- Model Agnostic Explainability Methods
2 Readings
- Explainable Artificial Intelligence - Taxonomy
- Model Agnostic Explainability
Permutation Feature Importance in ECG Classification
1 Video
- Permutation Feature Importance in Time Series Data
5 Readings
- Permutation Feature Importance
- Practical Exercise: Interpretability of the MLP model using Permutation Feature Importance
- Practical Exercise: Interpretability of the CNN model using Permutation Feature Importance
- Practical Exercise: Interpretability of the LSTM model using Permutation Feature Importance
- Explainability models in ECG
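To make the idea concrete before the practical exercises, here is a minimal, model-agnostic sketch of Permutation Feature Importance for time-series inputs: shuffle one time step across the dataset and measure how much the accuracy drops. The stand-in classifier and synthetic data are illustrative assumptions, not the course's notebook code.

```python
# Minimal, model-agnostic Permutation Feature Importance for time-series
# inputs shaped (samples, time_steps). Illustrative sketch only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def permutation_feature_importance(model, X, y, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    base_acc = np.mean(model.predict(X) == y)   # baseline accuracy
    importances = np.zeros(X.shape[1])
    for t in range(X.shape[1]):                 # one column per time step
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, t])           # break the feature/label link
            drops.append(base_acc - np.mean(model.predict(X_perm) == y))
        importances[t] = np.mean(drops)         # mean accuracy drop = importance
    return importances

# Synthetic check: only time step 10 is informative, so it should rank first.
X = np.random.default_rng(0).normal(size=(200, 50))
y = (X[:, 10] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
print(permutation_feature_importance(clf, X, y).argmax())  # expect 10
```

Time steps whose permutation barely changes the accuracy carry little information for the model, which is what the exercises above probe on ECG data.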
End of Week 1
1 Assignment
- End of week 1 quiz
1 Discussion
- Week 1 - Your experience
Week 1 - Interactive notebook examples
5 Labs
- Permutation feature importance for classifying heart beats using a CNN
- Light - Permutation feature importance for classifying heart beats using a CNN
- Permutation feature importance for classifying heart beats using an LSTM
- Light - Permutation feature importance for classifying heart beats using an LSTM
- Permutation feature importance for classifying heart beats using a multi-layer perceptron
Local Interpretable Model Agnostic Explanations
2 Videos
- Local Interpretable Model Agnostic Explanations (LIME)
- LIME in Time-Series Classification
4 Readings
- Why Should I Trust You?
- Practical Exercise: Interpretability of heartbeat classification using LIME and an MLP model
- Practical Exercise: Interpretability of heartbeat classification using LIME and a CNN model
- Practical Exercise: Interpretability of heartbeat classification using LIME and an LSTM model
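As a companion to these exercises, here is a hedged sketch of LIME applied to a time-series classifier by treating each time step as a tabular feature. It assumes the `lime` package and any model exposing predict_proba; the model and data are illustrative stand-ins, not the course's setup.

```python
# Hedged sketch: LIME over a time-series classifier, treating each time
# step as a tabular feature. Model and data are illustrative assumptions.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))             # 200 series, 50 time steps
y = (X[:, 25] > 0).astype(int)             # class driven by time step 25
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=[f"t{i}" for i in range(50)], mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
print(exp.as_list())                       # local weight per time step
```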
Shapley Additive Explanations
1 Video
- Shapley Additive Explanations
1 Reading
- A Unified Approach to Interpreting Model Predictions
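The reading above introduces SHAP's unified view of additive feature attributions. A minimal sketch using the `shap` package's model-agnostic KernelExplainer might look as follows; the classifier and data are illustrative assumptions, not the course's setup.

```python
# Hedged sketch: SHAP values via the model-agnostic KernelExplainer from
# the `shap` package. Classifier and data are illustrative stand-ins.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))             # 20 features (or time steps)
y = (X[:, 3] + X[:, 7] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

background = shap.sample(X, 50)            # background set for expectations
explainer = shap.KernelExplainer(clf.predict_proba, background)
# Additive attributions: per sample, they sum to the prediction minus the
# expected prediction over the background (SHAP's local accuracy property).
shap_values = explainer.shap_values(X[:5])
```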
Model-Specific Explanations for Deep Learning: Visualisation Methods
2 Videos
- Model-Specific Explanations: Visualisation Methods
- CAM in Time-Series Classification
2 Readings
- Practical Exercise: Interpretability of CNN models using Class Activation Maps
- Class Activation Mapping
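For orientation, here is a minimal Keras sketch of CAM for a 1D CNN over single heartbeats. CAM requires a global-average-pooling layer feeding the final dense layer; the layer names, input shape and untrained model are illustrative assumptions, not the course's architecture.

```python
# Illustrative Keras sketch of Class Activation Mapping for a 1D CNN.
import numpy as np
import tensorflow as tf
from tensorflow import keras

inp = keras.Input(shape=(180, 1))                    # one beat, 180 samples
x = keras.layers.Conv1D(32, 7, padding="same", activation="relu",
                        name="last_conv")(inp)
x = keras.layers.GlobalAveragePooling1D()(x)
out = keras.layers.Dense(2, activation="softmax", name="clf")(x)
model = keras.Model(inp, out)

def cam(model, sample, class_idx):
    # Feature maps of the last conv layer for this sample: (time, channels).
    conv_model = keras.Model(model.input, model.get_layer("last_conv").output)
    fmaps = conv_model(sample[None, ...])[0]
    # Dense-layer weights say how much each channel votes for the class.
    w = model.get_layer("clf").get_weights()[0][:, class_idx]
    return tf.tensordot(fmaps, w, axes=1).numpy()    # one score per time step

heat = cam(model, np.random.rand(180, 1).astype("float32"), class_idx=1)
```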
End of Week 2
1 Assignment
- End of week 2 quiz
1 Discussion
- Week 2 - Your experience
Week 2 - Interactive notebook examples
7 Labs
- Interpretability of heartbeat classification using a CNN model and Class Activation Maps
- Interpretability of heartbeat classification using a CNN model and CAM
- LIME interpretability for heartbeat classification with a convolutional neural network
- Interpretability of heartbeat classification using an LSTM model and CAM
- LIME interpretability for heartbeat classification with a long short-term memory network
- Light - LIME interpretability for heartbeat classification with a long short-term memory network
- LIME interpretability for heartbeat classification with a multi-layer perceptron
Gradient Weighted Class Activation Maps
2 Videos
- Gradient Weighted Class Activation Maps
- Grad-CAM in Time-Series Classification
3 Readings
- Gradient-weighted Class Activation Mapping (Grad-CAM)
- Practical Exercise: Interpretability of the CNN model using Gradient-weighted Class Activation Mapping
- Practical Exercise: Interpretability of the LSTM model using Gradient-weighted Class Activation Mapping
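Here is a minimal Keras sketch of Grad-CAM, assuming a trained model like the one in the CAM sketch above with a convolutional layer named "last_conv". Gradients of the class score with respect to the last convolutional feature maps supply the channel weights, so no global-average-pooling layer is required.

```python
# Illustrative Keras sketch of Grad-CAM for a 1D CNN over heartbeats.
import tensorflow as tf
from tensorflow import keras

def grad_cam(model, sample, class_idx, conv_name="last_conv"):
    grad_model = keras.Model(
        model.input, [model.get_layer(conv_name).output, model.output])
    with tf.GradientTape() as tape:
        fmaps, preds = grad_model(sample[None, ...])
        score = preds[0, class_idx]                # class score to explain
    grads = tape.gradient(score, fmaps)            # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=1)[0]     # average over time: (channels,)
    heat = tf.nn.relu(tf.tensordot(fmaps[0], weights, axes=1))
    return (heat / (tf.reduce_max(heat) + 1e-8)).numpy()  # normalised to [0, 1]
```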
Axiomatic Attributions and Integrated Gradients
2 Videos
- Integrated Gradients
- Integrated Gradients in Time Series Classification
3 Readings
- Axiomatic Attribution for Deep Networks
- Practical Exercise: Interpretability of the CNN model using Integrated Gradients
- Practical Exercise: Interpretability of the LSTM model using Integrated Gradients
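To illustrate the attribution rule from the paper above, here is a minimal TensorFlow sketch of Integrated Gradients: average the gradients along a straight-line path from a baseline (all zeros here) to the input, then scale by the input difference. The Keras `model` and input shape are assumptions.

```python
# Illustrative sketch of Integrated Gradients; assumes a Keras `model`
# mapping (batch, 180, 1) inputs to class probabilities.
import tensorflow as tf

def integrated_gradients(model, sample, class_idx, steps=50):
    sample = tf.convert_to_tensor(sample, dtype=tf.float32)
    baseline = tf.zeros_like(sample)                 # all-zeros baseline
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), (-1, 1, 1))
    path = baseline + alphas * (sample - baseline)   # interpolated inputs
    with tf.GradientTape() as tape:
        tape.watch(path)
        probs = model(path)[:, class_idx]
    grads = tape.gradient(probs, path)               # gradient at each step
    avg_grads = tf.reduce_mean(grads, axis=0)        # Riemann approximation
    return ((sample - baseline) * avg_grads).numpy() # attributions per sample point
```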
End of Week 3
1 Assignment
- End of week 3 quiz
1 Discussion
- Week 3 - Your experience
Week 3 - Interactive notebook examples
7 Labs
- Interpretability of heartbeat classification using a CNN model and Grad-CAM
- Interpretability of heartbeat classification using integrated gradients and a CNN model
- Light - Interpretability of heartbeat classification using integrated gradients and a CNN model
- Interpretability of heartbeat classification using an LSTM model and Grad-CAM
- Light - Interpretability of heartbeat classification using an LSTM model and Grad-CAM
- Interpretability of heartbeat classification using integrated gradients and an LSTM model
- Light - Interpretability of heartbeat classification using integrated gradients and an LSTM model
Week 4: Attention in RNNs and Autoencoders
3 Videos
- Attention in Deep Learning
- Taxonomy of Attention
- Attention and Explainability
3 Readings
- Survey on Attention Mechanisms
- Practical Exercise: Classification of heartbeats using an LSTM with attention mechanism
- Practical Exercise: Interpretability of the LSTM model with attention mechanism
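As a rough illustration of how attention weights can double as local explanations, here is a minimal Keras sketch of an LSTM followed by a simple additive-attention layer whose softmax weights are exposed as a second model output, ready to be plotted per time step. The layer is an assumed variant, not the course's exact mechanism.

```python
# Illustrative Keras sketch: LSTM + attention with inspectable weights.
import tensorflow as tf
from tensorflow import keras

class SimpleAttention(keras.layers.Layer):
    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], 1),
                                 initializer="glorot_uniform", name="w")
    def call(self, h):                               # h: (batch, time, units)
        scores = tf.squeeze(tf.matmul(tf.tanh(h), self.w), -1)
        alpha = tf.nn.softmax(scores, axis=1)        # attention weights
        context = tf.reduce_sum(h * alpha[..., None], axis=1)
        return context, alpha

inp = keras.Input(shape=(180, 1))                    # one beat, 180 samples
h = keras.layers.LSTM(32, return_sequences=True)(inp)
context, alpha = SimpleAttention()(h)
out = keras.layers.Dense(2, activation="softmax")(context)
model = keras.Model(inp, [out, alpha])               # alpha is the explanation
```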
End of Week 4
1 Assignment
- End of week 4 quiz
1 Discussion
- Week 4 - Your experience
Week 4 - Interactive notebook examples
4 Labs
- Interpretability of heartbeat classification using an LSTM model with attention mechanism
- Light - Interpretability of heartbeat classification using an LSTM model with attention mechanism
- Heartbeat classification using an LSTM model with attention mechanism
- Light - Heartbeat classification using an LSTM model with attention mechanism
Explainable Deep Learning Models for Healthcare
1 Assignment
- End of course summative quiz
Auto Summary
Explore interpretability and explainability in machine learning with a focus on healthcare in this professional-level course from the University of Glasgow. Learn state-of-the-art methods such as PFI, LIME, SHAP, CAM and Grad-CAM, and apply them to time-series classification. Ideal for professionals, the course spans 30 hours and offers Starter and Professional subscription options.

Instructor: Fani Deligianni