- Level: Foundation
- Duration: 9 hours
- Course by: University of Michigan
About
By the end of this second course in the Total Data Quality Specialization, learners will be able to:

1. Describe various metrics for evaluating Total Data Quality (TDQ) at each stage of the TDQ framework.
2. Create a quality concept map that tracks relevant aspects of TDQ for a particular application or data source.
3. Think through trade-offs among quality aspects, relative costs, and the practical constraints imposed by a particular project or study.
4. Identify relevant software and related tools for computing the various metrics.
5. Understand metrics that can be computed for both designed and found/organic data.
6. Apply the metrics to real data and interpret the resulting values from a TDQ perspective (see the brief sketch below).

This specialization as a whole explores the Total Data Quality framework in depth, covering the detailed evaluation of total data quality that needs to happen prior to data analysis. The goal is for learners to make evaluating data quality a critical component of every project. We hope to bring knowledge of total data quality to all learners, including data scientists and quantitative analysts who have not had sufficient training in the initial steps of the data science process: collecting data and evaluating its quality. Extensive knowledge of data science techniques and statistical analysis procedures will not rescue a quantitative research study if the data collected or gathered are not of sufficiently high quality.

This specialization therefore focuses on the essential first steps in any scientific investigation using data: generating or gathering the data, understanding where they come from, evaluating their quality, and taking steps to maximize that quality before performing any statistical analysis or applying data science techniques to answer research questions. Given this focus, there is little material on the analysis of data itself, which is covered in myriad existing Coursera specializations. The primary focus is on understanding and maximizing data quality prior to analysis.
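As a flavor of the metric computations the modules below cover, the following is a minimal, illustrative sketch of one common validity metric: global fit indices from a confirmatory factor analysis (CFA), computed in R with the lavaan package that the course's validity examples use. It runs on lavaan's bundled HolzingerSwineford1939 data and is not an excerpt from the course materials.

```r
# Illustrative sketch only: construct-validity evidence via CFA in R,
# using the lavaan package and its bundled HolzingerSwineford1939 data.
library(lavaan)

# Hypothesized three-factor measurement model (lavaan's standard example).
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'

# Fit the model to the observed item responses.
fit <- cfa(model, data = HolzingerSwineford1939)

# Global fit indices commonly read as validity evidence:
# CFI/TLI near or above 0.95 and RMSEA near or below 0.06 suggest good fit.
fitMeasures(fit, c("cfi", "tli", "rmsea"))
```

Indices like these give one quantitative handle on measurement validity; the course's own examples go further, into measurement invariance and multitrait-multimethod models.

Modules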
Welcome!
1 Video
- Welcome to Course 2!
2 Readings
- Course Syllabus
- Course Pre-Survey
Validity
1 Assignment
- Interpreting Validity Metrics
4 Videos
- Measuring Validity for Designed Data
- Example 1: Performing CFA and Examining Measurement Invariance in R
- Approaches and Considerations for Measuring Quality for Gathered Data
- Measuring Validity for Gathered Data
2 Readings
- Files for Example 1
- Example 2: A tutorial on estimating 'true-score' multitrait-multimethod models with lavaan in R
Data Origin
1 Assignment
- Interpreting Data Origin Quality Metrics
4 Videos
- Measuring Data Origin Quality for Designed Data
- Examples: Computing Measures of Data Origin Quality for Designed Data in R
- Measuring Data Origin Quality for Gathered Data
- Example 4: Measuring Validity and Data Origin Quality for Gathered Data
2 Readings
- Output and R data file for the next Examples video
- Case Study: Measuring the Quality of Cause-of-Death Data at the CDC
Processing
1 Assignment
- Interpreting Processing Metrics
4 Videos
- Measuring Processing Quality for Designed Data
- Example: Computing Processing Metrics with Real Data and Code
- Measuring Processing Quality for Gathered Data
- Example: Computing Processing Metrics for Gathered Data
1 Reading
- Data files for the next example
Data Access
1 Assignment
- Interpreting Access Metrics
4 Videos
- Measuring Data Access Quality for Designed Data
- Example: Computing Access Metrics with Real Data and Code
- Measuring Data Access Quality for Gathered Data
- Case Study: Measuring Data Access Quality in Gathered Twitter Data
1 Reading
- Case study article: Hino and Fahey 2019
Data Source
1 Assignment
- Interpreting Data Source Quality Metrics
4 Videos
- Measuring Data Source Quality for Designed Data
- Example: Computing Data Source Metrics with Real Data and Code
- Measuring Data Source Quality for Gathered Data
- Example: Computing Data Source Quality Metrics with Real Data and Code
1 Reading
- Data files for the next example
Data Missingness
1 Assignment
- Interpreting Data Missingness Metrics
4 Videos
- Measuring Data Missingness for Designed Data
- Example: Computing Data Missingness Metrics with Real Data and Code
- Measuring Data Missingness for Gathered Data
- Example: Computing Data Missingness for Gathered Data
2 Readings
- Link to R software and Examples on GitHub (from previous lecture)
- Data file for the next example
Measuring the Quality of Data Analysis
1 Assignment
- Examining Analysis Quality Metrics and Interpreting Output
4 Videos
- Measuring the Quality of an Analysis of Designed Data
- Example: Computing Measures of Data Analysis Quality for Designed Data in R
- Measuring the Quality of an Analysis of Gathered Data
- Example: Computing Metrics for Quality of Models of Gathered Data
3 Readings
- Files for the next Example
- Suggested readings from the previous lecture
- The Aequitas Bias Toolkit for Auditing Machine Learning Models
Course Conclusion
3 Readings
- Course Conclusion
- References for Measuring Total Data Quality
- Course Post-Survey
Auto Summary
"Measuring Total Data Quality" is a foundational Data Science & AI course led by Coursera. This 540-minute course equips learners with essential skills to evaluate and enhance data quality using various metrics. It covers creating quality concept maps, understanding trade-offs, identifying relevant software, and applying metrics to real data. Ideal for data scientists and quantitative analysts, it emphasizes the importance of data quality before analysis. Subscription options include Starter and Professional.

Instructors
- Brady T. West
- James Wagner
- Jinseok Kim
- Trent D. Buskirk