- Level: Professional
- Duration: 28 hours
- Offered by: École Polytechnique Fédérale de Lausanne
About
Manipulating big data distributed over a cluster using functional concepts is ubiquitous in industry, and is arguably one of the first widespread industrial uses of functional ideas. This is evidenced by the popularity of MapReduce and Hadoop, and most recently Apache Spark, a fast, in-memory distributed collections framework written in Scala.

In this course, we'll see how the data-parallel paradigm can be extended to the distributed case, using Spark throughout. We'll cover Spark's programming model in detail, being careful to understand how and when it differs from familiar programming models, like shared-memory parallel collections or sequential Scala collections. Through hands-on examples in Spark and Scala, we'll learn when important issues related to distribution, like latency and network communication, should be considered, and how they can be addressed effectively for improved performance.

Learning Outcomes. By the end of this course you will be able to:

- read data from persistent storage and load it into Apache Spark,
- manipulate data with Spark and Scala,
- express algorithms for data analysis in a functional style,
- recognize how to avoid shuffles and recomputation in Spark.

Recommended background: You should have at least one year of programming experience. Proficiency with Java or C# is ideal, but experience with other languages such as C/C++, Python, JavaScript, or Ruby is also sufficient. You should have some familiarity with the command line.

This course is intended to be taken after Parallel Programming: https://www.coursera.org/learn/parprog1.

Note that this version of the course uses Scala 2.13. You can find a more recent version of the course that uses Scala 3 here: https://www.coursera.org/learn/scala-spark-big-data
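As a taste of the style of code the course teaches, here is a minimal, self-contained sketch of a Spark application in Scala 2.13. It is not taken from the course materials, and the input path `articles.txt` is a hypothetical placeholder:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Run locally using all cores; on a real cluster the master
    // would be supplied by spark-submit instead.
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Read data from persistent storage into an RDD
    // ("articles.txt" is a hypothetical input path).
    val lines = sc.textFile("articles.txt")

    // Transformations are lazy: this builds a lineage, running nothing yet.
    val counts = lines
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // The action triggers the actual distributed computation.
    counts.collect().foreach(println)
    sc.stop()
  }
}
```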
Modules

Getting Started
- Example
6 Readings
- Tools Setup (Please read)
- SBT tutorial and Submission of Assignments (Please read)
- Cheat Sheet
- Scala Tutorial
- Scala Style Guide
- Working on Assignments
From Parallel to Distributed
3 Videos
- Introduction, Logistics, What You'll Learn
- Data-Parallel to Distributed Data-Parallel
- Latency
Basics of Spark's RDDs
- Wikipedia (audit version)
- Wikipedia
4 Videos
- RDDs, Spark's Distributed Collection
- RDDs: Transformation and Actions
- Evaluation in Spark: Unlike Scala Collections!
- Cluster Topology Matters!
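To illustrate the evaluation model these lectures contrast with ordinary Scala collections, here is a small sketch, assuming a SparkContext named `sc` is already in scope (illustrative only, not course code):

```scala
// Transformations build a lineage; nothing is computed here,
// unlike eager Scala collections.
val evens = sc.parallelize(1 to 1000000)
  .map(_ * 2)
  .filter(_ % 4 == 0)

// Cache before reusing, otherwise each action below recomputes the lineage.
evens.cache()

// Actions force evaluation; this is where a job actually runs.
val total = evens.count()
val first = evens.take(5)
```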
Reduction Operations & Distributed Key-Value Pairs
- StackOverflow (2-week-long assignment) (audit version)
- StackOverflow (2-week-long assignment)
4 Videos
- Reduction Operations
- Pair RDDs
- Transformations and Actions on Pair RDDs
- Joins
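A brief sketch of the pair-RDD operations named above, again assuming an in-scope SparkContext `sc` and using made-up sample data:

```scala
// Pair RDDs are RDDs of (key, value) tuples (hypothetical sample data).
val scores = sc.parallelize(Seq(("alice", 10), ("bob", 3), ("alice", 5)))
val emails = sc.parallelize(Seq(("alice", "a@example.org"), ("bob", "b@example.org")))

// reduceByKey combines values per key on each node before shuffling,
// which is usually far cheaper than groupByKey followed by a reduce.
val totals = scores.reduceByKey(_ + _)   // ("alice", 15), ("bob", 3)

// Inner join on the key.
val joined = totals.join(emails)         // ("alice", (15, "a@example.org")), ...
joined.collect().foreach(println)
```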
Partitioning and Shuffling
4 Videos
- Shuffling: What it is and why it's important
- Partitioning
- Optimizing with Partitioners
- Wide vs Narrow Dependencies
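A sketch of the pre-partitioning idiom this module motivates, assuming an in-scope SparkContext `sc`; the partition count of 8 and the data are illustrative:

```scala
import org.apache.spark.HashPartitioner

// Hypothetical pair RDD of (userId, amount) records.
val purchases = sc.parallelize(Seq((1, 19.99), (2, 5.00), (1, 3.50)))

// Partition by key once and persist, so later by-key operations can
// reuse the partitioning instead of shuffling the data again.
val partitioned = purchases
  .partitionBy(new HashPartitioner(8))
  .persist()

// Spark now knows the partitioner, so this reduceByKey needs no
// extra shuffle (a narrow dependency rather than a wide one).
val perUser = partitioned.reduceByKey(_ + _)
```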
SQL, DataFrames, and Datasets
- Time Usage (audit version)
- Time Usage
5 Videos
- Structured vs Unstructured Data
- Spark SQL
- DataFrames (1)
- DataFrames (2)
- Datasets
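A compact sketch contrasting the Dataset, DataFrame, and SQL APIs covered here; the `Usage` records are hypothetical and this is not course code:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("StructuredSketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// Hypothetical typed records (top-level in the REPL or a script).
case class Usage(activity: String, hours: Double)
val ds = Seq(Usage("work", 8.5), Usage("leisure", 2.0), Usage("work", 7.0)).toDS()

// Dataset API: typed, checked at compile time.
val work = ds.filter(_.activity == "work")

// DataFrame API: untyped columns, optimized by the Catalyst optimizer.
ds.toDF().groupBy($"activity").avg("hours").show()

// Equivalent Spark SQL over a temporary view.
ds.createOrReplaceTempView("usage")
spark.sql("SELECT activity, AVG(hours) FROM usage GROUP BY activity").show()
```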
Auto Summary

Big Data Analysis with Scala and Spark is a professional-level Coursera course offered by École Polytechnique Fédérale de Lausanne. Over roughly 28 hours, learners explore Spark's programming model, the data-parallel paradigm extended to the distributed setting, and functional techniques for data analysis, covering reading data into Spark, manipulating it with Scala, and optimizing performance by avoiding shuffles and recomputation. It is aimed at learners with at least one year of programming experience, ideally in Java or C#, and is available under the Starter subscription.

Instructor: Prof. Heather Miller