Introduction to Big Data (2015)¶
Learn how to apply data science techniques using parallel programming in Apache Spark to explore big (and small) data.
Organizations use their data for decision support and to build data-intensive products and services, such as recommendation, prediction, and diagnostic systems. The collection of skills required by organizations to support these functions has been grouped under the term Data Science. This course will articulate the expected output of Data Scientists and then teach students how to use PySpark (part of Apache Spark) to deliver against these expectations. The course assignments include Log Mining, Textual Entity Recognition, and Collaborative Filtering exercises that teach students how to manipulate data sets using parallel processing with PySpark.
This course covers advanced undergraduate-level material. It requires a programming background and experience with Python (or the ability to learn it quickly). All exercises will use PySpark (part of Apache Spark), but previous experience with Spark or distributed computing is NOT required. Students who need to learn Python or refresh their knowledge of it should take a basic Python course first.
What you’ll learn¶
- How to use Apache Spark to perform data analysis
- How to use parallel programming to explore data sets
- How to apply Log Mining, Textual Entity Recognition, and Collaborative Filtering to real-world data questions
- Module 1
  - Lecture 1: Introduction to Big Data and Data Science
  - Lecture 2: Performing Data Science and Preparing Data
- Module 2
- Module 3
- Module 4
- Module 5