The main purpose of the course is to give students the ability to plan and implement big data workflows on HDInsight.
The primary audience for this course is data engineers, data architects, data scientists, and data developers who plan to implement big data engineering workflows on HDInsight.
After completing this course, students will be able to:
- Deploy HDInsight Clusters.
- Authorize Users to Access Resources.
- Load Data into HDInsight.
- Troubleshoot HDInsight.
- Implement Batch Solutions.
- Design Batch ETL Solutions for Big Data with Spark.
- Analyze Data with Spark SQL.
- Analyze Data with Hive and Phoenix.
- Describe Stream Analytics.
- Implement Spark Streaming Using the DStream API.
- Develop Big Data Real-Time Processing Solutions with Apache Storm.
- Build Solutions That Use Kafka and HBase.
In addition to their professional experience, students who attend this course should have:
- Programming experience using R, and familiarity with common R packages.
- Knowledge of common statistical methods and data analysis best practices.
- Basic knowledge of the Microsoft Windows operating system and its core functionality.
- Working knowledge of relational databases.
The course covers the following modules:
- Getting Started with HDInsight
- Deploying HDInsight Clusters
- Authorizing Users to Access Resources
- Loading Data into HDInsight
- Troubleshooting HDInsight
- Implementing Batch Solutions
- Designing Batch ETL Solutions for Big Data with Spark
- Analyzing Data with Spark SQL
- Analyzing Data with Hive and Phoenix
- Stream Analytics
- Implementing Streaming Solutions with Kafka and HBase
- Developing Big Data Real-Time Processing Solutions with Apache Storm
- Creating Spark Streaming Applications