Each project comes with 2-5 hours of micro-videos explaining the solution.
Get access to 50+ solved projects with iPython notebooks and datasets.
Add project experience to your LinkedIn/GitHub profiles.
In this big data project, we build a live workflow for a real project using Apache Airflow, an emerging workflow management platform. We will go through the use cases of workflow management, the different tools available for it, important workflow features such as the CLI and UI, and how Airflow is different. We will install Airflow and run some simple workflows.
In this big data Hadoop project, we will download the raw page-counts data from the Wikipedia archive and process it via Hadoop. We will then map the processed data to raw SQL data to identify the most viewed pages of a given day, and visualize the processed data via Zeppelin Notebooks to identify daily trends. We will use Qubole to power Hadoop and the notebooks.
All steps, such as downloading the data, copying it to S3, creating tables, and processing them via Hadoop, will be tasks in Airflow, and we will learn how to craft a scheduled workflow in Airflow.
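The step ordering above can be sketched as a tiny dependency graph in plain Python. This illustrates only the ordering idea behind a scheduled workflow; the task names are hypothetical, and in the project itself each step is expressed as an Airflow task rather than with this helper.

```python
# Minimal sketch of the pipeline's task ordering in plain Python.
# Task names are hypothetical; in the project each step is an Airflow task.

def topo_order(deps):
    """Return tasks in an order that respects the dependency graph."""
    order, seen = [], set()

    def visit(task):
        if task in seen:
            return
        for upstream in deps.get(task, []):
            visit(upstream)  # run all upstream tasks first
        seen.add(task)
        order.append(task)

    for task in deps:
        visit(task)
    return order

# download -> copy_to_s3 -> create_tables -> process_with_hadoop
pipeline = {
    "download": [],
    "copy_to_s3": ["download"],
    "create_tables": ["copy_to_s3"],
    "process_with_hadoop": ["create_tables"],
}

print(topo_order(pipeline))
# -> ['download', 'copy_to_s3', 'create_tables', 'process_with_hadoop']
```

Airflow does the same bookkeeping for you: declaring dependencies between operators yields a DAG, and the scheduler runs each task only after its upstream tasks succeed.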
In this big data project, we will talk about Apache Zeppelin. We will write code, take notes, build charts, and share it all in a single data analytics environment using Hive, Spark and Pig.
In this Hadoop Hive project, you will work with Hive and HQL to analyze movie ratings using the MovieLens dataset for better movie recommendations.
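The core aggregation in a ratings analysis like this, an average rating per movie via GROUP BY, can be sketched in plain Python. The sample rows and field layout below are made up for illustration, not taken from the actual MovieLens files:

```python
from collections import defaultdict

# Hypothetical sample rows in MovieLens style: (user_id, movie_title, rating).
ratings = [
    (1, "Toy Story", 4.0),
    (2, "Toy Story", 5.0),
    (1, "Heat", 3.0),
    (3, "Heat", 4.0),
]

# Rough equivalent of:
#   SELECT movie_title, AVG(rating) FROM ratings GROUP BY movie_title;
totals = defaultdict(lambda: [0.0, 0])  # movie -> [sum of ratings, count]
for _user, movie, rating in ratings:
    totals[movie][0] += rating
    totals[movie][1] += 1

avg_rating = {movie: s / n for movie, (s, n) in totals.items()}
print(avg_rating)  # -> {'Toy Story': 4.5, 'Heat': 3.5}
```

In the project, the same grouping runs as an HQL query over Hive tables, where Hadoop distributes the sum-and-count work across the cluster instead of a single in-memory loop.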
In this big data project, we will continue from the previous Hive project, "Data engineering on Yelp Datasets using Hadoop tools", and do the entire data processing using Spark.