Each project comes with 2-5 hours of micro-videos explaining the solution.
Get access to 50+ solved projects with iPython notebooks and datasets.
Add project experience to your LinkedIn/GitHub profiles.
Lately, the phrase "ETL is dead" has become popular, but that statement is flatly false. A more accurate claim would be "batch ETL is growing unpopular". Companies now believe not only in the power of data but also in the power of its freshness: a dashboard that reveals yesterday's sales pattern is less valuable than one that shows the sales pattern of the last 30 minutes.
Kafka, a scalable and distributed streaming and messaging platform, is a great choice for building today's ETL pipelines.
In this big data Kafka project, we will cover this in theory as well as in implementation. We will see how data ingestion and loading are done with the Kafka Connect API, while transformation is done with the Kafka Streams API. But this is not all.
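As a minimal sketch of the ingestion step, the JSON below configures Kafka Connect's built-in FileStreamSource connector to tail a file into a topic. The connector name, file path, and topic name here are illustrative assumptions, not values from the project:

```json
{
  "name": "file-source-demo",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/server.log",
    "topic": "raw-logs"
  }
}
```

Submitting this configuration to the Connect REST endpoint (`POST /connectors`) would publish each line appended to the file as a message on the `raw-logs` topic, which a Kafka Streams application can then consume and transform.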
In this project, we will show how to build an ETL pipeline on streaming datasets using Kafka.
The goal of this IoT project is to make the case for a generalized streaming architecture for reactive data ingestion, built on microservices.
In this project, we are going to analyze a streaming log file dataset by integrating Kafka and Kylin.