Hone your data engineering skills by creating data pipelines to extract, transform, and load data. Use tools like Apache Spark and Kafka to handle big data.

Manage Data Pipelines with Apache Airflow

Use Apache Airflow to build, schedule, and monitor data pipelines.
13 min read
June 03

DataFrame Transformations in PySpark (Continued)

Continuing to apply transformations to Spark DataFrames using PySpark.
8 min read
May 07

Becoming Familiar with Apache Kafka and Message Queues

An overview of how Kafka works, along with comparable message brokers.
6 min read
May 04

Executing Basic DataFrame Transformations in PySpark

Using PySpark to apply transformations to real datasets.
9 min read
April 29

Learning Apache Spark with PySpark & Databricks

Get started with Apache Spark in part 1 of our series, where we leverage Databricks and PySpark.
13 min read
April 26

Building an ETL Pipeline: From JIRA to SQL

An example data pipeline that extracts data from the JIRA Cloud API and loads it into a SQL database.
12 min read
March 28