Use Apache Spark to build fast data pipelines. Interact with your Spark cluster using PySpark, and get started with Databricks' notebook interface.
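For a sense of what that looks like, here is a minimal sketch (the data and app name are made up; in a Databricks notebook a `SparkSession` is already available as `spark`, so you can skip building one):

```python
from pyspark.sql import SparkSession

# Locally you create the session yourself; Databricks notebooks provide `spark`.
spark = SparkSession.builder.appName("getting-started").getOrCreate()

# A tiny illustrative DataFrame, just to confirm the cluster is responding.
df = spark.createDataFrame(
    [("bookings.csv", 120), ("users.csv", 80)],
    ["file_name", "row_count"],
)
df.show()
```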
Perform SQL-like joins and aggregations on your PySpark DataFrames.
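A minimal sketch of the join and aggregation syntax involved (the tables and column names here are invented for illustration):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("joins-and-aggregations").getOrCreate()

# Two small, made-up tables sharing a "user" key.
orders = spark.createDataFrame(
    [(1, "alice", 20.0), (2, "bob", 35.0), (3, "alice", 10.0)],
    ["order_id", "user", "amount"],
)
users = spark.createDataFrame(
    [("alice", "US"), ("bob", "CA")],
    ["user", "country"],
)

# SQL-like inner join followed by a grouped aggregation.
summary = (
    orders.join(users, on="user", how="inner")
    .groupBy("country")
    .agg(
        F.count("order_id").alias("num_orders"),
        F.sum("amount").alias("total_amount"),
    )
)
summary.show()
```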
Work with Spark's original data structure API: Resilient Distributed Datasets.
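RDDs sit a level below DataFrames, so the API is built around plain Python functions. A small sketch, assuming nothing beyond a local SparkSession:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-basics").getOrCreate()
sc = spark.sparkContext

# Build an RDD from a local collection, then chain classic map/filter/reduce.
numbers = sc.parallelize(range(1, 11))
even_squares = numbers.map(lambda n: n * n).filter(lambda n: n % 2 == 0)

print(even_squares.collect())                   # [4, 16, 36, 64, 100]
print(even_squares.reduce(lambda a, b: a + b))  # 220
```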
Become familiar with building a structured stream in PySpark using the Databricks interface.
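One way to experiment without wiring up Kafka or cloud storage is Spark's built-in "rate" source, which emits a timestamp and an incrementing value. A sketch of a windowed streaming count (in Databricks you would typically display the stream rather than use a console sink):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("structured-stream").getOrCreate()

# The "rate" source is a demo-friendly stream of (timestamp, value) rows.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# Count rows per 10-second event-time window.
counts = stream.groupBy(F.window("timestamp", "10 seconds")).count()

# Print the running totals to the console sink.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination(30)  # let this sketch run for about 30 seconds
```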
Apply easy DataFrame cleaning techniques, ranging from dropping rows to selecting important data.
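A minimal sketch of a few of those cleaning calls chained together (the messy sample rows are invented for the example):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataframe-cleaning").getOrCreate()

# A made-up DataFrame with nulls and a duplicate row.
df = spark.createDataFrame(
    [("alice", 34, "US"), ("bob", None, "CA"), ("alice", 34, "US"), (None, 29, None)],
    ["name", "age", "country"],
)

cleaned = (
    df.dropDuplicates()                  # remove exact duplicate rows
      .dropna(subset=["name"])           # drop rows missing a name
      .fillna({"country": "unknown"})    # fill remaining null countries
      .select("name", "age", "country")  # keep only the columns we care about
)
cleaned.show()
```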
Apply transformations to PySpark DataFrames such as creating new columns, filtering rows, or modifying string & number values.
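For example, a short sketch combining `withColumn`, `filter`, and a couple of built-in string and math functions (the column names and thresholds are made up):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-transformations").getOrCreate()

# Hypothetical input rows, just to demonstrate the transformation syntax.
df = spark.createDataFrame(
    [("alice smith", 120.0), ("BOB JONES", 80.0)],
    ["full_name", "score"],
)

transformed = (
    df.withColumn("full_name", F.initcap(F.col("full_name")))           # tidy up strings
      .withColumn("score_pct", F.round(F.col("score") / 200 * 100, 1))  # derive a new numeric column
      .filter(F.col("score") > 100)                                     # keep only high scores
)
transformed.show()
```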
Get started with Apache Spark in part 1 of our series, where we leverage Databricks and PySpark.