Process data at scale with Apache Spark and PySpark. Build pipelines to batch-process data or stream it in real time.
Perform SQL-like joins and aggregations on your PySpark DataFrames.
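To make the join-and-aggregation workflow concrete, here's a minimal sketch using small, made-up DataFrames (the orders and customers data below are purely illustrative):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("joins-aggregations").getOrCreate()

# Illustrative sample data (column names and values are assumptions)
orders = spark.createDataFrame(
    [(1, "alice", 20.0), (2, "bob", 35.5), (3, "alice", 12.25)],
    ["order_id", "customer", "amount"],
)
customers = spark.createDataFrame(
    [("alice", "US"), ("bob", "CA")],
    ["customer", "country"],
)

# SQL-like inner join on the shared "customer" column
joined = orders.join(customers, on="customer", how="inner")

# Aggregate: total and average order amount per country
summary = joined.groupBy("country").agg(
    F.sum("amount").alias("total_spend"),
    F.avg("amount").alias("avg_order"),
)
summary.show()
```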
Work with Spark's original data structure API: Resilient Distributed Datasets (RDDs).
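As a quick taste of the RDD API, the sketch below parallelizes a local collection and runs a classic map/filter/reduce over it (the numbers are arbitrary sample data):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-basics").getOrCreate()
sc = spark.sparkContext

# Turn a local Python collection into a distributed RDD
numbers = sc.parallelize(range(1, 11))

# Classic functional-style transformations and an action
squares = numbers.map(lambda x: x * x)
even_squares = squares.filter(lambda x: x % 2 == 0)
total = even_squares.reduce(lambda a, b: a + b)

print(even_squares.collect())  # [4, 16, 36, 64, 100]
print(total)                   # 220
```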
Become familiar with building a structured stream in PySpark using the Databricks interface.
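As a rough sketch of what a structured stream looks like, the example below uses Spark's built-in rate source so it runs without any external infrastructure; on Databricks you'd more likely read from cloud storage or Kafka and use display() rather than the console sink:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("structured-stream").getOrCreate()

# The "rate" source emits timestamp/value rows at a fixed rate - handy for demos
stream = (
    spark.readStream
    .format("rate")
    .option("rowsPerSecond", 5)
    .load()
)

# Bucket events into 10-second windows and count them
counts = stream.groupBy(F.window("timestamp", "10 seconds")).count()

# Write the running counts to the console sink
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination(30)  # stop after ~30 seconds for demonstration
```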
Learn easy DataFrame cleaning techniques, from dropping rows to selecting only the data that matters.
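Here's a minimal sketch of a few of those cleaning calls, run against a small made-up DataFrame (the column names and values are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataframe-cleaning").getOrCreate()

# Hypothetical messy data for illustration
df = spark.createDataFrame(
    [("alice", 34, "NY"), ("bob", 29, None), ("alice", 34, "NY"), (None, 41, "CA")],
    ["name", "age", "state"],
)

cleaned = (
    df.dropDuplicates()                # remove exact duplicate rows
      .dropna(subset=["name"])         # drop rows missing a name
      .fillna({"state": "unknown"})    # fill missing states with a default
      .select("name", "age", "state")  # keep only the columns we care about
)
cleaned.show()
```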
Apply transformations to PySpark DataFrames such as creating new columns, filtering rows, or modifying string & number values.
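For instance, a short sketch of those transformation patterns on illustrative data (the 1.08 multiplier and column names are assumptions):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-transformations").getOrCreate()

df = spark.createDataFrame(
    [("Alice", "new york", 120.0), ("Bob", "boston", 80.0)],
    ["name", "city", "spend"],
)

transformed = (
    df.withColumn("city", F.initcap("city"))                            # title-case a string column
      .withColumn("spend_with_tax", F.round(F.col("spend") * 1.08, 2))  # derive a new numeric column
      .filter(F.col("spend") > 100)                                     # keep only rows above a threshold
)
transformed.show()
```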
Get started with Apache Spark in part 1 of our series, where we leverage Databricks and PySpark.
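If you're following along outside Databricks, a local SparkSession sketch like the one below should get you going (on Databricks, a session named spark is already created for you; data.csv is a placeholder path):

```python
from pyspark.sql import SparkSession

# Assumes pyspark is installed locally (pip install pyspark)
spark = (
    SparkSession.builder
    .appName("getting-started")
    .master("local[*]")
    .getOrCreate()
)

# Read a CSV into a DataFrame and take a first look at it
df = spark.read.csv("data.csv", header=True, inferSchema=True)
df.printSchema()
df.show(5)
```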