Apply transformations to PySpark DataFrames, such as creating new columns, filtering rows, or modifying string & number values.
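For instance, a minimal sketch of those three transformations; the dataset and column names are illustrative, not from the article:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transformations").getOrCreate()

# Hypothetical sample data standing in for the article's dataset.
df = spark.createDataFrame(
    [("alice", 34, "nyc"), ("bob", 29, "chicago")],
    ["name", "age", "city"],
)

df = (
    df.withColumn("age_in_months", F.col("age") * 12)  # create a new numeric column
      .filter(F.col("age") > 30)                       # filter rows
      .withColumn("city", F.upper(F.col("city")))      # modify string values
)
df.show()
```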
Get started with Apache Spark in part 1 of our series, where we leverage Databricks and PySpark.
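As a taste of part 1, here is a minimal local sketch; on Databricks a `spark` session already exists in every notebook, and the CSV path below is a placeholder:

```python
from pyspark.sql import SparkSession

# Locally you build the session yourself; Databricks provides `spark` for you.
spark = SparkSession.builder.appName("getting-started").getOrCreate()

# Read a CSV into a DataFrame (hypothetical path).
df = spark.read.csv("/path/to/sample.csv", header=True, inferSchema=True)
df.printSchema()
df.show(5)
```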
Build a pipeline which extracts raw data from the JIRA Cloud API, transforms it, and loads the data into a SQL database.
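A condensed sketch of that extract–transform–load flow, assuming the JIRA Cloud issue-search endpoint (`/rest/api/2/search`); the site URL, credentials, JQL, and connection string are all placeholders:

```python
import requests
import pandas as pd
from sqlalchemy import create_engine

# Extract: pull issues from the JIRA Cloud REST API (placeholder URL and creds).
resp = requests.get(
    "https://yourcompany.atlassian.net/rest/api/2/search",
    params={"jql": "project = DEMO", "maxResults": 50},
    auth=("user@example.com", "api_token"),
)
issues = resp.json()["issues"]

# Transform: flatten the nested JSON into tabular rows.
rows = [
    {
        "key": issue["key"],
        "summary": issue["fields"]["summary"],
        "status": issue["fields"]["status"]["name"],
    }
    for issue in issues
]
df = pd.DataFrame(rows)

# Load: write the result to a SQL database (placeholder connection string).
engine = create_engine("postgresql://user:password@localhost:5432/jira")
df.to_sql("issues", engine, if_exists="replace", index=False)
```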
Make your GraphQL queries more dynamic with Fragments, plus get started with Mutations.
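A rough sketch of that idea: define a fragment once, then reuse it in both a query and a mutation. The `Post` type and endpoint are hypothetical stand-ins for your schema:

```python
import requests

# A fragment bundles a reusable selection of fields.
POST_FIELDS = """
fragment postFields on Post {
  title
  slug
  published
}
"""

# Reuse the fragment in a query...
get_posts = POST_FIELDS + """
query GetPosts {
  posts { ...postFields }
}
"""

# ...and in a mutation.
create_post = POST_FIELDS + """
mutation CreatePost($title: String!) {
  createPost(title: $title) { ...postFields }
}
"""

endpoint = "https://api.example.com/graphql"
requests.post(endpoint, json={"query": get_posts})
requests.post(
    endpoint,
    json={"query": create_post, "variables": {"title": "Hello GraphQL"}},
)
```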
Now that we have an understanding of GraphQL queries and API setup, it's time to get that data.
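At its simplest, fetching that data is a single POST request; the endpoint, token, and schema below are placeholders:

```python
import requests

endpoint = "https://api.example.com/graphql"
headers = {"Authorization": "Bearer <token>"}

query = """
{
  posts {
    title
    author { name }
  }
}
"""

resp = requests.post(endpoint, json={"query": query}, headers=headers)
resp.raise_for_status()

# GraphQL responses nest results under a top-level "data" key.
data = resp.json()["data"]
for post in data["posts"]:
    print(post["title"], "-", post["author"]["name"])
```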
Begin to structure complex queries against your GraphQL API.
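For a sense of what "complex" means here, a sketch of a query combining variables, arguments, and nested selections; the schema is hypothetical, and the string would be sent with `requests.post` exactly as in the previous sketch:

```python
# Arguments, typed variables, and multiple levels of nesting in one query.
complex_query = """
query RecentPosts($limit: Int!, $status: Status!) {
  posts(first: $limit, status: $status) {
    title
    author {
      name
      followers(first: 3) { name }
    }
    comments(first: 5) {
      body
      author { name }
    }
  }
}
"""
variables = {"limit": 10, "status": "PUBLISHED"}
```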
Brush up on SQL fundamentals such as creating tables, schemas, and views.
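A quick refresher sketch covering all three, run from Python via SQLAlchemy against a placeholder PostgreSQL database; the schema and table names are illustrative:

```python
from sqlalchemy import create_engine, text

# Placeholder connection string.
engine = create_engine("postgresql://user:password@localhost:5432/demo")

ddl = """
CREATE SCHEMA IF NOT EXISTS blog;

CREATE TABLE IF NOT EXISTS blog.posts (
    id        SERIAL PRIMARY KEY,
    title     TEXT NOT NULL,
    published DATE
);

CREATE OR REPLACE VIEW blog.recent_posts AS
    SELECT title, published
    FROM blog.posts
    WHERE published > CURRENT_DATE - INTERVAL '30 days';
"""

with engine.begin() as conn:
    conn.execute(text(ddl))
```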
Shrink memory usage with a worked example where we downcast numerical columns.
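A minimal sketch of the technique, assuming a Pandas workflow; `pd.to_numeric` picks the smallest dtype that still fits each column's values:

```python
import pandas as pd
import numpy as np

# Hypothetical DataFrame with Pandas' default 64-bit numeric dtypes.
df = pd.DataFrame({
    "views": np.random.randint(0, 1000, size=10_000),  # int64 by default
    "rating": np.random.rand(10_000) * 5,              # float64 by default
})
print(df.memory_usage(deep=True).sum())

# Downcast each column to the smallest dtype that fits its values.
df["views"] = pd.to_numeric(df["views"], downcast="unsigned")  # -> uint16
df["rating"] = pd.to_numeric(df["rating"], downcast="float")   # -> float32
print(df.memory_usage(deep=True).sum())
```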
The quest to never explicitly set a table schema ever again.
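One route to that goal, sketched under the assumption of a Pandas + SQLAlchemy stack: let `read_csv` infer the dtypes, then let `to_sql` translate them into SQL column types. The file path and connection string are placeholders:

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost:5432/demo")

# dtypes are inferred from the file contents.
df = pd.read_csv("data.csv")

# to_sql maps the inferred dtypes to SQL column types,
# so the table schema never has to be written by hand.
df.to_sql("inferred_table", engine, if_exists="replace", index=False)
```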
Connect to a PostgreSQL database and execute queries in Python using the Psycopg2 library.
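The core of it fits in a few lines; connection parameters and the table queried are placeholders:

```python
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="demo",
    user="user",
    password="password",
)

# `with conn` commits on success and rolls back on error.
with conn, conn.cursor() as cur:
    # Parameterized query: psycopg2 substitutes %s values safely.
    cur.execute(
        "SELECT title, published FROM blog.posts WHERE published > %s",
        ("2023-01-01",),
    )
    for title, published in cur.fetchall():
        print(title, published)

conn.close()
```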