Delta Lake Quickstart


This guide helps you quickly explore the main features of Delta Lake. It provides code snippets that show how to read from and write to Delta tables from interactive, batch, and streaming queries.

Set up Apache Spark with Delta Lake


Follow these instructions to set up Delta Lake with Spark. You can run the steps in this guide on your local machine in the following two ways:

  1. Run interactively: Start the Spark shell (Scala or Python) with Delta Lake and run the code snippets interactively in the shell.
  2. Run as a project: Set up a Maven or SBT project (Scala or Java) with Delta Lake, copy the code snippets into a source file, and run the project. Alternatively, you can use the examples provided in the GitHub repository.

Set up interactive shell

To use Delta Lake interactively within Spark's Scala or Python shell, you need a local installation of Apache Spark. Depending on whether you want to use Python or Scala, you can set up either PySpark or the Spark shell, respectively.

PySpark

Install or upgrade PySpark (3.0 or above) by running the following:
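For example, using pip (a minimal sketch; the exact command depends on your environment):

    pip install --upgrade pyspark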

Then, run PySpark with the Delta Lake package and additional configurations:
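One possible invocation, assuming Delta Lake 1.0.0 compiled for Scala 2.12 (pick the delta-core version that matches your Spark release):

    pyspark --packages io.delta:delta-core_2.12:1.0.0 \
      --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \
      --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"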

Spark Scala Shell

Download the latest version of Apache Spark (3.0 or above) by following the instructions in Downloading Spark, either by using pip or by downloading and extracting the archive, and run spark-shell in the extracted directory.
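Then run the shell with the Delta Lake package and the same configurations as for PySpark above, for example (the delta-core version shown is an assumption; use the release that matches your Spark version):

    bin/spark-shell --packages io.delta:delta-core_2.12:1.0.0 \
      --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \
      --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"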

Set up project

If you want to build a project using Delta Lake binaries from Maven Central Repository, you can use the following Maven coordinates.

Maven

You include Delta Lake in your Maven project by adding it as a dependency in your POM file. Delta Lake is compiled with Scala 2.12.
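A sketch of the dependency entry (the version shown is an assumption; substitute the Delta Lake release that matches your Spark version):

    <dependency>
      <groupId>io.delta</groupId>
      <artifactId>delta-core_2.12</artifactId>
      <version>1.0.0</version>
    </dependency>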

SBT

You include Delta Lake in your SBT project by adding the following line to your build.sbt file:
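For example (the version shown is an assumption; use the release that matches your Spark version):

    libraryDependencies += "io.delta" %% "delta-core" % "1.0.0"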

Python

To set up a Python project (e.g., for unit testing), first start the Spark session with the Delta Lake package and then import the Python APIs.
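A minimal sketch, assuming the delta-spark package has been installed with pip (pip install delta-spark); the app name "quickstart" is arbitrary:

    import pyspark
    from delta import configure_spark_with_delta_pip

    # Build a SparkSession with the Delta Lake SQL extensions enabled.
    builder = (
        pyspark.sql.SparkSession.builder.appName("quickstart")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    )

    # configure_spark_with_delta_pip attaches the delta-core package to the session.
    spark = configure_spark_with_delta_pip(builder).getOrCreate()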

Create a table

To create a Delta table, write a DataFrame out in the delta format. You can use existing Spark SQL code and change the format from parquet, csv, json, and so on, to delta.
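For example, in Scala (a minimal sketch using the local path '/tmp/delta-table'):

    // Create a small DataFrame of ids 0-4 and write it out in Delta format.
    val data = spark.range(0, 5)
    data.write.format("delta").save("/tmp/delta-table")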

These operations create a new Delta table using the schema that was inferred from your DataFrame. For the full set of options available when you create a new Delta table, see Create a table and Write to a table.

Note

This quickstart uses local paths for Delta table locations. For configuring HDFS or cloud storage for Delta tables, see Storage configuration.


Read data

You read data from your Delta table by specifying the path to the table, '/tmp/delta-table':
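For example, in Scala:

    // Load the Delta table at the given path into a DataFrame.
    val df = spark.read.format("delta").load("/tmp/delta-table")
    df.show()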

Update table data


Delta Lake supports several operations to modify tables using standard DataFrame APIs. This example runs a batch job to overwrite the data in the table:

Overwrite
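A minimal sketch in Scala:

    // Replace the existing contents of the table with the values 5-9.
    val data = spark.range(5, 10)
    data.write.format("delta").mode("overwrite").save("/tmp/delta-table")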

If you read this table again, you should see only the values 5-9 that you added, because you overwrote the previous data.

Conditional update without overwrite

Delta Lake provides programmatic APIs to conditionally update, delete, and merge (upsert) data into tables. Here are a few examples.
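A sketch in Scala using the DeltaTable API, assuming the table written above at '/tmp/delta-table':

    import io.delta.tables._
    import org.apache.spark.sql.functions._

    val deltaTable = DeltaTable.forPath("/tmp/delta-table")

    // Update: add 100 to every even id.
    deltaTable.update(
      condition = expr("id % 2 == 0"),
      set = Map("id" -> expr("id + 100")))

    // Delete every even id.
    deltaTable.delete(condition = expr("id % 2 == 0"))

    // Upsert (merge) new data into the table, matching on id.
    val newData = spark.range(0, 20).toDF
    deltaTable.as("oldData")
      .merge(newData.as("newData"), "oldData.id = newData.id")
      .whenMatched
      .update(Map("id" -> col("newData.id")))
      .whenNotMatched
      .insert(Map("id" -> col("newData.id")))
      .execute()

    deltaTable.toDF.show()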

You should see that some of the existing rows have been updated and new rows have been inserted.


For more information on these operations, see Table deletes, updates, and merges.

Read older versions of data using time travel

You can query previous snapshots of your Delta table by using time travel. If you want to access the data that you overwrote, you can query a snapshot of the table before you overwrote the first set of data using the versionAsOf option.
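For example, in Scala:

    // Read the snapshot of the table as it was at version 0.
    val df = spark.read.format("delta")
      .option("versionAsOf", 0)
      .load("/tmp/delta-table")
    df.show()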


You should see the first set of data, from before you overwrote it. Time travel takes advantage of the power of the Delta Lake transaction log to access data that is no longer in the table. Removing the version 0 option (or specifying version 1) would let you see the newer data again. For more information, see Query an older snapshot of a table (time travel).

Write a stream of data to a table

You can also write to a Delta table using Structured Streaming. The Delta Lake transaction log guarantees exactly-once processing, even when there are other streams or batch queries running concurrently against the table. By default, streams run in append mode, which adds new records to the table:
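A minimal sketch in Scala, using Spark's built-in rate source to generate data (the checkpoint path '/tmp/checkpoint' is an arbitrary choice):

    // Generate a stream of numbers and append them to the Delta table.
    val streamingDf = spark.readStream.format("rate").load()
    val stream = streamingDf
      .selectExpr("value as id")
      .writeStream
      .format("delta")
      .option("checkpointLocation", "/tmp/checkpoint")
      .start("/tmp/delta-table")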

While the stream is running, you can read the table using the earlier commands.

Note

If you’re running this in a shell, you may see streaming task progress output, which makes it hard to type commands in that shell. It may be useful to start another shell in a new terminal for querying the table.

You can stop the stream by running stream.stop() in the same terminal that started the stream.

For more information about Delta Lake integration with Structured Streaming, see Table streaming reads and writes.


Read a stream of changes from a table

While the stream is writing to the Delta table, you can also read from that table as a streaming source. For example, you can start another streaming query that prints all the changes made to the Delta table. You can specify which version Structured Streaming should start from by providing the startingVersion or startingTimestamp option to get changes from that point onwards. See Structured Streaming for details.
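A minimal sketch in Scala, printing changes to the console:

    // Read the Delta table as a stream and print new rows as they arrive.
    val stream2 = spark.readStream
      .format("delta")
      .load("/tmp/delta-table")
      .writeStream
      .format("console")
      .start()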