Run Apache Spark workloads on Armada, a multi-cluster Kubernetes batch scheduler.
armada-spark is an open-source integration designed to streamline deployment and management of Apache Spark workloads on Armada. It provides preconfigured Docker images, tooling for efficient image management, and example workflows to simplify local and production deployments.
- Java 11 or 17
- Apache Maven 3.9.6+
- (Optional) kind for local clusters
- An accessible Armada server and Lookout endpoint — see the Armada Operator Quickstart to set one up
The default build targets Spark 3.5.5 and Scala 2.13.8:
```shell
mvn clean package
```

To target a different Spark/Scala version:

```shell
./scripts/set-version.sh 3.3.4 2.12.15   # Spark 3.3.4, Scala 2.12.15
mvn clean package
```

Supported version combinations:

| Spark | Scala | Java |
|---|---|---|
| 3.5.5 | 2.12.18 | 17 |
| 3.5.5 | 2.13.8 | 17 |
| 3.3.4 | 2.12.15 | 11 |
| 3.3.4 | 2.13.8 | 11 |
```shell
./scripts/createImage.sh [-i image-name] [-m armada-master-url] [-q armada-queue] [-l armada-lookout-url]
```

| Flag | Description | Example |
|---|---|---|
| `-i` | Docker image name | `spark:armada` |
| `-m` | Armada master URL | `armada://localhost:30002` |
| `-q` | Armada queue | `default` |
| `-l` | Armada Lookout URL | `http://localhost:30000` |
| `-p` | Include Python | |
| `-h` | Display help | |
You can store defaults in `scripts/config.sh`:

```shell
export IMAGE_NAME="spark:armada"
export ARMADA_MASTER="armada://localhost:30002"
export ARMADA_QUEUE="default"
export ARMADA_LOOKOUT_URL="http://localhost:30000"
export INCLUDE_PYTHON=true
export USE_KIND=true
```

For client mode, set additional values:

```shell
export SPARK_DRIVER_HOST="172.18.0.1"   # Required for client mode
export SPARK_DRIVER_PORT="7078"         # Required for client mode
```

We recommend kind for local testing. If you are using the Armada Operator Quickstart, it is already based on kind.
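In client mode the driver runs outside the cluster, so executors must be able to reach it at `SPARK_DRIVER_HOST:SPARK_DRIVER_PORT`. Below is a minimal sketch for deriving a workable driver host when using kind, assuming the default Docker network name `kind`; the fallback address is the example value used above.

```shell
# Hypothetical helper: derive SPARK_DRIVER_HOST from the kind Docker network
# gateway; fall back to the example address above if docker/kind is unavailable.
driver_host=""
if command -v docker >/dev/null 2>&1; then
  driver_host=$(docker network inspect kind \
    --format '{{ (index .IPAM.Config 0).Gateway }}' 2>/dev/null)
fi
export SPARK_DRIVER_HOST="${driver_host:-172.18.0.1}"
export SPARK_DRIVER_PORT="7078"
echo "driver endpoint: ${SPARK_DRIVER_HOST}:${SPARK_DRIVER_PORT}"
```

Whether the gateway address is actually reachable from executor pods depends on your network setup; verify it before relying on it.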
Load the image into your kind cluster:

```shell
kind load docker-image $IMAGE_NAME --name armada
```

The default Armada Operator setup allows only localhost access. You can quickly set up a local Armada server configured to allow external access from other hosts, which is useful for client development and testing. For this configuration:
1. Copy the file `e2e/kind-config-external-access.yaml` in this repository to `hack/kind-config.yaml` in your `armada-operator` repository.

2. Edit the newly-copied `hack/kind-config.yaml` as noted in the comments at the beginning of that file.

3. Run the armada-operator setup commands (usually `make kind-all`) to create and start your Armada instance.

4. Copy the `$HOME/.kube/config` and `$HOME/.armadactl.yaml` files (which Armada Operator generates) from the Armada server host to your `$HOME` directory on the client (local) host. Then edit the local `.kube/config`: on the line that reads `server: https://0.0.0.0:6443`, change the `0.0.0.0` address to the IP address or hostname of the remote Armada server system.

5. Generate a copy of the client TLS key, cert, and CA-cert files: go into the `e2e` subdirectory and run `./extract-kind-cert.sh`. It will generate `client.crt`, `client.key`, and `ca.crt` from the output of `kubectl config view`. These files can be left in this directory.

6. Copy the `$HOME/.armadactl.yaml` from the Armada server host to your home directory on your client system.

7. You should then be able to run `kubectl get pods -A` and see a list of the running pods on the remote Armada server, as well as run `armadactl get queues`.

8. Verify the functionality of your setup by editing `scripts/config.sh` and changing the following line:

   ```shell
   ARMADA_MASTER=armada://192.168.12.135:30002
   ```

   to the IP address or hostname of your Armada server. You should not need to change the port number. Also, set the location of the three TLS certificate files by adding/setting:

   ```shell
   export CLIENT_CERT_FILE=e2e/client.crt
   export CLIENT_KEY_FILE=e2e/client.key
   export CLUSTER_CA_FILE=e2e/ca.crt
   ```

9. You should now be able to verify the armada-spark configuration by running the E2E tests:

   ```shell
   ./scripts/dev-e2e.sh
   ```

   This saves its output to `e2e-test.log` for further debugging.
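As a quick sanity check of the TLS files extracted above, here is a hedged sketch that only verifies each file exists and begins with a PEM header; the paths are the ones used in this guide.

```shell
# check_pem FILE: succeeds if FILE exists and starts with a PEM "-----BEGIN" header.
check_pem() {
  [ -f "$1" ] && head -n 1 "$1" | grep -q '^-----BEGIN'
}

# Paths from this guide (step 5 above).
for f in e2e/client.crt e2e/client.key e2e/ca.crt; do
  if check_pem "$f"; then
    echo "ok: $f"
  else
    echo "check failed: $f"
  fi
done
```

This catches the common copy mistakes (empty files, base64 blobs not yet decoded) but does not validate the certificates themselves; `kubectl get pods -A` is the real end-to-end check.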
Before submitting a pull request, please ensure that your code adheres to the project's coding standards and passes all tests.

To run the unit tests:

```shell
mvn test
```

To run the E2E tests, run Armada using the Operator Quickstart guide, then execute:

```shell
scripts/test-e2e.sh
```

To check the code for linting issues:

```shell
mvn spotless:check
```

To automatically apply linting fixes:

```shell
mvn spotless:apply
```

Make sure that the SparkPi job runs successfully on your Armada cluster before submitting a pull request.
```shell
# Cluster mode + dynamic allocation
./scripts/submitArmadaSpark.sh -M cluster -A dynamic 100

# Cluster mode + static allocation
./scripts/submitArmadaSpark.sh -M cluster -A static 100

# Client mode + dynamic allocation
./scripts/submitArmadaSpark.sh -M client -A dynamic 100

# Client mode + static allocation
./scripts/submitArmadaSpark.sh -M client -A static 100
```

Run `./scripts/submitArmadaSpark.sh -h` for all available options. The script reads `ARMADA_MASTER`, `ARMADA_QUEUE`, and `ARMADA_LOOKOUT_URL` from `scripts/config.sh`.
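Because the script takes its endpoints from `scripts/config.sh`, a small pre-flight check can catch missing settings before a submission fails. The variable names below are the ones listed above; the helper itself is a hypothetical sketch, not part of the repository.

```shell
# Hypothetical pre-flight check: report any unset or empty variables by name.
require_vars() {
  missing=0
  for v in "$@"; do
    eval "val=\${$v}"   # POSIX-sh indirect expansion
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return $missing
}

require_vars ARMADA_MASTER ARMADA_QUEUE ARMADA_LOOKOUT_URL \
  || echo "set the variables above in scripts/config.sh before submitting"
```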
The Docker image includes Jupyter support (requires `INCLUDE_PYTHON=true`):

```shell
./scripts/runJupyter.sh
```

The notebook server opens at http://localhost:8888. Override the port with `JUPYTER_PORT` in `scripts/config.sh`. Example notebooks from `example/jupyter/notebooks` are mounted at `/home/spark/workspace/notebooks`.
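If 8888 is already taken on your machine, here is a hedged sketch for picking a free port before launching; it probes with `nc` when available, and `JUPYTER_PORT` is the override variable named above.

```shell
# Find the first TCP port at or above the given start that is not
# currently listening on localhost. Skips probing if `nc` is unavailable.
find_free_port() {
  port=$1
  while command -v nc >/dev/null 2>&1 && nc -z 127.0.0.1 "$port" 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}

export JUPYTER_PORT=$(find_free_port 8888)
echo "Jupyter will use port $JUPYTER_PORT"
```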
View event logs from completed jobs:

```shell
./scripts/runHistoryServer.sh
```

Requires S3 credentials and `spark.eventLog.enabled=true`. The UI is at http://localhost:18080.
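Event logging must be enabled when the job is submitted, not when the history server starts. A hedged sketch of the relevant Spark properties, assuming the submit script forwards `--conf` options to `spark-submit` (that forwarding, the flag placement, and the S3 bucket name are all assumptions, not confirmed behavior):

```shell
# spark.eventLog.enabled / spark.eventLog.dir are standard Spark properties;
# "my-bucket" is a placeholder for your own S3 bucket.
./scripts/submitArmadaSpark.sh -M cluster -A static \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=s3a://my-bucket/spark-events \
  100
```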
See `scripts/filesParameterExample.sh` for a working example of distributing local files to driver and executor pods using Spark's `--files` parameter and `SparkFiles.get()`.
```shell
./scripts/benchmark.sh      # Run against Armada
./scripts/benchmark.sh -K   # Run against native Kubernetes
```

```
User submits Spark job
└─> ArmadaClusterManager (SPI entry: registers "armada://" master scheme)
    ├─> TaskSchedulerImpl (Spark core task scheduling)
    └─> ArmadaClusterManagerBackend (executor lifecycle, state tracking)
        ├─> ArmadaExecutorAllocator (demand vs supply, batch job submission)
        └─> ArmadaEventWatcher (daemon thread, gRPC event stream)

Cluster-mode submission:
└─> ArmadaClientApplication (SparkApplication SPI)
    ├─> KubernetesDriverBuilder → PodSpecConverter → PodMerger
    └─> ArmadaClient.submitJobs()
```
Key source directories:

```
src/main/scala/org/apache/spark/
├── deploy/armada/                    # Configuration & job submission
│   ├── Config.scala                  # spark.armada.* config entries
│   ├── DeploymentModeHelper.scala    # Gang scheduling strategy per deploy mode
│   ├── submit/                       # Job submission pipeline
│   └── validators/                   # Kubernetes validation
└── scheduler/cluster/armada/         # Cluster manager & scheduling
    ├── ArmadaClusterManager.scala
    ├── ArmadaClusterManagerBackend.scala
    ├── ArmadaEventWatcher.scala
    └── ArmadaExecutorAllocator.scala
```

Version-specific sources live in `src/main/scala-spark-{3.3,3.5,4.1}/`.
See CONTRIBUTING.md for the full development guide, including commit conventions, coding standards, and how to use Claude Code with this project.
Quick reference:

```shell
mvn test               # Run unit tests
mvn spotless:check     # Check formatting
mvn spotless:apply     # Auto-fix formatting
scripts/dev-e2e.sh     # Run E2E tests (requires a running Armada cluster)
```