Most attendees of dimajix Spark workshops seem to like the hands-on approach I offer, using Jupyter notebooks with Spark clusters running in the AWS cloud. But when the workshop finishes, the natural question for many attendees is "how can I continue?". On the one hand, setting up a Spark cluster is not too difficult, but on the other hand, it is probably out of scope for most people. Moreover, you still need to get a Jupyter notebook running with PySpark, which again is not too difficult, but also out of scope as a starting point.
Docker to the Rescue
So I built a Docker image containing Spark 2.2.0 and Anaconda Python 3.5, which can be run locally on Linux, Windows, and probably Mac (I haven't tested on Apple hardware so far). You only need to have Docker installed on your machine; everything else is contained in the single image. The image can be downloaded with the Docker CLI as follows:
docker pull dimajix/jupyter-spark:latest
Once the image is downloaded (which is only required once), you can run a Jupyter notebook via
docker run --rm -p 8888:8888 dimajix/jupyter-spark:latest
Then point your favorite browser to http://localhost:8888; this will show the Jupyter notebook start page. Since Spark runs in "local" mode, it does not require any cluster resources, but it will still use as many CPUs as it can find in your Docker environment.
In order to access the training data in S3, you also need AWS credentials and have to pass them to the container as environment variables as follows:
docker run --rm -p 8888:8888 -e AWS_ACCESS_KEY_ID=<your-access-key> -e AWS_SECRET_ACCESS_KEY=<your-secret-key> dimajix/jupyter-spark:latest
Note that for accessing data in S3, for technical reasons, you need to use the "s3a" scheme instead of "s3", e.g. "s3a://dimajix-training/data/alice/".
More on GitHub
The Docker image also supports running a Spark standalone cluster and offers some more options to tweak (for example, a proxy configuration for accessing S3, for all those sitting behind a firewall and proxy). You can find all the details on GitHub at https://github.com/dimajix/docker-jupyter-spark