Category: Python

Running Jupyter with Spark in Docker

PySpark, Python, Spark

Most attendees of dimajix Spark workshops seem to like the hands-on approach I offer them, using Jupyter notebooks with Spark clusters running in the AWS cloud. But when the workshop finishes, the natural question for many attendees is "how can I continue?". On the one hand, setting up a Spark cluster is not too difficult, but on the other hand, it is probably out of scope for most people. Moreover, you still need to get a Jupyter notebook running with PySpark, which again is not too difficult, but also out of scope as a starting point.

Docker to the Rescue

So I built a Docker image containing Spark 2.2.0 and Anaconda Python 3.5, which can be run locally on Linux, Windows and probably Mac (I haven't tested on Apple hardware so far). You only need to have Docker installed on your machine; everything else is contained in the single image. The image can be downloaded with the Docker CLI as follows:
docker pull dimajix/jupyter-spark:latest
Once the image has been downloaded (which is required only once), you can run a Jupyter notebook via
docker run --rm -p 8888:8888 dimajix/jupyter-spark:latest
Then point your favorite browser to http://localhost:8888, which will show the Jupyter notebook start page. Since Spark runs in "local" mode, it does not require any cluster resources, but it will still use as many CPUs as it can find in your Docker environment.
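For a quick smoke test, a first notebook cell could look like the following sketch. It only assumes that PySpark is on the kernel's Python path; if the image's kernel already provides a spark session, getOrCreate() simply returns it.

from pyspark.sql import SparkSession

# "local[*]" runs Spark inside the notebook container and uses all CPUs
# that Docker makes available to it.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("jupyter-local")
         .getOrCreate())

# Tiny smoke test: build a DataFrame from plain Python objects.
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.show()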

Accessing S3

In order to access the training data in S3, you also need AWS credentials and have to pass them in as environment variables as follows:
docker run --rm -p 8888:8888 -e AWS_ACCESS_KEY_ID=<your access key> -e AWS_SECRET_ACCESS_KEY=<your secret key> dimajix/jupyter-spark:latest
Note that, for technical reasons, you need to use the "s3a" scheme instead of "s3" when accessing data in S3, i.e. "s3a://dimajix-training/data/alice/".
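With the credentials passed in as above, reading the workshop data from a notebook might then look like this minimal sketch (it assumes the image wires the AWS_* variables into the S3A connector; spark.read.text simply loads the files with one line per row):

# The s3a:// scheme selects the Hadoop S3A connector instead of the
# legacy s3:// file system.
alice = spark.read.text("s3a://dimajix-training/data/alice/")

alice.show(5, truncate=False)
print(alice.count())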

More on GitHub

The Docker image also supports running a Spark standalone cluster and has some more options to tweak (for example, a proxy for accessing S3, for all those sitting behind a firewall and proxy). You can find all the details on GitHub at https://github.com/dimajix/docker-jupyter-spark

Jupyter Notebooks with PySpark in AWS

Big Data, Cloud, Data Science, PySpark, Python

Amazon Elastic MapReduce (EMR) is something wonderful if you need compute capacity on demand. I love it for deploying the technical environments for my trainings, so every attendee gets their own small Spark cluster in AWS. It comes with Hadoop, Spark, Hive, HBase, Presto and Pig as workhorses, and Hue and Zeppelin as convenient frontends, which support workshops and interactive trainings extremely well. But unfortunately Zeppelin still lags behind Jupyter notebooks, especially if you are using Python with PySpark instead of Scala. So if you are into PySpark and EMR, you really want to use Jupyter with PySpark running on top of EMR.

Technically this requires downloading and installing an appropriate Python distribution (like Anaconda, for example) and configuring an appropriate Jupyter kernel which uses PySpark instead of plain Python. Moreover, the Python distribution is required on all participating nodes, so Spark can start Python processes with the same packages on any node in the cluster. Things start to get complicated, especially if you want to launch multiple EMR clusters – for example, for providing a separate cluster to every attendee of a training.
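One way to wire up such a kernel (not necessarily the exact setup used in the trainings) is to point a plain Python kernel at the cluster's Spark installation, for example via the findspark package; the paths below are placeholders and need to match your EMR and Anaconda installation:

import os

# Placeholder paths: /usr/lib/spark is where EMR installs Spark, and
# /opt/anaconda stands for wherever the bootstrap action put Anaconda.
os.environ["SPARK_HOME"] = "/usr/lib/spark"
os.environ["PYSPARK_PYTHON"] = "/opt/anaconda/bin/python"

import findspark
findspark.init()  # makes the pyspark package from SPARK_HOME importable

from pyspark.sql import SparkSession

# Run against the EMR cluster via YARN; the executors launch the same
# Anaconda Python, which is why it has to exist on every node.
spark = (SparkSession.builder
         .master("yarn")
         .appName("jupyter-pyspark")
         .getOrCreate())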

Obviously this situation calls for some sort of automation. Fortunately, a good solution is provided by Terraform from HashiCorp – the perfect tool for deploying multiple clusters for trainings. And by adding a bootstrap action, it is also possible to automatically deploy Anaconda and the Jupyter notebook server on the master node of the EMR cluster.

Read More
