Amazon Elastic MapReduce (EMR) is wonderful if you need compute capacity on demand. I love it for deploying the technical environments for my trainings, so every attendee gets their own small Spark cluster in AWS. It comes with Hadoop, Spark, Hive, HBase, Presto and Pig as workhorses and with Hue and Zeppelin as convenient frontends, which support workshops and interactive trainings extremely well. But unfortunately Zeppelin still lags behind Jupyter notebooks, especially if you are using Python with PySpark instead of Scala. So if you are into PySpark and EMR, you really want to use Jupyter with PySpark running on top of EMR.
Technically this requires downloading and installing an appropriate Python distribution (like Anaconda, for example) and configuring a Jupyter kernel which uses PySpark instead of plain Python. Moreover, the Python distribution is required on all participating nodes, so Spark can start Python processes with the same packages on any node in the cluster. Things start to get complicated, especially if you want to launch multiple EMR clusters – for example to provide a separate cluster to every attendee of a training.
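For illustration, PySpark already provides the necessary hooks: a common alternative to a dedicated kernel spec is to let the pyspark launcher start Jupyter as its driver process via environment variables. This is only a sketch – the installation path /opt/anaconda is an assumption, not necessarily what the scripts below use:
# Sketch: make `pyspark` start a Jupyter notebook server as its driver.
export PYSPARK_PYTHON=/opt/anaconda/bin/python    # Python used by the executors on all nodes
export PYSPARK_DRIVER_PYTHON=/opt/anaconda/bin/jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --no-browser --ip=0.0.0.0 --port=8888"
pyspark --master yarn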
Obviously this situation calls for some sort of automation. And fortunately a good solution is provided by Terraform from HashiCorp – the perfect tool for deploying multiple clusters for trainings. And by adding a bootstrap action, it is also possible to automatically deploy Anaconda on all nodes and the Jupyter notebook server on the master node of the EMR cluster.
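To give an idea of what such a bootstrap action can look like, here is a minimal sketch – not the actual dimajix script; the Anaconda version, paths and log location are placeholders:
#!/bin/bash
# Minimal bootstrap sketch: runs on every node of the EMR cluster.
# Install Anaconda on all nodes, so executors find the same Python everywhere.
wget -q https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh -O /tmp/anaconda.sh
sudo bash /tmp/anaconda.sh -b -p /opt/anaconda
# EMR stores per-instance metadata in instance.json; only the master runs Jupyter.
if grep isMaster /mnt/var/lib/info/instance.json | grep -q true; then
    nohup /opt/anaconda/bin/jupyter notebook --no-browser --ip=0.0.0.0 --port=8888 >/tmp/jupyter.log 2>&1 &
fi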
Deploying Spark + Jupyter in AWS with Terraform
In order to perform an automatic deployment, you can simply use the dimajix Terraform scripts available on GitHub. Clone the repository onto your local machine, then proceed as follows in order to create a secure deployment:
- Clone the GitHub repository into some directory, for example via
git clone https://github.com/dimajix/terraform-emr-training.git
- Create a new public/private SSH key pair, for example via
ssh-keygen -b 2048 -t rsa -C "EMR Access Key" -f deployer-key
- Copy aws-config.tf.template to aws-config.tf, insert your AWS credentials, and adjust the availability zone (see the configuration sketch after this list)
- Edit main.tf to suit your requirements (additional EMR components, number and size of clusters etc.), as illustrated below
- Start the cluster via
terraform get
terraform apply
- Create a dynamic SSH tunnel to the cluster via
ssh -i deployer-key -ND 8157 hadoop@public-master-ip-address
- Install the FoxyProxy Standard plugin for Firefox or Chrome and use the provided config file foxy-proxy.xml
- Access the Jupyter notebook via http://public-master-ip-address:8888
- Perform your magic in Jupyter
- Destroy the cluster via
terraform destroy
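For orientation, the two configuration files touched above look roughly like this. This is a sketch only – the variable names are illustrative and do not necessarily match the repository:
# aws-config.tf – credentials and region (values are placeholders)
provider "aws" {
  access_key = "YOUR_ACCESS_KEY"
  secret_key = "YOUR_SECRET_KEY"
  region     = "eu-central-1"
}
# main.tf – typical knobs (hypothetical names)
variable "cluster_count" { default = 10 }                 # one cluster per attendee
variable "master_instance_type" { default = "m4.xlarge" }
variable "worker_instance_type" { default = "m4.xlarge" }
variable "worker_count" { default = 2 }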
What’s inside
The Terraform script will create a new VPC and subnets, and will start new clusters with Spark, Hive, Pig, Presto, Hue, Zeppelin and Jupyter. You can modify main.tf, where the number of clusters, their common configuration (EC2 instance types) and the EMR components are configured.
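At its core, each cluster boils down to an aws_emr_cluster resource. The following condensed sketch shows the general shape; the attribute values, the aws_subnet.main reference and the S3 path are examples, not the repository's exact configuration:
resource "aws_emr_cluster" "training" {
  name          = "training-cluster"
  release_label = "emr-5.8.0"
  applications  = ["Spark", "Hive", "Pig", "Presto", "Hue", "Zeppelin"]
  service_role  = "EMR_DefaultRole"

  ec2_attributes {
    key_name         = "deployer-key"
    subnet_id        = "${aws_subnet.main.id}"
    instance_profile = "EMR_EC2_DefaultRole"
  }

  master_instance_type = "m4.xlarge"
  core_instance_type   = "m4.xlarge"
  core_instance_count  = 2

  # Installs Anaconda on all nodes and starts Jupyter on the master
  bootstrap_action {
    name = "install-jupyter"
    path = "s3://some-bucket/install-jupyter.sh"    # hypothetical script location
  }
}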
If you are using FoxyProxy, all services are available at:
- YARN - http://public-master-ip-address:8088
- HDFS - http://public-master-ip-address:50070
- Hue - http://public-master-ip-address:8888
- Zeppelin - http://public-master-ip-address:8890
- Spark History - http://public-master-ip-address:18080
- Jupyter Notebook - http://public-master-ip-address:8888