MultiJuicer — A brilliant way to deliver remote cyber security workshops & CTF Events

James Matchett
7 min read · Mar 10, 2021

TL;DR: How you can easily set up a K8S based JuiceShop Cluster to teach WebApp Security effectively, while being entirely remote

Useful links: JuiceShop, MultiJuicer, Instructors Guide

The Grafana dashboard view from which you can watch each participant’s progress

The pandemic caused by Covid-19 has changed many things about how we work, how we communicate and how we learn.

Prior to this I used to love delivering in-person workshops on cyber security, featuring live demos, anecdotes from industry and ultimately, a hands-on exercise to get the attendees hacking on their own.

The way I’d usually do this is by getting them to install OWASP JuiceShop, a deliberately vulnerable web application that the attendees ran on their own machines to practice different exploits such as XSS, SQL injection, payload manipulation and many more. The practical experience this tool gave the attendees was priceless, as it really brought to life everything I talked about during the presentation section of the workshop.

Unfortunately, this doesn’t work well remotely. I found I couldn’t easily keep an eye on attendees’ progress, and getting each attendee to install Node, Docker and the other prerequisites became a headache every time I taught the workshop to a new group. I knew there had to be an easier way to run this effectively.

One solution was to provision a set of VMs in the cloud for each participant running JuiceShop, but this was quite messy, it didn’t scale well, and it still had the same problem of not being able to closely follow each participant’s progress.

Simply put — Juice Shop just isn’t intended to be used by multiple users at a time.

Introducing MultiJuicer by Iteratec — An automatically managed Kubernetes cluster that runs individual JuiceShop instances for each participant to engage with

High level architecture of MultiJuicer

As you can see, every participant hits the same external endpoint, yet the JuiceBalancer ensures that each team’s traffic only reaches that team’s instance, which makes it perfect for running both team-based and individual workshops.
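To make the routing concrete, here’s a rough sketch of how a team session is established from the command line. The endpoint path and cookie behaviour below are assumptions based on the balancer’s UI flow rather than a documented API, so treat this as illustrative only:

```shell
# Sketch, not authoritative: the balancer identifies a team by a cookie it
# sets when the team is created/joined. The endpoint path below
# (/balancer/teams/<team>/join) is an assumption, not the documented API.
TEAM="team-blue"
BALANCER="http://localhost:3000"

# create/join a team; the response sets the routing cookie into cookies.txt
curl -s -c cookies.txt -X POST "$BALANCER/balancer/teams/$TEAM/join" > /dev/null

# subsequent requests carry the cookie, so they reach that team's instance
curl -s -b cookies.txt "$BALANCER/" > /dev/null
```

The key takeaway is that routing is per-session, not per-URL: two teams browsing the same path see two completely separate JuiceShop instances.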

Even better, it supports a centralised Grafana monitoring solution that allows you to follow each team’s progress through each of the JuiceShop challenges. I’ll walk through how to configure this platform for Azure, but there are guides for other hosting providers such as AWS here

This Azure guide is drawn from here

First, open up an Azure console, either via portal.azure.com or via the AZ CLI installed on your machine, and we’ll create the cluster

# Before we can do anything we need a resource group
az group create --location westeurope --name multi-juicer

# let's create the cluster now
# I decreased the node count to 2, to dodge the default core limit
az aks create --resource-group multi-juicer --name juicy-k8s --node-count 2

# now, to authenticate, fetch the credentials for the new cluster
az aks get-credentials --resource-group multi-juicer --name juicy-k8s

# verify by running
# should print "juicy-k8s"
kubectl config current-context

Next, we’ll use Helm to install MultiJuicer onto the cluster

# You'll need to add the multi-juicer helm repo to your helm repos
helm repo add multi-juicer https://iteratec.github.io/multi-juicer/

# for helm <= 2
helm install multi-juicer/multi-juicer --name multi-juicer

# for helm >= 3
helm install multi-juicer multi-juicer/multi-juicer

# kubernetes will now spin up the pods
# to verify everything is starting up, run:
kubectl get pods
# This should show you two pods: a juice-balancer pod and a progress-watchdog pod
# Wait until both pods are ready

At this point, if you’re running these commands from a locally installed AZ CLI, we can run a quick check to make sure everything is working as intended

# lets test out if the app is working correctly before proceeding
# for that we can port forward the JuiceBalancer service to your local machine
kubectl port-forward service/juice-balancer 3000:3000

# Open up your browser for localhost:3000
# You should be able to see the MultiJuicer Balancer UI

# Try to create a team and see if everything works correctly
# You should be able to access a JuiceShop instance a few seconds after creating a team
# and clicking the "Start Hacking" button

# You can also try out if the admin UI works correctly
# Go back to localhost:3000/balancer
# To log in as the admin, log in as the team "admin"
# The password is auto-generated if not specified; you can extract it from the kubernetes secret:
kubectl get secrets juice-balancer-secret -o=jsonpath='{.data.adminPassword}' | base64 --decode

So far we’ve configured the Kubernetes cluster, but we haven’t defined any ingress routes for our participants’ traffic. Here we’ll define an nginx-ingress route so that traffic can reach our JuiceBalancer

# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Use Helm to deploy an NGINX ingress controller
# Note, if you used a different namespace above, change it to match below

helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace default \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux

Create a local file called ‘ingress.yml’ and add the following content

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: juice-balancer
          servicePort: 3000
        path: /(.*)

We’ll apply this ruleset to nginx-ingress to make it take effect

kubectl apply -f ingress.yml

If it’s worked correctly, you should hopefully see

ingress.extensions/hello-world-ingress created

At this point, any external network traffic hitting the public IP of your Kubernetes cluster will be directed to the JuiceBalancer. If you want to restrict this, make sure to update the inbound traffic rules in the automatically created Network Security Group within the generated multi-juicer resource group.

I tend to whitelist on an IP basis and deny by default, as this provides the most robust level of security.
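As a sketch of that allow-listing, assuming the AKS-generated node resource group and NSG names below (these are placeholders; AKS generates the real names, so look them up first), a rule permitting a single instructor IP might look like:

```shell
# Assumptions: the node resource group and NSG names below are placeholders.
# Find the real ones with:
#   az network nsg list --query "[].{name:name, rg:resourceGroup}" -o table
NODE_RG="MC_multi-juicer_juicy-k8s_westeurope"
NSG_NAME="aks-agentpool-12345678-nsg"
MY_IP="203.0.113.7"

# Allow only the instructor's IP inbound on HTTP/HTTPS;
# a lower priority number is evaluated first
az network nsg rule create \
  --resource-group "$NODE_RG" \
  --nsg-name "$NSG_NAME" \
  --name allow-instructor \
  --priority 100 \
  --access Allow --protocol Tcp --direction Inbound \
  --source-address-prefixes "$MY_IP" \
  --destination-port-ranges 80 443
```

You’d add one such rule per allowed IP (or pass several prefixes at once), relying on the deny-by-default rules below it for everyone else.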

Test that you can access the JuiceShop instance remotely, without the port-forward from before; this will confirm that your participants can connect properly.
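A quick way to check this from outside the cluster (the IP below is a placeholder; use the external IP reported for your ingress):

```shell
# Placeholder: substitute the external IP of your ingress, e.g. from:
#   kubectl get ingress hello-world-ingress
PUBLIC_IP="203.0.113.7"

# An HTTP 200 here means participants should be able to reach the balancer UI
curl -s -o /dev/null -w "%{http_code}\n" "http://$PUBLIC_IP/balancer"
```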

It might even be a good idea to create a DNS entry for your MultiJuicer instance from the Azure portal, via the public IP created for the resource group; this will make it much easier for the participants to connect.
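One way to do this from the CLI is to attach a DNS label to the public IP that AKS created. The resource names below are placeholders (the IP usually lives in the auto-generated node resource group), so look up the real ones first:

```shell
# Placeholders: look up the real names with
#   az network public-ip list -o table
NODE_RG="MC_multi-juicer_juicy-k8s_westeurope"
IP_NAME="kubernetes-a1b2c3d4"

# Attach a DNS label; participants can then browse to
#   http://<label>.<region>.cloudapp.azure.com
az network public-ip update \
  --resource-group "$NODE_RG" \
  --name "$IP_NAME" \
  --dns-name juicy-workshop
```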

If all is working well, you should see this screen when you connect via a browser

Now that the JuiceShop instance is up and running, we can configure the Grafana monitoring solution that lets us watch each team’s/participant’s progress through the JuiceShop challenges.

Following the guide from here

Enter the following commands into the Azure cloud portal or AZ CLI

# Install Prometheus, Grafana & Grafana Loki

helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

kubectl create namespace monitoring

echo "Fetching the prometheus-operator config"
wget https://raw.githubusercontent.com/iteratec/multi-juicer/master/guides/monitoring-setup/prometheus-operator-config.yaml

echo "Installing Prometheus Operator & Grafana"
helm --namespace monitoring upgrade --install monitoring prometheus-community/kube-prometheus-stack --version 13.3.0 --values prometheus-operator-config.yaml

echo "Installing loki"
helm --namespace monitoring upgrade --install loki grafana/loki --version 2.3.0 --set="serviceMonitor.enabled=true"

echo "Installing loki/promtail"
helm --namespace monitoring upgrade --install promtail grafana/promtail --version 3.0.4 --set "config.lokiAddress=http://loki:3100/loki/api/v1/push" --set="serviceMonitor.enabled=true"

echo "Installing MultiJuicer"
helm repo add multi-juicer https://iteratec.github.io/multi-juicer/

# for helm >= 3
helm install multi-juicer multi-juicer/multi-juicer --set="balancer.metrics.enabled=true" --set="balancer.metrics.dashboards.enabled=true" --set="balancer.metrics.serviceMonitor.enabled=true"

This installs the Grafana solution for your MultiJuicer instance. You could configure another nginx-ingress route to allow inbound traffic to it, but since I only want to access it myself as the instructor, and not the participants, I’ll instead just use a local port-forward to connect to it

az account set --subscription {{Your subscription ID here}}
az aks get-credentials --resource-group multi-juicer --name juicy-k8s
kubectl -n monitoring port-forward service/monitoring-grafana 8085:80

Now just connect on localhost:8085 and log in as the default user admin with the default password of

prom-operator

You should see a similar screen once you choose the MultiJuicer-Instances dashboard. There are also other K8S dashboards for monitoring the cluster’s health and performance

Closing notes:

You can easily manage the teams (i.e. deleting + restarting teams) by logging into the admin account, detailed in step 3 of setup.

I usually begin this exercise by ensuring every participant can access the solution and can create a team. I then show them how to access the scoreboard, configure Postman (ensure they enable cookies for the MultiJuicer domain within Postman’s Interceptor tab), and let them proceed at their own pace against all of the challenges.

You can easily scale the node count within Kubernetes to account for different workshop sizes, keeping costs down and performance steady as needed.
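For example, to grow the AKS node pool before a larger workshop and shrink it again afterwards (the node count here is just illustrative):

```shell
# Illustrative node count; pick a size that matches your participant numbers
NODE_COUNT=4

# Scale the cluster up before a big workshop...
az aks scale --resource-group multi-juicer --name juicy-k8s --node-count "$NODE_COUNT"

# ...and back down afterwards to save cost
az aks scale --resource-group multi-juicer --name juicy-k8s --node-count 2
```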

Keeping an eye on each participant’s progress, doing regular demonstrations on some of the basic exploits, and giving help where needed really makes for an impactful remote workshop that gives the participants some real practical experience they will remember.

Thanks for reading!

— James M.
