Install on Google Kubernetes Engine

Right now, Google Kubernetes Engine (GKE) is the most tested Kubernetes environment for Stack.

Cluster description

WARNING

Keep in mind that, for now, Kubernetes 1.13.6 and 1.14.2 are not supported because of https://github.com/bitpoke/stack/issues/23.

If you want to move quickly, you can use the predefined terraform scripts from the terraform directory.

These scripts allow you to create a new cluster with 4 node pools, pre-configured with the following labels and taints:

  • system, used by the control plane to host all operator pods. These nodes don’t need heavy resources.
  • database, for MySQL-related nodes. You can tweak MySQL performance by using nodes with faster I/O and possibly more memory for the query cache, depending on the use case.
  • wordpress, used to host the pods that run the PHP code, along with helper containers for serving media files via buckets.
  • wordpress-preemptible, the same as the wordpress pool, but with the cloud.google.com/gke-preemptible taint. Because of that, you can use preemptible machines for development sites, lowering the overall cost of the cluster.

In order to continue with terraform, you’ll need some prerequisites:

  • terraform >= v0.12.1
  • gcloud
  • kubectl
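
If you want a quick sanity check before moving on, the commands below print the installed versions of these tools (the exact output will differ between machines):

terraform version
gcloud version
kubectl version --client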

Moving forward, let’s clone the repository:

git clone git@github.com:bitpoke/stack.git
cd stack

In stack/, you’ll find a directory called terraform, which contains a few terraform modules and examples. It’s highly recommended to inspect the modules yourself, but here is a summary:

  • in order to create a cluster, you’ll need to specify a name, a region, and at least one zone.
  • the initial node count is going to be 1, and the cluster will have the Horizontal Pod Autoscaler add-on enabled.

Node pool details

system node pool

The system node pool is going to have the initial node count set to 1, but it has autoscaling enabled, with a minimum of 1 node and a maximum of 3 nodes. It spawns nodes with 50 GB of storage and “COS” images (Container-Optimized OS from Google). Those nodes can be configured as preemptible, if the preemptible variable is set to true. It sets a single label, node-role.stack.presslabs.org/presslabs-sys. One interesting part about this node pool is that it has a taint called CriticalAddonsOnly. You can read more about taints and tolerations in the Kubernetes documentation. It’s advised to use non-preemptible machines for this node pool in production, but it doesn’t require resource-heavy machines.
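
Once the cluster has been created (see “Create a new cluster” below), you can double-check the label and the taint on these nodes. This is just a quick inspection sketch, assuming the label and taint are applied exactly as described above:

kubectl get nodes -l node-role.stack.presslabs.org/presslabs-sys
kubectl describe nodes -l node-role.stack.presslabs.org/presslabs-sys | grep -i taints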

database node pool

Next one is the database node pool. It is similar to the system node pool; the only differences are the initial node count, which is 0, and the labels, which are node-role.stack.presslabs.org/database, node-role.stack.presslabs.org/mysql and node-role.stack.presslabs.org/memcached. As you can see, the Memcached instances are scheduled close to the database, but this can be changed.
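
If you want to confirm what ended up in this pool once the cluster exists, here is a hedged example, reusing the cluster name and region from the example configuration further below:

gcloud container node-pools list --cluster bitpoke-stack-1 --region europe-west3
kubectl get nodes -l node-role.stack.presslabs.org/database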

wordpress node pool

This node pool is used to run the WordPress workloads. You may want CPU-intensive machines here, since php-fpm doesn’t run your code asynchronously and processes one request per worker. The recommended number of workers per CPU core is 8, but you can adjust it depending on your use case; for example, on a 4-vCPU node such as n1-standard-4, that guideline works out to roughly 32 php-fpm workers per node.

wordpress-preemptible node pool

In order to cut costs, you may want to create sites on preemptible machines. They are short-lived instances, up to 80% cheaper than normal VMs, but they don’t have a guaranteed lifespan (Google may reclaim them when it needs the capacity, and they run for at most 24 hours). This node pool is suitable for development or low-traffic sites.
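
GKE normally labels preemptible nodes, so you can list them once the cluster exists; a small check, assuming GKE’s usual cloud.google.com/gke-preemptible label:

kubectl get nodes -l cloud.google.com/gke-preemptible=true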

Create a new cluster

In order to create a new cluster, first you’ll need to authenticate via the gcloud CLI.

gcloud auth login
gcloud auth application-default login

We’ll then need to initialize terraform’s modules and install the google-beta provider plugin.

cd terraform/examples/gke
terraform init

Next, create a new values file. Let’s call it cluster.tfvars.

# cluster.tfvars

project = "my-project-name"
cluster_name = "bitpoke-stack-1"
preemptible = true
system_node_type = "n1-standard-4"
database_node_type = "n1-standard-4"
wordpress_node_type = "n1-standard-4"
zones = ["europe-west3-a"]

You can see a list of all the variables you can set in main.tf.
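
Optionally, you can preview the resources terraform is about to create before applying anything:

terraform plan -var-file="cluster.tfvars"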

Next, just apply the configuration you set:

terraform apply -var-file="cluster.tfvars"

To be able to interact with the Kubernetes cluster, you need to fetch its credentials for kubectl:

gcloud beta container clusters get-credentials bitpoke-stack-1 --region europe-west3 --project my-project-name
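
To confirm that kubectl now points at the new cluster, list the nodes (names and count depend on your configuration):

kubectl get nodes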

Now that the cluster is up and running, you’ll need to install Helm’s Tiller and the Bitpoke Stack. For that, Stack offers two bash scripts located under the demo directory, 01-install-helm.sh and 02-install-presslabs-stack.sh.

Install helm tiller

The 01-install-helm.sh bash script creates a tiller service account, binds the cluster-admin role to it and initializes Tiller. It contains the following commands:

kubectl --namespace kube-system create sa tiller

kubectl create clusterrolebinding tiller \
    --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller

helm init --service-account tiller \
    --history-max 10 \
    --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' \
    --override 'spec.template.spec.tolerations[0].key'='CriticalAddonsOnly' \
    --override 'spec.template.spec.tolerations[0].operator'='Exists' \
    --wait
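
Once helm init returns, you can check that Tiller is up; a minimal verification, assuming the default labels on the Tiller deployment:

kubectl --namespace kube-system get pods -l app=helm,name=tiller
helm version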

Install the Bitpoke Stack

The 02-install-presslabs-stack.sh bash script actually installs the Stack, via helm. We’ll go through the commands from this script one by one.

First, we’ll need a bitpoke-system namespace:

kubectl create ns bitpoke-system

For that namespace, we’ll need to disable validation, in order to allow cert-manager to do its job:

kubectl label namespace bitpoke-system certmanager.io/disable-validation=true

Next, add Bitpoke’s charts repository to your helm sources:

helm repo add bitpoke https://helm-charts.bitpoke.io/
helm repo update

Also, before installing the Stack using helm, we need to install the CRDs that are used by this deployment:

kubectl apply -f https://raw.githubusercontent.com/bitpoke/stack/master/deploy/manifests/00-crds.yaml
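
You can verify that the custom resource definitions were registered; the exact names depend on the Stack version, so treat this grep only as a hint:

kubectl get crds | grep -E 'presslabs|bitpoke'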

In the end, you can just install the bitpoke/stack chart with some preset values from gke.yaml:

helm upgrade -i stack bitpoke/stack --namespace bitpoke-system \
    -f "https://raw.githubusercontent.com/bitpoke/stack/master/presets/gke.yaml"

The preset values will request basic resources for each component: 256Mi RAM and 100m CPU.
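
Once the release is deployed, you can watch the Stack components come up:

kubectl --namespace bitpoke-system get pods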

That’s pretty much it! You have Stack up and running on your cluster!

Enable Let’s Encrypt

By default, Stack comes with the self-signed issuer enabled. If you want to configure the Let’s Encrypt issuer, you’ll need to run the following command:

helm upgrade -i stack bitpoke/stack \
	--namespace bitpoke-system \
	-f "https://raw.githubusercontent.com/bitpoke/stack/master/presets/gke.yaml" \
	--set 'letsencrypt.enabled=true,letsencrypt.email=<youremail>'
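
Afterwards, you can inspect the issuers that cert-manager created; the exact resource kinds and names depend on the chart version, so take this only as a starting point:

kubectl --namespace bitpoke-system get issuers,clusterissuers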