Kubernetes Cluster
In this guide, we describe how to deploy a Jmix application to a Kubernetes cluster. We’ll use the single-node Kubernetes cluster provided by minikube, which you can install on your development machine and test the deployment locally.
The guide focuses on what should be configured in the application in order to run it on Kubernetes. You can complete the guide and prepare your application for such a deployment without any prior knowledge of Kubernetes. However, to run real production deployments, you should be familiar with this technology.
Configure Your Application
Configure Image Building
The Spring Boot Gradle plugin provides the bootBuildImage task, which builds your application and creates a Docker image with it. To specify the image name, add the following section to your build.gradle file:

bootBuildImage {
    imageName = 'mycompany/sample-app'
}
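If Docker is available locally, you can check the result of the task before moving on to Kubernetes; a hypothetical sanity check (it assumes a running Docker daemon and the image name configured above):

```shell
# Build the application image, then confirm Docker registered it under the configured name:
./gradlew bootBuildImage
docker image ls mycompany/sample-app
```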
Configure Hazelcast
Jmix framework modules use various caches: JPA entity and query caches, pessimistic locks, dynamic attributes configuration, etc. When running in a cluster, a Jmix application requires coordination of caches between cluster nodes.
All Jmix caches support coordination through Hazelcast. In this guide, we’ll use Hazelcast in the embedded mode, together with the hazelcast-kubernetes plugin for auto-discovery in a Kubernetes environment.
Follow the steps below to configure Hazelcast for coordination of caches in a Kubernetes cluster.
- Add Hazelcast dependencies to your build.gradle:

  implementation 'com.hazelcast:hazelcast'
  implementation 'com.hazelcast:hazelcast-kubernetes:2.2.3'
- Specify Hazelcast as the JCache provider by adding the following property to the application.properties file:

  spring.cache.jcache.provider = com.hazelcast.cache.HazelcastMemberCachingProvider
- Create the hazelcast.yaml file in the resources root:

  hazelcast:
    network:
      join:
        multicast:
          enabled: false
        kubernetes:
          enabled: false
          service-name: sample-app-service
The hazelcast.network.join.kubernetes.service-name property must point to the application service defined in the application service configuration file.

Notice that the hazelcast.network.join.kubernetes.enabled property is set to false in this file, so that the application can run locally, without Kubernetes. The property is switched to true when the application actually runs on Kubernetes, through the HZ_NETWORK_JOIN_KUBERNETES_ENABLED environment variable set in the application service configuration file.
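The environment variable name follows Hazelcast's configuration-override convention: take the property path under hazelcast, uppercase it, replace dots with underscores, and prefix it with HZ_. A quick illustration of the naming rule itself (not a Hazelcast API call):

```shell
# Derive the env var name from the property path hazelcast.network.join.kubernetes.enabled:
prop="network.join.kubernetes.enabled"
env_name="HZ_$(echo "$prop" | tr '[:lower:].' '[:upper:]_')"
echo "$env_name"   # HZ_NETWORK_JOIN_KUBERNETES_ENABLED
```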
Create Kubernetes Configuration Files
Create a k8s folder in the project root and add the files listed below to it.
Database Service Configuration
This file defines the PostgreSQL database service named sample-db-service.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-db-config
data:
  db_user: root
  db_password: root
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-db-pvclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-db
  strategy: {}
  template:
    metadata:
      labels:
        app: sample-db
    spec:
      volumes:
        - name: sample-db-storage
          persistentVolumeClaim:
            claimName: sample-db-pvclaim
      containers:
        - image: postgres
          name: sample-db
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: sample-db-config
                  key: db_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: sample-db-config
                  key: db_password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_DB
              value: sample
          ports:
            - containerPort: 5432
              name: sample-db
          volumeMounts:
            - name: sample-db-storage
              mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: sample-db-service
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: sample-db
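After applying this file, you can confirm that the database came up before deploying the application. A hypothetical verification sequence, assuming the default namespace:

```shell
# The Deployment should report 1/1 ready replicas:
kubectl get deployment sample-db
# The Service should expose port 5432 inside the cluster:
kubectl get service sample-db-service
# The PostgreSQL log should end with "database system is ready to accept connections":
kubectl logs deployment/sample-db --tail=5
```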
Application Service Configuration
This file defines the application service named sample-app-service. It uses the mycompany/sample-app Docker image with our application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - image: mycompany/sample-app
          imagePullPolicy: IfNotPresent
          name: sample-app
          env:
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  name: sample-db-config
                  key: db_user
            - name: DB_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: sample-db-config
                  key: db_password
            - name: DB_HOST
              value: sample-db-service
            - name: SPRING_PROFILES_ACTIVE
              value: k8s
            - name: HZ_NETWORK_JOIN_KUBERNETES_ENABLED
              value: "true"
          lifecycle:
            preStop:
              exec:
                command: [ "sh", "-c", "sleep 10" ]
          ports:
            - containerPort: 8080
            - containerPort: 5701
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app-service
spec:
  type: NodePort
  ports:
    - port: 8080
      name: sample-app-app
    - port: 5701
      name: sample-app-hazelcast
  selector:
    app: sample-app
Load-balancer Configuration
The Jmix Backoffice UI expects that all requests from a user come to the same server. In a cluster environment with multiple servers, this requires a load balancer with sticky sessions. Below is the configuration of the NGINX Ingress Controller with session affinity.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: balancer
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  defaultBackend:
    service:
      name: sample-app-service
      port:
        number: 8080
  rules:
    - http:
        paths:
          - backend:
              service:
                name: sample-app-service
                port:
                  number: 8080
            path: /
            pathType: Prefix
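To see the session affinity in action, you can inspect the cookie the controller sets on the first response. A sketch, assuming the Ingress is reachable at the Minikube IP (the controller's default affinity cookie is named INGRESSCOOKIE unless overridden by the session-cookie-name annotation):

```shell
# The first response should carry a Set-Cookie header with the affinity cookie:
curl -s -D - -o /dev/null "http://$(minikube ip)/" | grep -i '^set-cookie'
```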
Hazelcast Access Control Configuration
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hazelcast-cluster-role
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - pods
      - nodes
      - services
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hazelcast-cluster-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
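This grants the default service account read access to the resources the hazelcast-kubernetes plugin queries for member discovery. You can verify the binding took effect with kubectl's impersonation check; a hypothetical run:

```shell
# Each command should print "yes" once the ClusterRoleBinding is applied:
kubectl auth can-i list endpoints --as=system:serviceaccount:default:default
kubectl auth can-i get pods --as=system:serviceaccount:default:default
```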
Set up Local Kubernetes
- Make sure Docker is running on your machine. The command below displays the Docker version:

  docker -v

  If the command fails, see the Docker documentation for how to install and run Docker.
- Install Minikube as described in its installation instructions.
- Run Minikube:

  minikube start --vm-driver=virtualbox
- Enable Ingress Controller:

  minikube addons enable ingress
- Configure the Kubernetes command-line tool to use Minikube:

  kubectl config use-context minikube
- To open the Kubernetes dashboard in the web browser, run in a separate terminal:

  minikube dashboard
Build and Run Application
- Build the Docker image:

  ./gradlew bootBuildImage
- Load the image into Minikube:

  minikube image load mycompany/sample-app:latest
- Apply the Kubernetes configuration files:

  kubectl apply -f ./k8s
- To increase the number of application instances, use the following command:

  kubectl scale deployment sample-app --replicas=2
- Figure out the cluster IP address:

  minikube ip

  Use this address to open the application in the web browser, for example: http://192.168.99.100
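The steps above can be rounded off with a quick sanity check from the terminal; a sketch, assuming the Ingress addon is serving on the Minikube IP:

```shell
# Both application pods should be Running after scaling to 2 replicas:
kubectl get pods -l app=sample-app
# The application should respond through the load balancer:
curl -I "http://$(minikube ip)/"
```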