
Overview

ProcessMaker 3.9.0 uses Kubernetes as its orchestration framework, enabling deployment across diverse platforms. Because Kubernetes is cloud-agnostic, ProcessMaker 3.9.0 can be deployed on Azure, AWS, Google Cloud, and other platforms, whereas earlier versions relied on either ECS on AWS or standalone installations.

Changes from Pre-3.9.0 Stacks:

In ProcessMaker 3.9.0, we have transitioned away from stacks to a new approach based on container images. This approach has the following benefits:

  • No Stacks Required: ProcessMaker 3.9.0 no longer relies on stacks for deployment.
  • Image-Based Deployment: Images are now used for deployment, simplifying the installation process. Components such as databases, libraries, PHP, and web servers no longer need to be installed individually.

Note: If a database is installed separately outside of Kubernetes, it must be MySQL 8.0 for compatibility with ProcessMaker 3.9.0.

Requirements To Install ProcessMaker 3.9.0 Using Kubernetes

The following table describes the requirements:

Service | Server | CPU | Memory | Storage | Connection
Kubernetes Cluster | 3 nodes | 4 CPU each | 16 GB each | Root disk of 150 GB plus a second 300 GB EBS disk per node | Communication required between nodes
MySQL 8.0 | At least 1 DB server | 4 CPU | 16 GB | 100 GB | Must be accessible by the Kubernetes cluster on the same network
Rancher Service (optional, for orchestration) | 1 EC2 server | 2 CPU | 4 GB | 30 GB | Needs a connection to the Kubernetes cluster
Longhorn | — | — | — | — | Installed inside the Kubernetes cluster (see below)
Ingress NGINX | — | — | — | — | Installed inside the Kubernetes cluster (see below)

Note: The table above displays the minimum specifications for installing ProcessMaker 3.9.0, which may vary depending on workload and storage requirements.

Kubernetes (EKS) In AWS

Requirements

Service | Server | CPU | Memory | Storage | Connection
EKS Cluster | 3 nodes | 4 CPU each | 16 GB each | Root disk of 150 GB plus a second 500 GB EBS disk per node | All nodes must be in the same VPC
RDS Aurora DB MySQL 8.0 | At least 1 DB server | 4 CPU | 16 GB | 250 GB | Must be accessible by the EKS cluster in the same VPC
Rancher Service | 1 EC2 server | 2 CPU | 4 GB | 30 GB | Must be able to connect to the EKS cluster

Note: The table above displays the minimum specifications for installing ProcessMaker 3.9.0, which may vary depending on workload and storage requirements.

The following sections provide additional installation instructions:

SSL certificates

Three files are necessary:

  • chain-n.pem
  • cert.pem
  • key.pem
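
These files are typically loaded into a Kubernetes TLS secret that the ingress can reference later. A minimal sketch, assuming the certificate and chain are concatenated into a full chain, and where the secret name pm-tls and the default namespace are placeholders:

cat cert.pem chain-n.pem > fullchain.pem
kubectl create secret tls pm-tls --cert=fullchain.pem --key=key.pem --namespace default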

Domains:

Records on DNS

The domains must point to the Kubernetes ingress. The ingress load balancer will be identified once the cluster is created.
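
As a sketch, once the NGINX Ingress Controller is installed (see below), its load balancer address can be read from its Kubernetes Service; the namespace and service name here are assumptions and may differ in your installation:

kubectl get svc --all-namespaces | grep -i ingress
kubectl get svc -n ingress-nginx nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Point the ProcessMaker domains at the returned DNS name, for example with a CNAME record.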

VPC requirements

Make sure the frontend subnets do not assign public IPs by default.

AWS Resources

VPC with any main CIDR, for example: 10.0.0.0/16

Internet gateway attached

Create a NAT gateway

Subnets:

DMZ-A, DMZ-B

nodeGroup-A, nodeGroup-B

Backend-A, Backend-B
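
For reference, these resources can also be created with the AWS CLI. A minimal sketch where the CIDR blocks, availability zones, and resource IDs are illustrative assumptions; the remaining subnets follow the same pattern:

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.1.0/24 --availability-zone us-east-1a   # DMZ-A
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.2.0/24 --availability-zone us-east-1b   # DMZ-B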

Route tables

DMZ route table:

route {
  cidr_block = "0.0.0.0/0"
  gateway_id = aws_internet_gateway.pm_ig.id
}

Subnets association:

DMZ-A, DMZ-B

Create a NAT Gateway:

Inside the DMZ-A subnet

Attach an EIP address

Nodes-k8s route table:

route {
  cidr_block     = "0.0.0.0/0"
  nat_gateway_id = aws_nat_gateway.pm_ng.id
}

Subnets association:

nodeGroup-A, nodeGroup-B

Backend route table:

Default config, no extra route needed.

Subnet association:

Backend-A, Backend-B
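
The equivalent AWS CLI calls, as a sketch with placeholder resource IDs:

aws ec2 create-route-table --vpc-id vpc-xxxx
# DMZ route table: default route through the internet gateway
aws ec2 create-route --route-table-id rtb-dmz --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx
# Nodes-k8s route table: default route through the NAT gateway
aws ec2 create-route --route-table-id rtb-nodes --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxx
aws ec2 associate-route-table --route-table-id rtb-dmz --subnet-id subnet-dmz-a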

Security Groups

DMZ security group:

Allow port 22 for administration from the main office administration network.

Allow port 80 from anywhere.

Allow port 443 from anywhere.

Node group security group:

Allow port 22 from the DMZ.

Allow port 80 from the DMZ.

Backend security group:

Allow port 3306 from the Kubernetes cluster node group.

The internal IPs of the nodeGroup subnets created earlier can be specified.
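
As a sketch with placeholder security group IDs, these rules can be added with the AWS CLI:

aws ec2 authorize-security-group-ingress --group-id sg-dmz --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-nodegroup --protocol tcp --port 80 --source-group sg-dmz
aws ec2 authorize-security-group-ingress --group-id sg-backend --protocol tcp --port 3306 --source-group sg-nodegroup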

EKS Cluster

Create an IAM role:

organization-name-eks-iam-role

Attach the following policies:

AmazonEKSClusterPolicy

AmazonEKSVPCResourceController
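
A sketch of the same steps with the AWS CLI, assuming a trust policy file eks-trust-policy.json that allows eks.amazonaws.com to assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}

aws iam create-role --role-name organization-name-eks-iam-role --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name organization-name-eks-iam-role --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name organization-name-eks-iam-role --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController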

Create the Cluster

Set a name

Version: 1.29

VPC: the VPC created earlier

Subnets: nodeGroup-A, nodeGroup-B

Security group: DMZ

Leave the other values at their defaults.
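
The same cluster can also be created from the CLI; a sketch where the account ID, subnet IDs, and security group ID are placeholders:

aws eks create-cluster --name my-cluster --kubernetes-version 1.29 \
  --role-arn arn:aws:iam::111122223333:role/organization-name-eks-iam-role \
  --resources-vpc-config subnetIds=subnet-ng-a,subnet-ng-b,securityGroupIds=sg-dmz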

Node Group

Create an IAM role:

organization-name-node-group-iam-role

First, attach the following policies:

AmazonEKSWorkerNodePolicy

AmazonEKS_CNI_Policy

AmazonEC2ContainerRegistryReadOnly

Create the node group

Set a name

Select a role: organization-name-node-group-iam-role

Create a node group for the EKS cluster with at least 3 nodes.

AMI version: Amazon Linux 2, compatible with EKS version 1.29

Capacity: On-Demand

Instance type: m5.large

Disk size: 100 GB

Node group scaling: 3 nodes

Subnets: the nodeGroup subnets

Enable and configure remote access:

Create or select a key: organization-name-nodegroup-kp.pem

Allow traffic from the DMZ security group.
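
As a CLI sketch of the same node group, with placeholder account and resource IDs:

aws eks create-nodegroup --cluster-name my-cluster --nodegroup-name pm-nodes \
  --node-role arn:aws:iam::111122223333:role/organization-name-node-group-iam-role \
  --subnets subnet-ng-a subnet-ng-b \
  --instance-types m5.large --disk-size 100 \
  --ami-type AL2_x86_64 \
  --scaling-config minSize=3,maxSize=3,desiredSize=3 \
  --remote-access ec2SshKey=organization-name-nodegroup-kp,sourceSecurityGroups=sg-dmz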

Add a second EBS volume to the existing node group

Step 1: Based on the original launch template, create a new version with the desired changes, in this case the second EBS disk. The size can start at 500 GB. (As mentioned earlier, the size depends on the amount of data that will be stored on the EBS volumes.)

Append the following lines to the user data in the launch template.

sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /var/lib/longhorn
echo "/dev/nvme1n1 /var/lib/longhorn ext4 defaults,noatime,nodiratime 0 0" | sudo tee -a /etc/fstab
sudo mount -a

This ensures the second disk is mounted.
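
On a node, the mount can be verified with:

lsblk
df -h /var/lib/longhorn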

Step 2: Create a new launch template based on the original, using the latest version that includes the new changes. Note: Do not use the node role option; simply do not pick any role.

Step 3: Create a new node group using the new template.

Step 4: Wait for the new node group to come up, then delete the old node group, one node at a time.

Create an RDS Aurora server

Create a subnet group

Use the backend subnets

Use the backend security group

Make sure the nodegroup security group has access to the RDS server.
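
A CLI sketch of the same setup; the identifiers, credentials, and instance class are placeholders, and the instance class should be chosen to match the sizing table above:

aws rds create-db-subnet-group --db-subnet-group-name pm-backend \
  --db-subnet-group-description "ProcessMaker backend subnets" \
  --subnet-ids subnet-backend-a subnet-backend-b
aws rds create-db-cluster --db-cluster-identifier pm-aurora --engine aurora-mysql \
  --master-username admin --master-user-password 'change-me' \
  --db-subnet-group-name pm-backend --vpc-security-group-ids sg-backend
aws rds create-db-instance --db-instance-identifier pm-aurora-1 \
  --db-cluster-identifier pm-aurora --engine aurora-mysql --db-instance-class db.r5.xlarge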

EC2 instances

Create two NAT servers for administration:

NAT-A, NAT-B

Security Group DMZ

Subnet DMZ

Create one EC2 server inside the nodeGroup subnet:

This server should be able to connect to the EKS cluster.

Orchestrator Server

Connect to the Orchestrator and make sure to update aws-cli. More info here: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

Once aws-cli has been updated, install kubectl:

Use version 1.23 or higher.

https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

Also make sure to add an AWS IAM access key and secret on the orchestrator server; this is needed when adding the cluster to Rancher.

To add the AWS keys, run the following commands and follow the steps.

aws configure
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl /usr/local/sbin/kubectl
chmod 600 /root/.kube/config

Configure kubectl for your EKS cluster:

aws eks update-kubeconfig --region region-code --name my-cluster
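
To confirm kubectl can reach the cluster:

kubectl get nodes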

Install helm:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Rancher

Use the standalone orchestrator EC2 server and follow this guide: https://ranchermanager.docs.rancher.com/v2.5/pages-for-subheaders/rancher-on-a-single-node-with-docker

Once Rancher is installed, a few extra requirements need to be installed.

Cluster Requirements

The installation of these requirements is explained below.

EKS Cluster import to Rancher

Importing Cluster

Go to the Rancher instance and click the Import Existing button to import your new cluster. Choose the Generic cluster option.

Name and create your cluster. You will be provided the necessary kubectl command to import the EKS cluster to Rancher. Copy and run the first kubectl apply command in the Orchestrator server to complete the import.

The import will take a few minutes to finish. Wait until the cluster shows the Active state. Once completed, click on the new cluster and click the Explore button.

Longhorn

Longhorn is a distributed block storage system for Kubernetes. With Longhorn, we can manage data redundancy and backups on the node EBS volumes.

First, create the longhorn-system namespace. Then install the Longhorn iSCSI prerequisite.

kubectl create namespace longhorn-system
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/prerequisite/longhorn-iscsi-installation.yaml --namespace=longhorn-system

In the left-hand menu, navigate to Apps > Charts. Search for Longhorn, then install it with all default settings. Wait for all the pods to come up and show the Running state.
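
The pod status can also be checked from the Orchestrator server:

kubectl -n longhorn-system get pods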

Nginx Ingress Controller

Navigate to Apps > Charts again and search for Nginx Ingress Controller. Install with all default settings.

Once it has finished installing, you can get the provisioned ALB DNS name by navigating to Service Discovery > Services and clicking on the nginx-ingress-controller service.

Datadog

helm repo add datadog https://helm.datadoghq.com
vi datadog-values.yaml

datadog-values.yaml:

datadog:
  # datadog.apiKey -- Your Datadog API key
  ## ref: https://app.datadoghq.com/account/settings#agent/kubernetes
  apiKey: 6ef3b7c05259671aca5687a294adb9e8
  # datadog.appKey -- Datadog APP key required to use metricsProvider
  ## If you are using clusterAgent.metricsProvider.enabled = true, you must set
  ## a Datadog application key for read access to your metrics.
  appKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  ## The name must be unique and must be dot-separated tokens with the following restrictions:
  ## * Lowercase letters, numbers, and hyphens only.
  ## * Must start with a letter.
  ## * Must end with a number or a letter.
  ## * Overall length should not be higher than 80 characters.
  ## Compared to the rules of GKE, dots are allowed whereas they are not allowed on GKE:
  ## https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#Cluster.FIELDS.name
  clusterName: organization-name
  ## Enable logs agent and provide custom configs
  logs:
    # datadog.logs.enabled -- Enable this to activate Datadog Agent log collection
    ## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
    enabled: false
    # datadog.logs.containerCollectAll -- Enable this to allow log collection for all containers
    ## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
    containerCollectAll: false
  ## Enable apm agent and provide custom configs
  apm:
    # datadog.apm.socketEnabled -- Enable APM over Socket (Unix Socket or Windows named pipe)
    ## ref: https://docs.datadoghq.com/agent/kubernetes/apm/
    socketEnabled: true
    # datadog.apm.portEnabled -- Enable APM over TCP communication (port 8126 by default)
    ## ref: https://docs.datadoghq.com/agent/kubernetes/apm/
    portEnabled: true
    # datadog.apm.enabled -- Enable this to enable APM and tracing, on port 8126
    # DEPRECATED. Use datadog.apm.portEnabled instead
    ## ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host
    #enabled: false
  ## The Datadog Agent supports many environment variables.
  ## ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables
  env:
    - name: DD_INVENTORIES_CONFIGURATION_ENABLED
      value: "true"
    - name: DD_APM_NON_LOCAL_TRAFFIC
      value: "true"
    - name: DD_TAGS
      value: organization-name
    - name: DD_SITE
      value: datadoghq.com
    - name: DD_HOSTNAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: DD_CONTAINER_EXCLUDE_LOGS
      value: "kube_namespace:longhorn-system kube_namespace:datadog kube_namespace:kube-system kube_namespace:cert-manager kube_namespace:ingress-nginx kube_namespace:cattle-system kube_namespace:cattle-fleet-system"
  # kubelet configuration
  kubelet:
    # datadog.kubelet.host -- Override kubelet IP
    host:
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    # datadog.kubelet.tlsVerify -- Toggle kubelet TLS verification
    # @default -- true
    tlsVerify: false
  ## Enable process agent and provide custom configs
  processAgent:
    # datadog.processAgent.enabled -- Set this to true to enable the live process monitoring agent
    ## Note: /etc/passwd is automatically mounted to allow username resolution.
    ## ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset
    enabled: true
    # datadog.processAgent.processCollection -- Set this to true to enable process collection in the process monitoring agent
    ## Requires processAgent.enabled to be set to true to have any effect
    processCollection: true
  networkMonitoring:
    # datadog.networkMonitoring.enabled -- Enable network performance monitoring
    enabled: true
  clusterTagger:
    # datadog.clusterTagger.collectKubernetesTags -- Enable Kubernetes resource tags collection.
    collectKubernetesTags: true
  serviceMonitoring:
    # datadog.serviceMonitoring.enabled -- Enable Universal Service Monitoring
    enabled: true

## This is the Datadog Cluster Agent implementation that handles cluster-wide
## metrics more cleanly, separates concerns for better rbac, and implements
## the external metrics API so you can autoscale HPAs based on datadog metrics
## ref: https://docs.datadoghq.com/agent/kubernetes/cluster/
clusterAgent:
  # clusterAgent.enabled -- Set this to false to disable the Datadog Cluster Agent
  enabled: true
  # clusterAgent.replicas -- Specify the number of cluster agent replicas; if > 1 it allows the cluster agent to work in HA mode.
  replicas: 2
  ## The Cluster-Agent supports many additional environment variables
  ## ref: https://docs.datadoghq.com/agent/cluster_agent/commands/#cluster-agent-options
  env:
    - name: DD_INVENTORIES_CONFIGURATION_ENABLED
      value: "true"
    - name: DD_APM_NON_LOCAL_TRAFFIC
      value: "true"
    - name: DD_TAGS
      value: organization-name
    - name: DD_SITE
      value: datadoghq.com
  # clusterAgent.createPodDisruptionBudget -- Create a pod disruption budget for Cluster Agent deployments
  createPodDisruptionBudget: true
  # clusterAgent.useHostNetwork -- Bind ports on the hostNetwork
  ## Useful for CNI networking where hostPort might not be supported.
  ## The ports need to be available on all hosts. It can be used for custom
  ## metrics instead of a service endpoint.
  ## WARNING: Make sure that hosts using this are properly firewalled, otherwise
  ## metrics and traces are accepted from any host able to connect to this host.
  # useHostNetwork: false

providers:
  aks:
    # providers.aks.enabled -- Activate all specificities related to AKS configuration. Required as currently we cannot auto-detect AKS.
    enabled: false

To install, run:

helm install datadog datadog/datadog --namespace=datadog --create-namespace -f datadog-values.yaml

ProcessMaker 3.9.0 Installation

Create a values.yaml file:

app:
  adminPassword: Sample123!
  url: organization-name.organization-domain.net
registry:
  enable: true
  password: EjUMRuPrKkcxxxxxxxx
  url: harbor.organization-domain.io
  username: robot@organization-name
mysql:
  deploy: false
  host: xxx
  port: 3306
  name: xxx
  username: xxx
  password: xxx

Then install the chart:

helm install -f values.yaml NAME oci://harbor.processmaker.io/charts/pm3-enterprise --version 1.0.0
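
Assuming the release name NAME and the default namespace, the deployment can then be checked with:

helm status NAME
kubectl get pods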