Building and Deploying Workflows with the Operator
This document describes how to build and deploy your workflow on a cluster using the Serverless Logic Operator.
Every time you change the workflow definition, the operator (re)builds a new immutable version of the workflow. If you’re still in the development phase, see the Developing Workflows with the Operator guide.
The build system implemented by the Serverless Logic Operator is not suitable for complex production use cases. Consider using an external tool to build your application, such as Tekton and ArgoCD. The resulting image can then be deployed with |
Follow the Kubernetes or OpenShift sections of this document based on the cluster you wish to build your workflows on.
-
A Workflow definition.
-
The Serverless Logic Operator installed. See Install the Serverless Logic Operator guide.
Configuring the build system
The operator can build workflows on Kubernetes or OpenShift. On Kubernetes it uses Kaniko, and on OpenShift it uses a standard BuildConfig.
The operator build system is not tailored for advanced production use cases, and only a few customizations are possible.
Using another Workflow base builder image
If your scenario has strict policies for image usage, such as security or hardening constraints, you can replace the default image used by the operator to build the final workflow container image. Alternatively, you might want to test a nightly build with a bug fix or a custom image containing your customizations.
By default, the operator will use the image distributed upstream to build workflows. You can change this image by editing the SonataFlowPlatform
custom resource in the namespace where you deployed your workflows:
# use `kubectl get sonataflowplatform` to get the SonataFlowPlatform name
kubectl patch sonataflowplatform <name> -n <your_namespace> --type merge -p '{"spec": {"build": {"config": {"baseImage": "<your new image full name with tag>"}}}}'
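Alternatively, you can set the same field declaratively by editing the SonataFlowPlatform resource. The snippet below is a minimal sketch; the platform name and the image reference are placeholders you must adapt:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    config:
      # replace with the full name and tag of your custom builder image
      baseImage: <your new image full name with tag>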
Customize the base build Dockerfile
The operator uses the ConfigMap
named sonataflow-operator-builder-config
in the operator’s installation namespace (openshift-serverless-logic) to configure and run the workflow build process.
You can change the Dockerfile
entry in this ConfigMap
to tailor the Dockerfile to your needs. Just be aware that this can break the build process.
ConfigMap
apiVersion: v1
data:
DEFAULT_WORKFLOW_EXTENSION: .sw.json
Dockerfile: "FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0 AS builder\n\n#
variables that can be overridden by the builder\n# To add a Quarkus extension
to your application\nARG QUARKUS_EXTENSIONS\n# Args to pass to the Quarkus CLI
add extension command\nARG QUARKUS_ADD_EXTENSION_ARGS\n# Additional java/mvn arguments
to pass to the builder\nARG MAVEN_ARGS_APPEND\n\n# Copy from build context to
skeleton resources project\nCOPY --chown=1001 . ./resources\n\nRUN /home/kogito/launch/build-app.sh
./resources\n \n#=============================\n# Runtime Run\n#=============================\nFROM
registry.access.redhat.com/ubi9/openjdk-17:latest\n\nENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'\n
\ \n# We make four distinct layers so if there are application changes the library
layers can be re-used\nCOPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/lib/
/deployments/lib/\nCOPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/*.jar
/deployments/\nCOPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/app/
/deployments/app/\nCOPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/quarkus/
/deployments/quarkus/\n\nEXPOSE 8080\nUSER 185\nENV AB_JOLOKIA_OFF=\"\"\nENV JAVA_OPTS=\"-Dquarkus.http.host=0.0.0.0
-Djava.util.logging.manager=org.jboss.logmanager.LogManager\"\nENV JAVA_APP_JAR=\"/deployments/quarkus-run.jar\"\n"
kind: ConfigMap
metadata:
name: sonataflow-operator-builder-config
namespace: sonataflow-operator-system
The excerpt above is just an example; the ConfigMap in your installation might differ slightly. Don’t copy this example into your installation.
Changing resource requirements
You can create or edit a SonataFlowPlatform
in the workflow namespace, specifying the resource requirements for the internal builder pods:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform
spec:
build:
template:
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
From now on, every build process will reuse this configuration and start new instances based on it.
Only one |
You can fine-tune the resource requirements for a particular workflow. Every workflow instance will have a SonataFlowBuild
instance created with the same name as the workflow. You can edit the SonataFlowBuild
custom resource and specify the same parameters. For example:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
name: my-workflow
spec:
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
These parameters will only apply to new build instances.
Passing arguments to the internal builder
You can pass build arguments (see Dockerfile ARG) to the SonataFlowBuild
instance.
-
Create or edit an existing SonataFlowBuild instance. It has the same name as the SonataFlow you’re trying to build.
Checking if the SonataFlowBuild instance already exists
kubectl edit sonataflowbuild/<name> -n <namespace>
-
Add the desired arguments to .spec.buildArgs.
Adding buildArgs to the SonataFlowBuild instance
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
spec:
  [...]
  buildArgs:
    - name: ARG1
      value: value1
    - name: ARG2
      value: value2
-
Save the file and exit. A new build should start soon with the new configuration.
Alternatively, you can set this information in the SonataFlowPlatform
, so that every new build instance will use it as a template. For example:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: <name>
spec:
build:
template:
buildArgs:
- name: ARG1
value: value1
- name: ARG2
value: value2
On Minikube and Kubernetes, only plain values are supported for build arguments.
The table below lists the Dockerfile arguments available in the default Serverless Logic Operator installation:
Argument | Description | Example
---|---|---
QUARKUS_EXTENSIONS | List of Quarkus extensions, separated by commas, that the builder should add to the workflow. | org.kie:kie-addons-quarkus-persistence-jdbc:999-SNAPSHOT
QUARKUS_ADD_EXTENSION_ARGS | Arguments passed to the Quarkus CLI when adding extensions. Enabled only when QUARKUS_EXTENSIONS is not empty. | See the Quarkus CLI documentation
MAVEN_ARGS_APPEND | Arguments passed to the Maven build when the workflow image is produced. | -Dkogito.persistence.type=jdbc -Dquarkus.datasource.db-kind=postgresql
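For illustration, the sketch below combines two of these arguments in a SonataFlowBuild to add the JDBC persistence add-on and pass persistence-related Maven properties; the workflow name, extension coordinates, and property values are examples you must adapt to your environment:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: greeting
spec:
  buildArgs:
    # add the JDBC persistence add-on to the generated application
    - name: QUARKUS_EXTENSIONS
      value: org.kie:kie-addons-quarkus-persistence-jdbc:999-SNAPSHOT
    # extra Maven properties used while building the workflow image
    - name: MAVEN_ARGS_APPEND
      value: "-Dkogito.persistence.type=jdbc -Dquarkus.datasource.db-kind=postgresql"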
Setting environment variables in the internal builder
You can set environment variables on the SonataFlowBuild
internal builder pod. This is useful in cases where you would like to influence only the build of the workflow.
These environment variables are valid for the build context only. They are not set on the final built workflow image.
-
Create or edit an existing SonataFlowBuild instance. It has the same name as the SonataFlow you’re trying to build.
Checking if the SonataFlowBuild instance already exists
kubectl edit sonataflowbuild/<name> -n <namespace>
-
Add the desired data to .spec.envs.
Setting environment variables in the SonataFlowBuild instance
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
spec:
  [...]
  envs:
    - name: MYENV1
      value: value1
    - name: MYENV2
      value: value2
-
Save the file and exit. A new build should start soon with the new configuration.
Alternatively, you can set this information in the SonataFlowPlatform
, so that every new build instance will use it as a template. For example:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: <name>
spec:
build:
template:
envs:
- name: MYENV1
value: value1
- name: MYENV2
value: value2
On Minikube and Kubernetes, only plain values are supported.
Building on Kubernetes
You can skip this section if you’re running on OpenShift.
Follow these steps to configure your Kubernetes namespace to build workflow images with the operator.
Create a Namespace for the building phase
Create a new namespace that will hold all the resources the operator creates in this guide (Pods, Deployments, Services, Secrets, ConfigMaps, and Custom Resources).
kubectl create namespace workflows
# set the workflows namespace to your context
kubectl config set-context --current --namespace=workflows
Create a Secret for the container registry authentication
Follow these steps to publish to an external registry that requires authentication. If you’re running on Minikube, just enable the internal registry and skip this whole section, since the internal Minikube registry doesn’t require authentication.
kubectl create secret docker-registry regcred --docker-server=<registry_url> --docker-username=<registry_username> --docker-password=<registry_password> --docker-email=<registry_email> -n workflows
or you can directly import your local Docker config into your Kubernetes cluster:
kubectl create secret generic regcred --from-file=.dockerconfigjson=${HOME}/.docker/config.json --type=kubernetes.io/dockerconfigjson -n workflows
Double-check your |
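One way to double-check the stored credentials (a quick sketch; the secret name regcred comes from the commands above):
# decode and print the Docker config stored in the secret
kubectl get secret regcred -n workflows --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode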
Configure the Serverless Logic Operator (registry address and secret) for building your Workflows
The SonataFlowPlatform
is the Custom Resource used to control the behavior of the Serverless Logic Operator.
It defines the behavior of the operator when handling all OpenShift Serverless Logic Custom Resources (Workflow and Build) in the given namespace.
Since the operator is installed in global mode, you will need to specify a SonataFlowPlatform
in each Namespace where you want to deploy Workflows.
If you have deployed a workflow for development you already have a |
The following is a very basic SonataFlowPlatform
Custom Resource example to work on Kubernetes:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform
spec:
build:
config:
strategyOptions:
KanikoBuildCacheEnabled: "true"
registry:
address: registry.redhat.io/openshift-serverless-1 (1)
secret: regcred (2)
1 | Your registry address |
2 | The secret name created in the steps above |
On Minikube, you can remove the registry
information entirely since you don’t need credentials for pushing to the internal registry:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform
spec:
build:
config:
strategyOptions:
KanikoBuildCacheEnabled: "true"
The minikube registry addon must be enabled so that the operator can push the workflow image to the internal registry. Verify that the addon is properly enabled before building; see the commands below.
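A minimal sketch of the standard minikube CLI commands for this (assuming a default Minikube setup):
# enable the internal registry addon
minikube addons enable registry
# verify that the registry addon is listed as enabled
minikube addons list | grep registry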
You can save this file locally as my-sonataflowplatform.yaml and run the following command:
Creating the SonataFlowPlatform
kubectl apply -f my-sonataflowplatform.yaml -n workflows
You can also update the SonataFlowPlatform registry field "on the fly" with this command (change <YOUR_REGISTRY>):
SonataFlowPlatform with a specific registry
cat my-sonataflowplatform.yaml | sed "s|address: .*|address: <YOUR_REGISTRY>|" | kubectl apply -f -
Building on OpenShift
You don’t need to do anything special to build on OpenShift, since the operator will configure everything for you. There are a few customizations you can make, described in the Configuring the build system section above.
In general, the operator will create a BuildConfig
to build the workflow using the mapped resource files and your workflow definition. After the build is finished, the image will be pushed to the internal OpenShift registry backed by an ImageStream
object.
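For example, assuming a workflow named greeting deployed in a namespace called workflows (both are example names used later in this guide), you can inspect the objects the operator creates with standard oc commands:
# the BuildConfig and the builds created for the workflow
oc get buildconfig greeting -n workflows
oc get builds -n workflows
# the ImageStream that stores the resulting workflow image
oc get imagestream greeting -n workflows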
Changing the base builder image
If you are running on OpenShift, you have access to Red Hat’s supported registry. You can change the default builder image by editing the sonataflow-operator-builder-config ConfigMap
.
kubectl edit cm/sonataflow-operator-builder-config -n openshift-serverless-logic
In your editor, change the first line in the Dockerfile
entry where it reads FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0
to the desired image.
This image must be compatible with your operator’s installation.
Build and deploy your workflow
You can now send your workflow definition (SonataFlow
) to the operator.
You can find a basic SonataFlow
below:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
name: greeting
annotations:
sonataflow.org/description: Greeting example on k8s!
sonataflow.org/version: 0.0.1
spec:
flow:
start: ChooseOnLanguage
functions:
- name: greetFunction
type: custom
operation: sysout
states:
- name: ChooseOnLanguage
type: switch
dataConditions:
- condition: "${ .language == \"English\" }"
transition: GreetInEnglish
- condition: "${ .language == \"Spanish\" }"
transition: GreetInSpanish
defaultCondition: GreetInEnglish
- name: GreetInEnglish
type: inject
data:
greeting: "Hello from JSON Workflow, "
transition: GreetPerson
- name: GreetInSpanish
type: inject
data:
greeting: "Saludos desde JSON Workflow, "
transition: GreetPerson
- name: GreetPerson
type: operation
actions:
- name: greetAction
functionRef:
refName: greetFunction
arguments:
message: ".greeting+.name"
end: true
Save this content to a local file named greetings-workflow.yaml,
then run:
kubectl apply -f greetings-workflow.yaml -n workflows
You can check the logs of the build of your Workflow via:
# on Kubernetes
kubectl logs kogito-greeting-builder -n workflows
# on OpenShift
oc logs buildconfig/greeting -n workflows
The final pushed image is printed in the logs at the end of the build.
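You can also check the build and workflow status through their custom resources (a sketch, assuming the greeting example in the workflows namespace):
# status of the build custom resource
kubectl get sonataflowbuild greeting -n workflows
# status of the workflow itself
kubectl get sonataflow greeting -n workflows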
Check if the Workflow is running
To check that the OpenShift Serverless Logic greeting workflow is up and running, you can perform a test HTTP call against the greeting service.
-
Expose the workflow so you can access it:
Exposing the greeting workflow on Minikube
# On Minikube you can use a NodePort
kubectl patch svc greeting -n workflows -p '{"spec": {"type": "NodePort"}}'
GREETING_SVC=$(minikube service greeting -n workflows --url)
Exposing the greeting workflow on OpenShift
# On OpenShift you can expose a route: https://docs.openshift.com/container-platform/4.13/networking/routes/route-configuration.html#nw-creating-a-route_route-configuration
oc expose svc greeting -n workflows
# get the public URL
GREETING_SVC=$(oc get route/greeting --template='{{.spec.host}}')
-
Make the HTTP call using curl:
Check if the greeting workflow is running
curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '{"name": "John", "language": "English"}' $GREETING_SVC/greeting
If everything is working, you should receive a response like this:
Response from the greeting workflow
{"id":"b5fbfaa3-b125-4e6c-9311-fe5a3577efdd","workflowdata":{"name":"John","language":"English","greeting":"Hello from JSON Workflow, "}}
Restarting a build
If you need to restart the build for some reason, you can add or edit the annotation sonataflow.org/restartBuild: "true"
in the SonataFlowBuild
instance.
For example:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
name: greeting
annotations:
    sonataflow.org/restartBuild: "true"
After editing the resource, the operator will start a new build of the workflow. Once the build finishes, the workflow deployment is updated accordingly.
If the build fails, but the workflow has a working deployment, the operator won’t roll out a new deployment.
Ideally you should use this feature if there’s a problem with your workflow or the initial build revision.
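Instead of editing the resource manually, you can set the annotation from the command line (a sketch, assuming the greeting build in the workflows namespace):
kubectl annotate sonataflowbuild/greeting sonataflow.org/restartBuild=true --overwrite -n workflows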
Found an issue?
If you find an issue or any misleading information, please feel free to report it here. We really appreciate it!