Deploying Data Index and OpenShift Serverless Logic application on Minikube

This document describes how to deploy a workflow application and the Data Index service on a local Kubernetes cluster, such as Minikube, by using the Serverless Logic Operator.

For more information about Minikube and related system requirements, see the Getting started with Minikube documentation.

This use case is intended to represent an installation with:

  • A singleton Data Index Service with PostgreSQL persistence

  • The greeting workflow (no persistence), which is configured to register events with the Data Index Service

You can find the UseCase1 example application that this guide follows in OpenShift Serverless Logic Data Index Use Cases with operator.

Prerequisites
  • Minikube installed with the registry addon enabled

  • The kubectl command-line tool installed. Otherwise, Minikube provides it, as shown below.

  • The SonataFlow operator installed, which is required to deploy workflows. For installation instructions, see Install the Serverless Logic Operator.

We recommend that you start Minikube with the following parameters. Note that the registry addon must be enabled.

minikube start --cpus 4 --memory 10240 --addons registry --addons metrics-server --insecure-registry "10.0.0.0/24" --insecure-registry "localhost:5000"

To verify that the registry addon was properly enabled, execute this command:

minikube addons list | grep registry
| registry                    | minikube | enabled ✅   | Google                         |
| registry-aliases            | minikube | disabled     | 3rd party (unknown)            |
| registry-creds              | minikube | disabled     | 3rd party (UPMC Enterprises)   |
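
As an optional sanity check (this assumes the addon serves the registry on its default port 5000 inside the Minikube VM), you can list the registry's contents from within the VM; a fresh cluster returns an empty repository list:

# Query the in-cluster registry catalog from inside the Minikube VM.
minikube ssh -- curl -s http://localhost:5000/v2/_catalog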

You can check the Minikube installation by entering the following commands in a command terminal:

Verify the Minikube version
minikube version
Verify the kubectl CLI version
kubectl version

If kubectl is not installed, Minikube provides it after you create the following alias:

Make kubectl available through Minikube
alias kubectl="minikube kubectl --"
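
To confirm that the alias works, run any kubectl subcommand through it, for example:

# Should print the kubectl client version bundled with Minikube.
kubectl version --client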
Procedure
  1. Clone the OpenShift Serverless Logic examples repository, then open a terminal and run the following command:

    cd serverless-operator-examples/serverless-workflow-dataindex-use-cases/
  2. Create the namespace:

    kubectl create namespace usecase1
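
    You can confirm that the namespace exists before moving on (the AGE value will differ in your installation):

    kubectl get namespace usecase1
    NAME       STATUS   AGE
    usecase1   Active   10s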
  3. Deploy the Data Index Service and PostgreSQL database:

    Perform the deployment by executing:

    kubectl kustomize infra/dataindex | kubectl apply -f - -n usecase1
    configmap/dataindex-properties-hg9ff8bff5 created
    secret/postgres-secrets-22tkgc2dt7 created
    service/data-index-service-postgresql created
    service/postgres created
    persistentvolumeclaim/postgres-pvc created
    deployment.apps/data-index-service-postgresql created
    deployment.apps/postgres created

    Give the Data Index some time to start. You can check that it is running by executing:

    kubectl get pod -n usecase1
    NAME                                             READY   STATUS    RESTARTS       AGE
    data-index-service-postgresql-5d76dc4468-lb259   1/1     Running   0              2m11s
    postgres-7f78499688-lc8n6                        1/1     Running   0              2m11s
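
    If the data-index-service-postgresql pod does not reach the Running state, a good first step is to inspect its logs:

    kubectl logs deployment/data-index-service-postgresql -n usecase1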
  4. Deploy the workflow:

    Here you can find the use case kustomization required to deploy the workflow:

    Use case kustomization.yaml resources that deploy the workflow
    resources:
    - ../../infra/service_discovery
    - ../../workflows/sonataflow-greeting

    For more details about how to deploy the workflow, see Building and Deploying Workflows with the Operator.

    Perform the deployment by executing:

    kubectl kustomize usecases/usecase1 | kubectl apply -f - -n usecase1
    configmap/greeting-props created
    sonataflow.sonataflow.org/greeting created

    Give the SonataFlow operator some time to build and deploy the workflow. To check that the workflow is ready, use this command:

    kubectl get workflow -n usecase1
    NAME       PROFILE   VERSION   URL   READY   REASON
    greeting             0.0.1           True
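
    The first build can take several minutes. If you prefer not to poll manually, you can watch the resource until the READY column turns to True (press Ctrl+C to stop watching):

    kubectl get workflow greeting -n usecase1 -w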
  5. Expose the workflow and get the URL:

    kubectl patch svc greeting -p '{"spec": {"type": "NodePort"}}' -n usecase1
    minikube service greeting --url -n usecase1
  6. Create a workflow instance:

    You must use the URL obtained in step 5. The IP address and port in the following example will be different in your installation.

    curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '{"name": "John", "language": "English"}'    http://192.168.49.2:32407/greeting
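
    Alternatively, here is a small shell sketch (the GREETING_URL variable name is just illustrative) that captures the URL from step 5, so you don't have to copy the IP and port by hand:

    # Capture the NodePort URL exposed in step 5 and reuse it for the request.
    GREETING_URL=$(minikube service greeting --url -n usecase1)
    curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' \
      -d '{"name": "John", "language": "English"}' "$GREETING_URL/greeting"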
  7. Clean the use case:

    kubectl delete namespace usecase1

Querying Data Index service on Minikube

You can use the public Data Index endpoint to play around with the GraphiQL interface.

Procedure

This procedure applies to all use cases that deploy the Data Index Service; replace the usecase1 namespace in the commands below with the namespace of your use case.

  • Get the Data Index URL:

minikube service data-index-service-postgresql --url -n usecase1
  • Open the GraphiQL UI

Using the URL returned, open a browser window at an address like 192.168.49.2:32409/graphiql/.

The IP and port will be different in your installation. Don't forget to add the trailing slash "/" to the URL; otherwise the GraphiQL UI won't open.

To see the process instance information, you can execute this query:

{
  ProcessInstances {
    id,
    processId,
    processName,
    variables,
    state,
    endpoint,
    serviceUrl,
    start,
    end
  }
}

The results should be something like:

{
  "data": {
    "ProcessInstances": [
      {
        "id": "3ed8bf63-85c9-425d-9099-49bfb63608cb",
        "processId": "greeting",
        "processName": "workflow",
        "variables": "{\"workflowdata\":{\"name\":\"John\",\"greeting\":\"Hello from JSON Workflow, \",\"language\":\"English\"}}",
        "state": "COMPLETED",
        "endpoint": "/greeting",
        "serviceUrl": "http://greeting",
        "start": "2023-09-13T06:59:24.319Z",
        "end": "2023-09-13T06:59:24.400Z"
      }
    ]
  }
}
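
If you prefer the command line to the GraphiQL interface, you can send the same query directly to the service. A minimal sketch, assuming the Data Index serves its GraphQL endpoint under /graphql on the URL returned by minikube service (the DATA_INDEX_URL variable name is illustrative):

# Capture the Data Index URL and POST the query to the /graphql endpoint.
DATA_INDEX_URL=$(minikube service data-index-service-postgresql --url -n usecase1)
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"query": "{ ProcessInstances { id processId state } }"}' \
  "$DATA_INDEX_URL/graphql"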

To see the job instance information, if any, you can execute this query:

{
  Jobs {
    id,
    processId,
    processInstanceId,
    status,
    expirationTime,
    retries,
    endpoint,
    callbackEndpoint
  }
}

The results should be something like:

{
  "data": {
    "Jobs": [
      {
        "id": "55c7aadb-3dff-4b97-af8e-cc45014b1c0d",
        "processId": "callbackstatetimeouts",
        "processInstanceId": "299886b7-2b78-4965-a701-16783c4162d8",
        "status": "EXECUTED",
        "expirationTime": null,
        "retries": 0,
        "endpoint": "http://jobs-service-postgresql/jobs",
        "callbackEndpoint": "http://callbackstatetimeouts:80/management/jobs/callbackstatetimeouts/instances/299886b7-2b78-4965-a701-16783c4162d8/timers/-1"
      }
    ]
  }
}
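
The GraphQL schema also supports filtering. For example, this sketch uses the where argument (assuming the standard Data Index filter syntax) to fetch only the instances of the greeting workflow:

{
  ProcessInstances(where: {processId: {equal: "greeting"}}) {
    id,
    state,
    start,
    end
  }
}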

Found an issue?

If you find an issue or any misleading information, please feel free to report it here. We really appreciate it!