Set up OpenShift Serverless with OpenShift Service Mesh

This page describes the integration of OpenShift Serverless with OpenShift Service Mesh.

OpenShift Serverless with OpenShift Service Mesh is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see access.redhat.com/support/offerings/techpreview/.

Assumptions and limitations
  • All Knative internal components, as well as Knative Services, are part of the Service Mesh and have sidecar injection enabled. This means that strict mTLS is enforced within the whole mesh. All requests to Knative Services require an mTLS connection (the client must send its certificate), except calls coming in through OpenShift Routes.

  • The OpenShift Serverless with Service Mesh integration can only target one Service Mesh. Multiple meshes can be present in the cluster, but OpenShift Serverless is only available on one of them.

  • Changing the target ServiceMeshMemberRoll that OpenShift Serverless is part of, meaning moving OpenShift Serverless to another mesh, is not supported. The only way to change the targeted Service Mesh is to uninstall and reinstall OpenShift Serverless.

Pre-install verification

Before you install and configure the Service Mesh integration with OpenShift Serverless, run the following pre-install verifications to make sure that the integration will work properly.

  1. Check for conflicting gateways

    The mesh that OpenShift Serverless is part of must be distinct and should ideally be used only for OpenShift Serverless workloads, because additional configuration such as Gateways might interfere with the OpenShift Serverless gateways knative-local-gateway and knative-ingress-gateway. OpenShift Service Mesh only allows one Gateway to claim a wildcard host binding (hosts: ["*"]) on the same port (port: 443). If another Gateway already binds this configuration, a separate mesh must be created for the OpenShift Serverless workloads. To check whether an existing Gateway already binds this configuration, run:

    $ oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{" "}{@.spec.servers}{"\n"}{end}' | column -t
    knative-serving/knative-ingress-gateway  [{"hosts":["*"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"credentialName":"wildcard-certs","mode":"SIMPLE"}}]
    knative-serving/knative-local-gateway    [{"hosts":["*"],"port":{"name":"http","number":8081,"protocol":"HTTP"}}]

    This command should not return a Gateway that binds port: 443 and hosts: ["*"], except the Gateways in the knative-serving namespace and Gateways that are part of another Service Mesh instance.

  2. Check if Service Mesh is exposed as type NodePort or LoadBalancer

    Cluster-external Knative Services are expected to be called via OpenShift Ingress using OpenShift Routes. Accessing Service Mesh directly, for example by exposing the istio-ingressgateway using a Service of type NodePort or LoadBalancer, is not supported. To check how your istio-ingressgateway is exposed, run:

    $ oc get svc -A | grep istio-ingressgateway
    istio-system   istio-ingressgateway  ClusterIP  172.30.46.146   <none>   15021/TCP,80/TCP,443/TCP     9m50s

    This command should not return a Service of type NodePort or LoadBalancer.

Prerequisites
  • You have access to an OpenShift account with cluster administrator access.

  • You have installed the OpenShift CLI (oc).

  • You have installed the OpenShift Service Mesh Operator.

    Using OpenShift Serverless with Service Mesh is only supported with OpenShift Service Mesh version 2.0.5 or later.
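
    To confirm which Service Mesh Operator version is installed, you can inspect its ClusterServiceVersion. This is a minimal sketch that assumes the Operator is installed in the openshift-operators namespace:

    $ oc get csv -n openshift-operators | grep servicemesh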

Installing and configuring OpenShift Service Mesh
  1. Create a ServiceMeshControlPlane resource in the istio-system namespace. Make sure to use the following settings to make Service Mesh work with OpenShift Serverless:

    If you have an existing ServiceMeshControlPlane, make sure that you have the same configuration applied.

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      profiles:
      - default
      security:
        dataPlane:
          mtls: true (1)
      techPreview:
        meshConfig:
          defaultConfig:
            terminationDrainDuration: 35s (2)
      gateways:
        ingress:
          service:
            metadata:
              labels:
                knative: ingressgateway (3)
      proxy:
        networking:
          trafficControl:
            inbound:
              excludedPorts: (4)
              - 8444 # metrics
              - 8022 # serving: wait-for-drain k8s pre-stop hook
    1 This enforces strict mTLS in the mesh. Only calls using a valid client certificate are allowed.
    2 OpenShift Serverless has a graceful termination period of 30 seconds for Knative Services. istio-proxy needs a longer termination drain duration to make sure that no requests are dropped.
    3 Defines a specific label on the ingress gateway so that the Knative gateways target only this ingress gateway.
    4 These ports are called by Kubernetes and cluster monitoring, which are not part of the mesh and cannot make calls using mTLS. Therefore, these ports are excluded from the Service Mesh.
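
    Apply the ServiceMeshControlPlane resource, for example:

    $ oc apply -f <filename>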
  2. Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members:

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members: (1)
        - knative-serving
        - knative-eventing
        - <your OpenShift projects>
    1 A list of namespaces to be integrated with Service Mesh.

    This list of namespaces must include the knative-serving and knative-eventing namespaces.

  3. Apply the ServiceMeshMemberRoll resource:

    $ oc apply -f <filename>
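
    You can verify that the namespaces have been added to the mesh by checking the ServiceMeshMemberRoll status, for example:

    $ oc get servicemeshmemberroll default -n istio-system -o yaml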
  4. Create the necessary gateways so that Service Mesh can accept traffic:

    Example knative-ingress-gateway and knative-local-gateway objects, with knative-local-gateway using ISTIO_MUTUAL (mTLS)
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: knative-ingress-gateway
      namespace: knative-serving
    spec:
      selector:
        knative: ingressgateway
      servers:
        - port:
            number: 443
            name: https
            protocol: HTTPS
          hosts:
            - "*"
          tls:
            mode: SIMPLE
            credentialName: <wildcard_certs> (1)
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
     name: knative-local-gateway
     namespace: knative-serving
    spec:
     selector:
       knative: ingressgateway
     servers:
       - port:
           number: 8081
           name: https
           protocol: HTTPS (2)
         tls:
           mode: ISTIO_MUTUAL (2)
         hosts:
           - "*"
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: knative-local-gateway
     namespace: istio-system
     labels:
       experimental.istio.io/disable-gateway-port-translation: "true"
    spec:
     type: ClusterIP
     selector:
       istio: ingressgateway
     ports:
       - name: http2
         port: 80
         targetPort: 8081
    1 Add the name of the secret that contains the wildcard certificate.
    2 The knative-local-gateway serves HTTPS traffic and expects all clients to send requests using mTLS. This means that only traffic coming from within the Service Mesh is possible. Workloads from outside the Service Mesh must use the external domain via OpenShift Routes.
  5. Apply the Gateway resources:

    $ oc apply -f <filename>
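
    You can verify that the Gateway resources exist in the knative-serving namespace, for example:

    $ oc get gateway -n knative-serving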
Installing and configuring OpenShift Serverless
  1. Install the OpenShift Serverless Operator.

  2. Install Knative Serving by creating the following KnativeServing custom resource, which also enables the Istio integration:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      ingress:
        istio:
          enabled: true (1)
      deployments: (2)
      - name: activator
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: autoscaler
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      config:
        istio: (3)
          gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your-istio-namespace>.svc.cluster.local
          local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your-istio-namespace>.svc.cluster.local
    1 Enables Istio integration.
    2 Enables sidecar injection for Knative Serving data plane pods.
    3 Optional: If Istio is not running in the istio-system namespace, replace <your-istio-namespace> in these two settings with the correct namespace.
  3. Apply the KnativeServing resource:

    $ oc apply -f <filename>
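
    You can check that the Knative Serving system pods are running with their sidecars injected, for example:

    $ oc get pods -n knative-serving

    Once sidecar injection is active, the activator and autoscaler pods report two ready containers (2/2).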
  4. Install Knative Eventing by creating the following KnativeEventing custom resource, which also enables the Istio integration:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config:
        features:
          istio: enabled (1)
      workloads: (2)
      - name: pingsource-mt-adapter
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: imc-dispatcher
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: mt-broker-ingress
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: mt-broker-filter
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
    1 Enables the Eventing Istio controller to create a DestinationRule for each InMemoryChannel or KafkaChannel service.
    2 Enables sidecar injection for Knative Eventing pods.
  5. Apply the KnativeEventing resource:

    $ oc apply -f <filename>
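
    You can check that the Knative Eventing pods are running with their sidecars injected, for example:

    $ oc get pods -n knative-eventing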
  6. Install Knative Kafka by creating the following KnativeKafka custom resource, which also enables the Istio integration:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      name: knative-kafka
      namespace: knative-eventing
    spec:
      channel:
        enabled: true
        bootstrapServers: <bootstrap_servers> (1)
      source:
        enabled: true
      broker:
        enabled: true
        defaultConfig:
          bootstrapServers: <bootstrap_servers> (1)
          numPartitions: <num_partitions>
          replicationFactor: <replication_factor>
        sink:
          enabled: true
      workloads: (2)
      - name: kafka-controller
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-broker-receiver
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-broker-dispatcher
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-channel-receiver
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-channel-dispatcher
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-source-dispatcher
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-sink-receiver
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
    1 The Apache Kafka cluster URL, for example: my-cluster-kafka-bootstrap.kafka:9092.
    2 Enables sidecar injection for Knative Kafka pods.
  7. Apply the KnativeKafka resource:

    $ oc apply -f <filename>
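
    You can check that the Knative Kafka data plane pods in the knative-eventing namespace are running with their sidecars injected, for example:

    $ oc get pods -n knative-eventing | grep kafka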
  8. Create a ServiceEntry resource to make OpenShift Service Mesh aware of the communication between KnativeKafka components and an Apache Kafka cluster:

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kafka-cluster
      namespace: knative-eventing
    spec:
      hosts: (1)
        - <bootstrap_servers_without_port>
      exportTo:
        - "."
      ports: (2)
        - number: 9092
          name: tcp-plain
          protocol: TCP
        - number: 9093
          name: tcp-tls
          protocol: TCP
        - number: 9094
          name: tcp-sasl-tls
          protocol: TCP
        - number: 9095
          name: tcp-sasl-tls
          protocol: TCP
        - number: 9096
          name: tcp-tls
          protocol: TCP
      location: MESH_EXTERNAL
      resolution: NONE
    1 The list of Apache Kafka cluster hosts, for example: my-cluster-kafka-bootstrap.kafka.
    2 The Apache Kafka cluster listener ports.

    The ports listed in spec.ports are example TCP ports and depend on how the Apache Kafka cluster is configured.

  9. Apply the ServiceEntry resource:

    $ oc apply -f <filename>
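
    You can verify that the ServiceEntry resource has been created, for example:

    $ oc get serviceentry kafka-cluster -n knative-eventing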
Verification
  1. Create a Knative Service that has sidecar injection enabled and uses a pass-through route:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <service_name>
      namespace: <namespace> (1)
      annotations:
        serving.knative.openshift.io/enablePassthrough: "true" (2)
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true" (3)
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
        spec:
          containers:
          - image: <image_url>
    1 A namespace that is part of the ServiceMeshMemberRoll.
    2 Instructs Knative Serving to generate an OpenShift pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly.
    3 Injects Service Mesh sidecars into the Knative service pods.

    Note that you must always add the annotations from the example above to every Knative Service to make it work with Service Mesh.

  2. Apply the Service resource:

    $ oc apply -f <filename>
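
    To find the URL for the next step, you can inspect the Knative Service, for example:

    $ oc get ksvc <service_name> -n <namespace>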
  3. Access your serverless application by using a secure connection that is now trusted by the CA:

    $ curl --cacert root.crt <service_url>
    Example command
    $ curl --cacert root.crt https://hello-default.apps.openshift.example.com
    Example output
    Hello Openshift!