
Using Persistence Gateway with Runtime Fabric

Anypoint Runtime Fabric provides Persistence Gateway, which enables Mule applications deployed to a Mule runtime instance to store and share data across application replicas and restarts.

You can also use the Mule Maven plugin version 3.5.4 or later to enable persistent object stores for Runtime Fabric deployments.

Using Persistence Gateway in a Mule Application

After Persistence Gateway is configured in Anypoint Runtime Fabric, it is available for Mule applications deployed to Mule runtime engine, version 4.2.1 or later. When configured, users can select Use Persistent Object Storage when deploying an application using Runtime Manager. See Deploy a Mule Application to Runtime Fabric for more information.

Mule applications use the Object Store v2 REST API via the Object Store Connector to connect to Persistence Gateway. This enables you to deploy to both Anypoint Runtime Fabric and CloudHub without having to modify your Mule application.

After you delete an application, its persistent data might not be deleted immediately; Runtime Fabric cleans up persistence data every 60 minutes.

Persistence Gateway Limits

The following table lists the limits on the data stored by Persistence Gateway:

Limit       | Description
Maximum TTL | The maximum amount of time data is stored in Persistence Gateway. This value is set to 30 days.

Before You Begin

Before enabling Persistence Gateway, ensure that you have:

  • Created a PostgreSQL database to serve as the data source for data stored by Persistence Gateway. This database must be compatible with a supported version of PostgreSQL that does not reserve the partition keyword.

  • Granted the PostgreSQL user CREATE, INSERT, SELECT, UPDATE, and DELETE privileges, as shown in the example after this list.

Only plain text connections to the database are supported.
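
The following is a minimal sketch of preparing the data store described above, assuming a reachable PostgreSQL server; the database and user names (rtf_os, rtf_user) are example values, not requirements:

    # Create a dedicated database and user for Persistence Gateway (example names)
    psql -h <DB HOST> -U postgres -c "CREATE DATABASE rtf_os;"
    psql -h <DB HOST> -U postgres -c "CREATE USER rtf_user WITH PASSWORD '<PASSWORD>';"
    # Grant database-level privileges; tables created by rtf_user are owned by it,
    # which covers the INSERT, SELECT, UPDATE, and DELETE requirements
    psql -h <DB HOST> -U postgres -c "GRANT CREATE, CONNECT ON DATABASE rtf_os TO rtf_user;"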

Configure Persistence Gateway

During configuration, Persistence Gateway creates the required database schema. Afterwards, when an application deployed to Runtime Fabric is configured to use persistent object storage, the Persistence Gateway writes the necessary rows to the database.

To configure Persistence Gateway, you must create a Kubernetes custom resource that allows the cluster to connect to your persistence data store.

  1. Create a Kubernetes secret:

    kubectl create secret generic <SECRET NAME> -n rtf --from-literal=persistence-gateway-creds='postgres://username:pass@host:port/databasename'
    • Do not change --from-literal=persistence-gateway-creds. You can replace <SECRET NAME> with any value you want to use for the secret.

    • Do not use the "@" character in the database username.

    • If your PostgreSQL user account password contains special characters, URL-encode the connection string before creating the Kubernetes secret (see the encoding example after this procedure). For more information, see RTF Persistent Gateway Does Not Allow Login Due to Account Password With Special Characters.

  2. Create a custom resource for your data store:

    1. Copy the custom resource template from Kubernetes Custom Resource Template to a file called custom-resource.yaml.

    2. Ensure the value of secretRef: name matches the name field defined in your Kubernetes secret file.

    3. Modify other fields of the custom resource template as required for your environment.

    4. Run kubectl apply -f custom-resource.yaml.

      If you are using Runtime Fabric on VMs / Bare Metal, run this command from a controller node.

  3. Check the logs of the Persistence Gateway pod to ensure it can communicate with the database:

    kubectl get pods -n rtf

    Look for pods with the name prefix persistence-gateway, then view the logs of one of those pods (replace the pod name with a name from your output):

    kubectl logs -f persistence-gateway-6dfb98949c-7xns9 -n rtf

    The output of this command should be similar to the following:

    2021/04/09 16:35:31 Connecting to PostgreSQL backend...
    2021/04/09 16:35:32 Starting watcher for /var/run/secrets/rtf-object-store/persistence-gateway-creds
    2021/04/09 16:35:32 Watching for changes on /var/run/secrets/rtf-object-store/persistence-gateway-creds
    2021/04/09 16:35:35 Successfully connected to the PostgreSQL backend.
    192.168.2.101 - - [09/Apr/2021:16:35:55 +0000] "GET /api/v1/status/ready HTTP/1.1" 200 2 "" "kube-probe/1.18+"
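
As referenced in step 1, the following sketch shows one way to URL-encode a password that contains special characters before creating the secret; the python3 one-liner and the sample values are illustrative assumptions:

    # URL-encode the password (sample value shown), then create the secret with it
    ENCODED_PASS=$(python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=''))" 'p@ss#w0rd')
    kubectl create secret generic <SECRET NAME> -n rtf \
      --from-literal=persistence-gateway-creds="postgres://rtf_user:${ENCODED_PASS}@<DB HOST>:5432/rtf_os"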
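
If the logs show connection errors instead, you can test connectivity to the database from inside the cluster. A minimal sketch; the postgres:13 image and the connection values are examples:

    # Run a throwaway psql client pod in the rtf namespace and issue a trivial query
    kubectl run psql-test --rm -it --restart=Never --image=postgres:13 -n rtf -- \
      psql 'postgres://rtf_user:<PASSWORD>@<DB HOST>:5432/rtf_os' -c 'SELECT 1;'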

Kubernetes Custom Resource Template

Use the example template below to create a Kubernetes custom resource. This custom resource enables the Kubernetes cluster to connect to your data store.

apiVersion: rtf.mulesoft.com/v1
kind: PersistenceGateway
metadata:
  name: default
  namespace: rtf
spec:
  objectStore:
    backendDriver: postgresql
    maxBackendConnectionPool: 20
    replicas: 2
    secretRef:
      name: persistence-gateway-creds
    resources:
      limits:
        cpu: 250m
        memory: 250Mi
      requests:
        cpu: 200m
        memory: 75Mi

Modify the following fields based on your environment:

Field | Description | Default Value
kind | The type of custom resource. The only supported value is PersistenceGateway. | PersistenceGateway
metadata.name | The internal identifier for this custom resource. The value for this field should be default. | default
metadata.namespace | The namespace where the secret is applied. The supported value is rtf. | rtf
spec.objectStore.backendDriver | The driver used by the data store. Only postgresql is supported. | postgresql
spec.objectStore.maxBackendConnectionPool | The maximum number of simultaneous open connections to the data store. | 20
spec.objectStore.replicas | The number of replicas of Persistence Gateway. | 2
spec.objectStore.resources.limits.cpu | The CPU resource limit for the Persistence Gateway pods. | 250m
spec.objectStore.resources.limits.memory | The memory resource limit for the Persistence Gateway pods. | 250Mi
spec.objectStore.resources.requests.cpu | The CPU resource request for the Persistence Gateway pods. | 200m
spec.objectStore.resources.requests.memory | The memory resource request for the Persistence Gateway pods. | 75Mi
spec.objectStore.secretRef.name | The name of the Persistence Gateway credentials defined in the Kubernetes secret file. | persistence-gateway-creds

The default CPU and memory request and limit values are sized for a small number of deployed Mule applications. Adjust these values based on the requirements of your environment.
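
To change these values later (for example, raising replicas to 3), edit custom-resource.yaml and re-apply it. The deployment name below is an assumption based on the persistence-gateway pod prefix shown earlier:

    # Re-apply the edited custom resource and watch the rollout
    kubectl apply -f custom-resource.yaml
    kubectl rollout status deployment/persistence-gateway -n rtf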