
Deploy WSO2 API Manager on OpenShift

This guide provides comprehensive instructions for deploying WSO2 API Manager on OpenShift Container Platform using Helm charts. OpenShift environments have unique security requirements that differ from standard Kubernetes clusters, requiring specific configuration adjustments for successful deployment.

For an all-in-one deployment, following this guide is sufficient. If you need a distributed deployment, please refer to the Deployment Patterns guide and apply the additional configurations mentioned in this document on top of the provided default_values.yaml files.


Prerequisites

Before you begin, ensure you have met the following requirements:

  • Prerequisites

Deployment Steps

Step 1 - Preparing the Docker Images

To fully comply with OpenShift's security model, especially its use of arbitrary user IDs, you have to create a custom Docker image tailored for OpenShift environments. The following steps modify the image to ensure compatibility, including setting group ownership to the root group (GID 0), which allows access when OpenShift assigns a random UID at runtime.

The official WSO2 Docker images run as a non-root user with a fixed UID. While that works on standard Kubernetes clusters, OpenShift often injects a random UID and restricts container privileges. To prevent permission issues, update the image to:

  1. Allow group write access to required directories
  2. Assign root group ownership (GID 0)

Additionally, note the following:

  1. Starting from v4.5.0, each component has a separate Docker image (All-in-one, Control-plane, Gateway, Traffic-manager).
  2. These Docker images do not contain any database connectors; therefore, you need to build a custom Docker image based on each of them to make the deployment work with a separate database.
  3. Download a connector that is compatible with your database version and copy it into the image during the build.

Sample Dockerfile for All-in-One Image

FROM wso2/wso2am:4.5.0

# Change directory permissions for OpenShift compatibility
USER root
RUN chgrp -R 0 ${USER_HOME} && chmod -R g=u ${USER_HOME} \
    && chgrp -R 0 ${WSO2_SERVER_HOME} && chmod -R g=u ${WSO2_SERVER_HOME} \
    && chgrp -R 0 ${USER_HOME}/solr && chmod -R g=u ${USER_HOME}/solr
USER wso2carbon

# Copy JDBC MySQL driver
COPY mysql-connector.jar ${WSO2_SERVER_HOME}/repository/components/lib

Note

Changing the group is mandatory to allow OpenShift to assign arbitrary UIDs. Ref: a-guide-to-openshift-and-uids, group_ownership_and_file_permission
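To see why `chmod g=u` pairs with `chgrp -R 0`, you can reproduce the effect locally (the path below is purely illustrative): once the group's permission bits mirror the owner's and the group is GID 0, any arbitrary UID that OpenShift injects, which always belongs to the root group, gets the same access the original owner had.

```shell
# Illustrative demo of `chmod g=u`: copy the owner's permission bits to the group.
mkdir -p /tmp/g-eq-u-demo
touch /tmp/g-eq-u-demo/file
chmod 750 /tmp/g-eq-u-demo/file     # owner rwx, group r-x, other ---
chmod g=u /tmp/g-eq-u-demo/file     # group now mirrors owner: rwx
stat -c '%a' /tmp/g-eq-u-demo/file  # prints 770
```

In the Dockerfile, `chgrp -R 0` additionally hands those group permissions to GID 0, which is what the runtime-assigned UID belongs to.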

After creating your Dockerfile:

  1. Build the custom image:

    docker build -t <REGISTRY_URL>/<REPOSITORY>/<IMAGE_NAME>:<TAG> .
    

  2. Push the image to your registry:

    docker push <REGISTRY_URL>/<REPOSITORY>/<IMAGE_NAME>:<TAG>
    

Important

Make sure to update the Helm chart configurations to use these modified images when deploying to OpenShift.
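For instance, with the all-in-one chart the image override sits under wso2.deployment.image (exact keys can vary by chart version, so verify against your chart's values.yaml; placeholders match the build and push commands above, with <REPOSITORY>/<IMAGE_NAME> combined into the repository field):

```yaml
wso2:
  deployment:
    image:
      registry: "<REGISTRY_URL>"
      repository: "<REPOSITORY>/<IMAGE_NAME>"
      tag: "<TAG>"
```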

Step 2 - Login to the OpenShift Cluster

First, authenticate to your OpenShift cluster using the OpenShift CLI:

Authentication with OpenShift CLI

You can authenticate using one of the following methods:

  • Username/Password Authentication:

    oc login <API_SERVER_URL> -u <USERNAME> -p <PASSWORD>
    

  • Token-Based Authentication:

    oc login <API_SERVER_URL> --token=<TOKEN>
    

Once authenticated, verify your connection and check your currently selected project:

# Verify connection
oc whoami

# Check current project
oc project

Step 3 - Create Secrets and Clone Helm Charts

  1. Create Keystore Secret:

Before deploying the Helm chart, create a Kubernetes secret containing the keystores and truststore:

# Create a secret with default WSO2 keystores and truststores
kubectl create secret generic apim-keystore-secret \
  --from-file=wso2carbon.jks \
  --from-file=client-truststore.jks

Tip

You can find the default keystore and truststore files in the repository/resources/security/ directory of any WSO2 API-M distribution.

  2. Clone WSO2 Helm Charts Repository:

    git clone https://github.com/wso2/helm-apim.git
    cd helm-apim

Step 4 - Configure OpenShift-Specific Settings in values.yaml

In each values.yaml file for your deployment, make the following OpenShift-specific changes:

Security Context Configuration

The following settings need to be applied to make your deployment compatible with OpenShift's security model:

  1. Update Security Context Settings:
    Setting                                   Description
    runAsUser: null                           Allows OpenShift to assign arbitrary UIDs
    seLinux.enabled: true/false               Enables/disables SELinux support
    enableAppArmor: false                     Disables AppArmor profiles
    configMaps.scripts.defaultMode: "0457"    Sets execute permissions for ConfigMaps
    seccompProfile.type                       Controls which seccomp profile to apply

Example Configuration:

securityContext:
  # -- Set to null to allow OpenShift to assign arbitrary UIDs
  runAsUser: null
  # -- SELinux context configuration
  seLinux:
    enabled: false
    level: ""
  # -- Seccomp profile for the container
  seccompProfile:
    # -- Seccomp profile type (RuntimeDefault, Unconfined or Localhost)
    type: RuntimeDefault
    localhostProfile: ""
# -- Disable AppArmor for OpenShift compatibility
enableAppArmor: false
# -- Set execute permissions for ConfigMaps
configMaps:
  scripts:
    defaultMode: "0457"

All-in-One Deployment

The All-in-One deployment is the simplest pattern to deploy WSO2 API Manager on OpenShift. This section provides detailed instructions for deploying the All-in-One pattern in an OpenShift environment.

Step 1 - Prepare Configuration Values

  1. Navigate to the All-in-One Helm Chart Directory:

    cd helm-apim/all-in-one
    

  2. Create a Custom Values File:

Create a file named openshift-values.yaml with your OpenShift-specific configurations:

# OpenShift-specific settings
kubernetes:
  securityContext:
    runAsUser: null
    seLinux:
      enabled: false
    seccompProfile:
      type: RuntimeDefault
  enableAppArmor: false
  configMaps:
    scripts:
      defaultMode: "0457"

# Database configuration
wso2:
  apim:
    configurations:
      databases:
        apim_db:
          url: "jdbc:mysql://<DB_HOST>:3306/apim_db?useSSL=false"
          username: "apimadmin"
          password: "password"
        shared_db:
          url: "jdbc:mysql://<DB_HOST>:3306/shared_db?useSSL=false"
          username: "sharedadmin"
          password: "password"
      # Keystore configuration
      security:
        jksSecretName: "apim-keystore-secret"

  # Docker image configuration
  deployment:
    image:
      registry: "<YOUR_REGISTRY>"
      repository: "<YOUR_REPOSITORY>"
      tag: "<YOUR_TAG>"
      # If using private registry
      imagePullSecrets:
        enabled: true
        username: "<REGISTRY_USERNAME>"
        password: "<REGISTRY_PASSWORD>"

Important

Replace the placeholders with your actual values:

  • <DB_HOST>: Your database host address
  • <YOUR_REGISTRY>, <YOUR_REPOSITORY>, <YOUR_TAG>: Your OpenShift-compatible image details
  • <REGISTRY_USERNAME>, <REGISTRY_PASSWORD>: Your private registry credentials (if applicable)

Step 2 - Deploy Using Helm

  1. Create a Namespace:

    oc create namespace wso2
    

  2. Deploy with Helm:

    helm install apim wso2/wso2am-all-in-one \
      --namespace wso2 \
      --create-namespace \
      --version 4.5.0-3 \
      -f openshift-values.yaml
    

Alternatively, if you want to use the default OpenShift configuration:

helm install apim wso2/wso2am-all-in-one \
  --namespace wso2 \
  --create-namespace \
  --version 4.5.0-3 \
  --set wso2.subscription.username=<USERNAME> \
  --set wso2.subscription.password=<PASSWORD> \
  -f https://raw.githubusercontent.com/wso2/helm-apim/main/docs/am-pattern-0-all-in-one/default_openshift_values.yaml

Step 3 - Verify Deployment

  1. Check Deployment Status:

    oc get pods -n wso2
    

  2. Check Services and Routes:

    # List services
    oc get svc -n wso2
    
    # List routes
    oc get routes -n wso2
    

Access and Exposure

By default, the deployment will create Kubernetes services but not OpenShift routes. You may need to create routes to expose the API Manager services externally:

oc create route passthrough apim-publisher \
  --service=apim-wso2am-service \
  --port=9443 \
  --hostname=publisher.apim.example.com \
  -n wso2
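Equivalently, the same passthrough route can be kept under version control as a manifest and applied with `oc apply -f` (hostname and service name are the illustrative values from the command above):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: apim-publisher
  namespace: wso2
spec:
  host: publisher.apim.example.com
  to:
    kind: Service
    name: apim-wso2am-service
  port:
    targetPort: 9443
  tls:
    termination: passthrough
```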

Distributed Deployment

For distributed deployments of WSO2 API Manager on OpenShift, you need to apply the same OpenShift-specific configurations to each component of your chosen pattern:

Step 1 - Select an Appropriate Pattern

Review the WSO2 API-M Deployment Patterns and choose the pattern that suits your requirements:

Step 2 - Configure Each Component

For each component in your selected pattern:

Component Configuration

Each component requires the same OpenShift-specific configurations:

  1. Custom Docker Images: Build OpenShift-compatible images for each component
  2. Security Context: Apply the same security context settings as described in Step 4
  3. Database Connections: Configure the database connections for each component
  4. Service and Route Configuration: Configure services and routes appropriate for OpenShift

Example Component Configuration:

# OpenShift security context settings for Control Plane component
kubernetes:
  securityContext:
    runAsUser: null
    seLinux:
      enabled: false
    seccompProfile:
      type: RuntimeDefault
  enableAppArmor: false
  configMaps:
    scripts:
      defaultMode: "0457"

Step 3 - Deploy Components in Order

Deploy components in the correct order, typically:

  1. Control Plane/All-in-One:

    helm install apim wso2/wso2am-acp \
      --namespace wso2 \
      --create-namespace \
      --version 4.5.0-3 \
      -f control-plane-openshift-values.yaml
    

  2. Traffic Manager (if applicable):

    helm install tm wso2/wso2am-tm \
      --namespace wso2 \
      --create-namespace \
      --version 4.5.0-3 \
      -f tm-openshift-values.yaml
    

  3. Key Manager (if applicable):

    helm install km wso2/wso2am-km \
      --namespace wso2 \
      --version 4.5.0-3 \
      -f km-openshift-values.yaml
    

  4. Gateway:

    helm install gw wso2/wso2am-gw \
      --namespace wso2 \
      --version 4.5.0-3 \
      -f gw-openshift-values.yaml
    

Deployment Best Practices

  • Namespace Strategy: Use separate namespaces for different environments (dev, test, prod)
  • Resource Management: Configure appropriate resource limits and requests for your OpenShift cluster
  • Health Monitoring: Configure appropriate liveness and readiness probes for each component
  • Persistence: Use persistent volumes for any stateful components
  • Route Configuration: Set up OpenShift routes with proper TLS termination for external access
  • Integration: Configure integration with OpenShift monitoring and logging stacks

Troubleshooting OpenShift Deployments

When deploying WSO2 API Manager on OpenShift, you might encounter some common issues. Here are solutions to the most frequent problems:

Permission Denied Errors

Symptom: Pods fail to start with permission denied errors in the logs.

Solution: This typically happens because OpenShift runs containers with random UIDs. Check that:

  1. Your custom Docker images have the proper group permissions:

    # Check container logs
    oc logs <pod-name> -n <namespace>
    

  2. Verify your security context settings in the values.yaml:

    # Ensure runAsUser is set to null
    oc get deployment <deployment-name> -n <namespace> -o yaml | grep -A 10 securityContext
    

Image Pull Errors

Symptom: Pods are stuck in "ImagePullBackOff" or "ErrImagePull" states.

Solution:

  1. Ensure your image repository is accessible from OpenShift:

    # Check image pull secrets
    oc get secrets -n <namespace>
    

  2. Verify registry credentials:

    # Create or update image pull secret
    oc create secret docker-registry regcred \
      --docker-server=<your-registry-server> \
      --docker-username=<your-username> \
      --docker-password=<your-password> \
      --docker-email=<your-email> \
      -n <namespace>
    

Volume Mount Issues

Symptom: Pods crash with volume-related errors.

Solution:

  1. Ensure any persistent volume claims are bound:

    oc get pvc -n <namespace>
    

  2. Check volume permissions:

    # Update deployment to allow writing to volumes
    oc patch deployment <deployment-name> -n <namespace> \
      -p '{"spec":{"template":{"spec":{"securityContext":{"fsGroup":0}}}}}'
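If the chart exposes a pod-level security context (as the securityContext block earlier in this guide suggests; verify against your chart version), the same fix can be made declaratively in your values file instead of patching the live deployment:

```yaml
kubernetes:
  securityContext:
    # Root group (GID 0) ownership on mounted volumes, matching the
    # group permissions baked into the custom image
    fsGroup: 0
```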
    

Network Policy Issues

Symptom: Components cannot communicate with each other.

Solution: OpenShift uses network policies that might block inter-service communication.

  1. Check current network policies:

    oc get netpol -n <namespace>
    

  2. Create a permissive network policy for API-M components:

    oc apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-wso2-internal
      namespace: <namespace>
    spec:
      podSelector:
        matchLabels:
          app: wso2am
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: wso2am
    EOF
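If a default-deny policy in the namespace also restricts egress, pods may lose DNS resolution as well. A sketch of a companion policy allowing DNS lookups (reusing the illustrative app: wso2am label and <namespace> placeholder from above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-wso2-dns
  namespace: <namespace>
spec:
  podSelector:
    matchLabels:
      app: wso2am
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```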