Setup of OpenShift demo environment for CNPG

Summary

The purpose of this post is to show how to configure a CRC-based OpenShift environment (aka Red Hat OpenShift Local) to demo EDB CNPG.

I’ll end up with a working OpenShift cluster (CRC), the CNPG operator installed, and MinIO as the destination for WAL and backup files.

Expect to spend around 60 minutes if you follow along with the instructions.

Why this blog post

EDB recently changed how the OpenShift-compatible CNPG operator is installed. Previously, you could install the subscription-based CNPG operator and get a 30-day trial “out of the box”. With the release of EDB CNPG v. 1.29 that path will be gone soon.

These days, if I want the operator from EDB’s repositories at docker.enterprisedb.com, I need a pull secret. That pull secret is the EDB download token (also referred to as the EDB subscription token).

Because I periodically run CNPG workshops for customers, I would need a token available during the workshop. I could use the Global EDB Token – but I don’t want that to leak – it has access to everything EDB ships.

The initial goal here was to create a setup where the pull secret couldn’t be decoded by a random CNPG deployer (i.e., users running oc apply to deploy Postgres Clusters).

However, after a long research session I’ve come to the conclusion that hiding the pull secret is not possible. Even if you add the pull secret to a protected namespace like openshift-operators, where only cluster-admins can roam around, the operator will copy it out to the namespace where CNPG clusters are being deployed – the operator needs that Secret in the local namespace.

So consider this a “getting started with the CNPG operator and CRC-based OpenShift clusters”.

What I will build

  • A local OpenShift cluster using CRC
  • MinIO configured so I can use S3 buckets in demos (for example, backups) and I can show the content of MinIO using the web interface.
  • The EDB CNPG operator installed from the OperatorHub in OpenShift
  • A simple deployment and sanity check to see that WAL files are being archived correctly and a backup of the demo-cluster can be performed.

Step-by-step guide

Create the OpenShift environment (CRC)

CRC is downloaded from the Red Hat OpenShift Local page. There is no brew package available for the install. Remember to copy the pull secret on the download page (you’ll need it shortly).

Once you have installed the package, the crc command should be available in your terminal.

With the command below you create the OpenShift cluster.

Note: You’ll need to assign at least 16 GB of RAM and 8 CPUs to the cluster. Otherwise things won’t install correctly.

crc setup
crc config set memory 16384
crc config set cpus 8
crc config set developer-password developer
crc config set kubeadmin-password kubeadmin
crc start

Note: Here you’ll need the pull secret from Red Hat. Paste it into the terminal when requested.

Once the process is complete you’ll see output similar to:

Started the OpenShift cluster.
The server is accessible via web console at:
https://console-openshift-console.apps-crc.testing
Log in as administrator:
Username: kubeadmin
Password: kubeadmin
Log in as user:
Username: developer
Password: developer
Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443

Deploy MinIO

A production-ready installation of MinIO would involve MinIO AIStor and MinIO’s Key Encryption Service – that’s a bit much for a quick setup. So, we’ll use the open source version of MinIO instead (for now).

oc new-project cnpg-developer \
  --display-name="CNPG Developer" \
  --description="Project for CNPG + MinIO PoC"
oc project cnpg-developer

Now create the minio.yaml file with this content:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  volumeMode: Filesystem
---
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
stringData:
  # For a PoC; change these for anything non-demo
  minio_root_user: minio
  minio_root_password: minio12345
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio-pvc
      containers:
        - name: minio
          image: quay.io/minio/minio:latest
          args:
            - server
            - /data
            - --console-address
            - :9090
          env:
            - name: MINIO_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: minio_root_user
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: minio_root_password
          ports:
            - containerPort: 9000
              protocol: TCP
            - containerPort: 9090
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /data
              subPath: minio
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: ClusterIP
  selector:
    app: minio
  ports:
    - name: api
      port: 9000
      targetPort: 9000
      protocol: TCP
    - name: ui
      port: 9090
      targetPort: 9090
      protocol: TCP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: minio-api
spec:
  to:
    kind: Service
    name: minio-service
    weight: 100
  port:
    targetPort: api
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: minio-ui
spec:
  to:
    kind: Service
    name: minio-service
    weight: 100
  port:
    targetPort: ui
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect

This YAML, when applied:

  • Creates a PersistentVolumeClaim for MinIO to use
  • Creates a Secret with username and password (minio/minio12345) to use in the MinIO Web Console
  • Creates a Deployment of MinIO with a single pod
  • Creates Services for both the UI and API interface in MinIO
  • Creates Routes for both UI and API

The routes enable us to access the UI outside the OpenShift cluster.

Apply the YAML:

oc apply -f minio.yaml

Verify the install:

oc get pods
oc logs deploy/minio

You should see something like:

NAME                     READY   STATUS    RESTARTS   AGE
minio-55b597d6ff-lq726   1/1     Running   0          23s
INFO: Formatting 1st pool, 1 set(s), 1 drives per set.
INFO: WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2026 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-09-07T16-13-09Z (go1.24.6 linux/arm64)
API: http://10.217.0.66:9000  http://127.0.0.1:9000
WebUI: http://10.217.0.66:9090 http://127.0.0.1:9090
Docs: https://docs.min.io

The URLs in the log output are pod-internal addresses from a standard install. We can’t use those from a browser, as they only work inside the Kubernetes cluster.
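
For in-cluster clients – like the CNPG operator we’ll install shortly – the stable address is the Service DNS name rather than those pod IPs. A quick sketch of how that name is composed (assuming the cnpg-developer project from above):

```shell
# Kubernetes Service DNS follows <service>.<namespace>.svc (optionally .cluster.local)
service=minio-service
namespace=cnpg-developer   # the project we created earlier
port=9000                  # MinIO API port
echo "http://${service}.${namespace}.svc:${port}"
# → http://minio-service.cnpg-developer.svc:9000
```

This is the form of address CNPG uses later as the backup endpoint.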

Check the route:

oc get route minio-ui -o wide

You should see something like:

NAME       HOST/PORT                           PATH   SERVICES        PORT   TERMINATION     WILDCARD
minio-ui   minio-ui-cnpg-developer.apps-crc.testing   minio-service   ui     edge/Redirect   None

So https://minio-ui-cnpg-developer.apps-crc.testing should work in your browser (use minio/minio12345 as username and password).

Create a bucket using the UI; I’ll call mine cnpg.

Before we install the CNPG operator we need to set up a Secret for CNPG Backup to use.

Connect as developer in the terminal, switch to the cnpg-developer project we created earlier, and create the secret:

oc login -u developer https://api.crc.testing:6443
oc project cnpg-developer
oc create secret generic cnpg-minio-creds \
  -n cnpg-developer \
  --from-literal=ACCESS_KEY_ID=minio \
  --from-literal=ACCESS_SECRET_KEY=minio12345
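
If you prefer to keep everything declarative, the same Secret can be expressed as YAML and created with oc apply; this is equivalent to the oc create secret command above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cnpg-minio-creds
  namespace: cnpg-developer
type: Opaque
stringData:
  # Same credentials as the MinIO root user defined earlier; fine for a PoC
  ACCESS_KEY_ID: minio
  ACCESS_SECRET_KEY: minio12345
```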

Obtain the EDB subscription token

On the EDB home page, once you have logged in, you can find your token in “My account” → “Account Settings”. It’s the “Repos 2.0 token” on this page.

Note: if you don’t have an EDB account, you can create one (it’s free) and request a token. This will provide you with access for 30 days, without a subscription.

Install CNPG operator

Prior to installing the EDB CNPG Operator, you need to create a pull secret for the operator to use (even the install process needs that pull secret). Creating it in the openshift-operators namespace requires cluster-admin rights, so log in as kubeadmin for this part.

The pull secret will be created in the openshift-operators namespace. This is where the CNPG operator will be installed, and where it expects to find the pull secret. The operator will automatically copy this secret into <CLUSTER-NAME>-pull for each deployed cluster.

Note: once the secret exists in the workload namespace, anyone with permission to read secrets in that namespace can extract and decode it — so placing the pull secret in openshift-operators only limits who can see the original secret; it does not prevent exposure in the target namespace. In general, a good rule of thumb in both Kubernetes and OpenShift is “Secrets are not secret”: they are base64-encoded text strings. The encoding simply prevents “casual” reading. A base64 decode is all that is needed to read the original content.
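
To make the “Secrets are not secret” point concrete, here is a minimal sketch; the encoded string below is a stand-in value, not a real token:

```shell
# A Secret's data values are just base64-encoded text; a single decode reveals them
encoded='bXktZWRiLXRva2Vu'             # stand-in for a value read from a Secret
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"                         # prints: my-edb-token
```

In a live cluster the same thing is a one-liner, e.g. oc get secret <name> -o jsonpath='{.data.<key>}' | base64 -d.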

oc login -u kubeadmin https://api.crc.testing:6443
export EDB_SUBSCRIPTION_TOKEN="**** MY TOKEN ***"
oc create secret docker-registry postgresql-operator-pull-secret \
  -n openshift-operators \
  --docker-server=docker.enterprisedb.com \
  --docker-username=k8s \
  --docker-password="$EDB_SUBSCRIPTION_TOKEN"
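
For the curious: oc create secret docker-registry stores a .dockerconfigjson document. A sketch of roughly what gets generated, using a dummy token in place of the real one:

```shell
# Reconstruct the .dockerconfigjson payload the command above stores
EDB_SUBSCRIPTION_TOKEN="dummy-token"                        # dummy value for illustration
auth=$(printf 'k8s:%s' "$EDB_SUBSCRIPTION_TOKEN" | base64)  # "username:password", base64-encoded
printf '{"auths":{"docker.enterprisedb.com":{"username":"k8s","password":"%s","auth":"%s"}}}\n' \
  "$EDB_SUBSCRIPTION_TOKEN" "$auth"
```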

After the secret is created, navigate to the OperatorHub (labelled “Software Catalog” in recent console versions) and search for “EDB Postgres for Kubernetes”. Installing it is a simple click on Install.

Deploy PostgreSQL cluster with Barman backup and WAL streaming to MinIO

Now all the “plumbing” should be done, and we can get to the fun stuff: deploying clusters and demoing CNPG features.

For now, let’s deploy our first cluster:

apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: pg-demo
  namespace: cnpg-developer
spec:
  instances: 1
  storage:
    size: 5Gi
  backup:
    barmanObjectStore:
      destinationPath: s3://cnpg/pg-demo
      endpointURL: http://minio-service.cnpg-developer.svc:9000
      s3Credentials:
        accessKeyId:
          name: cnpg-minio-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: cnpg-minio-creds
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
        maxParallel: 4
      data:
        compression: gzip
      retentionPolicy: "7d"

Save this as pg-demo.yaml and apply it with oc apply -f pg-demo.yaml. The YAML creates a simple Postgres cluster with a single instance.

Note that the endpoint for MinIO is http://minio-service.cnpg-developer.svc:9000 – the in-cluster DNS name of the MinIO Service in the cnpg-developer namespace where we deployed it.

With this you should see PostgreSQL starting to archive WALs into the bucket:

oc get pods
oc logs pg-demo-1

You should see files popping into the MinIO Web console as well.
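
The goal list also mentioned taking a backup of the demo cluster. With the EDB operator an on-demand backup is a small custom resource; a minimal sketch matching the pg-demo cluster above (the name pg-demo-backup is my choice):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: pg-demo-backup
  namespace: cnpg-developer
spec:
  cluster:
    name: pg-demo
```

Apply it with oc apply, then follow progress with oc get backup -n cnpg-developer; once completed, the base backup should show up in the cnpg bucket next to the WAL files.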

What we have done so far

In this post we have:

  • Created an OpenShift cluster using “Red Hat OpenShift Local” (formerly known as CodeReady Containers, hence the crc name of the utility)
  • Installed and configured open source MinIO as the destination for backup and WAL files from CNPG clusters
  • Installed the EDB CNPG operator from the Red Hat Operator Hub
  • Deployed a simple CNPG cluster with a single instance
  • Validated that WAL files are archived in MinIO