Friday, January 17, 2020

Oracle on Kubernetes

Kubernetes is one of the most exciting technologies I have ever seen. So wouldn’t it be great to do something that involves my old love (Oracle) and my new one (Kubernetes)?

And we already have a use case. We have a kind of pdb-as-a-service based on Exadata for our developers. Most of the time they are happy … but sometimes they start complaining that they only have access to the pdb: “We can’t restart our pdb/cdb”, “we can’t see the OS”, “we can’t, we can’t, we can’t” … Wouldn’t it be great to offer them Oracle in a container as a service on our Kubernetes environment (based on Rancher RKE)? So what do we need?
We set this up – and, again proving the greatness of Kubernetes, everything works beautifully. Some snippets and associated gotchas:
apiVersion: v1
kind: Service
metadata:
  name: oracledb-2
  namespace: default
spec:
  clusterIP: None
  selector:
    app: oracle2
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: oracle2
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: oracle2
  template:
    metadata:
      labels:
        app: oracle2
    spec:
      containers:
      - image: store/oracle/database-enterprise:12.2.0.1-slim
        imagePullPolicy: IfNotPresent
        name: oracle
        volumeMounts:
        - mountPath: /ORCL
          name: oradata2
      initContainers:
      - name: oradata-permission-fix
        image: busybox
        command: ["/bin/chmod","-R","777", "/ORCL"]
        volumeMounts:
        - mountPath: /ORCL
          name: oradata2
      dnsPolicy: ClusterFirst
      restartPolicy: Always
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: oradata2
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
Pay attention to the initContainer here. It is required because the database creation inside the container happens as the oracle user, which needs to be able to write to the mounted volume. There are probably better ways to achieve this. Note as well that we are relying on the default storage class here (which is Longhorn).
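One possibly cleaner alternative to the chmod init container would be to let the kubelet adjust the ownership of the volume via a pod-level securityContext with fsGroup. This is only a sketch, not part of the setup above, and the group id 54321 is an assumption on my side (it is the oracle group in many Oracle-built images, so verify it against the image you actually use):
spec:
  template:
    spec:
      # Sketch: instead of the chmod init container, let Kubernetes chown the
      # volume to this group when mounting it.
      securityContext:
        fsGroup: 54321   # assumed gid of the oracle user inside the image - verify first
      containers:
      - image: store/oracle/database-enterprise:12.2.0.1-slim
        name: oracle
        volumeMounts:
        - mountPath: /ORCL
          name: oradata2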
For the cman (Oracle Connection Manager), the setup referenced above can be simplified considerably, as that one was built for RAC. All the environment variables can be removed: simply hardwire the host in cman.ora to 0.0.0.0, drop the domain, the ip and scanip settings and the updating of /etc/hosts, and you end up with an image that does not need any environment at all. It does, however, need the right to escalate privileges (it is not entirely clear to me why). Expose port 1521 on a ClusterIP and define a service on top. Nothing else, really.
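Put together, the cman part could look roughly like the sketch below. This is not taken from our actual manifests: the image name oracle/cman:19.3.0 is just a placeholder for whatever image you build yourself, and it only reflects the points mentioned above (privilege escalation allowed, port 1521 exposed, a ClusterIP service named cman on top):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cman
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cman
  template:
    metadata:
      labels:
        app: cman
    spec:
      containers:
      - name: cman
        image: oracle/cman:19.3.0   # placeholder - use the cman image you built
        securityContext:
          allowPrivilegeEscalation: true
        ports:
        - containerPort: 1521
---
apiVersion: v1
kind: Service
metadata:
  name: cman
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: cman
  ports:
  - name: oradb
    port: 1521
    targetPort: 1521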
As we use Nginx Ingress (on premise) and don’t have L4 load balancers, we additionally needed to make the Ingress forward plain TCP. This basically involves defining a ConfigMap like
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  "1521": "default/cman:1521"
and amending the Nginx Ingress controller’s container spec (not the Ingress resource itself) like this:
      - args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=default/tcp-configmap-example
...
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        - containerPort: 1521
          hostPort: 1521
          name: oradb
          protocol: TCP
Developers then access the service over SQL*Net like this:
test1=
 (DESCRIPTION=
   (SOURCE_ROUTE=yes)
   (ADDRESS=(PROTOCOL=tcp)(HOST=cman)(PORT=1521))   
   (ADDRESS=(PROTOCOL=tcp)(HOST=oracledb-0)(PORT=1521))   
   (CONNECT_DATA=(SERVICE_NAME=ORCLPDB1.localdomain)))
In addition, they can get a shell directly into their pod if they want to restart something, create more pdbs and so on … even sparse cloning works 🙂
Overall a great setup. Of course it is more of a POC at the moment, but it is also lots of fun!
