Microk8s Storage Retain Policy
By default, the Microk8s storage plugin creates persistent volumes with the Delete reclaim policy, which means that whenever you delete your claims you will lose your data.
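If the plugin is not enabled yet, it can be turned on first (on recent Microk8s releases the addon is named hostpath-storage; older releases call it storage):

sudo microk8s enable hostpath-storage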
To keep your data, you need to create a storage class and a persistent volume beforehand.
For example, on a Microk8s server, prepare a directory that will back the volume:
sudo mkdir -p /data/demo
echo 'Hello' | sudo tee /data/demo/index.html
cat /data/demo/index.html
Then create a storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo
# With the Microk8s storage plugin enabled, this is the only available provisioner
provisioner: microk8s.io/hostpath
reclaimPolicy: Retain
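Assuming the manifest is saved as storage-class.yaml (the filename is arbitrary), apply it with:

kubectl apply -f storage-class.yaml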
Now if we run kubectl get storageclass, we should see:
NAME                          PROVISIONER            RECLAIMPOLICY
microk8s-hostpath (default)   microk8s.io/hostpath   Delete
demo                          microk8s.io/hostpath   Retain
Now it is time for the persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo
spec:
  # Here we are asking to use our custom storage class
  storageClassName: demo
  capacity:
    storage: 100M
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The directory should be created upfront
    path: '/data/demo'
If everything is fine, we should see it by running kubectl get pv.
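The output should look roughly like this (an illustrative sketch; the exact columns depend on the kubectl version):

NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS
demo   100M       RWO            Retain           Available           demo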
The last piece is a persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo
  namespace: default
spec:
  # Once again, our custom storage class here
  storageClassName: demo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100M
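After applying the claim, it is worth checking that it bound to our pre-created volume rather than to a dynamically provisioned one:

kubectl get pvc demo
# STATUS should be Bound, and the VOLUME column should show our pre-created "demo" volume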
Take note that if you create and delete the claim a few times, Kubernetes will not hand the volume to a second claim: it will provision a new volume with the Delete policy and mark the previous one as Released, so kubectl get pv
will show you two rows instead of the expected one. To fix this you also need to delete and recreate the persistent volume.
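A minimal cleanup sequence, assuming the persistent volume manifest above is saved as pv.yaml (hypothetical filename):

kubectl delete pvc demo
kubectl delete pv demo
kubectl apply -f pv.yaml

The data itself survives this, since it lives in /data/demo on the host and deleting the PV object does not touch the directory.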
And now we can use our volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: demo
              mountPath: /usr/share/nginx/html
      volumes:
        - name: demo
          persistentVolumeClaim:
            claimName: demo
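To check that the mount works, we can read the file we created earlier straight from the pod (a quick sanity check, assuming the deployment has rolled out):

kubectl exec deploy/demo -- cat /usr/share/nginx/html/index.html
# Should print: Hello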
Host path alternative
There is also a workaround that avoids hassling with all these volumes: we can mount the host path directly, like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: demo
              mountPath: /usr/share/nginx/html
      volumes:
        - name: demo
          # instead of a persistent volume claim
          hostPath:
            path: /data/demo
            type: DirectoryOrCreate
Technically, this works the same way as the Retain policy: the data lives on the host and survives pod and deployment deletion.
From a GitHub issue it seems that:
- the storage provisioner used here is not only not recommended but also no longer developed
- there is an alternative from Rancher, which is also used in Deckhouse
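Assuming the alternative in question is Rancher's local-path-provisioner (an assumption based on the description above), it can be installed with its published manifest:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml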