
I have created a StorageClass and PersistentVolume according to their documentation (https://docs.safe.com/fme/html/FME-Flow/AdminGuide/Kubernetes/Kubernetes-Deploying-to-Amazon-EKS.htm).

 

This is my helm install command:

helm install fmeserver safesoftware/fmeserver-2023-0 \
  --set fmeserver.image.tag=21821-20220728 \
  --set deployment.ingress.general.ingressClassName=alb \
  --set storage.fmeserver.class=efs-sc \
  --set deployment.deployPostgresql=false \
  --set storage.fmeserver.accessMode=ReadWriteMany \
  --set fmeserver.database.host=mycluster.rds.amazonaws.com \
  --set fmeserver.database.adminUser=user \
  --set fmeserver.database.adminPasswordSecret=fmeserversecret \
  --set fmeserver.database.adminPasswordSecretKey=password \
  --set fmeserver.database.user=user \
  --set fmeserver.database.name=fmeserver \
  --set fmeserver.database.password=password \
  --set fmeserver.database.adminDatabase=postgres \
  --set deployment.startAsRoot=true \
  -n gis-fmeserver
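As a readability aside, the same settings can be kept in a values file passed with -f instead of a long run of --set flags. This is only a sketch: the key paths below are transcribed directly from the flags above, not verified against the chart's values schema.

```yaml
# values.yaml -- transcribed from the --set flags above (sketch, unverified)
fmeserver:
  image:
    tag: "21821-20220728"
  database:
    host: mycluster.rds.amazonaws.com
    adminUser: user
    adminPasswordSecret: fmeserversecret
    adminPasswordSecretKey: password
    user: user
    name: fmeserver
    password: password
    adminDatabase: postgres
deployment:
  ingress:
    general:
      ingressClassName: alb
  deployPostgresql: false
  startAsRoot: true
storage:
  fmeserver:
    class: efs-sc
    accessMode: ReadWriteMany
```

Used as: helm install fmeserver safesoftware/fmeserver-2023-0 -f values.yaml -n gis-fmeserver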

 

Now my core pod is failing with the following details:

[Screenshot 2024-01-10 at 2.55.30 PM]

All the other pods are failing with the following errors:

[Screenshot 2024-01-10 at 2.56.42 PM]

Note that the EFS file system used here is also in use by other services. Please share any insights that will help me fix these issues.
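Not part of the original post, but a hedged sketch of the first-pass checks for pods stuck on an EFS-backed volume. The namespace and PV name come from this thread; &lt;core-pod-name&gt; is a placeholder. The script only prints the commands so they can be copied selectively.

```shell
#!/bin/sh
# Sketch: first-pass diagnostics for pods failing on an EFS-backed volume.
# Namespace and PV name are taken from this thread; <core-pod-name> is a placeholder.
NS="gis-fmeserver"
PV="fmeserver-data"

# Commands to run, in order; printed rather than executed so this is safe anywhere.
CHECKS="kubectl -n $NS get pods
kubectl -n $NS describe pod <core-pod-name>
kubectl -n $NS get pvc
kubectl get pv $PV
kubectl -n $NS get events --sort-by=.lastTimestamp"

printf '%s\n' "$CHECKS"
```

The describe output shows volume-mount events for the failing pod, get pvc/get pv show whether the claim actually bound to fmeserver-data, and the events listing surfaces FailedMount messages from the kubelet or the EFS CSI driver.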

To me it looks like there is a problem with your PersistentVolume type. Does your volume type support ReadWriteMany? See Persistent Volumes | Kubernetes.


I have followed their documentation to create the persistent volume.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fmeserver-data
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: fmeserver
    meta.helm.sh/release-namespace: gis-fmeserver
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-********::fsap-********

I have set storage.fmeserver.class=efs-sc in the values.yaml file so that the chart uses that StorageClass, and I named the PV fmeserver-data because that is the name it will be looking for.

 

I've also tried setting the startAsRoot option to true in the values.yaml file.



Did you look at this doc:

https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html

 



I've taken a look at this, but I'm not sure what I should be looking for. Can you please help me with it?

 


I was able to solve this issue: in the pv.yaml file, I had provided the wrong EFS access point ID. I initially gave the POSIX-user access point ID instead of the root-user access point ID; it's fixed now after providing the root-user access point ID.
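For anyone hitting the same thing: in the EFS CSI driver's static-provisioning setup, the volumeHandle takes the form FileSystemId::AccessPointId, so the fix above amounts to putting the right fsap-… ID after the double colon. A tiny sketch of that format, using placeholder IDs (not the poster's real ones):

```shell
#!/bin/sh
# Sketch: the EFS CSI volumeHandle format is "<FileSystemId>::<AccessPointId>".
# Both IDs below are placeholders. The fix in this thread was to put the
# root-user access point's fsap-... ID here, not the posix-user one.
FS_ID="fs-0123456789abcdef0"
AP_ID="fsap-0123456789abcdef0"
HANDLE="${FS_ID}::${AP_ID}"

case "$HANDLE" in
  fs-*::fsap-*) echo "volumeHandle format ok: $HANDLE" ;;
  *)            echo "unexpected volumeHandle: $HANDLE"; exit 1 ;;
esac
```

When multiple access points exist on a shared file system, aws efs describe-access-points lists each one's PosixUser and RootDirectory, which makes it easier to spot which fsap ID is which before pasting it into the PV.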

