File Persona pod mount using fsOwner in StorageClass with securityContext set as privileged creates infinite loop #679

Open
wdurairaj opened this issue Jul 9, 2019 · 5 comments

Comments

@wdurairaj
Collaborator

I used the following YAMLs:

Storage class:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: rc-file-uid-gid
provisioner: hpe.com/hpe
parameters:
  filePersona: ""
  fsOwner: "10500:10800"

PVC definition:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sc-file-pvc-uid-gid
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: rc-file-uid-gid

Pod:

kind: Pod
apiVersion: v1
metadata:
  name: podfiletestw4-uid-gid-nosecurity
spec:
  containers:
  - name: nginx
    securityContext:
#       runAsUser: 10500
#       runAsGroup: 10800
      privileged: true
      capabilities:
        add: ["SYS_ADMIN"]
      allowPrivilegeEscalation: true
    image: nginx
    volumeMounts:
    - name: export
      mountPath: /export
  restartPolicy: Always
  volumes:
  - name: export
    persistentVolumeClaim:
      claimName: sc-file-pvc-uid-gid
[root@ecostor-b13 securitycontext]# kubectl get pvc sc-file-pvc-uid-gid
NAME                  STATUS    VOLUME                                                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
sc-file-pvc-uid-gid   Bound     rc-file-uid-gid-d9b84cf6-a256-11e9-a137-f40343a6d040   10Gi       RWX            rc-file-uid-gid   13m

Volume inspect of the provisioned volume:

[root@ecostor-b13 securitycontext]# docker volume inspect rc-file-uid-gid-d9b84cf6-a256-11e9-a137-f40343a6d040
[
    {
        "Driver": "hpe",
        "Labels": null,
        "Mountpoint": "/",
        "Name": "rc-file-uid-gid-d9b84cf6-a256-11e9-a137-f40343a6d040",
        "Options": {},
        "Scope": "global",
        "Status": {
            "backend": "DEFAULT_FILE",
            "clientIPs": [],
            "cpg": "FC_r6",
            "fpg": "DockerFpg_2",
            "fsMode": null,
            "fsOwner": "10500:10800",
            "name": "rc-file-uid-gid-d9b84cf6-a256-11e9-a137-f40343a6d040",
            "protocol": "nfs",
            "sharePath": "15.213.65.65:/DockerFpg_2/DockerVfs_2/rc-file-uid-gid-d9b84cf6-a256-11e9-a137-f40343a6d040",
            "size": "10 GiB",
            "status": "AVAILABLE",
            "vfs": "DockerVfs_2",
            "vfsIPs": [
                [
                    "15.213.65.65",
                    "255.255.248.0"
                ]
            ]
        }
    }
]

In the logs after the pod mount, I continuously see a loop:

2019-07-09 14:47:32,830 [INFO] hpedockerplugin.hpe.hpe_3par_mediator [140018277776264] MainThread It is first mount request but ip is already added to the share. Exception Bad request (HTTP 400) 29 - IP address 15.213.65.61 already exists :
2019-07-09 14:47:32,830 [DEBUG] hpe3parclient.http [140018277776264] MainThread

Describe of the pod:


Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     <none>
Events:
  Type     Reason       Age                     From                                        Message
  ----     ------       ----                    ----                                        -------
  Normal   Scheduled    5m                      default-scheduler                           Successfully assigned validate/podfiletestw4-uid-gid-nosecurity to ecostor-b14.in.rdlabs.hpecorp.net
  Warning  FailedMount  <invalid> (x2 over 1m)  kubelet, ecostor-b14.in.rdlabs.hpecorp.net  Unable to mount volumes for pod "podfiletestw4-uid-gid-nosecurity_validate(2e36fd77-a258-11e9-a137-f40343a6d040)": timeout expired waiting for volumes to attach or mount for pod "validate"/"podfiletestw4-uid-gid-nosecurity". list of unmounted volumes=[export]. list of unattached volumes=[export default-token-rhtpj]
  Warning  FailedMount  <invalid> (x8 over 3m)  kubelet, ecostor-b14.in.rdlabs.hpecorp.net  MountVolume.SetUp failed for volume "rc-file-uid-gid-d9b84cf6-a256-11e9-a137-f40343a6d040" : mount command failed, status: Failure, reason: invalid character '<' looking for beginning of value
[root@ecostor-b13 securitycontext]#
@wdurairaj
Collaborator Author

I did confirm the array has the uid:gid "10500:10800":

[root@ecostor-b14 ~]# ssh [email protected]
[email protected]'s password:
CB1402_8400_4N cli% showfsuser
Username        UID ---------------------SID--------------------- Primary_Group Enabled
Administrator 10500 S-1-5-21-4245652964-2152692123-1087927478-500 Local Users   false
Guest         10501 S-1-5-21-4245652964-2152692123-1087927478-501 Local Users   false
---------------------------------------------------------------------------------------
            2 total

[root@ecostor-b14 ~]# ssh [email protected]
[email protected]'s password:
CB1402_8400_4N cli% showfsgroup
GroupName          GID ---------------------SID---------------------
Local Users      10800 S-1-5-21-4245652964-2152692123-1087927478-800
Administrators   10544 S-1-5-32-544
Users            10545 S-1-5-32-545
Guests           10546 S-1-5-32-546
Backup Operators 10551 S-1-5-32-551
--------------------------------------------------------------------

@wdurairaj wdurairaj assigned wdurairaj and unassigned wdurairaj Jul 9, 2019
@wdurairaj wdurairaj added the bug label Jul 9, 2019
@amitk1977 amitk1977 added the high label Jul 10, 2019
@amitk1977
Collaborator

To be verified by Virendra on the 7/10 final RC build.

@amitk1977
Collaborator

amitk1977 commented Jul 10, 2019

To be tested with these combinations on plain Docker and K8s (a Kubernetes-side sketch of the combined case follows the list):

  1. only -o fsMode
  2. only -o fsOwner
  3. combination of -o fsMode -o fsOwner
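
As a minimal Kubernetes-side sketch of the combined case (scenario 3), the options would be expressed as StorageClass parameters. The class name and the fsMode value below are only illustrative assumptions based on this thread, not verified syntax:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rc-file-mode-owner        # hypothetical name for illustration
provisioner: hpe.com/hpe
parameters:
  filePersona: ""
  fsMode: "0744"                  # assumed octal mode value; scenario 1 would set only this
  fsOwner: "10500:10800"          # scenario 2 would set only this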

wdurairaj pushed a commit that referenced this issue Jul 10, 2019
* Fix for issue 679; it's a regression

* This makes fsOwner a mandatory parameter to be used with fsMode

* Fixed PEP 8 issues
@amitk1977
Collaborator

William will raise the PR for documentation.

@virendra-sharma

Tested the provided scenarios and logged the bug on both platforms, plain Docker (RHEL) and plain Kubernetes (RHEL & K8s).

Please find below the output for all scenarios on both platforms.
final-test-scenario(plain docker)-1,2,3.txt
final-test-scenario(rhel&k8s)-1,2,3.txt

Note: While performing verification we faced the two issues below; please find each issue along with the action decided.

  1. Unable to mount even simple shares when SELinux was disabled. -- (Decided to document it)
  2. Unable to get the pod into Running state when "runAsUser" & "runAsGroup" are set in the pod specification's securityContext (see the sketch after this list). -- (Decided to document this limitation for the current release)
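
For reference, the runAsUser/runAsGroup variant from item 2 corresponds to uncommenting the fields left commented out in the original pod spec. A minimal sketch of that pod (the pod name is hypothetical; per the observation above, this configuration does not reach Running state):

kind: Pod
apiVersion: v1
metadata:
  name: podfiletestw4-uid-gid-runas   # hypothetical name for illustration
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      runAsUser: 10500    # matches the fsOwner uid in the StorageClass
      runAsGroup: 10800   # matches the fsOwner gid in the StorageClass
    volumeMounts:
    - name: export
      mountPath: /export
  restartPolicy: Always
  volumes:
  - name: export
    persistentVolumeClaim:
      claimName: sc-file-pvc-uid-gid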

@wdurairaj please check the output files for the above use cases. The observation is captured in the note above, and we can close this issue. I will raise a medium-severity issue for the "securityContext" analysis.
