CSI for S3
This is a Container Storage Interface (CSI) driver for S3 (or S3-compatible) storage. It can dynamically allocate buckets and mount them into any container via a FUSE mount.
Status
This is still very experimental and should not be used in any production environment. Unexpected data loss could occur depending on which mounter and S3 storage backend are being used.
Kubernetes installation
Requirements
- Kubernetes 1.10+
- Kubernetes has to allow privileged containers
- Docker daemon must allow shared mounts (systemd flag MountFlags=shared); a drop-in example follows this list
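On systemd-based hosts, shared mounts for Docker are typically enabled with a drop-in unit. This is a minimal sketch, assuming the drop-in path below; adjust it to your distribution:
# /etc/systemd/system/docker.service.d/mount-flags.conf (example path)
[Service]
MountFlags=shared
After adding the file, reload systemd and restart Docker:
$ systemctl daemon-reload
$ systemctl restart docker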
1. Create a secret with your S3 credentials
The endpoint is optional if you are using AWS S3 itself; for any other S3-compatible storage it must be set. The region can be left empty if your S3-compatible storage does not use regions.
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
stringData:
  accessKeyID: <YOUR_ACCESS_KEY_ID>
  secretAccessKey: <YOUR_SECRET_ACCESS_KEY>
  endpoint: <S3_ENDPOINT_URL>
  region: <S3_REGION>
  # specify which mounter to use
  # can be set to s3fs, goofys or s3ql
  mounter: <MOUNTER>
  # currently only used by s3ql
  # if not using s3ql, set it to ""
  encryptionKey: <FS_ENCRYPTION_KEY>
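Assuming you saved the manifest above as secret.yaml, create the secret:
$ kubectl create -f secret.yaml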
2. Deploy the driver
$ cd deploy/kubernetes
$ kubectl create -f provisioner.yaml
$ kubectl create -f attacher.yaml
$ kubectl create -f csi-s3-driver.yaml
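To verify that the components came up, you can list the pods by the labels also used in the troubleshooting section below (add -n <namespace> if the manifests deploy into a non-default namespace):
$ kubectl get pods -l app=csi-provisioner-s3
$ kubectl get pods -l app=csi-s3-driver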
3. Create the storage class
$ kubectl create -f storageclass.yaml
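For orientation, the storage class plausibly looks like the sketch below. The provisioner name is an assumption; the storageclass.yaml shipped in deploy/kubernetes is authoritative:
# sketch of storageclass.yaml; the provisioner name is an assumption
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ch.ctrox.csi.s3-driver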
4. Test the S3 driver
- Create a pvc using the new storage class (a sketch of this manifest follows at the end of this section):
$ kubectl create -f pvc.yaml
- Check if the PVC has been bound:
$ kubectl get pvc csi-s3-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-s3-pvc Bound pvc-c5d4634f-8507-11e8-9f33-0e243832354b 5Gi RWX csi-s3 9s
- Create a test pod which mounts your volume:
$ kubectl create -f poc.yaml
If the pod can start, everything should be working.
- Test the mount
$ kubectl exec -ti csi-s3-test-nginx bash
$ mount | grep fuse
s3fs on /var/lib/www/html type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
$ touch /var/lib/www/html/hello_world
If something does not work as expected, check the troubleshooting section below.
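For reference, here are sketches of what pvc.yaml and poc.yaml plausibly look like, reconstructed from the names, capacity and access mode shown in the output above; the files shipped with the project are authoritative. The volume name webroot is made up for this example.
# sketch of pvc.yaml, reconstructed from the kubectl output above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3
# sketch of poc.yaml: an nginx pod mounting the claim
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
spec:
  containers:
    - name: csi-s3-test-nginx
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: webroot
  volumes:
    - name: webroot
      persistentVolumeClaim:
        claimName: csi-s3-pvc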
Additional configuration
Mounter
As seen in the deployment example above, the driver can be configured to use one of the following mounters to mount buckets: s3fs, goofys or s3ql.
All mounters have different strengths and weaknesses depending on your use case. Here are some characteristics which should help you choose a mounter:
s3fs
- Large subset of POSIX
- Files can be viewed normally with any S3 client
- Does not support appends or random writes
goofys
- Weak POSIX compatibility
- Performance first
- Files can be viewed normally with any S3 client
- Does not support appends or random writes
s3ql
- (Almost) full POSIX compatibility
- Uses its own object format
- Files are not readable with other S3 clients
- Supports appends
- Supports compression before upload
- Supports encryption before upload
Limitations
As S3 is not a real file system, there are some limitations to consider. Depending on which mounter you are using, you will get different levels of POSIX compatibility. Also, depending on which S3 storage backend you are using, consistency is not always guaranteed. Detailed limitations can be found in the documentation of s3fs and goofys.
Troubleshooting
Issues while creating PVC
- Check the logs of the provisioner:
$ kubectl logs -l app=csi-provisioner-s3 -c s3-csi-driver
Issues creating containers
- Ensure the feature gate MountPropagation is not set to false (a quick check follows this list)
- Check the logs of the s3-driver:
$ kubectl logs -l app=csi-s3-driver -c csi-s3-driver
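To check the MountPropagation feature gate mentioned above, one option is to inspect the flags of the running kubelet; this sketch assumes the kubelet runs as a host process:
$ ps aux | grep kubelet | grep -o 'feature-gates=[^ ]*'
If the flag is absent, the default applies, which enables MountPropagation on Kubernetes 1.10+.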
Development
This project can be built like any other Go application.
$ go get -u github.com/ctrox/csi-s3-driver
Build
$ make build
Tests
Currently the driver is tested by the CSI Sanity Tester. As end-to-end tests require S3 storage and a mounter like s3fs, they are best run in a Docker container. A Dockerfile and the test script can be found in the test directory. The easiest way to run the tests is to use the make target:
$ make test