(Since I didn't find a guide on how to draft a proposal, please forgive me if I've submitted this to the wrong place.)
Motivation
In my opinion, CSI is a standard interface aimed at making it easier for every storage vendor to integrate with different container orchestration systems. Generally it has done well, but in some use cases it is still complicated for vendors to implement the ControllerPublishVolume and ControllerUnpublishVolume requests.
Use case 1: Dedicated server
If users want to deploy a CO system (for example, Kubernetes) on dedicated servers, they can specify different storage systems. That means every storage plugin needs to make sure its volumes can be attached to different kinds of servers and work well on different operating systems.
Use case 2: Private or hybrid cloud
If the CO system is deployed on private or hybrid cloud nodes, users may specify storage systems that are not supported, or poorly supported, by that private cloud (for example, VMware + Cinder). Since the VASA provider cannot attach a Cinder volume to VMware VMs, the VMware SP has to talk to Cinder directly to attach the volume. In the end there will be numerous one-off solutions, which is definitely not what we want to see.
Goal
To solve this problem, we plan to design a standard library in CSI that provides volume attaching for different storage vendors. Any storage system that wants to provide storage for dedicated servers or for VMs in a private cloud can call this library to finish host-side volume discovery and then mount the device path into the container.
Proposed Design
As we know, there are many storage protocols, such as iSCSI, RBD, FC, SMBFS, and so forth, and some of them are implemented in different ways depending on the system type (x86, s390, ppc64) and OS type (Linux, Windows). This library will therefore talk to the kernel and expose a unified interface to the different SPs; a sketch of how it might dispatch to protocol-specific implementations appears after the API definitions below.
API Object
The API object will have the following structure:
const (
	// Platform type
	PLATFORM_ALL   = "ALL"
	PLATFORM_X86   = "X86"
	PLATFORM_S390  = "S390"
	PLATFORM_PPC64 = "PPC64"

	// Operating system type
	OS_TYPE_ALL     = "ALL"
	OS_TYPE_LINUX   = "LINUX"
	OS_TYPE_WINDOWS = "WIN"

	// Device driver type
	ISCSI               = "ISCSI"
	ISER                = "ISER"
	FIBRE_CHANNEL       = "FIBRE_CHANNEL"
	AOE                 = "AOE"
	DRBD                = "DRBD"
	NFS                 = "NFS"
	GLUSTERFS           = "GLUSTERFS"
	LOCAL               = "LOCAL"
	GPFS                = "GPFS"
	HUAWEISDSHYPERVISOR = "HUAWEISDSHYPERVISOR"
	HGST                = "HGST"
	RBD                 = "RBD"
	SCALEIO             = "SCALEIO"
	SCALITY             = "SCALITY"
	QUOBYTE             = "QUOBYTE"
	DISCO               = "DISCO"
	VZSTORAGE           = "VZSTORAGE"

	// A unified device path prefix
	VOLUME_LINK_DIR = "/dev/disk/by-id/"
)
// Connector is an interface indicating what the outside world can do with
// this library. Notice that it is at a very early stage right now.
type Connector interface {
	GetConnectorProperties(multiPath, doLocalAttach bool) (*ConnectorProperties, error)
	ConnectVolume(conn *ConnectionInfo) (string, error)
	DisconnectVolume(conn *ConnectionInfo) (string, error)
	GetDevicePath(volumeId string) (string, error)
}
// ConnectorProperties is a struct used to tell the storage backend how to
// initialize the connection of a volume. Please notice that it is OPTIONAL.
type ConnectorProperties struct {
	DoLocalAttach bool   `json:"doLocalAttach"`
	Platform      string `json:"platform"`
	OsType        string `json:"osType"`
	Ip            string `json:"ip"`
	Host          string `json:"host"`
	MultiPath     bool   `json:"multipath"`
	Initiator     string `json:"initiator"`
}
// ConnectionInfo is a structure for all properties of a connection
// when connecting a volume.
type ConnectionInfo struct {
	// The type of the driver volume, such as iscsi, rbd, and so on.
	DriverVolumeType string `json:"driverVolumeType"`
	// Required parameters to connect the volume; these differ by
	// DriverVolumeType. For example, for the iscsi driver, see the struct
	// IscsiConnectionData below. NOTICE that you have to convert it into a map.
	ConnectionData map[string]interface{} `json:"data"`
}
type IscsiConnectionData struct {
	// Boolean indicating whether discovery was used.
	TargetDiscovered bool `json:"targetDiscovered"`
	// The IQN of the iSCSI target.
	TargetIqn string `json:"targetIqn"`
	// The portal of the iSCSI target.
	TargetPortal string `json:"targetPortal"`
	// The LUN of the iSCSI target.
	TargetLun string `json:"targetLun"`
	// The UUID of the volume.
	VolumeId string `json:"volumeId"`
	// The authentication details.
	AuthUsername string `json:"authUsername"`
	AuthPassword string `json:"authPassword"`
}
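For illustration, here is a minimal sketch of the dispatch mentioned above, using the Connector interface and constants defined in this section. NewConnector, iscsiConnector, and all the stub method bodies are hypothetical, not part of the proposed API; it assumes the standard "errors" and "fmt" packages are imported.

// iscsiConnector is a hypothetical stub standing in for a real Linux iSCSI
// implementation of Connector; a real one would talk to the kernel
// (e.g. via iscsiadm on Linux) to discover and attach the LUN.
type iscsiConnector struct{}

func (c *iscsiConnector) GetConnectorProperties(multiPath, doLocalAttach bool) (*ConnectorProperties, error) {
	return &ConnectorProperties{
		Platform:      PLATFORM_X86,
		OsType:        OS_TYPE_LINUX,
		MultiPath:     multiPath,
		DoLocalAttach: doLocalAttach,
	}, nil
}

func (c *iscsiConnector) ConnectVolume(conn *ConnectionInfo) (string, error) {
	return "", errors.New("not implemented in this sketch")
}

func (c *iscsiConnector) DisconnectVolume(conn *ConnectionInfo) (string, error) {
	return "", errors.New("not implemented in this sketch")
}

func (c *iscsiConnector) GetDevicePath(volumeId string) (string, error) {
	// Device paths are expected under the unified prefix.
	return VOLUME_LINK_DIR + volumeId, nil
}

// NewConnector picks a protocol-specific implementation of the Connector
// interface based on the driver volume type.
func NewConnector(driverVolumeType string) (Connector, error) {
	switch driverVolumeType {
	case ISCSI:
		return &iscsiConnector{}, nil
	default:
		return nil, fmt.Errorf("unsupported driver volume type: %s", driverVolumeType)
	}
}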
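And a rough usage sketch from an SP's point of view. The helper toMap, the function attachVolume, and all concrete values are made up for illustration; it assumes "encoding/json" is imported.

// toMap converts a typed connection-data struct into the generic
// map[string]interface{} expected by ConnectionInfo, via encoding/json.
func toMap(v interface{}) (map[string]interface{}, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var m map[string]interface{}
	err = json.Unmarshal(raw, &m)
	return m, err
}

// attachVolume shows the intended call sequence: build the typed iSCSI
// connection data, convert it to a map, and let the library do host-side
// discovery and attach. The returned device path can then be mounted
// into the container.
func attachVolume(c Connector) (string, error) {
	data, err := toMap(&IscsiConnectionData{
		TargetIqn:    "iqn.2017-01.io.example:volume-0001", // hypothetical values
		TargetPortal: "192.168.0.10:3260",
		TargetLun:    "1",
		VolumeId:     "volume-0001",
	})
	if err != nil {
		return "", err
	}
	return c.ConnectVolume(&ConnectionInfo{
		DriverVolumeType: ISCSI,
		ConnectionData:   data,
	})
}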