CN116382585A - Temporary volume storage method, containerized cloud platform and computer readable medium - Google Patents

Info

Publication number
CN116382585A
Authority
CN
China
Prior art keywords
pod
volume
node
temporary volume
temporary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310375437.2A
Other languages
Chinese (zh)
Inventor
Name withheld upon request (请求不公布姓名)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anchao Cloud Software Co Ltd
Original Assignee
Anchao Cloud Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anchao Cloud Software Co Ltd filed Critical Anchao Cloud Software Co Ltd
Priority to CN202310375437.2A priority Critical patent/CN116382585A/en
Publication of CN116382585A publication Critical patent/CN116382585A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specifically adapted to achieve a particular effect
    • G06F3/0608 Saving storage space on storage systems
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0628 Interfaces making use of a particular technique
    • G06F3/0662 Virtualisation aspects
    • G06F3/0665 Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F3/0667 Virtualisation aspects at data level, e.g. file, record or object virtualisation
    • G06F3/0668 Interfaces adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Stored Programmes (AREA)

Abstract

The invention provides a temporary volume storage method, a containerized cloud platform and a computer readable medium. The temporary volume storage method comprises the following steps: initiating creation of a Pod that declares use of a temporary volume, and storing attribute information of the created Pod; scheduling the Pod to any working node, calling the CSI temporary volume plug-in to create a temporary volume for the Pod, and calling the CSI temporary volume plug-in to initiate a block device creation request to a distributed storage system; the distributed storage system creating a block device in response to the request, the block device being mounted to the working node on which the Pod is scheduled, and the temporary volume being mounted into the Pod. The application thereby avoids the problem in the prior art that, because the temporary volume and the Kubelet share a directory, excessive local disk space is occupied and cluster stability is affected.

Description

Temporary volume storage method, containerized cloud platform and computer readable medium
Technical Field
The present invention relates to the field of cloud computing, and in particular, to a temporary volume storage method, a containerized cloud platform, and a computer readable medium.
Background
Kubernetes (a container orchestration system, abbreviated K8s) is an open-source container orchestration engine from Google that supports automatic deployment, large-scale scalability and containerized application management. Multiple containers can be created in Kubernetes, each running an application instance, and the group of application instances is managed, discovered and accessed through a built-in load-balancing strategy, without complex manual configuration and handling by operations staff. In Kubernetes, the lifecycle of a container may be short: containers are frequently created and deleted, and the data held in a container is purged when the container is destroyed, so the Volume is introduced to persist data in containers. A Pod is a group of containers sharing certain resources; a Volume is a shared directory in the Pod that can be accessed by multiple containers. A Volume is defined on the Pod and mounted under specific file directories by the containers in the Pod, and Kubernetes uses Volumes to share data between different containers in the same Pod.
EmptyDir (the temporary volume) is the most basic Volume type, used as temporary storage space, for example for temporary directories that some applications need at runtime without permanent storage, or for a data directory that one container needs to obtain from another container. An EmptyDir is created when its Pod is assigned to a Node (working node), and the data in the EmptyDir is permanently deleted when the Pod is destroyed. In the prior art, the EmptyDir data and the Kubelet share a directory, and the EmptyDir data storage directory is fixed once the cluster is created.
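For illustration, a conventional EmptyDir declaration looks as follows when expressed with the Kubernetes Go client types; this is a minimal sketch of the prior-art form described above, with the pod, container and image names chosen arbitrarily.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// scratchPod declares a Pod whose container mounts an EmptyDir volume:
// the volume lives on the node's local disk (under the Kubelet directory)
// and is deleted together with the Pod.
func scratchPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "scratch-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/tmp/scratch",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
}
```

Used as-is, the data under /tmp/scratch lives in the Kubelet's directory on the node's local disk, which is exactly the prior-art arrangement whose drawbacks are described next.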
In the prior art, because the EmptyDir and the Kubelet occupy the same directory and their data are stored on the same medium, storage paths must be distinguished to avoid conflicts between EmptyDir and Kubelet data. As the data stored in EmptyDirs gradually grows, the paths become overly long, excessive local disk space is occupied, and the stability of the cluster is affected.
Disclosure of Invention
The invention aims to disclose a temporary volume storage method, a containerized cloud platform and a computer readable medium, to solve the prior-art problem that the temporary volume and the Kubelet share a directory, occupying excessive local disk space and affecting cluster stability.
In order to achieve the above object, the present invention provides a temporary volume storage method, comprising: initiating creation of a Pod declaring use of a temporary volume, and storing attribute information of the created Pod;
scheduling the Pod to any working node, calling a CSI temporary volume plug-in to create a temporary volume for the Pod, and calling the CSI temporary volume plug-in to initiate a block device creation request to a distributed storage system;
the distributed storage system creating a block device in response to the block device creation request, the block device being mounted to the working node on which the Pod is scheduled, and the temporary volume being mounted into the Pod.
As a further improvement of the invention, the CSI temporary volume plug-in comprises a node driver registration program and a CSI driver, the node driver registration program registering the CSI driver's information with the daemon component of the working node it belongs to;
after the Pod is scheduled and deployed to any working node, the CSI driver is called for the Pod's declaration of the temporary volume, and when the daemon component detects that a temporary volume on the corresponding working node needs to be mounted to the Pod, the CSI driver is called again to initiate a block device creation request to the distributed storage system.
As a further improvement of the present invention, the block device creation request comprises:
the daemon component calling the node publish volume interface of the CSI driver to initiate a resource pool creation request to the distributed storage system;
once the resource pool is created, the node publish volume interface initiating a request to the distributed storage system to create a target object of the iSCSI server;
once the target object of the iSCSI server is created, the node publish volume interface initiating a virtual disk creation request to the distributed storage system;
once the virtual disk is created, the node publish volume interface initiating a request to the distributed storage system to add working node information to the target object of the iSCSI server;
the node publish volume interface initiating a request to the distributed storage system to bind the virtual disk to the working node, creating a disk mapping;
and once the disk mapping is created, the node publish volume interface initiating a request to the distributed storage system to configure the disk mapping onto the working node, the block device creation request being fulfilled after the configuration is completed.
As a further improvement of the present invention, the node publish volume interface determines whether the Pod declares use of a temporary volume;
if so, a temporary volume is created;
and if not, the mounting flow for the Pod and a persistent volume is carried out.
As a further improvement of the present invention, deletion of a Pod declaring use of the temporary volume is initiated, and the attribute information of the Pod is deleted;
the daemon component detects that the corresponding working node holds a temporary volume mounted on the Pod to be deleted, and invokes the CSI driver to initiate, in sequence, the unmounting flow of the temporary volume mounted in the Pod, the detaching flow of the block device mounted on the working node, and a block device deletion request to the distributed storage system.
As a further improvement of the present invention, the block device deletion request comprises:
the daemon component calling the node unpublish volume interface of the CSI driver to initiate a request to the distributed storage system to unbind the virtual disk from the working node;
and once the virtual disk is unbound from the working node, the node unpublish volume interface initiating a virtual disk deletion request to the distributed storage system.
As a further improvement of the invention, a volume manager in the daemon component detects that a temporary volume on the corresponding working node needs to be mounted or deleted.
As a further improvement of the invention, the daemon component initiates node publish volume or node unpublish volume interface calls to the CSI driver via RPC or gRPC.
Based on the same idea, the invention also discloses a containerized cloud platform, comprising: a master node, at least one working node hosted by the master node, and a distributed storage system;
the master node deploys a resource access component;
the working node deploys a CSI temporary volume plug-in;
a user initiates, to the resource access component, creation of a Pod declaring use of a temporary volume, and the resource access component stores attribute information of the created Pod in the ETCD component; the Pod is scheduled and deployed to any working node, the CSI temporary volume plug-in is called to create a temporary volume for the Pod, and the CSI temporary volume plug-in is called to initiate a block device creation request to the distributed storage system; the distributed storage system creates a block device in response to the request, the block device is mounted to the working node on which the Pod is scheduled, a file system providing storage services for the Pod is made, and the temporary volume is mounted into the Pod.
As a further improvement of the present invention, the CSI temporary volume plug-in includes a node driver registration program and a CSI driver, the node driver registration program registering the CSI driver's information with the daemon component of the working node;
after the Pod is scheduled and deployed to any working node, the CSI driver is called, for the Pod's declaration, to create a temporary volume for the Pod, and when the daemon component detects that a temporary volume on the corresponding working node needs to be mounted to the Pod, the CSI driver is called again to initiate a block device creation request to the distributed storage system.
As a further improvement of the invention, the user initiates deletion of a Pod declaring use of the temporary volume to the resource access component, and the resource access component clears the Pod's information from the ETCD component;
the daemon component detects that the corresponding working node holds a now-redundant mounted temporary volume, and invokes the CSI driver to initiate, in sequence, the unmounting flow of the temporary volume mounted in the Pod, the detaching flow of the block device mounted on the working node, and a block device deletion request to the distributed storage system.
The invention also discloses a computer readable medium storing computer program instructions which, when read and executed by a processor, perform the steps of the temporary volume storage method described above.
Compared with the prior art, the invention has the following beneficial effects. First, after a Pod declaring use of a temporary volume is created and scheduled to any working node, the CSI temporary volume plug-in is called to create a temporary volume for the Pod; then, after it is recognized that the newly created temporary volume on the working node has not been mounted to the Pod, the CSI temporary volume plug-in is called again to initiate a block device creation request to the distributed storage system. After the distributed storage system creates the block device in response to the request, the block device is mounted to the working node where the Pod resides, and the temporary volume is then mounted into the Pod. With this arrangement, the storage path of the temporary volume is transferred to the distributed storage system, and the block device storing the temporary volume's data is created along with the Pod, effectively avoiding the prior-art problem that an overly long temporary volume storage path occupies too much local disk and reduces cluster stability.
Second, by storing temporary volume data over the iSCSI protocol, block devices are created and mounted dynamically as Pods are created, so the temporary volume data mounted in a Pod is stored dynamically; when the Pod is deleted, the block device can be dynamically detached and deleted, which improves the flexibility of temporary volume storage while avoiding occupation of excessive local disk space.
Finally, the CSI temporary volume plug-in comprises a node driver registration program and a CSI driver whose create volume interface integrates the temporary volume creation function and the persistent volume creation function. The CSI driver can judge the volume type declared by the Pod and accordingly either carry out the persistent volume mounting flow or create a temporary volume, continuing with the block device creation flow once the temporary volume is created. This effectively avoids mixing in other types of data and improves the success rate of concurrent use of CSI.
Drawings
FIG. 1 is a block diagram of the overall flow for embodying the method of temporary volume storage in the present invention;
FIG. 2 is a timing chart showing the overall flow of the steps of creating a Pod declaring a temporary volume type, creating a temporary volume corresponding to the Pod, creating a block device, mounting the block device to a work node where the Pod is deployed, and mounting the temporary volume to the corresponding Pod;
FIG. 3 is a specific flow diagram of creating and scheduling a Pod to a work node, creating a temporary volume and a block device, and mounting the created block device to the work node scheduling the Pod and mounting the temporary volume to the Pod in the present invention;
FIG. 4 is a flowchart illustrating the specific steps of deleting a Pod declaring a temporary volume type, unmounting the temporary volume mounted on the Pod, detaching the block device from the working node, and deleting the block device in the present invention;
FIG. 5 is a block diagram of the specific flow of deleting a Pod, unmounting the temporary volume mounted on the Pod, detaching the block device from the working node, and deleting the block device in accordance with the present invention;
FIG. 6 is a diagram of a cloud platform architecture in accordance with the present invention;
FIG. 7 is a block flow diagram of a computer readable medium in accordance with the present invention.
Detailed Description
The present invention will be described in detail below with reference to the embodiments shown in the drawings, but it should be understood that the invention is not limited to these embodiments; functional, method, or structural equivalents and alternatives made by those skilled in the art according to these embodiments are within the scope of protection of the present invention.
Before explaining the various embodiments of the present application in detail, the meanings of the main technical terms and english abbreviations involved in the various embodiments are explained or defined as necessary.
A Kubernetes cluster (hereinafter simply "cluster") refers to an open-source system for managing containerized applications across multiple hosts in a cloud platform. CSI is the container storage interface of the Kubernetes cluster, a standard interface defined by Kubernetes. A Pod is a group of tightly coupled containers and the basic unit of Kubernetes orchestration. In the application scenarios shown in this invention, a CSI temporary volume plug-in consisting of a node driver registrar (node-driver-registrar) and a CSI driver (csi-driver), deployed in sidecar mode, is introduced into the CSI node Pod, enabling the Kubernetes cluster to use the distributed storage system.
The resource access component (kube-apiserver) is the request entry of the Kubernetes cluster and performs create, read, update and delete operations on resources; in the application scenarios shown in this invention, an external volume manager (external-volume-manager) registers a custom resource definition (CRD) with the resource access component (kube-apiserver). The resource controller (controller-manager) monitors resources to ensure they remain in the desired state. The ETCD component is the database of the Kubernetes cluster, storing all of the cluster's relevant data. A daemon component (Kubelet) is deployed on each working node and is responsible for the concrete Pod creation flow.
A temporary volume (EmptyDir), the most basic volume type, is created when a Pod is assigned to a working node, and the data in the temporary volume is completely deleted when the Pod is destroyed. A virtual disk (vdisk) refers to a resource application for storage resources. iSCSI, also known as IP-SAN (an IP-network-based storage area network), is a storage technology based on the Internet and the SCSI-3 protocol; it is an industry standard that uses the TCP/IP protocol to transfer SCSI block commands over existing IP networks, on which messages and block data can be transferred simultaneously without installing a separate fibre network.
Referring to fig. 1 to fig. 5, in one embodiment of the temporary volume storage method disclosed in the present invention, a CSI temporary volume plug-in formed by a node driver registration program and a CSI driver is introduced into the CSI node Pod, where the driver registration program registers the CSI driver's information with the daemon component of the node it belongs to. A Pod declaring use of a temporary volume is created in the Kubernetes cluster and scheduled to any working node; the daemon component calls the CSI driver, for the Pod's temporary volume declaration, to create a temporary volume for the Pod. After the temporary volume is created, the daemon component detects that the working node holds a temporary volume not yet mounted to the Pod and calls the CSI driver again to initiate a block device creation request to the distributed storage system; the distributed storage system responds by creating the block device, the block device is mounted to the node where the Pod is scheduled, and the temporary volume is mounted into the Pod. Compared with the prior-art approach of fixing the temporary volume's data storage directory, this moves temporary volume data to the distributed storage system, with block devices created and mounted along with the Pod and its temporary volume, which effectively saves local disk space and improves Kubernetes cluster stability. In this application, a working node is understood to be a computing node on which Pods are deployed; it may be a computer device such as a physical machine.
Referring to fig. 1, in the present embodiment, the temporary volume storage method includes steps S1 to S3:
S1. Initiate creation of a Pod declaring use of the temporary volume, and store attribute information of the created Pod.
S2. Schedule the Pod to any working node, call the CSI temporary volume plug-in to create a temporary volume for the Pod, and call the CSI temporary volume plug-in to initiate a block device creation request to the distributed storage system.
S3. The distributed storage system 30 creates a block device in response to the block device creation request; the block device is mounted to the working node on which the Pod is scheduled, and the temporary volume is mounted into the Pod.
Specifically, referring to fig. 2, the CSI temporary volume plug-in includes a node driver registration program 23 and a CSI driver 22; the node driver registration program 23 registers the CSI driver 22's information with the daemon component 21 of the working node. The information of the CSI driver 22 includes the name of the CSI driver 22 and its interface information. Multiple CSI drivers may be deployed on the same working node; by registering the name and interface information of the CSI driver 22 with the daemon component 21 of the working node, the driver registration program 23 enables communication between the daemon component 21 and the CSI driver 22. A minimal registration sketch follows below.
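For illustration, the following Go sketch shows one way a node driver registration program can expose the CSI driver's name and endpoint to the daemon component through the Kubelet plugin-registration gRPC service; the driver name, socket paths and version list are assumptions of this sketch, not values specified by the present application.

```go
package main

import (
	"context"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrar answers the Kubelet's plugin-registration queries on behalf of
// the CSI driver, telling the Kubelet the driver's name and endpoint.
type registrar struct {
	driverName string // assumed driver name
	endpoint   string // unix socket where the CSI driver itself listens
}

func (r *registrar) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              r.driverName,
		Endpoint:          r.endpoint,
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

func (r *registrar) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	// The Kubelet calls back here to report whether registration succeeded.
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// The Kubelet watches this directory for registration sockets (path assumed).
	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/ephemeral.csi.example.com-reg.sock")
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, &registrar{
		driverName: "ephemeral.csi.example.com",
		endpoint:   "/var/lib/kubelet/plugins/ephemeral.csi.example.com/csi.sock",
	})
	_ = srv.Serve(lis)
}
```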
Referring to fig. 2 and 3, step S1 includes steps S11 to S13:
S11. Initiate creation of a Pod declaring use of the temporary volume.
S12. Store the Pod information in the ETCD component 13.
S13. Schedule the Pod to any working node.
Specifically, when a Pod is created in the Kubernetes cluster, the Pod information includes the declared volume type and the declaration that the distributed storage system is to be used, and the created Pod information is stored in the ETCD component 13 of the Kubernetes cluster. After a Pod whose declared volume type is a temporary volume is scheduled to any working node, the daemon component 21 of that working node calls the CSI driver 22, for the Pod's declaration of the temporary volume, to create a temporary volume for the Pod.
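By way of illustration, a Pod that declares use of a temporary volume served by a CSI plug-in can be expressed with an inline CSI volume source, as sketched below; the driver name is an assumption of this sketch, since the present application does not name its driver.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// csiEphemeralPod declares a Pod whose temporary volume is served by a CSI
// driver instead of the Kubelet's local directory (step S1).
func csiEphemeralPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-ephemeral-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/tmp/scratch",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Inline CSI volume: created with the Pod, deleted with the Pod.
					CSI: &corev1.CSIVolumeSource{
						Driver: "ephemeral.csi.example.com", // assumed driver name
					},
				},
			}},
		},
	}
}
```

Compared with the plain EmptyDir form, the inline CSI volume keeps the Pod-bound lifecycle while letting the named driver decide where the data actually lives, which is the behavior the present application relies on.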
Referring to fig. 2 and 3, step S2 includes steps S21 to S24:
S21. Judge whether the volume type declared by the Pod is a temporary volume.
S22. If not, carry out the persistent volume creation and mounting flow.
S23. If so, create a temporary volume for the Pod.
S24. Upon judging that the working node holds a temporary volume to be mounted to the Pod, call the CSI temporary volume plug-in to initiate a block device creation request to the distributed storage system.
In steps S21 to S23, the CSI driver 22 determines the volume type to create from the volume type the Pod declares; if the Pod declares a persistent volume, the CSI driver 22 performs the persistent volume creation and mounting procedure for the Pod. Only when the Pod declares the volume type as a temporary volume does the CSI driver 22 create a temporary volume for the Pod and invoke the node publish volume interface to initiate the block device creation flow to the distributed storage system 30. With this arrangement, first, the temporary volume creation function and the persistent volume creation function are integrated in the CSI driver 22, which effectively improves the working efficiency of the CSI plug-in. Second, by having the CSI driver 22 judge the type of volume the Pod declares, a Pod declaring a temporary volume cannot be mixed into the persistent volume creation flow, nor a Pod declaring a persistent volume into the temporary volume creation flow, improving the accuracy of CSI plug-in use. A sketch of this branch follows below.
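The branch of steps S21 to S23 can be sketched as follows, assuming the driver receives the ephemeral flag through the volume context under the upstream Kubernetes convention key csi.storage.k8s.io/ephemeral; the two helper functions are hypothetical stand-ins for the flows described in the text.

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct{}

// NodePublishVolume is the "node publish volume interface": it decides, from
// the Pod's declaration, whether to run the temporary volume flow (create
// volume + block device) or the ordinary persistent volume mount.
func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	if req.GetVolumeContext()["csi.storage.k8s.io/ephemeral"] == "true" {
		// S23/S24: create the temporary volume, then ask the distributed
		// storage system for a block device and mount it at TargetPath.
		if err := provisionEphemeral(ctx, req.GetVolumeId(), req.GetTargetPath()); err != nil {
			return nil, err
		}
		return &csi.NodePublishVolumeResponse{}, nil
	}
	// S22: fall through to the persistent volume mounting flow.
	if err := publishPersistent(ctx, req); err != nil {
		return nil, err
	}
	return &csi.NodePublishVolumeResponse{}, nil
}

// provisionEphemeral and publishPersistent are hypothetical stand-ins for
// the flows described in the text; a real driver would implement them.
func provisionEphemeral(ctx context.Context, volumeID, targetPath string) error { return nil }
func publishPersistent(ctx context.Context, req *csi.NodePublishVolumeRequest) error { return nil }
```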
Specifically, after the temporary volume is created in step S23, if the daemon component 21 detects that a temporary volume on its working node needs to be mounted to the Pod, the CSI driver 22 is called again to initiate a block device creation request to the distributed storage system 30; this is step S24, which comprises steps S240 to S245:
S240. The daemon component 21 calls the node publish volume interface (not labeled) of the CSI driver 22 to initiate a resource pool creation request to the distributed storage system, and waits for the creation of the resource pool to complete. In this embodiment, the resource pool is used to partition storage resources in the distributed storage system and improves storage performance: when a resource is requested, the resource pool allocates one resource and marks it busy (i.e., it can no longer be allocated to a virtual disk), and a resource marked busy cannot be allocated again; once use of the resource ends, the resource pool clears its busy mark to indicate that the resource can serve the next request. While the aforementioned node publish volume interface is logically deployed in the CSI driver 22, the distributed storage system 30 is logically viewed as a collection of one type of storage resources of the resource pool; it should be understood that other types of resources, such as CPUs, may also be deployed in pools.
S241. Once the resource pool is created, the node publish volume interface initiates a request to the distributed storage system to create the target object of the iSCSI server, and waits for its creation to complete. iSCSI combines the existing SCSI interface with Ethernet technology; the TCP/IP-based protocol connects an iSCSI server (Target) and a client (Initiator). In this embodiment, the iSCSI server is the distributed storage system and the iSCSI client is the Kubernetes cluster. In this step, the targetcli command is used to abstract the configuration content of the iSCSI shared resource into the form of a directory, so that configuration information can subsequently be filled into the corresponding directory.
S242. Once the target object of the iSCSI server is created, the node publish volume interface initiates a virtual disk creation request to the distributed storage system and waits for the virtual disk creation to complete. In this step, the resources of the resource pool created in step S240 are used to create a virtual disk (vdisk).
S243. Once the virtual disk is created, the node publish volume interface initiates a request to the distributed storage system to add the working node's information to the target object of the iSCSI server, and waits for the addition to complete. The server and client in iSCSI are addressed through IQNs (iSCSI Qualified Names), whose format is: iqn.date.domain-name:name-assigned-by-the-domain-owner. In this embodiment, the IQN of the working node on which the Pod is scheduled is added to the target object, so that the target object allows connections from that working node's IP address.
S244. Once the working node information has been added to the target object of the iSCSI server, the node publish volume interface initiates a request to the distributed storage system to bind the virtual disk to the working node, creating a disk mapping, and waits for the disk mapping creation to complete. In this step, the virtual disk is bound to the working node in the distributed storage system 30 to create a mapped disk associated with the iSCSI client; that is, binding the virtual disk to the working node creates a disk mapping, improving subsequent access efficiency between the temporary volume and the block device.
S245. Once the disk mapping is created, the node publish volume interface initiates a request to the distributed storage system to configure the disk mapping onto the working node; when the configuration completes, the block device creation request is fulfilled. In this embodiment, the discovery and login functions of the iscsiadm tool are used to configure the disk mapping onto the working node on which the Pod is scheduled, completing the block device creation flow. A sketch of the whole sequence follows below.
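The six sub-steps above map naturally onto targetcli on the storage side and iscsiadm on the working node; the sketch below strings them together with shell-outs. The resource pool and virtual disk provisioning of steps S240 and S242 go through the distributed storage system's own interface and are therefore elided here; the IQNs, backing device path and portal address are illustrative assumptions.

```go
package driver

import (
	"fmt"
	"os/exec"
)

// run executes one CLI step and surfaces its output on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

// createBlockDevice walks steps S241 and S243-S245 for one temporary volume.
func createBlockDevice(targetIQN, initiatorIQN, backingDev, portal string) error {
	// S241: create the iSCSI target object on the server side.
	if err := run("targetcli", "/iscsi", "create", targetIQN); err != nil {
		return err
	}
	// S243: add the working node's IQN to the target's ACL so that only
	// the node scheduling the Pod may connect.
	if err := run("targetcli",
		fmt.Sprintf("/iscsi/%s/tpg1/acls", targetIQN), "create", initiatorIQN); err != nil {
		return err
	}
	// S244: expose the virtual disk through the target as a LUN, i.e. bind
	// the virtual disk to the working node to form the disk mapping.
	if err := run("targetcli", "/backstores/block", "create", "tmpvol", backingDev); err != nil {
		return err
	}
	if err := run("targetcli",
		fmt.Sprintf("/iscsi/%s/tpg1/luns", targetIQN), "create", "/backstores/block/tmpvol"); err != nil {
		return err
	}
	// S245: on the working node, discover the target and log in so the
	// disk mapping appears as a local block device.
	if err := run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal); err != nil {
		return err
	}
	return run("iscsiadm", "-m", "node", "-T", targetIQN, "-p", portal, "--login")
}
```

Each step can fail independently, which is why the text has the node publish volume interface wait for every sub-step to complete before issuing the next request.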
Referring to fig. 2 and 3, step S3 includes steps S31 to S32:
S31. Mount the temporary volume into the Pod.
S32. Make a file system that provides storage services for the Pod.
Through the above steps, when data is written to the temporary volume, the temporary volume's mount point, which in the prior art is shared with the daemon component 21, is transferred to the distributed storage system 30; the virtual disk and the disk mapping are created along with the Pod, and data written to the temporary volume is likewise stored in the distributed storage system 30. This effectively avoids the prior-art problem that the temporary volume's mount point, shared with the daemon component 21, occupies too much local disk because of lengthy paths and thereby degrades cluster stability.
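Steps S31 and S32 correspond to what the Kubernetes mount utilities already provide; a minimal sketch, assuming the logged-in iSCSI disk has appeared under a local device path and that ext4 is an acceptable file system for the temporary volume:

```go
package driver

import (
	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// formatAndMountTempVolume makes a file system on the freshly attached block
// device (S32) and mounts it at the Pod's target path (S31). devicePath is
// the local node path of the iSCSI disk, e.g. a /dev/disk/by-path/...-iscsi-
// entry (illustrative).
func formatAndMountTempVolume(devicePath, targetPath string) error {
	mounter := &mount.SafeFormatAndMount{
		Interface: mount.New(""),
		Exec:      utilexec.New(),
	}
	// FormatAndMount only formats when the device carries no file system yet.
	return mounter.FormatAndMount(devicePath, targetPath, "ext4", nil)
}
```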
It should be noted that iSCSI provides block device storage: block-level SCSI commands are encapsulated and sent over IP networks, with built-in multipath capability that can provide advanced load-balancing algorithms and intelligently balance storage traffic across multiple server-side and array-side storage paths. A block device is a kind of I/O device in which information is stored in fixed-size blocks, each with its own path address, so information anywhere on the device can be read through its address. In steps S240 to S245 above, the configuration content of the iSCSI shared resource is abstracted into directory form by creating a target object and using the targetcli command; the information of the working node scheduling the Pod (i.e., its IQN) is added to that directory, and the virtual disk is then bound to the working node information to finally form a disk mapping. The path address of this disk mapping is the block device created in the present application, and data can be written to and read from the temporary volume by directly accessing the disk mapping's address. Compared with the file storage provided by NFS, where writing and reading of data pass through a network link to the same server, this effectively improves data read and write efficiency, so temporary volume data stored over the iSCSI protocol performs better.
Referring to fig. 4 and 5, in the present embodiment, the temporary volume storage method further includes steps S4 to S6:
S4. Initiate deletion of a Pod declaring use of the temporary volume, and delete the attribute information of the Pod.
S5. The daemon component 21 detects that the corresponding working node holds a now-redundant mounted temporary volume, and invokes the CSI driver 22 to initiate, in sequence, the unmounting flow of the temporary volume mounted in the Pod and the detaching flow of the block device mounted on the working node.
S6. The CSI driver 22 initiates a block device deletion request to the distributed storage system.
Referring to fig. 4 and 5, step S4 includes steps S41 to S43:
S41. Initiate deletion of a Pod declaring use of the temporary volume.
S42. Delete the attribute information of the Pod to be deleted.
S43. Detect that the corresponding working node holds a temporary volume mounted on the Pod to be deleted.
The user initiates deletion of a Pod declaring use of the temporary volume to the Kubernetes cluster. The attribute information stored in the ETCD component 13 when the Pod was created is deleted when the Pod is deleted. When the daemon component 21 of the working node scheduling the Pod to be deleted detects the temporary volume bound to that Pod, step S5 is performed.
Referring to fig. 4 and 5, step S5 includes steps S51 to S53:
S51. Judge whether the volume type declared by the Pod to be deleted is a temporary volume.
S52. If not, end the flow; if so, unmount the temporary volume from the Pod to be deleted.
S53. Detach the block device from the working node of the Pod to be deleted.
In step S43, upon detecting the temporary volume bound to the Pod to be deleted, the daemon component 21 calls the CSI driver 22 to judge whether the volume type declared by the Pod to be deleted is a temporary volume; if not, the Pod is not to be handled here, and the flow ends. If so, the CSI driver 22 unmounts the temporary volume from the Pod and detaches the block device configured on the working node, that is, the disk mapping created in step S245, and step S6 is then performed. A sketch of this teardown follows below.
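The unmount-and-detach side described here can be sketched with the same mount utilities; the iSCSI logout stands in for detaching the block device from the working node, and the paths are illustrative.

```go
package driver

import (
	"os/exec"

	mount "k8s.io/mount-utils"
)

// teardownTempVolume unmounts the temporary volume from the Pod to be
// deleted (S52) and detaches the block device from the working node (S53)
// by logging out of the node's iSCSI session.
func teardownTempVolume(targetPath, targetIQN, portal string) error {
	// S52: unmount the temporary volume and remove the Pod's mount point.
	if err := mount.CleanupMountPoint(targetPath, mount.New(""), false); err != nil {
		return err
	}
	// S53: drop the iSCSI session so the block device disappears from the node.
	return exec.Command("iscsiadm", "-m", "node",
		"-T", targetIQN, "-p", portal, "--logout").Run()
}
```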
Referring to fig. 4 and 5, step S6 includes steps S61 to S63:
S61. The daemon component 21 calls the node unpublish volume interface of the CSI driver 22 to initiate a request to the distributed storage system to unbind the virtual disk from the working node.
S62. Once the virtual disk is unbound from the working node, the node unpublish volume interface initiates a virtual disk deletion request to the distributed storage system; when the virtual disk deletion completes, the block device deletion request is fulfilled.
After the unbinding of the virtual disk from the working node completes the deletion of the disk mapping, the virtual disk itself is deleted. According to the above flow, when deletion of a Pod using a temporary volume is declared, the daemon component 21 detects the temporary volume bound to the Pod to be deleted on the working node and invokes the node unpublish volume interface (not labeled) of the CSI driver 22 to initiate a block device deletion request to the distributed storage system, so that the block device is deleted in real time along with the Pod. The node unpublish volume interface is logically deployed in the CSI driver 22.
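On the storage side, the deletion request of steps S61 and S62 mirrors the creation sequence; a sketch with targetcli, where deletion of the virtual disk itself is represented by a hypothetical placeholder for the distributed storage system's own API:

```go
package driver

import (
	"fmt"
	"os/exec"
)

// deleteBlockDevice unbinds the disk mapping from the working node (S61)
// and then deletes the virtual disk (S62).
func deleteBlockDevice(targetIQN, initiatorIQN, vdiskName string) error {
	// S61: remove the initiator's ACL entry, then the target itself, which
	// unbinds the virtual disk (the disk mapping) from the working node.
	if err := exec.Command("targetcli",
		fmt.Sprintf("/iscsi/%s/tpg1/acls", targetIQN), "delete", initiatorIQN).Run(); err != nil {
		return err
	}
	if err := exec.Command("targetcli", "/iscsi", "delete", targetIQN).Run(); err != nil {
		return err
	}
	// S62: delete the backing virtual disk in the distributed storage system.
	return deleteVdisk(vdiskName)
}

// deleteVdisk is hypothetical; the patent's storage system exposes its own call.
func deleteVdisk(name string) error { return nil }
```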
When a Pod declared to use a temporary volume migrates across nodes, the disk mapping configured on the working node before migration is unbound, the node information in the target object is changed to the working node after migration, the virtual disk is re-bound to the post-migration working node to form a new disk mapping, and that disk mapping is configured onto the post-migration working node, so that no data is lost during cross-node Pod migration.
In this embodiment, the volume manager in the daemon component 21 detects that a temporary volume on the corresponding working node needs to be mounted or deleted. The daemon component 21 initiates node publish volume or node unpublish volume interface calls to the CSI driver 22 via RPC or gRPC; specifically, in this embodiment the calls are made via gRPC. A minimal client-side sketch follows below.
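The gRPC call mentioned here follows the ordinary CSI pattern of dialing the driver's local unix socket; a minimal client-side sketch (socket path assumed):

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// callNodePublish shows how the daemon component reaches the CSI driver's
// node publish volume interface over gRPC on a local unix socket.
func callNodePublish(ctx context.Context, req *csi.NodePublishVolumeRequest) error {
	conn, err := grpc.Dial(
		"unix:///var/lib/kubelet/plugins/ephemeral.csi.example.com/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = csi.NewNodeClient(conn).NodePublishVolume(ctx, req)
	return err
}
```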
Based on the same idea, the invention also discloses a containerized cloud platform, comprising: a master node 10, at least one working node hosted by the master node 10, and a distributed storage system 30;
the master node 10 deploys a resource access component 11, a resource controller 12, and a scheduler 14, the resource controller 12 monitoring the resource access component 11;
the working node deploys a CSI temporary volume plug-in;
a user initiates, to the resource access component 11, creation of a Pod declaring use of a temporary volume, and the resource access component 11 saves the created Pod's attribute information to the ETCD component 13; the Pod is scheduled and deployed to any working node, the CSI temporary volume plug-in is called to create a temporary volume for the Pod, and the CSI temporary volume plug-in is called to initiate a block device creation request to the distributed storage system 30; the distributed storage system 30 creates a block device in response to the request, the block device is mounted to the working node on which the Pod is scheduled, a file system providing storage services for the Pod is made, and the temporary volume is mounted into the Pod.
The CSI temporary volume plug-in comprises a node driver registration program 23 and a CSI driver 22; the node driver registration program 23 registers the information of the CSI driver 22 with the daemon component 21 of the working node;
the scheduler 14 schedules and deploys the Pod to any working node; the CSI driver 22 is called, for the Pod's declaration, to create a temporary volume for the Pod, and when the daemon component 21 detects that the corresponding working node holds a temporary volume to be mounted to the Pod, the CSI driver 22 is called again to initiate a block device creation request to the distributed storage system 30, as shown in fig. 2 and 3; the specific block device creation flow is described above and is not repeated here.
The user initiates deletion of a Pod declaring use of the temporary volume to the resource access component 11, and the resource access component 11 clears the Pod's information from the ETCD component 13;
the daemon component 21 detects that the corresponding working node holds a temporary volume mounted on the Pod to be deleted, and invokes the CSI driver 22 to initiate, in sequence, the unmounting flow of the temporary volume mounted in the Pod (step S4 above), the detaching flow of the block device mounted on the working node (step S5 above), and a block device deletion request to the distributed storage system 30 (step S6 above).
It should be noted that, although the ETCD component 13 is shown in fig. 5 as being disposed outside the master node 10, in practical applications the ETCD component 13 may be disposed inside the master node 10.
The present invention also discloses a computer readable medium, in which computer program instructions 901 are stored, the computer program instructions 901, when read and executed by a processor 902, perform the steps in the above-mentioned temporary volume storage method.
The above list of detailed descriptions is only specific to practical embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent embodiments or modifications that do not depart from the spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted for clarity only. The specification should be taken as a whole, and the technical solutions in the various embodiments may be suitably combined to form other implementations that will be apparent to those skilled in the art.

Claims (12)

1. A temporary volume storage method, comprising:
initiating creation of a Pod declaring use of a temporary volume, and storing attribute information of the created Pod;
scheduling the Pod to any working node, calling a CSI temporary volume plug-in to create a temporary volume for the Pod, and calling the CSI temporary volume plug-in to initiate a block device creation request to a distributed storage system;
the distributed storage system creating a block device in response to the block device creation request, the block device being mounted to the working node on which the Pod is scheduled, and the temporary volume being mounted into the Pod.
2. The temporary volume storage method according to claim 1, wherein the CSI temporary volume plug-in includes a node driver registration program and a CSI driver, the node driver registration program registering the CSI driver's information with the daemon component of the working node it belongs to;
after the Pod is scheduled and deployed to any working node, the CSI driver is called for the Pod's declaration of the temporary volume, and when the daemon component detects that a temporary volume on the corresponding working node needs to be mounted to the Pod, the CSI driver is called again to initiate a block device creation request to the distributed storage system.
3. The temporary volume storage method according to claim 2, wherein the block device creation request comprises:
the daemon component calling the node publish volume interface of the CSI driver to initiate a resource pool creation request to the distributed storage system;
once the resource pool is created, the node publish volume interface initiating a request to the distributed storage system to create a target object of the iSCSI server;
once the target object of the iSCSI server is created, the node publish volume interface initiating a virtual disk creation request to the distributed storage system;
once the virtual disk is created, the node publish volume interface initiating a request to the distributed storage system to add working node information to the target object of the iSCSI server;
the node publish volume interface initiating a request to the distributed storage system to bind the virtual disk to the working node, creating a disk mapping;
and once the disk mapping is created, the node publish volume interface initiating a request to the distributed storage system to configure the disk mapping onto the working node, the block device creation request being fulfilled after the configuration is completed.
4. The temporary volume storage method according to claim 3, wherein the node publish volume interface determines whether the Pod declares use of a temporary volume;
if so, a temporary volume is created;
and if not, the mounting flow for the Pod and a persistent volume is carried out.
5. The temporary volume storage method according to claim 3, further comprising: initiating deletion of a Pod declaring use of the temporary volume, and deleting the attribute information of the Pod;
the daemon component detecting that the corresponding working node holds a temporary volume mounted on the Pod to be deleted, and invoking the CSI driver to initiate, in sequence, the unmounting flow of the temporary volume mounted in the Pod, the detaching flow of the block device mounted on the working node, and a block device deletion request to the distributed storage system.
6. The temporary volume storage method according to claim 5, wherein the block device deletion request comprises:
the daemon component calling the node unpublish volume interface of the CSI driver to initiate a request to the distributed storage system to unbind the virtual disk from the working node;
and once the virtual disk is unbound from the working node, the node unpublish volume interface initiating a virtual disk deletion request to the distributed storage system.
7. The temporary volume storage method according to claim 6, wherein a volume manager in the daemon component detects that a temporary volume on the corresponding working node needs to be mounted or deleted.
8. The temporary volume storage method according to claim 5, wherein the daemon component initiates node publish volume or node unpublish volume interface calls to the CSI driver via RPC or gRPC.
9. A containerized cloud platform, comprising:
a master node, at least one working node hosted by the master node, and a distributed storage system;
the master node deploys a resource access component;
the working node deploys a CSI temporary volume plug-in;
a user initiates, to the resource access component, creation of a Pod declaring use of a temporary volume, and the resource access component stores attribute information of the created Pod in the ETCD component; the Pod is scheduled and deployed to any working node, the CSI temporary volume plug-in is called to create a temporary volume for the Pod, and the CSI temporary volume plug-in is called to initiate a block device creation request to the distributed storage system; the distributed storage system creates a block device in response to the request, the block device is mounted to the working node on which the Pod is scheduled, a file system providing storage services for the Pod is made, and the temporary volume is mounted into the Pod.
10. The containerized cloud platform of claim 9, wherein: the CSI temporary volume plug-in comprises a node driver registration program and a CSI driver, the node driver registration program registering the CSI driver's information with the daemon component of the working node;
after the Pod is scheduled and deployed to any working node, the CSI driver is called, for the Pod's declaration, to create a temporary volume for the Pod, and when the daemon component detects that a temporary volume on the corresponding working node needs to be mounted to the Pod, the CSI driver is called again to initiate a block device creation request to the distributed storage system.
11. The containerized cloud platform of claim 10, wherein: the user initiates deletion of a Pod declaring use of the temporary volume to the resource access component, and the resource access component clears the Pod's information from the ETCD component;
the daemon component detects that the corresponding working node holds a temporary volume mounted on the Pod to be deleted, and invokes the CSI driver to initiate, in sequence, the unmounting flow of the temporary volume mounted in the Pod, the detaching flow of the block device mounted on the working node, and a block device deletion request to the distributed storage system.
12. A computer readable medium, characterized in that computer program instructions are stored therein which, when read and executed by a processor, perform the steps of the temporary volume storage method according to any one of claims 1 to 8.
CN202310375437.2A 2023-04-11 2023-04-11 Temporary volume storage method, containerized cloud platform and computer readable medium Pending CN116382585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310375437.2A CN116382585A (en) 2023-04-11 2023-04-11 Temporary volume storage method, containerized cloud platform and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310375437.2A CN116382585A (en) 2023-04-11 2023-04-11 Temporary volume storage method, containerized cloud platform and computer readable medium

Publications (1)

Publication Number Publication Date
CN116382585A true CN116382585A (en) 2023-07-04

Family

ID=86970761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310375437.2A Pending CN116382585A (en) 2023-04-11 2023-04-11 Temporary volume storage method, containerized cloud platform and computer readable medium

Country Status (1)

Country Link
CN (1) CN116382585A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056029A (en) * 2023-10-09 2023-11-14 苏州元脑智能科技有限公司 Resource processing method, system, device, storage medium and electronic equipment
CN117056029B (en) * 2023-10-09 2024-02-09 苏州元脑智能科技有限公司 Resource processing method, system, device, storage medium and electronic equipment
CN117519613A (en) * 2024-01-08 2024-02-06 之江实验室 Storage volume sharing method and system for k8s clusters


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination