CN117667298A - Method and device for starting container, computing node and shared storage equipment

Method and device for starting container, computing node and shared storage equipment

Info

Publication number
CN117667298A
CN117667298A
Authority
CN
China
Prior art keywords
container
file system
shared storage
image
storage device
Prior art date
Legal status
Pending
Application number
CN202211057183.1A
Other languages
Chinese (zh)
Inventor
王耀辉
Current Assignee
Chengdu Huawei Technology Co Ltd
Original Assignee
Chengdu Huawei Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Huawei Technology Co Ltd filed Critical Chengdu Huawei Technology Co Ltd
Priority to CN202211057183.1A priority Critical patent/CN117667298A/en
Priority to PCT/CN2023/080081 priority patent/WO2024045541A1/en
Publication of CN117667298A publication Critical patent/CN117667298A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers
    • G06F16/11: File system administration, e.g. details of archiving or snapshots
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Abstract

The embodiment of the application discloses a method and a device for starting a container, a computing node, and a shared storage device, and belongs to the technical field of containers. The method is applied to a computing node and comprises the following steps: sending a request for establishing a container image file system of a first container to the shared storage device; receiving an establishment completion message of the container image file system sent by the shared storage device, wherein the container image file system stores a container image file of the first container; creating a root directory of the first container and mounting the container image file system under the root directory; and controlling the shared storage device to initialize the container configuration file in the container image file system. By adopting the method and the device, the storage resources of the computing node can be saved.

Description

Method and device for starting container, computing node and shared storage equipment
Technical Field
The present disclosure relates to the field of container technologies, and in particular, to a method and apparatus for starting a container, a computing node, and a shared storage device.
Background
Virtualization technology enables the hardware resources of a server to be shared; virtual machines and containers are among the virtualization technologies most commonly used at present. Compared with virtual machines, containers are widely used in lightweight applications due to their light weight.
Currently, to use a container, a computing node needs to download the container's image from a container image repository to local storage and start the container on the basis of that image.
However, since the storage resources of a computing node are typically limited, storing container images places a certain storage pressure on the computing node.
Disclosure of Invention
The embodiment of the application provides a method, an apparatus, a computing node, and a shared storage device for starting a container, which can relieve the storage pressure that storing container images places on a computing node when the computing node starts a container. The technical scheme is as follows:
in a first aspect, there is provided a method of container startup, the method being applied to a computing node, the method comprising:
the computing node sends a request to the shared storage device to establish a container image file system for the first container. After receiving the establishment completion message of the container image file system sent by the shared storage device, the computing node creates a root directory for the first container and mounts the container image file system under the root directory, so that the computing node can access the container image file system. The container image file system is created by the shared storage device and stores the container image file of the first container. Further, the computing node may initialize the container configuration file of the container by controlling the shared storage device.
It can be seen that in the solution provided in the embodiment of the present application, computing nodes do not need to store container images locally; the container images are all stored in the shared storage device and can be shared by multiple computing nodes. A computing node can access a container image file system simply by mounting the container image file system of the shared storage device under the root directory of the container. Therefore, the storage resources of the computing node are effectively saved, and the storage pressure that storing container images would otherwise cause when the computing node starts a container is reduced.
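The start-up handshake described above can be sketched as follows. This is an illustrative in-memory model, not the patent's implementation; all class and method names (`SharedStorage`, `build_image_fs`, and so on) are hypothetical.

```python
class SharedStorage:
    """In-memory stand-in for the shared storage device."""
    def __init__(self):
        self.images = {"app:v1": {"config.json": b"{}"}}   # image id -> image files
        self.filesystems = {}                              # fs id -> per-container files

    def build_image_fs(self, image_id):
        # Build a per-container file system view from the stored image.
        fs_id = f"fs-{image_id}"
        self.filesystems[fs_id] = dict(self.images[image_id])
        return fs_id                                       # the "establishment completion message"

    def init_config(self, fs_id, data):
        # Initialization happens on the storage side, under node control.
        self.filesystems[fs_id]["config.json"] = data

class ComputeNode:
    def __init__(self, storage):
        self.storage = storage
        self.mounts = {}                                   # root directory -> fs id

    def start_container(self, image_id, config):
        fs_id = self.storage.build_image_fs(image_id)      # 1. request fs establishment
        root = f"/containers/{image_id}/rootfs"            # 2. create the root directory
        self.mounts[root] = fs_id                          # 3. mount the fs under it
        self.storage.init_config(fs_id, config)            # 4. control remote config init
        return root

storage = SharedStorage()
node = ComputeNode(storage)
root = node.start_container("app:v1", b'{"hostname": "c1"}')
```

Note that the node never holds the image files themselves, only a mount reference, which is the source of the storage savings the text describes.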
In one possible implementation, before sending the request for establishing the container image file system of the first container to the shared storage device, the computing node may determine whether the shared storage device stores the container image of the first container, and if so, send the request for establishing the container image file system of the first container to the shared storage device.
In one possible implementation, if the computing node determines that the container image of the first container is not stored in the shared storage device, the computing node may first send a download request for the container image of the first container to the container image repository. After receiving the container image of the first container sent by the repository, the computing node sends the container image to the shared storage device. Having done so, the computing node can delete its local copy of the container image to save storage resources.
In one possible implementation, after receiving the container image of a second container sent by some computing node, the shared storage device sends a shared storage message for the container image of the second container to the other computing nodes of the container platform, or to the other computing nodes that share that container image. The shared storage message notifies the other computing nodes that the container image of the second container is stored in the shared storage device. In this way, if the other computing nodes are to start the second container, they do not need to download its image from the container image repository.
In one possible implementation, the container image file system is a union file system obtained by the shared storage device combining the read-only file system corresponding to each container image layer of the first container with a top-level read-write file system, wherein each read-only file system stores the files of the corresponding container image layer, and the files of the container image comprise the files of every container image layer.
In one possible implementation, controlling the shared storage device to initialize the container configuration file in the container image file system includes:
controlling the shared storage device to copy the container configuration file to the top-level read-write file system, and initializing the container configuration file in the top-level read-write file system.
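The union-file-system variant above can be sketched as follows: read-only layers are stacked beneath a writable top layer, upper layers shadow lower ones on reads, and writes (such as initializing the configuration file) first land in the writable layer, leaving the shared read-only layers untouched. This is a toy model under those stated assumptions, not the patent's code.

```python
class UnionFS:
    def __init__(self, ro_layers):
        self.ro_layers = ro_layers      # read-only layers, lowest first
        self.rw_top = {}                # top-level writable layer

    def read(self, path):
        if path in self.rw_top:         # writable layer shadows everything
            return self.rw_top[path]
        for layer in reversed(self.ro_layers):  # higher layers win
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Copy-up semantics: all modifications land in the writable top
        # layer; the shared read-only layers are never changed.
        self.rw_top[path] = data

layer0 = {"/bin/sh": b"os"}                         # e.g. OS layer
layer1 = {"/etc/config.json": b"{}"}                # e.g. app layer with config
fs = UnionFS([layer0, layer1])
fs.write("/etc/config.json", b'{"ip": "10.0.0.2"}') # per-container initialization
```

Because the read-only layers stay pristine, the same layers can back the union file systems of many containers at once.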
In one possible implementation, the container image file system is a snapshot file system generated by the shared storage device from a read-write file system that stores the files of the container image. The read-write file system is generated by the shared storage device according to the dependency relationships among the container image layers of the first container.
In one possible implementation, the read-write file system is a snapshot file system generated by the shared storage device from a union file system corresponding to the container image, where the union file system is formed from the read-only file systems corresponding to the container image layers of the first container.
In one possible implementation, controlling the shared storage device to initialize the container configuration file in the container image file system includes:
controlling the shared storage device to initialize the target data blocks of the container configuration file in the container image file system, and recording the initialized target data blocks in a block difference tracking file. In this way, the whole file does not need to be copied before modification, which improves initialization efficiency and, in turn, the start-up efficiency of the container.
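The block-level initialization just described can be sketched as follows: rather than copying the whole configuration file, only the target blocks are rewritten, and their indices are recorded in a difference-tracking record. The 4-byte block size and the function shape are assumptions for illustration only.

```python
BLOCK = 4  # illustrative block size

def init_blocks(file_bytes, updates):
    """Apply per-block updates; return the new content and the diff-tracking record."""
    blocks = [file_bytes[i:i + BLOCK] for i in range(0, len(file_bytes), BLOCK)]
    diff_tracking = []                      # stands in for the block difference tracking file
    for index, new_block in updates.items():
        blocks[index] = new_block           # rewrite only the target block
        diff_tracking.append(index)         # record which block was initialized
    return b"".join(blocks), sorted(diff_tracking)

original = b"AAAABBBBCCCC"                  # three 4-byte blocks
content, diffs = init_blocks(original, {1: b"XXXX"})
```

Only one block is written and tracked here; the untouched blocks continue to be served from the shared image, which is why no whole-file copy is needed.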
In a second aspect, there is provided a method for starting a container, the method being applied to a shared storage device, the method comprising:
receiving a request, sent by a first computing node, for establishing a container image file system of a first container;
establishing the container image file system of the first container, wherein the container image file system comprises the container image file of the first container;
receiving an initialization request, sent by the first computing node, for the container configuration file in the container image file system;
and initializing the container configuration file under the control of the first computing node.
In one possible implementation, the method further includes:
receiving the container image of the first container sent by the first computing node;
and sending a shared storage message for the container image of the first container to the computing nodes of the container platform other than the first computing node, wherein the shared storage message notifies those computing nodes that the container image of the first container is stored in the shared storage device.
In one possible implementation, the method further includes:
receiving the container image of the first container sent by the first computing node;
determining the computing nodes that share the container image;
and sending a shared storage message for the container image of the first container to the computing nodes sharing the container image other than the first computing node, wherein the shared storage message notifies those computing nodes that the container image of the first container is stored in the shared storage device.
In one possible implementation, establishing the container image file system of the first container includes:
combining the read-only file system corresponding to each container image layer of the first container with a top-level read-write file system to obtain the container image file system of the first container.
In one possible implementation, initializing the container configuration file includes:
copying the container configuration file to the top-level read-write file system, and initializing the container configuration file in the top-level read-write file system.
In one possible implementation, establishing the container image file system of the first container includes:
generating, according to the dependency relationships among the container image layers of the first container, a read-write file system storing the files of each container image layer;
generating a snapshot file system from the read-write file system;
and taking the snapshot file system as the container image file system of the first container.
In one possible implementation, establishing the container image file system of the first container includes:
forming a union file system from the read-only file systems corresponding to the container image layers of the first container;
generating a snapshot file system from the union file system;
and taking the snapshot file system as the container image file system of the first container.
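The snapshot variants above share one idea: the shared storage device keeps a single base file system per image and hands each container a copy-on-write snapshot of it, so per-container writes never touch the shared base. The sketch below illustrates that idea under hypothetical names; it is not the patent's implementation.

```python
class Snapshot:
    """Copy-on-write snapshot over a shared base file system."""
    def __init__(self, base):
        self.base = base        # shared view, common to all snapshots
        self.delta = {}         # per-snapshot copy-on-write overlay

    def read(self, path):
        return self.delta.get(path, self.base[path])

    def write(self, path, data):
        self.delta[path] = data  # the base stays untouched

base_fs = {"/etc/config.json": b"{}", "/bin/app": b"bin"}  # one base per image
snap1 = Snapshot(base_fs)   # container image file system for container 1
snap2 = Snapshot(base_fs)   # container image file system for container 2
snap1.write("/etc/config.json", b'{"id": 1}')  # init only affects snapshot 1
```

Each container thus gets an independent, writable view at the cost of storing only its own differences.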
In one possible implementation, initializing the container configuration file includes:
initializing the target data blocks of the container configuration file in the container image file system, and recording the initialized target data blocks in a block difference tracking file.
In a third aspect, there is provided an apparatus for starting a container, the apparatus comprising:
a sending module, configured to send a request for establishing a container image file system of a first container to a shared storage device;
a receiving module, configured to receive an establishment completion message of the container image file system sent by the shared storage device, wherein the container image file system stores the container image file of the first container;
a mounting module, configured to create a root directory of the first container and mount the container image file system under the root directory;
and a modification module, configured to control the shared storage device to initialize the container configuration file in the container image file system.
In one possible implementation, the sending module is configured to:
if the container image of the first container is stored in the shared storage device, send a request for establishing the container image file system of the first container to the shared storage device.
In one possible implementation, the sending module is further configured to:
if it is determined that the container image of the first container is not stored in the shared storage device, send a download request for the container image of the first container to a container image repository;
the receiving module is further configured to receive the container image of the first container sent by the container image repository;
and the sending module is further configured to send the container image of the first container to the shared storage device.
In one possible implementation, the receiving module is further configured to:
receive a shared storage message, sent by the shared storage device, for the container image of a second container, wherein the shared storage message notifies the computing node that the container image of the second container is stored in the shared storage device.
In one possible implementation, the container image file system is a union file system obtained by the shared storage device combining the read-only file system corresponding to each container image layer of the first container with a top-level read-write file system, wherein each read-only file system stores the files of the corresponding container image layer, and the files of the container image comprise the files of every container image layer.
In one possible implementation, the modification module is configured to:
control the shared storage device to copy the container configuration file to the top-level read-write file system, and initialize the container configuration file in the top-level read-write file system.
In one possible implementation, the container image file system is a snapshot file system generated by the shared storage device from a read-write file system storing the files of the container image, wherein the read-write file system is generated by the shared storage device according to the dependency relationships among the container image layers of the first container.
In one possible implementation, the read-write file system is a snapshot file system generated by the shared storage device from a union file system corresponding to the container image, where the union file system is formed from the read-only file systems corresponding to the container image layers of the first container.
In one possible implementation, the modification module is configured to:
control the shared storage device to initialize the target data blocks of the container configuration file in the container image file system, and record the initialized target data blocks in a block difference tracking file.
In a fourth aspect, there is provided an apparatus for starting a container, the apparatus comprising:
a receiving module, configured to receive a request, sent by a first computing node, for establishing a container image file system of a first container;
an establishing module, configured to establish the container image file system of the first container, wherein the container image file system comprises the container image file of the first container;
a sending module, configured to send an establishment completion message of the container image file system to the first computing node;
and a modification module, configured to initialize the container configuration file under the control of the first computing node.
In one possible implementation, the receiving module is further configured to:
receive the container image of the first container sent by the first computing node;
and the sending module is further configured to send a shared storage message for the container image of the first container to the computing nodes of the container platform other than the first computing node, wherein the shared storage message notifies those computing nodes that the container image of the first container is stored in the shared storage device.
In one possible implementation, the receiving module is further configured to:
receive the container image of the first container sent by the first computing node;
determine the computing nodes sharing the container image;
and the sending module is further configured to send a shared storage message for the container image of the first container to the computing nodes sharing the container image other than the first computing node, wherein the shared storage message notifies those computing nodes that the container image of the first container is stored in the shared storage device.
In one possible implementation, the establishing module is configured to:
combine the read-only file system corresponding to each container image layer of the first container with a top-level read-write file system to obtain the container image file system of the first container.
In one possible implementation, the modification module is configured to:
copy the container configuration file to the top-level read-write file system, and initialize the container configuration file in the top-level read-write file system.
In one possible implementation, the establishing module is configured to:
establish a read-write file system storing the files of each container image layer;
generate a snapshot file system from the read-write file system;
and take the snapshot file system as the container image file system of the first container.
In one possible implementation, the establishing module is configured to:
form a union file system from the read-only file systems corresponding to the container image layers of the first container;
generate a snapshot file system from the union file system;
and take the snapshot file system as the container image file system of the first container.
In one possible implementation, the modification module is configured to:
initialize the target data blocks of the container configuration file in the container image file system, and record the initialized target data blocks in a block difference tracking file.
In a fifth aspect, there is provided a computing node comprising a processor and a memory, the memory being configured to store at least one piece of program code, which is loaded by the processor to execute the method for starting a container provided by the first aspect or any one of its possible implementations.
In a sixth aspect, there is provided a shared storage device comprising a processor and a memory, the memory being configured to store at least one piece of program code, which is loaded by the processor to execute the method for starting a container provided by the second aspect or any one of its possible implementations.
In a seventh aspect, there is provided a computer-readable storage medium configured to store at least one piece of program code for implementing the method for starting a container provided by the first aspect or any one of its possible implementations. The storage medium includes, but is not limited to, volatile memory such as random access memory, and non-volatile memory such as flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
In an eighth aspect, there is provided a computer program product which, when run on a computing device, causes the computing device to implement the method for starting a container provided by the first aspect or any one of its possible implementations. The computer program product may be the container engine and storage client in the embodiments of the present application.
Drawings
FIG. 1 is a schematic illustration of an implementation environment provided by embodiments of the present application;
FIG. 2 is a schematic diagram of an implementation environment provided by embodiments of the present application;
FIG. 3 is a schematic diagram of an implementation environment provided by embodiments of the present application;
FIG. 4 is a flow chart of a method for starting a container provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of creating a container image file system according to an embodiment of the present application;
FIG. 6 is a schematic diagram of creating a container image file system according to an embodiment of the present application;
FIG. 7 is a schematic diagram of creating a container image file system according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an apparatus for starting a container according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an apparatus for starting a container according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a computing node according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a shared storage device according to an embodiment of the present application.
Detailed Description
To facilitate an understanding of the present application, key terms and key concepts related to the present application are first introduced.
1. Container image
A container image comprises the files of a plurality of container image layers and the metadata (manifest) of the container image. Referring to fig. 1, which shows the container image layers, a compressed operating system (OS) image file is stored in layer 0, the compressed dependency libraries of an application program are stored in layer 1, and the compressed binary file of the application program is stored in layer 2. The metadata of the container image records the relationships among the container image layers, the identifier of the container image, and the like.
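A hypothetical, simplified manifest for the three-layer image described above might look like the following. Field names here are invented for illustration; real manifest formats (for example the OCI image manifest) differ.

```python
manifest = {
    "image_id": "app:v1",
    "layers": [
        {"id": "layer0", "content": "compressed OS image file"},
        {"id": "layer1", "content": "application dependency libraries",
         "parent": "layer0"},
        {"id": "layer2", "content": "application binary", "parent": "layer1"},
    ],
}

def layer_order(manifest):
    """Resolve bottom-to-top layer order from the recorded parent links."""
    by_parent = {layer.get("parent"): layer["id"] for layer in manifest["layers"]}
    order, current = [], None          # the bottom layer has no parent
    while current in by_parent:
        current = by_parent[current]
        order.append(current)
    return order
```

The inter-layer relationships recorded in the metadata are exactly what lets the shared storage device stack the layers in the right order when building a container image file system.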
2. Mounting (mount)
Mounting is the means by which a device gains access to a file system. A file system is mounted to a specified directory, and accessing that directory is then equivalent to accessing the file system. The specified directory is called a mount point. Because mounting hides any files originally under the mount point, the mount point is usually a newly created empty directory.
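A toy model can illustrate the two properties just stated: paths under a mount point resolve into the mounted file system, and files originally under the mount point become hidden while it is mounted. This is a sketch for intuition only, not how a real VFS is implemented.

```python
class VFS:
    def __init__(self, files):
        self.files = dict(files)   # path -> data in the original tree
        self.mounts = {}           # mount point -> mounted file system (dict)

    def mount(self, point, fs):
        self.mounts[point] = fs

    def read(self, path):
        for point, fs in self.mounts.items():
            if path.startswith(point + "/"):
                # Paths under a mount point resolve inside the mounted fs;
                # the original files under the point are no longer reachable.
                return fs[path[len(point):]]
        return self.files[path]

vfs = VFS({"/mnt/old.txt": b"hidden"})
vfs.mount("/mnt", {"/new.txt": b"visible"})
```

After the mount, `/mnt/new.txt` is readable while `/mnt/old.txt` is hidden, which is why an empty directory is the usual choice of mount point.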
The following is an exemplary description of an implementation environment of an embodiment of the present application.
Referring to fig. 2, an implementation environment of an embodiment of the present application may include a container platform and a shared storage device, where the container platform may include at least one container cluster, each container cluster including a plurality of computing nodes. Computing node 1, computing node 2, …, computing node N shown in fig. 2 may belong to at least one container cluster.
A container engine may be installed on each computing node; the container engine provides functions such as starting and managing containers. In addition, a storage client may be installed on the computing node; under the direction of the container engine, the storage client interacts with the shared storage device to implement functions such as mounting the container image file system and modifying files.
The shared storage device stores container images that are available for shared use by the computing nodes. When a computing node needs to start a container, the shared storage device can construct a corresponding container image file system for the computing node to mount and access.
Optionally, the implementation environment of the embodiment of the present application may further include a container image repository. When a computing node wants to start a certain container and the shared storage device does not hold that container's image, the computing node may request the download of the container image from the container image repository and send the downloaded container image to the shared storage device, where it is stored so that other computing nodes no longer need to download it from the container image repository. On this basis, if all container images in the container image repository are stored in the shared storage device and newly generated container images are stored directly in the shared storage device, the container image repository no longer needs to be maintained.
The shared storage device may be deployed in a variety of ways.
For example, the shared storage device may be deployed externally, i.e. deployed separately outside the container platform. In this deployment, the shared storage device may run on a single machine or be deployed in a distributed manner across a server cluster. In particular, the distributed manner may include a multiple-copy manner, an erasure coding (EC) manner, and a peer-to-peer (P2P) manner. The computing nodes may access the shared storage device through a network protocol, which may include the Common Internet File System (CIFS) protocol, the Network File System (NFS) protocol, the Internet Small Computer System Interface (iSCSI) protocol, the Fibre Channel (FC) protocol, the NVMe over Fabrics (NVMe-oF) protocol, the RDMA over Converged Ethernet (RoCE) protocol, the Internet Wide Area RDMA Protocol (iWARP), or a proprietary protocol, among others.
For another example, the shared storage device may be deployed internally, i.e. within the container platform, in which case the shared storage may be distributed across multiple computing nodes. The embodiment of the application does not limit the specific deployment manner of the shared storage device.
A method for starting a container according to an embodiment of the present application is briefly described below with reference to fig. 3.
In the scenario shown in fig. 3, computing node 1 and computing node 2 need the same container image to start their containers, and that container image is stored in the shared storage device. In this case, computing node 1 and computing node 2 each need only send a request to establish a container image file system to the shared storage device. The shared storage device then creates container image file system 1 and container image file system 2 for computing node 1 and computing node 2, respectively, based on the container image. Computing node 1 mounts container image file system 1 under the root directory of its container through its storage client, and likewise computing node 2 mounts container image file system 2 under the root directory of its container through its storage client. Then, computing node 1 initializes the container configuration file stored in container image file system 1 through its storage client, and computing node 2 does the same for container image file system 2. At this point, both computing nodes have completed container start-up. Throughout this process, the container image stored in the shared storage device is shared by computing node 1 and computing node 2; neither node needs to store the container image locally, which saves the storage resources of the computing nodes.
The following describes in detail a method for starting a container according to an embodiment of the present application, and as shown in fig. 4, the processing of the method may include the following steps:
step 401, a first computing node sends a request for establishing a container image file system of a first container to a shared storage device.
The first computing node may be any computing node in fig. 2 or fig. 3.
The execution timing of the step 401 may also be different in different implementation environments, and the execution timing of the step 401 is described below with respect to several possible implementation environments.
The implementation environment I is not deployed with a container mirror warehouse, container mirrors of all containers are stored in the shared storage device, and newly created container mirrors are also stored in the shared storage device
In this case, after receiving the start instruction of the first container, the first computing node may directly send, through the storage client, a request for establishing a container image file system of the first container to the shared storage device. The establishment request may carry the identifier of the container image.
Implementation environment II: a container image repository is deployed, and newly created container images are stored in the container image repository.
In this case, the processing of the first computing node after receiving the start instruction of the first container may include the steps of:
step A1, a first computing node determines whether a container image of a first container is stored in a shared storage device.
After receiving the start instruction of the first container, the first computing node can judge whether the container image of the first container is stored in the shared storage device or not by judging whether metadata of the container image of the first container is recorded locally.
The following step A6 describes the case of computing the metadata of which case the node will record the container image.
Step A2: if the first computing node determines that the container image of the first container is stored in the shared storage device, it sends a request for establishing the container image file system of the first container to the shared storage device.

Step A3: if the first computing node determines that the container image of the first container is not stored in the shared storage device, it sends a download request for the container image of the first container to the container image repository.

Step A4: the container image repository sends the container image of the first container to the first computing node.

That is, after receiving the download request sent by the first computing node, the container image repository returns the container image of the first container to the first computing node.

Step A5: the first computing node sends the container image of the first container to the shared storage device.
Step A6: the shared storage device sends a shared storage message for the container image of the first container to the computing nodes in the container platform other than the first computing node.

The shared storage message is used to notify the other computing nodes that the container image of the first container is stored in the shared storage device.

Taking the implementation scenario shown in fig. 2 as an example, assuming that the first computing node is computing node 1, in step A6 the shared storage device may send the shared storage message for the container image of the first container to computing node 2 through computing node N of the container platform.
In addition, this step A6 may be replaced by the following process:
the shared storage device sends a shared storage message for the container image of the first container to the other computing nodes, besides the first computing node, among the computing nodes that share the container image of the first container.

In this case, a correspondence between container images and computing nodes may be preconfigured in the shared storage device; the correspondence records which computing nodes may use which container images.

Taking the implementation scenario shown in fig. 2 as an example, assume that the first computing node is computing node 1 and that the shared storage device records that computing node 1 and computing node 2 may use the container image of the first container. After receiving the container image of the first container sent by computing node 1, the shared storage device queries the correspondence between container images and computing nodes, determines that computing node 2 may also use the container image, and then sends a shared storage message for the container image of the first container to computing node 2.
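The correspondence-based notification above can be sketched in a few lines. This is an illustrative Python model, not the patented implementation; the mapping, image identifier, and node names (image_to_nodes, "image-first", "node1") are hypothetical stand-ins for the correspondence preconfigured in the shared storage device.

```python
# Hypothetical sketch: the shared storage device keeps a preconfigured mapping
# from container image to the computing nodes allowed to share it, and notifies
# every node in that set except the node that uploaded the image.

image_to_nodes = {"image-first": ["node1", "node2"]}  # preconfigured correspondence

def nodes_to_notify(image_id: str, uploader: str) -> list:
    """Return the computing nodes that should receive the shared storage message."""
    return [n for n in image_to_nodes.get(image_id, []) if n != uploader]

print(nodes_to_notify("image-first", "node1"))  # ['node2']
```

An image with no entry in the correspondence yields an empty notification list, so no message is sent for it.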
Correspondingly, for step A6, after receiving the shared storage message for the container image of the first container sent by the shared storage device, a computing node may locally record the metadata of the container image carried in the shared storage message and refresh its cache.

Step A7: the first computing node sends a request for establishing the container image file system of the first container to the shared storage device.

The execution order of step A6 and step A7 is not limited; step A7 may be executed at any time after step A5.
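The decision flow of steps A1 to A7 on the first computing node can be summarized as a short sketch. This is only an illustrative Python outline under assumed data structures; the names local_metadata, repository, and shared_storage are hypothetical stand-ins, and the metadata recording shown here compresses the A6 notification into a local bookkeeping step.

```python
# Sketch of steps A1-A7 from the first computing node's perspective: check local
# metadata (A1); if the image is absent from shared storage, download it from the
# image repository and upload it (A3-A5); finally request the image file system (A2/A7).

def start_container(image_id, local_metadata, repository, shared_storage):
    if image_id not in local_metadata:                  # A1: no metadata recorded locally
        image = repository[image_id]                    # A3/A4: download from the repository
        shared_storage["images"][image_id] = image      # A5: upload to shared storage
        local_metadata.add(image_id)                    # record that the image is now shared
    return {"type": "establish_fs", "image": image_id}  # A2/A7: establishment request

meta, repo = set(), {"img1": b"compressed layers"}
storage = {"images": {}}
request = start_container("img1", meta, repo, storage)
print(request["type"], "img1" in storage["images"])  # establish_fs True
```

A second start of the same container skips the download and upload entirely, which is the point of sharing the image in storage.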
Step 402, the shared storage device establishes a container image file system of the first container.
The container image file system includes the files of each container image layer of the first container.

In implementation, the shared storage device may establish the container image file system in various ways, which are described below with reference to the examples in fig. 5 to fig. 7.
Method one: establishing the container image file system through a snapshot mechanism

Method one is described with reference to fig. 5. Referring to fig. 5, assume that the container image of the first container includes three container image layers: layer0, layer1, and layer2. According to the dependency relationship among layer0, layer1, and layer2 recorded in the metadata of the container image, the layers are arranged from the bottom layer to the top layer as layer0, layer1, and layer2.
Step B1: the shared storage device creates a first file system for layer0 and decompresses the files in layer0 into the first file system.

Step B2: the shared storage device creates a first snapshot file system for the first file system through a snapshot mechanism and decompresses the files in layer1 into the first snapshot file system.

For example, the shared storage device may create a readable and writable copy of the first file system through a file system snapshot technique or a cloning technique and decompress the files in layer1 into that copy, resulting in the first snapshot file system. The first snapshot file system stores the files decompressed from layer0 and the files decompressed from layer1.

Step B3: the shared storage device creates a second snapshot file system for the first snapshot file system through the snapshot mechanism and decompresses the files in layer2 into the second snapshot file system.

For example, the shared storage device may create a readable and writable copy of the first snapshot file system through a file system snapshot technique or a cloning technique and decompress the files in layer2 into that copy, resulting in the second snapshot file system. The second snapshot file system stores the files decompressed from layer0, layer1, and layer2.

Step B4: the shared storage device creates the container image file system as a snapshot of the second snapshot file system through the snapshot mechanism.

For example, the shared storage device may create a readable and writable copy of the second snapshot file system through a file system snapshot technique or a cloning technique and use that copy as the container image file system. The container image file system stores the files decompressed from layer0, layer1, and layer2.
It should be noted that steps B1 to B3 may be performed after the shared storage device obtains the container image, and step B4 may be performed after the shared storage device receives the request for establishing the container image file system. Therefore, when starting the container, the computing node does not need to wait for the files of each container image layer to be decompressed, which improves container startup efficiency.
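The snapshot chain of steps B1 to B4 can be illustrated with a toy model in which a Python dict stands in for a file tree and a dict copy stands in for a snapshot or clone operation; a real shared storage device would use file system snapshot/clone primitives instead. The layer contents and paths below are invented for illustration.

```python
# Toy model of method one: each container image layer is decompressed into a
# writable snapshot (copy) of the file system built so far (steps B1-B3); the
# final container image file system is itself a snapshot of the top (step B4).

def build_by_snapshots(layers):
    """layers: list of dicts, bottom layer (layer0) first.
    Returns (container image file system, intermediate snapshots)."""
    fs = dict(layers[0])                 # B1: first file system holds layer0's files
    snapshots = [fs]
    for layer in layers[1:]:             # B2/B3: snapshot, then decompress next layer into it
        snap = dict(snapshots[-1])       # writable copy of the previous file system
        snap.update(layer)               # an upper layer's files shadow lower ones
        snapshots.append(snap)
    image_fs = dict(snapshots[-1])       # B4: final snapshot handed to the compute node
    return image_fs, snapshots

layer0 = {"/bin/sh": "v0", "/etc/app.conf": "default"}
layer1 = {"/etc/app.conf": "tuned"}      # layer1 overrides a layer0 file
layer2 = {"/opt/tool": "bin"}
image_fs, snaps = build_by_snapshots([layer0, layer1, layer2])
print(image_fs["/etc/app.conf"])  # tuned
```

Because every intermediate snapshot is kept, steps B1 to B3 can run ahead of time and step B4 reduces to one cheap copy at container start, matching the note above.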
Method two: establishing the container image file system through a union file system mechanism

Method two is described with reference to fig. 6. Referring to fig. 6, assume that the container image of the first container includes three container image layers: layer0, layer1, and layer2. According to the dependency relationship among layer0, layer1, and layer2 recorded in the metadata of the container image, the layers are arranged from the bottom layer to the top layer as layer0, layer1, and layer2.
Step C1: the shared storage device creates a first file system for layer0 and decompresses the files in layer0 into the first file system. The first file system is a read-only file system.

Step C2: the shared storage device creates a second file system for layer1 and decompresses the files in layer1 into the second file system. The second file system is a read-only file system.

Step C3: the shared storage device creates a third file system for layer2 and decompresses the files in layer2 into the third file system. The third file system is a read-only file system.

Step C4: the shared storage device creates a top-layer readable and writable file system and combines the first file system, the second file system, the third file system, and the top-layer readable and writable file system into the container image file system through a union file system mechanism.

The union file system mechanism may be, for example, an overlay file system (overlayfs), a union file system (unionfs), or an advanced multi-layered unification file system (aufs).
It should be noted that steps C1 to C3 may be performed after the shared storage device obtains the container image, and step C4 may be performed after the shared storage device receives the request for establishing the container image file system. Therefore, when starting the container, the computing node does not need to wait for the files of each container image layer to be decompressed, which improves container startup efficiency.
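The lookup semantics that union file system mechanisms such as overlayfs provide — the top writable layer is searched first, then the read-only layers from top to bottom, and writes only ever touch the top layer — can be sketched as a toy Python class. The layer contents below are invented for illustration.

```python
# Toy model of method two: a union view over read-only layers plus a top
# readable-writable layer. Path lookup goes top-down; writes land in the top layer.

class UnionFS:
    def __init__(self, lower_layers):
        # lower_layers is given bottom-first (layer0 first); search upper-most first
        self.lowers = list(reversed(lower_layers))
        self.upper = {}                      # top readable-writable file system

    def read(self, path):
        if path in self.upper:               # the writable layer shadows everything
            return self.upper[path]
        for layer in self.lowers:            # then read-only layers, top to bottom
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        self.upper[path] = data              # read-only layers are never modified

fs = UnionFS([{"/a": "layer0"}, {"/a": "layer1", "/b": "layer1"}, {"/c": "layer2"}])
print(fs.read("/a"))   # layer1 -- the higher layer shadows layer0's copy
fs.write("/a", "modified")
print(fs.read("/a"))   # modified
```

Because the read-only layers are shared and untouched, the same decompressed layers can back the container image file systems of many computing nodes.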
Method three: establishing the container image file system by combining a snapshot mechanism with a union file system mechanism

Method three is described with reference to fig. 7. Referring to fig. 7, assume that the container image of the first container includes three container image layers: layer0, layer1, and layer2. According to the dependency relationship among layer0, layer1, and layer2 recorded in the metadata of the container image, the layers are arranged from the bottom layer to the top layer as layer0, layer1, and layer2.
Step D1: the shared storage device creates a first file system for layer0 and decompresses the files in layer0 into the first file system. The first file system is a read-only file system.

Step D2: the shared storage device creates a second file system for layer1 and decompresses the files in layer1 into the second file system. The second file system is a read-only file system.

Step D3: the shared storage device creates a third file system for layer2 and decompresses the files in layer2 into the third file system. The third file system is a read-only file system.

Step D4: the shared storage device combines the first file system, the second file system, and the third file system into a union file system through a union file system mechanism, and then creates a snapshot file system for the union file system through a snapshot mechanism to serve as the container image file system.

For example, the shared storage device may create a readable and writable copy of the union file system (which may also be referred to as a snapshot file system) through a file system snapshot technique or a cloning technique; this copy is the container image file system.
It should be noted that steps D1 to D3 may be performed after the shared storage device obtains the container image, and step D4 may be performed after the shared storage device receives the request for establishing the container image file system. Therefore, when starting the container, the computing node does not need to wait for the files of each container image layer to be decompressed, which improves container startup efficiency.
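Method three's two stages — merge the read-only layers into a union, then take a writable snapshot of the merged view — can be sketched in the same toy dict model used above; the layer contents are invented for illustration, and a dict copy again stands in for a snapshot/clone primitive.

```python
# Toy model of method three: merge the read-only layers (first half of step D4),
# then snapshot the merged view; the snapshot is the container image file system.

def merge_layers(layers):
    """layers: list of dicts, bottom layer first; later layers shadow earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

union = merge_layers([{"/a": "l0"}, {"/a": "l1"}, {"/b": "l2"}])
image_fs = dict(union)        # snapshot: a writable copy; the union stays pristine
image_fs["/a"] = "init"       # container initialization modifies only the snapshot
print(union["/a"])            # l1 -- the shared union is untouched
```

Keeping the union pristine means step D4's snapshot can be repeated cheaply for every container that needs the same image.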
After the container image file system is established, the shared storage device returns an establishment completion message to the storage client of the first computing node.
Step 403, the first computing node creates a root directory of the first container, and mounts the container image file system under the root directory.
In implementation, to enable remote access to the container image file system, the container engine of the first computing node invokes the storage client to create the root directory of the first container and mounts the container image file system with the root directory as the mount point. Thus, when the first computing node accesses the root directory, it is effectively accessing the container image file system.

Step 404: the first computing node controls the shared storage device to initialize the container configuration file in the container image file system.

In implementation, after completing the mounting of the container image file system, the first computing node may access the container image file system through a network protocol and control the shared storage device to initialize the container configuration file in the container image file system. Initialization, that is, modifying specified data in the container configuration file to preset initial data, is essentially a file modification process.
For different methods of creating the container image file system, there may be different methods of modifying files in the container image file system.
For the first and third methods in step 402, the method for modifying the file may be as follows:
A difference bitmap may be established for each file in the shared storage device, where each position in the difference bitmap corresponds to one data block of the file. When a data block is to be modified, the data block is read from the file and modified; after the modification, the value at the corresponding position in the difference bitmap is updated from 0 to 1 to indicate that the data block at that position has been modified, and the modified data block is recorded. Specifically, the modified data block may be redirected to a block change tracking (Block Change Tracking, BCT) file for saving. When the data block is subsequently accessed, the access is directed to the modified data block in the BCT file.

Correspondingly, when initializing the container configuration file, the data block that needs to be initialized is first read from the container configuration file and modified to the preset initial data. Then, in the difference bitmap corresponding to the container configuration file, the value at the position corresponding to that data block is updated from 0 to 1, and the initialized data block is redirected to the BCT file for saving.
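The difference-bitmap scheme above amounts to redirect-on-write: a modified block goes to the BCT file, its bit flips from 0 to 1, and later reads of that block are served from the BCT copy while the original file stays unchanged. A minimal Python model, with invented block contents:

```python
# Toy model of the difference bitmap + BCT mechanism: writes are redirected to the
# BCT file and tracked in the bitmap; reads consult the bitmap to pick the source.

class TrackedFile:
    def __init__(self, blocks):
        self.blocks = list(blocks)        # original data blocks, never modified in place
        self.bitmap = [0] * len(blocks)   # one position per data block, 0 = unmodified
        self.bct = {}                     # block change tracking (BCT) file

    def write_block(self, i, data):
        self.bct[i] = data                # redirect the modified block to the BCT file
        self.bitmap[i] = 1                # mark the block as modified

    def read_block(self, i):
        return self.bct[i] if self.bitmap[i] else self.blocks[i]

conf = TrackedFile(["ip=0.0.0.0", "port=8080"])
conf.write_block(1, "port=9000")          # initialization modifies block 1
print(conf.read_block(1))                 # port=9000, served from the BCT file
print(conf.blocks[1])                     # port=8080, original block untouched
```

Because the original blocks are never overwritten, the snapshot-based container image file systems of other containers sharing the same layers remain intact.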
For the second method, the method for modifying the file may be as follows:
copying the file to be modified into the top-level readable-writable file system corresponding to the container mirror image, and then modifying the file in the top-level readable-writable file system.
Correspondingly, when initializing the container configuration file, copying the container configuration file to a top-layer readable and writable file system corresponding to the container mirror image, and then modifying the data appointed in the container configuration file into preset initial data in the top-layer readable and writable file.
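This copy-then-modify behavior is the copy-up step that union file systems perform before the first write to a file in a read-only layer. A minimal Python sketch, with an invented configuration file path and contents:

```python
# Toy model of copy-up in method two: before a file in a read-only layer is
# modified, it is copied into the top readable-writable layer; the modification
# (here, initialization to preset data) then happens only in the top layer.

def copy_up_and_init(lower, upper, path, init_data):
    if path not in upper:
        upper[path] = lower[path]   # copy the file up into the writable layer
    upper[path] = init_data         # initialize it in the writable layer only

lower = {"/etc/container.conf": "state=stopped"}   # read-only layer
upper = {}                                         # top readable-writable layer
copy_up_and_init(lower, upper, "/etc/container.conf", "state=initialized")
print(upper["/etc/container.conf"])   # state=initialized
print(lower["/etc/container.conf"])   # state=stopped -- read-only layer unchanged
```

As with the BCT scheme, the read-only layers stay shared and unmodified, so many containers can be initialized from the same image concurrently.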
In the scheme provided by this embodiment of the present application, the computing nodes do not need to store container images in local storage; the container images are stored in the shared storage device and can be shared by multiple computing nodes. A computing node only needs to mount the container image file system in the shared storage device under the root directory of the container to access the container image file system and thereby read and modify the files of the container image. Therefore, the storage resources of the computing nodes are effectively saved, and the storage pressure caused by storing container images when a computing node starts a container is reduced.
Based on the same technical concept, the embodiment of the present application further provides an apparatus for starting a container, where the apparatus may be a computing node, and referring to fig. 8, the apparatus includes a sending module 810, a receiving module 820, a mounting module 830, and a modifying module 840, where:
a sending module 810, configured to send a request for establishing a container image file system of the first container to the shared storage device;
a receiving module 820, configured to receive an establishment completion message of the container image file system sent by the shared storage device, where the container image file system stores the container image files of the first container;
a mount module 830, configured to create a root directory of the first container, and mount the container image file system under the root directory;
and a modifying module 840, configured to control the shared storage device to initialize the container configuration file in the container image file system.
In one possible implementation, the sending module 810 is configured to:
if it is determined that the container image of the first container is stored in the shared storage device, send a request for establishing the container image file system corresponding to the first container to the shared storage device.
In one possible implementation, the sending module 810 is further configured to:
if it is determined that the container image of the first container is not stored in the shared storage device, send a download request for the container image of the first container to the container image repository;

the receiving module 820 is further configured to receive the container image of the first container sent by the container image repository;

the sending module is further configured to send the container image of the first container to the shared storage device.
In one possible implementation, the receiving module 820 is further configured to:
and receiving a shared storage message of the container mirror image of the second container sent by the shared storage device, wherein the shared storage message is used for notifying the computing node that the container mirror image of the second container is stored in the shared storage device.
In one possible implementation, the container image file system is a union file system obtained by the shared storage device by combining the read-only file system corresponding to each container image layer of the first container with the top-layer readable and writable file system, where each read-only file system stores the files of the corresponding container image layer, and the files of the container image include the files of each container image layer.
In one possible implementation, the modifying module 840 is configured to:
control the shared storage device to copy the container configuration file into the top-layer readable and writable file system and initialize the container configuration file in the top-layer readable and writable file system.

In one possible implementation, the container image file system is a snapshot file system generated by the shared storage device from a readable and writable file system that stores the files of the container image, where the readable and writable file system is generated by the shared storage device according to the dependency relationship among the container image layers of the first container.

In one possible implementation, the readable and writable file system is a snapshot file system generated by the shared storage device from a union file system corresponding to the container image, where the union file system is formed from the read-only file systems corresponding to the container image layers of the first container.
In one possible implementation, the modifying module 840 is configured to:
control the shared storage device to initialize the target data blocks in the container configuration file in the container image file system and record the initialized target data blocks into a BCT file.
In the scheme provided by this embodiment of the present application, the computing nodes do not need to store container images in local storage; the container images are stored in the shared storage device and can be shared by multiple computing nodes. A computing node only needs to mount the container image file system in the shared storage device under the root directory of the container to access the container image file system and thereby read and modify the files of the container image. Therefore, the storage resources of the computing nodes are effectively saved, and the storage pressure caused by storing container images when a computing node starts a container is reduced.
It should be noted that: in the device for starting a container provided in the above embodiment, when the container is started, only the division of the functional modules is used for illustration, in practical application, the functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the computing node is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the device for starting the container provided in the above embodiment and the method embodiment for starting the container belong to the same concept, and the specific implementation process is detailed in the method embodiment, which is not repeated here.
Based on the same technical concept, the embodiments of the present application further provide an apparatus for starting a container, where the apparatus may be a shared storage device, referring to fig. 9, the apparatus includes a receiving module 910, an establishing module 920, a sending module 930, and a modifying module 940, where:
a receiving module 910, configured to receive a request for establishing a container image file system of a first container sent by a first computing node;
the establishing module 920 is configured to establish a container image file system of the first container, where the container image file system includes a file of a container image of the first container;
a sending module 930, configured to send, to the first computing node, a setup complete message of the container image file system;
a modifying module 940, configured to initialize the container configuration file under the control of the first computing node.
In one possible implementation, the receiving module 910 is further configured to:
receiving a container image of the first container sent by the first computing node;
the apparatus further includes a sending module, configured to send a shared storage message for the container image of the first container to the computing nodes in the container platform other than the first computing node, where the shared storage message is used to notify the other computing nodes that the container image of the first container is stored in the shared storage device.
In one possible implementation, the receiving module 910 is further configured to:
receiving a container image of the first container sent by the first computing node;
determining computing nodes sharing the container image;
the apparatus further includes a sending module, configured to send a shared storage message for the container image of the first container to the other computing nodes, besides the first computing node, among the computing nodes that share the container image, where the shared storage message is used to notify the other computing nodes that the container image of the first container is stored in the shared storage device.
In one possible implementation, the establishing module 920 is configured to:
combine the read-only file systems corresponding to the container image layers of the first container with the top-layer readable and writable file system to obtain the container image file system of the first container.
In one possible implementation, the modifying module 940 is configured to:
copying the container configuration file to the top-level readable-writable file system, and initializing the container configuration file in the top-level readable-writable file system.
In one possible implementation, the establishing module 920 is configured to:
generating a readable and writable file system storing the files of each container image layer according to the dependency relationship among the container image layers of the first container;

generating a snapshot file system from the readable and writable file system storing the files of the container image layers;

and using the snapshot file system as the container image file system of the first container.
In one possible implementation, the establishing module 920 is configured to:
forming a union file system from the read-only file systems corresponding to the container image layers of the first container;

generating a snapshot file system from the union file system;

and using the snapshot file system as the container image file system of the first container.
In one possible implementation, the modifying module 940 is configured to:
initializing target data blocks in a container configuration file in the container mirror image file system, and recording the initialized target data blocks into a BCT file.
In the scheme provided by this embodiment of the present application, the computing nodes do not need to store container images in local storage; the container images are stored in the shared storage device and can be shared by multiple computing nodes. A computing node only needs to mount the container image file system in the shared storage device under the root directory of the container to access the container image file system and thereby read and modify the files of the container image. Therefore, the storage resources of the computing nodes are effectively saved, and the storage pressure caused by storing container images when a computing node starts a container is reduced.
It should be noted that: in the device for starting a container provided in the foregoing embodiment, when the container is started, only the division of the functional modules is used for illustration, in practical application, the functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the shared storage device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the device for starting the container provided in the above embodiment and the method embodiment for starting the container belong to the same concept, and the specific implementation process is detailed in the method embodiment, which is not repeated here.
Referring to fig. 10, an embodiment of the present application provides a schematic diagram of a computing node. The computing node 600 may be implemented, for example, by a general bus architecture. The computing node 600 includes at least one processor 601, a communication bus 602, a memory 603, and at least one network interface 604. The computing node with the structure shown in fig. 10 may be any of the computing nodes in fig. 2 and fig. 3.
The processor 601 is, for example, a general-purpose central processing unit (central processing unit, CPU), a network processor (network processer, NP), a graphics processor (Graphics Processing Unit, GPU), a neural-network processor (neural-network processing units, NPU), a data processing unit (Data Processing Unit, DPU), a microprocessor, or one or more integrated circuits for implementing the aspects of the present application. For example, the processor 601 includes an application-specific integrated circuit (ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. PLDs are, for example, complex programmable logic devices (complex programmable logic device, CPLD), field-programmable gate arrays (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof.
The communication bus 602 is used to transfer information between the components described above. The communication bus 602 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or only one type of bus.
The memory 603 is, for example, a read-only memory (read-only memory, ROM) or another type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), a compact disc read-only memory (compact disc read-only memory, CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, or the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 603 is, for example, independent of the processor 601 and connected to it through the communication bus 602. Alternatively, the memory 603 may be integrated with the processor 601.
The network interface 604 uses any transceiver-like device for communicating with other devices or communication networks. Network interface 604 includes a wired network interface and may also include a wireless network interface. The wired network interface may be, for example, an ethernet interface. The ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless network interface may be a wireless local area network (wireless local area networks, WLAN) interface, a network interface of a cellular network, a combination thereof, or the like.
In a particular implementation, as an example, the processor 601 may include one or more CPUs.
In a particular implementation, as one example, computing node 600 may include multiple processors. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In particular implementations, computing node 600 may also include output devices and input devices, as one example. The output device communicates with the processor 601 and can display information in a variety of ways. For example, the output device may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a Cathode Ray Tube (CRT) display device, or a projector (projector), or the like. The input device is in communication with the processor 601 and receives input from a user in a variety of ways. For example, the input device may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
In some embodiments, the memory 603 is configured to store program code 6031 for starting a container as described in this application, and the processor 601 executes the program code 6031 stored in the memory 603. That is, the computing node 600 may implement the method for starting a container provided in the method embodiments through the processor 601 and the program code 6031 in the memory 603.
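For illustration only, the compute-node flow carried out by such program code (send the establish request to the shared storage device, wait for the completion message, create the container's root directory, mount the image file system under it, and trigger initialization of the container configuration file) might be sketched as below. All names here (`SharedStorageClient`, `start_container`, the message string) are hypothetical, and the storage side is stubbed in memory rather than being a real device.

```python
class SharedStorageClient:
    """Stand-in for the channel between a computing node and the shared storage device."""

    def __init__(self):
        self.filesystems = {}

    def establish_image_fs(self, container_id):
        # The shared storage device builds the container image file system
        # (read-only layer file systems plus a top-level writable file
        # system) and reports completion to the computing node.
        self.filesystems[container_id] = {"layers": ["ro-layer-0"], "writable": {}}
        return "ESTABLISH_COMPLETE"

    def init_config_file(self, container_id, config):
        # Initialization runs on the storage side under the computing
        # node's control: the config file lands in the writable layer.
        self.filesystems[container_id]["writable"]["config.json"] = config


def start_container(storage, container_id, config):
    reply = storage.establish_image_fs(container_id)   # 1. request the image FS
    assert reply == "ESTABLISH_COMPLETE"               # 2. completion message
    rootfs = f"/containers/{container_id}/rootfs"      # 3. create the root directory
    # 4. mount the image file system under the root directory
    #    (a real implementation would issue an actual mount call here)
    mounted = {rootfs: storage.filesystems[container_id]}
    # 5. have the storage device initialize the container configuration file
    storage.init_config_file(container_id, config)
    return mounted
```

A real implementation would replace the dictionary bookkeeping with RPC calls to the storage device and a kernel mount; the sketch only shows the ordering of the five steps.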
Referring to FIG. 11, a distributed deployment of the shared storage device (also referred to as a shared storage device cluster) is shown, which may be the shared storage device of FIGS. 2 and 3. The cluster includes one or more storage nodes 110 (three storage nodes 110 are shown in FIG. 11, but the number is not limited to three), and the storage nodes 110 may communicate with each other. A storage node 110 is a device that has both computing and storage capabilities, such as a server or a desktop computer. Illustratively, an ARM server or an x86 server may serve as the storage node 110 here. In terms of hardware, as shown in FIG. 11, the storage node 110 includes at least a processor 112, a memory 113, a network card 114, and a hard disk 105. The processor 112, the memory 113, the network card 114, and the hard disk 105 are connected by a bus. The processor 112 and the memory 113 provide computing resources. Specifically, the processor 112 is a CPU configured to process data access requests from outside the storage node 110 (from an application or another storage node 110) and also to process requests generated inside the storage node 110. Illustratively, when the processor 112 receives write data requests, the data in these requests is temporarily stored in the memory 113. When the total amount of data in the memory 113 reaches a certain threshold, the processor 112 sends the data stored in the memory 113 to the hard disk 105 for persistent storage. In addition, the processor 112 performs data computation or processing, such as metadata management, deduplication, data compression, data verification, storage space virtualization, and address translation. Only one CPU 112 is shown in FIG. 11; in practical applications there are often multiple CPUs 112, each having one or more CPU cores. This embodiment does not limit the number of CPUs or the number of CPU cores.
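The threshold-based write buffering described above (stage incoming write data in memory 113, then flush it to hard disk 105 for persistent storage once the buffered total crosses a threshold) can be sketched roughly as follows. The `WriteBuffer` class, the threshold value, and the in-memory lists standing in for the memory and the hard disk are illustrative assumptions, not part of the patent.

```python
class WriteBuffer:
    """Illustrative threshold-based write buffer, per the storage node description."""

    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.memory = []   # data temporarily staged in memory 113
        self.disk = []     # data persisted on hard disk 105
        self.buffered = 0  # total bytes currently staged

    def write(self, data: bytes):
        # Write requests are first stored in memory.
        self.memory.append(data)
        self.buffered += len(data)
        # Once the total reaches the threshold, persist everything.
        if self.buffered >= self.threshold:
            self.flush()

    def flush(self):
        # Move all staged data to persistent storage and reset the buffer.
        self.disk.extend(self.memory)
        self.memory.clear()
        self.buffered = 0
```

The design choice mirrors common write-back caching: small writes are absorbed in fast memory, and the slower hard disk sees fewer, larger persistence operations.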
The memory 113 is an internal memory that exchanges data directly with the processor; it can be read and written at any time at high speed and serves as a temporary data store for the operating system or other running programs. The memory includes at least two types; for example, it may be a random access memory or a ROM. For example, the random access memory is a dynamic random access memory (Dynamic Random Access Memory, DRAM) or a storage class memory (Storage Class Memory, SCM). DRAM is a semiconductor memory and, like most random access memory (RAM), is a volatile memory device. SCM is a composite storage technology that combines the characteristics of both traditional storage devices and memory; storage class memory provides faster read and write speeds than a hard disk, but is slower to access than DRAM and cheaper than DRAM. However, DRAM and SCM are only examples in this embodiment, and the memory may also include other random access memories, such as a static random access memory (Static Random Access Memory, SRAM). The read-only memory may be, for example, a programmable read-only memory (Programmable Read Only Memory, PROM) or an erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM). In addition, the memory 113 may be a dual in-line memory module (Dual In-line Memory Module, DIMM), i.e., a module composed of DRAM, or a solid state disk (SSD). In practical applications, multiple memories 113, and memories 113 of different types, may be configured in the storage node 110. The number and types of the memories 113 are not limited in this embodiment. In addition, the memory 113 may be configured to have a power protection function, which means that the data stored in the memory 113 is not lost when the system is powered down and then powered up again. A memory with the power protection function is called a non-volatile memory.
The hard disk 105 is used to provide storage resources, for example, to store container images. It may be a magnetic disk or another type of storage medium, such as a solid state disk or a shingled magnetic recording hard disk. The network card 114 is used to communicate with other storage nodes 110.
When the shared storage device is deployed stand-alone, the structure of the shared storage device may be the same as that of any of the storage nodes in fig. 11.
It should be noted that, information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, the performance data referred to in this application are all obtained with sufficient authorization.
The terms "first," "second," and the like in this application are used to distinguish between identical or similar items having substantially the same function and effect. It should be understood that there is no logical or chronological dependency between "first," "second," and the like, nor any limitation on quantity or order of execution. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first computing node may be referred to as a second computing node, and similarly, a second computing node may be referred to as a first computing node, without departing from the scope of the various described examples. The first computing node and the second computing node are both computing nodes and, in some cases, may be separate and distinct nodes.
The term "at least one" in this application means one or more, the term "plurality" in this application means two or more, for example, a plurality of nodes means two or more.
The foregoing description is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions are all covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of program instructions. The program instructions include one or more instructions; when loaded and executed on a computing device, they produce, in whole or in part, the procedures or functions according to the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the above storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above embodiments are merely intended to describe the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some technical features thereof, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (22)

1. A method for starting a container, the method being applied to a computing node, the method comprising:
sending a request for establishing a container image file system of a first container to a shared storage device;
receiving an establishment completion message of the container image file system sent by the shared storage device, wherein the container image file system stores a container image file of the first container;
creating a root directory of the first container, and mounting the container image file system under the root directory; and
controlling the shared storage device to initialize a container configuration file in the container image file.
2. The method of claim 1, wherein sending the request to establish the container image file system of the first container to the shared storage device comprises:
if the container image of the first container is stored in the shared storage device, sending the request for establishing the container image file system of the first container to the shared storage device.
3. The method according to claim 2, wherein the method further comprises:
if the container image of the first container is not stored in the shared storage device, sending a download request for the container image of the first container to a container image warehouse;
receiving the container image of the first container sent by the container image warehouse;
and sending the container image of the first container to the shared storage device.
4. The method according to claim 2 or 3, wherein the method further comprises:
receiving a shared storage message of a container image of a second container sent by the shared storage device, wherein the shared storage message is used to notify the computing node that the container image of the second container is stored in the shared storage device.
5. The method according to any one of claims 1-4, wherein the container image file system is a joint file system obtained by the shared storage device by combining the read-only file system corresponding to each container image layer of the first container with a top-level readable-writable file system, wherein each read-only file system stores the files of the corresponding container image layer, and the files of the container image comprise the files of each container image layer.
6. The method of claim 5, wherein the controlling the shared storage device to initialize the container configuration file in the container image file comprises:
controlling the shared storage device to copy the container configuration file in the container image file to the top-level readable-writable file system, and initializing the container configuration file in the top-level readable-writable file system.
7. The method of any of claims 1-4, wherein the container image file system is a snapshot file system generated by the shared storage device from a readable-writable file system storing the files of the container image, and the readable-writable file system is generated by the shared storage device according to the dependency relationships among the container image layers of the first container.
8. The method of any of claims 1-4, wherein the container image file system is a snapshot file system generated by the shared storage device according to a joint file system corresponding to the container image, and the joint file system is formed by the shared storage device from the read-only file systems corresponding to the container image layers of the first container.
9. The method of claim 7 or 8, wherein the controlling the shared storage device to initialize the container configuration file in the container image file comprises:
controlling the shared storage device to initialize a target data block in the container configuration file in the container image file, and recording the initialized target data block into a block difference tracking (BCT) file.
10. A method for starting a container, the method being applied to a shared storage device, the method comprising:
receiving a request, sent by a first computing node, for establishing a container image file system of a first container;
establishing the container image file system of the first container, wherein the container image file system stores a container image file of the first container;
sending an establishment completion message of the container image file system to the first computing node; and
initializing a container configuration file in the container image file under the control of the first computing node.
11. The method according to claim 10, wherein the method further comprises:
receiving a container image of the first container sent by the first computing node;
sending a shared storage message of the container image of the first container to computing nodes in the container platform other than the first computing node, wherein the shared storage message is used to notify the other computing nodes that the container image of the first container is stored in the shared storage device.
12. The method of claim 10, wherein the method further comprises:
receiving a container image of the first container sent by the first computing node;
determining computing nodes sharing the container image;
sending a shared storage message of the container image of the first container to computing nodes, among the computing nodes sharing the container image, other than the first computing node, wherein the shared storage message is used to notify the other computing nodes that the container image of the first container is stored in the shared storage device.
13. The method of any of claims 10-12, wherein establishing the container image file system of the first container comprises:
combining the read-only file system corresponding to each container image layer of the first container with a top-level readable-writable file system to obtain the container image file system of the first container.
14. The method of claim 13, wherein initializing the container configuration file in the container image file comprises:
copying the container configuration file in the container image file to the top-level readable-writable file system, and initializing the container configuration file in the top-level readable-writable file system.
15. The method of any of claims 10-12, wherein establishing the container image file system of the first container comprises:
generating, according to the dependency relationships among the container image layers of the first container, a readable-writable file system storing the files of the container image layers;
generating a snapshot file system from the readable-writable file system storing the files of the container image layers; and
taking the snapshot file system as the container image file system of the first container.
16. The method of any of claims 10-12, wherein establishing the container image file system of the first container comprises:
forming a joint file system from the read-only file systems corresponding to the container image layers of the first container;
generating a snapshot file system from the joint file system; and
taking the snapshot file system as the container image file system of the first container.
17. The method of claim 15 or 16, wherein initializing the container configuration file in the container image file comprises:
initializing a target data block in the container configuration file in the container image file, and recording the initialized target data block into a block difference tracking (BCT) file.
18. A device for starting a container, the device comprising:
a sending module, configured to send a request for establishing a container image file system of a first container to a shared storage device;
a receiving module, configured to receive an establishment completion message of the container image file system sent by the shared storage device, wherein the container image file system stores a container image file of the first container;
a mounting module, configured to create a root directory of the first container and mount the container image file system under the root directory; and
a modification module, configured to control the shared storage device to initialize a container configuration file in the container image file.
19. A device for starting a container, the device comprising:
a receiving module, configured to receive a request, sent by a first computing node, for establishing a container image file system of a first container;
an establishing module, configured to establish the container image file system of the first container, wherein the container image file system comprises a container image file of the first container;
a sending module, configured to send an establishment completion message of the container image file system to the first computing node; and
a modification module, configured to initialize a container configuration file in the container image file under the control of the first computing node.
20. A computing node comprising a processor and a memory, wherein the memory is configured to store at least one piece of program code, and the program code is loaded by the processor to perform the method for starting a container according to any one of claims 1 to 9.
21. A shared storage device comprising a processor and a memory, wherein the memory is configured to store at least one piece of program code, and the program code is loaded by the processor to perform the method for starting a container according to any one of claims 10 to 17.
22. A computer-readable storage medium storing at least one piece of program code, wherein the program code is used to perform the method for starting a container according to any one of claims 1 to 17.
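As background for the file system construction recited in claims 5, 13, and 16 (read-only per-layer file systems with a top-level readable-writable file system stacked above, where writes and configuration-file initialization land only in the writable layer), a minimal overlay-style sketch follows. The `UnionImageFS` class and its dict-based layers are purely illustrative and not part of the claimed implementation.

```python
class UnionImageFS:
    """Illustrative union of read-only image layers with a writable top layer."""

    def __init__(self, ro_layers):
        # ro_layers: list of dicts mapping path -> data, topmost layer first;
        # each dict stands in for one read-only container-image-layer FS.
        self.ro_layers = ro_layers
        self.writable = {}  # the top-level readable-writable file system

    def read(self, path):
        # Lookup falls through from the writable layer to the read-only
        # layers in stacking order.
        if path in self.writable:
            return self.writable[path]
        for layer in self.ro_layers:
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Writes always land in the writable top layer; the read-only
        # layers are never modified.
        self.writable[path] = data

    def init_config(self, path, config):
        # Mirrors claim 14: the config file is copied into the writable
        # layer and initialized there.
        self.writable[path] = config
```

This is the same shadowing behavior as overlay file systems generally: an upper-layer file hides a lower-layer file of the same path, so starting many containers from one image requires only one set of shared read-only layers.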
CN202211057183.1A 2022-08-31 2022-08-31 Method and device for starting container, computing node and shared storage equipment Pending CN117667298A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211057183.1A CN117667298A (en) 2022-08-31 2022-08-31 Method and device for starting container, computing node and shared storage equipment
PCT/CN2023/080081 WO2024045541A1 (en) 2022-08-31 2023-03-07 Container starting method and apparatus, computing node, and shared storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211057183.1A CN117667298A (en) 2022-08-31 2022-08-31 Method and device for starting container, computing node and shared storage equipment

Publications (1)

Publication Number Publication Date
CN117667298A true CN117667298A (en) 2024-03-08

Family

ID=90072018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211057183.1A Pending CN117667298A (en) 2022-08-31 2022-08-31 Method and device for starting container, computing node and shared storage equipment

Country Status (2)

Country Link
CN (1) CN117667298A (en)
WO (1) WO2024045541A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740048B (en) * 2016-01-26 2019-03-08 华为技术有限公司 A kind of mirror image management method, apparatus and system
CN106506587B (en) * 2016-09-23 2021-08-06 中国人民解放军国防科学技术大学 Docker mirror image downloading method based on distributed storage
WO2020112029A1 (en) * 2018-11-30 2020-06-04 Purple Ds Private Ltd. System and method for facilitating participation in a blockchain environment
CN110704162B (en) * 2019-09-27 2022-09-20 北京百度网讯科技有限公司 Method, device and equipment for sharing container mirror image by physical machine and storage medium
CN113448609B (en) * 2021-08-30 2021-11-19 恒生电子股份有限公司 Container upgrading method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2024045541A1 (en) 2024-03-07

Similar Documents

Publication Publication Date Title
US10296494B2 (en) Managing a global namespace for a distributed filesystem
US10013185B2 (en) Mapping systems and methods of an accelerated application-oriented middleware layer
US9549026B2 (en) Software-defined network attachable storage system and method
US9811662B2 (en) Performing anti-virus checks for a distributed filesystem
US9811532B2 (en) Executing a cloud command for a distributed filesystem
US8788628B1 (en) Pre-fetching data for a distributed filesystem
US11693789B2 (en) System and method for mapping objects to regions
US10871911B2 (en) Reducing data amplification when replicating objects across different sites
US10042719B1 (en) Optimizing application data backup in SMB
US20230367746A1 (en) Distributed File System that Provides Scalability and Resiliency
US7499980B2 (en) System and method for an on-demand peer-to-peer storage virtualization infrastructure
US11099941B2 (en) System and method for accelerating application service restoration
US10831714B2 (en) Consistent hashing configurations supporting multi-site replication
US20240143233A1 (en) Distributed File System with Disaggregated Data Management and Storage Management Layers
CN116848517A (en) Cache indexing using data addresses based on data fingerprints
US10782989B2 (en) Method and device for virtual machine to access storage device in cloud computing management platform
US20220391361A1 (en) Distributed File System with Reduced Write and Read Latencies
US11580078B2 (en) Providing enhanced security for object access in object-based datastores
CN117667298A (en) Method and device for starting container, computing node and shared storage equipment
US9971532B2 (en) GUID partition table based hidden data store system
CN115878580A (en) Log management method and device
US8356016B1 (en) Forwarding filesystem-level information to a storage management system
US11847100B2 (en) Distributed file system servicing random-access operations
US11526286B1 (en) Adaptive snapshot chunk sizing for snapshots of block storage volumes
US20220197860A1 (en) Hybrid snapshot of a global namespace

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination