CN113467882A - Method and system for deploying containers - Google Patents

Method and system for deploying containers

Info

Publication number
CN113467882A
CN113467882A
Authority
CN
China
Prior art keywords
virtual disk
application
container
image
snapshot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010240992.0A
Other languages
Chinese (zh)
Inventor
代志锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010240992.0A
Publication of CN113467882A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662 Virtualisation aspects
    • G06F3/0664 Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Stored Programmes (AREA)

Abstract

A method for deploying a container and a cloud service system are disclosed. The method is applied to a system comprising a computing node and a storage node, where the storage node constructs a virtual disk snapshot based on an application image and the computing node creates a container that performs the following operations: determining, according to a specified application image, whether a corresponding virtual disk snapshot exists; and if the virtual disk snapshot exists, constructing a virtual disk based on it, mounting the virtual disk on the computing node, and reading application configuration information from a specified directory of the virtual disk so as to start the application in the container. The disclosed embodiments convert the unpredictable time spent pulling the application image from an image repository server in the prior art into the predictable time of creating a virtual disk from a virtual disk snapshot, and this predictable time is generally shorter than the time consumed in the prior art, thereby reducing the time needed to deploy the container.

Description

Method and system for deploying containers
Technical Field
The disclosure relates to the field of Internet cloud technology, and in particular to a method and a system for deploying a container.
Background
Cloud technology is a hosting technology that unifies hardware, software, network and other resources in a wide area network or a local area network to realize the computation, storage, processing and sharing of data.
Common cloud-technology-based service models include software as a service (SaaS, delivering ready-to-use software applications), platform as a service (PaaS, delivering software applications, application frameworks and virtual machines), and infrastructure as a service (IaaS, delivering computing resources, storage resources, databases and network resources). Various cloud service systems have been implemented on these models. For example, a cloud storage system periodically backs up the data on a client's server so that the data can be recovered from the backup when the client's server fails. As another example, a cloud computing system provides an interface through which a user can package an application and its dependent environment in a standardized way and upload it to a cloud server, so that the application runs on that server. Cloud computing systems often employ containers to achieve isolation between applications. However, when a container is created, the cloud computing system needs to pull the application image from an image repository before the container can run. The application image is stored on an image repository server, which may be a server in a cloud storage system or a server of the client. In either case, pulling the application image is constrained by network bandwidth, the storage location of the application image, the size of the application image and similar conditions; the download time is uncontrollable and often ranges from several minutes to several hours, and if the application image also needs to be decompressed, decompression typically takes several seconds and takes longer on a small-specification container. In other words, pulling the application image has a significant impact on the deployment of the container. There is therefore a need to reduce both the time consumed by this step and its uncertainty, so as to improve the competitiveness of container-technology-based products.
Disclosure of Invention
Based on this, the present disclosure aims to provide a method for deploying a container and a cloud service system, so as to solve the problems existing in the prior art.
In a first aspect, an embodiment of the present disclosure provides a method for deploying a container, which is applied to a system including a computing node and a storage node, where the storage node constructs a virtual disk snapshot based on an application image, the computing node creates a container, and the container performs the following operations:
determining whether a corresponding virtual disk snapshot exists according to the specified application image;
and if so, constructing a virtual disk based on the corresponding virtual disk snapshot, mounting the virtual disk on the computing node, and reading application configuration information from the specified directory of the virtual disk so as to start the application in the container.
Optionally, the computing node is deployed with a plurality of virtual machines, and the container is deployed in the virtual machines.
Optionally, the container further performs the steps of: when no virtual disk snapshot corresponding to the specified application image exists, downloading the specified application image from an image repository server, and reading application configuration information from the downloaded application image so as to start the application in the container.
Optionally, the container further performs the steps of: before reading the application configuration information, if the downloaded application image is compressed, it is decompressed.
Optionally, the method further comprises: the computing node collecting the time consumed in downloading and/or decompressing the specified application image and feeding it back to the storage node, and the storage node determining, according to the feedback information, whether to construct a virtual disk snapshot based on the specified application image.
Optionally, constructing the virtual disk snapshot includes:
creating a virtual disk;
copying the one or more application images to the virtual disk; and
constructing the virtual disk snapshot by using the snapshot system of the storage node.
Optionally, the storage node stores data of a correspondence between the virtual disk snapshot and the application image.
Optionally, the storage node determines whether to construct the virtual disk snapshot for the application image according to one or more of the following options:
whether the user has indicated it;
whether the application image size exceeds a set threshold.
Optionally, mounting the virtual disk onto the computing node includes:
reading partition information from the virtual disk snapshot;
mounting the partitions to specified directories according to the partition information.
Optionally, the container mounts the corresponding virtual disk to a designated directory of the operating system of the virtual machine or to a designated directory of the operating system of the physical host.
Optionally, the operations performed on the computing node further comprise: unmounting the virtual disk.
In a second aspect, an embodiment of the present invention provides a cloud service system, including a storage node and a computing node, where the storage node constructs a virtual disk snapshot based on an application image, and the computing node creates a container, where the container performs the following operations:
determining whether a corresponding virtual disk snapshot exists according to the specified application image;
if the virtual disk snapshot exists, constructing a virtual disk based on the corresponding virtual disk snapshot, mounting the virtual disk on the computing node, and reading application configuration information from the specified directory of the virtual disk so as to start the application in the container.
Optionally, the computing node is deployed with a plurality of virtual machines, and the container is deployed in the virtual machines.
Optionally, the container further performs the steps of: when the corresponding virtual disk snapshot does not exist, downloading the specified application image from the image repository server, and reading application configuration information from the downloaded application image so as to start the application in the container.
Optionally, constructing the virtual disk snapshot includes:
creating a virtual disk;
copying the one or more application images to the virtual disk; and
constructing the virtual disk snapshot by using the snapshot system of the storage node.
Optionally, the storage node determines whether to construct the virtual disk snapshot for the application image according to one or more of the following options:
whether the user has indicated it;
whether the application image size exceeds a set threshold.
Optionally, mounting the virtual disk onto the computing node includes:
reading partition information from the virtual disk snapshot;
mounting the partitions to specified directories according to the partition information.
Optionally, the container mounts the corresponding virtual disk to a designated directory of the operating system of the virtual machine or to a designated directory of the operating system of the physical host.
Optionally, the container further performs the steps of: when no virtual disk snapshot corresponding to the specified application image exists, downloading the specified application image from an image repository server, and reading application configuration information from the downloaded application image so as to start the application in the container.
Optionally, the computing node collects the time consumed in downloading and/or decompressing the specified application image and feeds it back to the storage node, and the storage node determines, according to the feedback information, whether to construct the virtual disk snapshot based on the specified application image.
In a third aspect, the disclosed embodiments provide a computer-readable medium having stored thereon computer instructions that, when executed, implement the method of any one of the above.
In the above embodiments, the storage node constructs the virtual disk snapshot based on the application image, and when the specified application image is needed the computing node constructs a virtual disk from that snapshot and mounts it on the computing node. The uncertain time spent pulling the application image from the image repository server in the prior art is thus converted into the predictable time of creating a virtual disk from a virtual disk snapshot, which removes the uncertainty from container deployment; this predictable time is also generally shorter than the time consumed in the prior art. Moreover, the storage node has a strong distributed storage capability and can construct the virtual disk snapshot quickly, the process has essentially no impact on the storage performance of the storage node, and the computing node can leverage this capability to improve the deployment efficiency of container products.
Drawings
The foregoing and other objects, features, and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which refers to the accompanying drawings in which:
FIG. 1 is used to illustrate a cloud environment in which embodiments of the present disclosure are located;
FIG. 2 illustrates a conceptual block diagram of compute nodes deployed based on container technology and virtual machine technology;
FIGS. 3a and 3b are flow diagrams illustrating the steps executed by a storage node and a compute node, respectively, in an embodiment of the present disclosure;
FIG. 4a illustrates an interaction diagram of a storage node and a compute node in an embodiment of the present disclosure;
FIG. 4b illustrates an interaction diagram of a storage node and a compute node in another embodiment of the disclosure;
FIGS. 5a-5d show flow charts of the deployment process of an example Elastic Container Instance provided by an embodiment of the present disclosure.
Detailed Description
The present disclosure is described below based on examples, but it is not limited to these examples. In the following detailed description, some specific details are set forth; it will be apparent to those skilled in the art that the present disclosure may be practiced without them. Well-known methods, procedures and components have not been described in detail so as not to obscure the present disclosure. The figures are not necessarily drawn to scale.
In a cloud environment, a customer (e.g., an end user or an enterprise) gains access to software, platform and/or infrastructure services through one or more networks (e.g., the Internet). The customer can request the use of software, platform and/or infrastructure services through an application service interface provided by the cloud service system. Accordingly, the cloud service system aggregates various software and hardware resources and provisions, configures or reconfigures physical and/or virtual resources according to the customer's request. The cloud service system can dynamically change the underlying software and hardware resources without the customer being aware of it.
Fig. 1 illustrates a cloud environment in which embodiments of the present disclosure may be implemented. As shown in the figure, a cloud environment (also referred to as a data center) 100 includes storage nodes 102, a management server 101, an image repository server 103 and computing nodes 104 coupled via a network 105. The management server 101 may include input and output devices (a display screen, a keyboard and a mouse) and may be operatively coupled to the storage nodes 102 and/or the computing nodes 104 to manage the hardware and software resources on them; for example, virtual machines are installed, terminated or moved on the storage nodes 102 and/or the computing nodes 104 through the management server 101. The storage nodes 102 provide cloud-based storage services, and the computing nodes 104 provide cloud-based computing services. Although, for ease of description, the storage nodes and computing nodes are each illustrated as a small number of single servers, in practice each includes thousands of servers located in the same data center or in multiple different data centers, with storage and computing resources balanced by load-balancing software/hardware.
In one example, the storage node 102 is logically divided into multiple tiers. The hardware layer 131 includes general-purpose nodes (including single servers 1211 and storage devices 1212) that are geographically distributed across the Internet and provide the physical resources for implementing the higher layers of the storage node.
Above the hardware layer 131 is a software kernel layer 122. The software kernel layer 122 includes an operating system 1223, virtual machines 1222, a virtual machine monitor 1221, cluster middleware 1224 and a storage virtualization component 1225. The operating system 1223 running on the servers is, for example, a UNIX operating system or a Linux operating system. A virtual machine (Virtual Machine) 1222 is a complete computer system with full hardware functionality that is emulated by software and runs in a completely isolated environment. A virtual machine monitor (VMM) 1221 manages the real resources of a server and provides an interface for the virtual machines 1222; a virtual machine monitor 1221 may maintain multiple virtual machines on the same server. The cluster middleware 1224 provides cluster policy services for upper-layer applications, such as a dynamic balancing policy (when a request is distributed, it is sent to a lightly loaded process) and a read-write separation policy (different processes are dedicated to reading and writing data services, respectively). The storage virtualization component 1225 implements an abstract representation of the storage hardware resources so as to provide applications with services through a unified interface.
Above the software kernel layer 122 is a storage layer 123. The storage layer 123 provides different forms of storage based on the characteristics of different data, including object storage 1232, file storage 1233, block storage 1231 and a database 1234. An object is the union of data and a set of data attributes; object storage organizes and stores data by object and accesses each object through an object identifier. File storage 1233, i.e. a conventional file system, organizes and manages data by file. Block storage 1231 uses the data block as its basic storage unit. The database 1234 organizes and manages data with a database system.
Above the storage layer 123 is an application layer 124. The application layer 124 includes a storage application 1241, which directly faces the client: data on a client terminal or client server is backed up through the interface of the storage application 1241, and when that data is damaged it is restored to the terminal or client server through the same interface.
In one example, the compute nodes 104 may likewise be logically divided into multiple layers. The hardware layer 141 includes general-purpose nodes, for example multiple single servers 1211 and multiple storage devices 1212, that are geographically distributed across the Internet and provide the physical resources for implementing the higher layers of the compute node. Above the hardware layer 141 is an operating system 142, for example a UNIX or Linux operating system suitable for use on a server. Container support 143 comprises the various underlying mechanisms required to support the containers running on top of it. For docker, for example, two technologies are needed: cgroup (short for control groups), which implements resource quotas, and namespace, which implements resource isolation. docker is an open-source application container engine that allows developers to package an application and its dependent operating environment into a portable container and then publish it to a server. As shown in the figure, a plurality of containers c1-cn run on top of the container support 143; container c1 implements operating environment e1 and runs application p1, and container cn implements operating environment en and runs application pn, where n is an integer greater than or equal to 1. Container technology isolates the applications from one another so that they do not affect each other. A minimal sketch of these two mechanisms is given after this paragraph.
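The following minimal Python sketch (not part of the patent) illustrates the two mechanisms named above on a Linux host: a cgroup v2 directory supplies the resource quota, and the util-linux unshare tool starts a command in fresh PID, mount and UTS namespaces. It assumes root privileges and a cgroup v2 hierarchy mounted at /sys/fs/cgroup; a real container engine such as docker does considerably more.

    import os
    import subprocess
    from pathlib import Path

    CGROUP = Path("/sys/fs/cgroup/demo")   # assumed cgroup v2 unified hierarchy

    def run_isolated(cmd: list[str], mem_limit: str = "256M") -> None:
        """Run cmd with a memory quota (cgroup) inside new namespaces (isolation)."""
        CGROUP.mkdir(exist_ok=True)
        (CGROUP / "memory.max").write_text(mem_limit)            # resource quota via cgroup
        (CGROUP / "cgroup.procs").write_text(str(os.getpid()))   # children inherit this cgroup
        # Resource isolation via namespaces: new PID, mount and hostname namespaces.
        subprocess.run(["unshare", "--pid", "--mount", "--uts", "--fork", *cmd], check=True)

    if __name__ == "__main__":
        run_isolated(["sh", "-c", "hostname isolated-demo && hostname"])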
In one example, the image repository server 103 is used to store application images. The image repository server 103 hosts a plurality of image repositories; the figure shows an image repository R1 storing application images I1 and I2 and an image repository R2 storing application images I3 and I4. An application image is the standardized packaging of an application and its dependent runtime environment. When a container is created, the application image is pulled from the image repository, and the application configuration information in the application image is read in the container to obtain the running environment and the application to be run. An application image can be understood as a static file representing the running state of an application at a certain moment; through the application image, the application can therefore be restored in the container to its running state at that moment. It should be understood that although the image repository server is shown inside the data center 100, it may also be located outside the data center 100; for example, it may be a client server located in the client's machine room, a server of a public repository, or a server of another data center. In a hybrid cloud service, the client server stores regularly packaged application images, and these application images are also stored in the cloud service system to provide a double security guarantee.
In the cloud environment 100, a customer accesses the cloud service system using terminals such as a notebook 106, a desktop 107 and a customer server 108. The cloud service system integrates the services provided by the cloud storage service and the cloud computing service behind a unified application service interface, from which the customer jumps to the application interfaces of the storage service and the computing service. These interfaces may all be web pages, which may also include configuration pages for, for example, configuring backup functions, configuring the start and stop of virtual machines, and deploying software and applications on the virtual machines.
It should be understood that the above examples describe a cloud service system in which storage and computing are separated, but this separation is only logical; the two are not physically isolated and cooperate in certain ways, i.e. the computing node can access the storage node via some means (a communication module implemented on the basis of a protocol) and vice versa. The drawings use different conceptual block diagrams mainly to emphasize the functional differences between the computing nodes and the storage service system and to show that the two adopt different technical architectures; the conceptual architecture diagram of the computing node 104 lists neither the modules associated with virtual machines nor cluster management, whereas the storage node 102 does list the virtual-machine-related modules. In practice, however, the computing nodes 104 may also deploy applications based on a combination of container technology and virtual machine technology.
FIG. 2 illustrates a conceptual block diagram of a compute node deployed with both container technology and virtual machine technology. The compute node 104 may be logically divided into multiple layers: a hardware layer 201, a software kernel layer 202 above it, and a plurality of virtual machines V1 to Vn constructed above the software kernel layer 202. Virtual machine V1 includes a virtual machine operating system Op1, containers c11 through c1n are built on the virtual machine operating system Op1, applications are deployed in containers c11 through c1n, and so on for the other virtual machines.
Furthermore, the layers shown in FIG. 1 are for illustration only. Other conceptual architecture diagrams may be provided depending on the types of service the cloud service system actually provides.
As described in the background, when a container is deployed, the computing node 104 needs to pull the application image from the image repository before the container can be started (i.e. before the system and the application program in the container are run). The application image is stored on the image repository server, and downloading and decompressing it take a long and unpredictable amount of time, so the time from creating a container to formally starting it is also long and unpredictable, which hurts container deployment efficiency.
To this end, embodiments of the present disclosure provide a more efficient method for deploying a container. The method is applied to a system comprising storage nodes and computing nodes (e.g., as shown in FIG. 1). The storage node constructs a virtual disk snapshot based on the application image. The virtual disk snapshot is a complete copy, at a certain point in time, of a virtual disk that stores one or more application images; based on virtual disk technology, a virtual disk can be restored from it at any storage location. The storage node may construct the virtual disk snapshot in a variety of ways. In some embodiments, the storage node constructs the virtual disk snapshot using the steps illustrated in FIG. 3a.
In step S301, a virtual disk is created.
In step S302, one or more application images are copied to the virtual disk.
In step S303, a virtual disk snapshot is constructed for the virtual disk.
When one or more application images are stored on the cloud storage system, the actual data may be scattered across the storage space of multiple storage devices. Therefore, in this embodiment, the virtualization interface provided by the system is used to first create a virtual disk (also referred to as a cloud disk), then copy one or more application images to the virtual disk, and finally construct a virtual disk snapshot of that virtual disk. A sketch of this sequence is given below.
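The following sketch is not taken from the patent, which leaves the storage-node implementation open; it shows one way steps S301-S303 could be realised with the standard qemu-img/qemu-nbd tools on a Linux host. A qcow2 virtual disk is created, the application image archives are copied into it, and an internal snapshot of the disk is recorded. A production cloud storage system would instead call its own cloud-disk and snapshot APIs; the paths and sizes here are placeholders, root privileges and the nbd kernel module are assumed, and for simplicity the disk carries a single file system rather than multiple partitions.

    import subprocess

    def build_image_cache_snapshot(disk: str, image_paths: list[str], snapshot_name: str,
                                   size: str = "20G", nbd: str = "/dev/nbd0") -> None:
        """Sketch of S301-S303: create a virtual disk, copy application images into it,
        then construct a snapshot of the virtual disk."""
        run = lambda *cmd: subprocess.run(cmd, check=True)
        run("qemu-img", "create", "-f", "qcow2", disk, size)     # S301: create the virtual disk
        run("qemu-nbd", "--connect=" + nbd, disk)                # expose it as a block device
        try:
            run("mkfs.ext4", "-q", nbd)                          # single file system, no partitions
            run("mkdir", "-p", "/mnt/image-cache")
            run("mount", nbd, "/mnt/image-cache")
            try:
                run("cp", "-a", *image_paths, "/mnt/image-cache/")  # S302: copy the application images
            finally:
                run("umount", "/mnt/image-cache")
        finally:
            run("qemu-nbd", "--disconnect", nbd)
        run("qemu-img", "snapshot", "-c", snapshot_name, disk)   # S303: snapshot of the virtual disk

The storage node could then record the (application image name, snapshot name) pair as the correspondence data discussed later.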
Because the cloud storage system has a strong storage and backup capability, constructing the virtual disk snapshot has little effect on the storage performance of the system, and the computing nodes can leverage this capability to improve the deployment efficiency of container products. Specifically, the compute node performs the steps shown in FIG. 3b.
In step S311, it is determined whether the corresponding virtual disk snapshot exists according to the specified application image, if so, step S314 is executed, otherwise, step S312 is executed.
In step S312, the designated application image is downloaded from the storage node to the computing node.
Step S313, reading the application configuration information from the application image to start the application in the container.
Step S314, a virtual disk is constructed based on the corresponding virtual disk snapshot.
And step S315, mounting the virtual disk on the computing node.
Step S316, reading the configuration information from the designated directory of the virtual disk to start the application in the container.
In step S311, there are multiple ways to determine whether a virtual disk snapshot corresponding to the specified application image exists. For example, when the storage node creates the virtual disk snapshot, it establishes correspondence data between the application image and the virtual disk snapshot and stores that data in a shared directory; the computing node then looks up the correspondence data in the shared directory by the name of the specified application image to obtain the corresponding virtual disk snapshot. Alternatively, the computing node sends the name of the specified application image to the storage node, and if the virtual disk snapshot exists the storage node returns its name to the computing node. As another example, a naming convention between the virtual disk snapshot and the one or more application images is agreed; the computing node derives the name of the virtual disk snapshot corresponding to the specified application image from the agreed convention and downloads the virtual disk snapshot to the computing node through the system interface: if the download succeeds, the corresponding virtual disk snapshot exists, otherwise it does not. A sketch of the shared-directory lookup is given below.
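As a concrete illustration of the first lookup variant, the sketch below assumes the storage node publishes the correspondence data as a JSON map in a shared directory; the file name and format are assumptions, not part of the patent.

    import json
    from pathlib import Path

    # Assumed format: {"<application image name>": "<virtual disk snapshot id>", ...}
    MAPPING_FILE = Path("/shared/image_snapshot_map.json")   # assumed shared-directory location

    def find_snapshot(image_name: str) -> str | None:
        """Step S311: return the snapshot id for the specified application image,
        or None if no virtual disk snapshot has been built for it."""
        if not MAPPING_FILE.exists():
            return None
        mapping = json.loads(MAPPING_FILE.read_text())
        return mapping.get(image_name)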
In steps S312-S316, when no virtual disk snapshot of the specified application image exists, the prior-art scheme is used: the specified application image is downloaded from the storage node to the computing node, the application configuration information is read from it, and the application is started in the container according to that configuration information. When a virtual disk snapshot of the specified application image does exist, the scheme of the present disclosure is used: a virtual disk is constructed based on the corresponding virtual disk snapshot and mounted on the computing node, and the configuration information is read from the specified directory of the virtual disk so as to start the application in the container.
Specifically, in steps S314 to S316, the virtual disk management apparatus of the computing node restores a virtual disk from the virtual disk snapshot and mounts it on the computing node. The virtual disk management apparatus parses the virtual disk snapshot, reads the disk information recorded in it, and generates the corresponding virtual disk according to that information. If the operating system of the computing node is a Linux system, recognition and mounting of the virtual disk can be implemented with the hypervisor KVM. KVM is an open-source hypervisor that has been incorporated into the Linux kernel by the Linux kernel organization. Specifically, the nbd attach tool of the qemu virtualization component of KVM is wrapped in an SDK (a Java software development kit) that exposes block_attach and block_detach function interfaces: calling block_attach mounts the virtual disk on the physical host, and calling block_detach unmounts it when the virtual disk snapshot needs to be detached. In addition, the SDK may also wrap the virtualization components of other virtualization platforms, so that corresponding virtual disks can be created on multiple virtualization platforms from the virtual disk snapshots. A sketch of such a mount and unmount flow with standard qemu tools follows.
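The block_attach/block_detach SDK interfaces mentioned above are internal to the described system, so the sketch below approximates steps S314-S316 with the standard qemu tools instead. It assumes the compute node holds a local copy of the qcow2 disk file carrying the snapshot, rolls it back to the snapshot state, attaches it read-only through qemu-nbd, mounts it, and reads the application configuration from an agreed directory; file names, the mount point and the configuration path are assumptions.

    import json
    import subprocess

    def mount_disk_from_snapshot(disk: str, snapshot_name: str,
                                 mountpoint: str = "/var/lib/eci/cache",
                                 nbd: str = "/dev/nbd1") -> dict:
        """Sketch of S314-S316: materialise the virtual disk from its snapshot,
        mount it on the compute node, and read the application configuration."""
        run = lambda *cmd: subprocess.run(cmd, check=True)
        run("qemu-img", "snapshot", "-a", snapshot_name, disk)   # S314: roll the local copy back to the snapshot
        run("qemu-nbd", "--read-only", "--connect=" + nbd, disk)
        run("mkdir", "-p", mountpoint)
        run("mount", "-o", "ro", nbd, mountpoint)                # S315: mount (block_attach analogue)
        with open(f"{mountpoint}/config.json") as f:             # S316: agreed directory/file name is an assumption
            return json.load(f)

    def unmount_disk(mountpoint: str = "/var/lib/eci/cache", nbd: str = "/dev/nbd1") -> None:
        """block_detach analogue: unmount and disconnect the virtual disk."""
        subprocess.run(["umount", mountpoint], check=True)
        subprocess.run(["qemu-nbd", "--disconnect", nbd], check=True)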
In the above embodiment, the storage node constructs the virtual disk snapshot based on the application image, and when the specified application image is needed the computing node constructs a virtual disk from that snapshot and mounts it on the computing node. The uncertain time spent pulling the application image from the image repository server in the prior art is thus converted into the predictable time of creating a virtual disk from a virtual disk snapshot, which removes the uncertainty from container deployment; this predictable time is also generally shorter than the time consumed in the prior art.
Furthermore, the storage node has a strong storage and backup capability, so constructing the virtual disk snapshot has little effect on its storage performance, and the computing system can leverage this capability to improve the deployment efficiency of container products. With virtual disk snapshots, containers can be deployed even without a network, and multiple compute nodes can all use the same virtual disk snapshot when deploying containers.
It should be appreciated that the computing node may or may not use the virtual-machine-based container deployment described in FIG. 2. When a virtual machine is used, the virtual disk is recognized as a disk by the virtual machine operating system; when no virtual machine is used, it is recognized as a disk by the operating system of the physical host. If multiple virtual machines are deployed on a compute node, each virtual machine operating system may recognize the same virtual disk snapshot as a different virtual disk.
In some embodiments, mounting the virtual disk onto the operating system of the computing node comprises: reading the partition information from the virtual disk snapshot and mounting the partitions to specified directories of the operating system according to that partition information.
In some embodiments, the storage node may further store the correspondence data between the virtual disk snapshot and the application image(s) it corresponds to. Based on this correspondence data, the computing node can find the virtual disk snapshot corresponding to the specified application image, create a virtual disk from it and mount that disk on the operating system of the computing node. The storage node may store the correspondence data in a shared directory for the compute node to read, may send it to the compute node, or may provide it when the compute node requests it. It should be appreciated that the virtual disk snapshot and the application image are not limited to a one-to-one correspondence: the storage node may construct a virtual disk snapshot for a single application image or for several application images together.
In theory, the storage node could make a corresponding virtual disk for every application image, but to avoid consuming too much storage space it typically builds virtual disks selectively, for one or some application images. In one implementation, after the computing node downloads a specified application image in the prior-art way (decompressing it if it is compressed), it collects the time consumed by downloading and/or decompressing the image and feeds this information back to the storage node; the storage node then decides, based on that time, whether to create a corresponding virtual disk for the application image and acts accordingly. In another implementation, when the computing node downloads a specified application image in the prior-art way, it determines according to a predetermined policy whether a virtual disk snapshot should be created for the image and indicates this to the storage node, which creates the virtual disk snapshot asynchronously according to that determination. In a third implementation, the storage node collects, via an application interface of the storage service system, the user's indication of whether to create a virtual disk snapshot and acts on it; the storage node may also actively collect the size of each application image, compare it with a set threshold, and create virtual disk snapshots for the application images whose size exceeds the threshold. A sketch of such a selection policy follows.
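A minimal sketch of such a selection policy is shown below; the thresholds are illustrative assumptions, not values from the patent.

    def should_build_snapshot(image_size_bytes: int,
                              reported_seconds: float | None = None,
                              user_requested: bool = False,
                              size_threshold: int = 500 * 1024 * 1024,   # assumed: 500 MiB
                              time_threshold: float = 60.0) -> bool:     # assumed: one minute
        """Decide whether the storage node should build a virtual disk snapshot:
        on explicit user request, for large images, or when the compute node
        reported a slow download/decompression."""
        if user_requested:
            return True
        if image_size_bytes > size_threshold:
            return True
        return reported_seconds is not None and reported_seconds > time_threshold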
FIG. 4a illustrates an interaction diagram of a storage node and a compute node in an embodiment of the disclosure. As shown in the figure, the method specifically comprises the following steps.
In step S401, the computing node 104 downloads the specified application image from the image repository 103.
In step S402, the compute node 104 decompresses the application image.
In step S403, the computing node 104 reads the application configuration information from the application image to start the application.
In step S404, the computing node 104 collects download and decompression time-consuming information.
In step S405, the computing node 104 transmits the download and decompression time-consumption information to the storage node 102.
In step S406, the storage node 102 determines whether to create a virtual disk snapshot for the specified application image according to the download and decompression time information, and acts accordingly.
In step S407, the storage node 102 sends the corresponding relationship data of the application image and the virtual disk snapshot to the computing node 104.
In step S408, when the computing node 104 needs to create a container again for the specified application image, it obtains the virtual disk snapshot of the specified application image through the correspondence data.
In step S409, the computing node 104 creates a virtual disk based on the virtual disk snapshot and mounts the virtual disk on the operating system.
In step S410, the computing node reads the application configuration information of the application image from the specified directory of the virtual disk to start the application in the container.
FIG. 4b illustrates an interaction diagram of a storage node and a compute node in another embodiment of the disclosure. As shown in the figure, the method specifically comprises the following steps.
In step S411, the storage node 102 receives, from the user terminal 199, an application image uploaded by the user and an indication of whether to create a virtual disk snapshot for it.
In step S412, the storage node 102 creates a virtual disk snapshot according to the instruction.
In step S413, the storage node 102 records correspondence data of the application image and the virtual disk snapshot.
In step S414, the storage node 102 transmits the correspondence data to the compute node 104.
In step S415, when the computing node 104 needs to create a container from the specified application image, it looks up the correspondence data to obtain the virtual disk snapshot of the specified application image.
In step S416, the computing node 104 creates a virtual disk according to the virtual disk snapshot, where the virtual disk stores the specified application image, and then mounts the virtual disk onto the operating system of the computing node.
In step S417, the computing node 104 reads the application configuration information from the specified directory of the virtual disk to start the application in the container.
The difference between the two embodiments above is as follows. The first is initiated by the computing node: for any application image, the first time the computing node deploys a container based on that image it downloads the image from the image repository and feeds the time consumed by downloading and decompressing back to the storage node; the storage node decides according to a predetermined rule whether to create a virtual disk snapshot for the image and acts accordingly, so that the next time the computing node deploys a container based on the same image it can create a virtual disk in the container from the snapshot and read the application configuration information needed to start the application from it. The second is initiated by the user terminal: when uploading an application image, the user indicates whether a virtual disk snapshot should be created for it and the storage node acts accordingly; later, when the computing node needs to deploy a container with that image, the virtual disk snapshot is retrieved in the container, a virtual disk is created from it, and the application configuration information needed to start the application is read from it.
The above interaction diagrams may also include various verification steps. For example, the computing node may verify the virtual disk snapshot, e.g. using a checksum recorded when the snapshot was created.
In the interaction diagrams, the computing node reads the application configuration information from the specified directory of the virtual disk; this directory may be agreed in advance between the computing node and the storage node.
It should be noted that the operations or steps described in this disclosure as being executed by the container are executed by the container process (or thread) created along with the container. The container can be understood as an object instance, and the container process created when the container is created can be regarded as a method process of that object instance.
Further, according to an embodiment of the present disclosure, a containerized elastic-computing cloud product, the Elastic Container Instance (ECI), is provided. With this product, the user only needs to provide a packaged application image; the cloud server is responsible for starting a container to run the application image, and the user pays only for the resources actually consumed while the container runs.
FIGS. 5a-5d show flow charts of an exemplary Elastic Container Instance deployment process. Referring to FIG. 5a, step S501 is executed first to create a container. During creation, step S502 determines whether there is a cache hit, i.e. whether a cached virtual disk snapshot of the application image exists; if so, sub-process P505, the ECI creation flow with an image cache, is executed, otherwise sub-processes P503 and P504, the asynchronous image-cache creation flow and the ordinary ECI creation flow, are executed. The flow of sub-process P503 is shown in FIG. 5b, that of P504 in FIG. 5c, and that of P505 in FIG. 5d; they are described in detail below with reference to FIGS. 5b-5d.
Referring to FIG. 5b, the asynchronous image-cache creation flow includes steps S5031-S5035. Step S5031 creates a virtual disk, step S5032 pulls the application image from the image repository or other storage, step S5033 copies the application image to the virtual disk, step S5034 makes a virtual disk snapshot of the populated virtual disk, and step S5035 returns the snapshot ID.
Referring to FIG. 5c, the ordinary ECI creation flow includes steps S5041-S5043 and the sub-process P503. Step S5041 pulls the application image from the image repository or other storage; step S5042 determines whether the cache is enabled. If it is not, i.e. no virtual disk snapshot needs to be constructed for the application image, only S5043 is executed: the application image is read in the ECI and the application is started. If the cache is enabled, the sub-process P503 is additionally called to make the image cache asynchronously. Note that in FIG. 5c the ECI performs two operations when caching is enabled: on one hand it reads the application image and starts the application, and on the other hand it calls sub-process P503 to make the image cache asynchronously. To do both, the ECI generally needs to run at least two threads or processes, as in the sketch below.
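The sketch below shows one way those two operations could run concurrently; pull_image, build_image_cache and start_application are placeholder stubs standing in for S5041, sub-process P503 and S5043 respectively, and "nginx:latest" is only an example image name.

    import threading
    import time

    def pull_image(name: str) -> None:          # placeholder for S5041
        time.sleep(0.1)

    def build_image_cache(name: str) -> None:   # placeholder for sub-process P503 (FIG. 5b)
        time.sleep(0.1)

    def start_application(name: str) -> None:   # placeholder for S5043
        print(f"application from {name} started")

    def create_eci(image_name: str, cache_enabled: bool) -> None:
        """Ordinary ECI creation flow (FIG. 5c): start the application on one thread
        while the image cache is built asynchronously on another."""
        pull_image(image_name)                                       # S5041
        cache_worker = None
        if cache_enabled:                                            # S5042
            cache_worker = threading.Thread(target=build_image_cache, args=(image_name,))
            cache_worker.start()                                     # P503 runs asynchronously
        start_application(image_name)                                # S5043
        if cache_worker is not None:
            cache_worker.join()

    if __name__ == "__main__":
        create_eci("nginx:latest", cache_enabled=True)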
Referring to FIG. 5d, the ECI creation flow with an image cache includes steps S5051-S5056. Step S5051 verifies the virtual disk snapshot, step S5052 creates the virtual disk from the virtual disk snapshot, step S5053 mounts the virtual disk to the virtual machine, step S5054 has the Container-agent recognize the virtual disk, step S5055 has the Container-agent switch the container image read directory to the agreed directory of the cache disk's file system, and step S5056 has the Container-agent read the application image and start the container. The Container-agent is a thread of the ECI generated when the container is created.
As will be appreciated by one skilled in the art, the present disclosure may be embodied as systems, methods and computer program products. Accordingly, the present disclosure may be embodied in the form of entirely hardware, entirely software (including firmware, resident software, micro-code), or in the form of a combination of software and hardware. Furthermore, in some embodiments, the present disclosure may also be embodied in the form of a computer program product in one or more computer-readable media having computer-readable program code embodied therein.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium is, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical memory, a magnetic memory, or any suitable combination of the foregoing. In this context, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with a processing unit, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., and any suitable combination of the foregoing.
Computer program code for carrying out embodiments of the present disclosure may be written in one or more programming languages or combinations thereof. These include object-oriented programming languages such as Java or C++, and may also include conventional procedural programming languages such as C. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Embodiments of the present disclosure are of great benefit to cloud environments in which computing and storage are logically separated. By leveraging the strong backup and storage capability of the storage service system to cache data for the computing service system, the start-up time of various container products can be greatly reduced, which improves the user experience and increases the core competitiveness of the container products. For example, the solution of the embodiments of the present disclosure can be applied to a commercial elastic container product to enhance its core competitiveness.
It should be understood that the above-described are only preferred embodiments of the present disclosure, and are not intended to limit the present disclosure, since many variations of the embodiments described herein will occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
It should be understood that the embodiments in this specification are described in a progressive manner, and the same or similar parts of the various embodiments may be referred to one another; each embodiment focuses on what differs from the others. In particular, since the method embodiments are substantially similar to what is described in the apparatus and system embodiments, their description is brief, and the relevant points can be found in the descriptions of the other embodiments.
It should be understood that the above description describes particular embodiments of the present specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
It is to be understood that the terms and expressions employed herein are used as terms of description and not of limitation, and that the embodiment or embodiments of the specification are not intended to be limited to the terms and expressions. The use of such terms and expressions is not intended to exclude any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications may be made within the scope of the claims. Other modifications, variations, and alternatives are also possible. Accordingly, the claims should be looked to in order to cover all such equivalents.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (21)

1. A method for deploying a container is applied to a system comprising a computing node and a storage node, wherein the storage node constructs a virtual disk snapshot based on an application image, the computing node creates the container, and the container performs the following operations:
determining whether a corresponding virtual disk snapshot exists according to the specified application image;
if the virtual disk snapshot exists, a virtual disk is constructed based on the corresponding virtual disk snapshot, the virtual disk is mounted on the computing node, and application configuration information is read from the specified directory of the virtual disk, so that the application is started in the container.
2. The method of claim 1, wherein the compute node is deployed with a plurality of virtual machines, the container being deployed in the virtual machines.
3. The method of claim 1, wherein the container further performs the following: when no virtual disk snapshot corresponding to the specified application image exists, downloading the specified application image from an image repository server, and reading application configuration information from the downloaded application image so as to start the application in the container.
4. The method of claim 3, wherein the container further performs the following: if the downloaded application image is compressed, decompressing it before reading the application configuration information.
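By way of illustration only, a minimal Python sketch of the fallback handling in claims 3 and 4, assuming the downloaded image arrives as a file that may be gzip-compressed; the paths, file naming, and choice of gzip are assumptions for demonstration.

# Illustrative sketch only: file layout and naming are assumptions.
import gzip
import shutil
from pathlib import Path


def prepare_downloaded_image(archive: Path, workdir: Path) -> Path:
    """If the downloaded image archive is gzip-compressed, decompress it into
    workdir; otherwise use it as-is. Returns the path from which the
    application configuration can then be read."""
    workdir.mkdir(parents=True, exist_ok=True)
    if archive.suffix == ".gz":
        target = workdir / archive.stem          # e.g. app-image.tar.gz -> app-image.tar
        with gzip.open(archive, "rb") as src, open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)
        return target
    return archive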
5. The method of claim 3, further comprising: the computing node collecting the time consumed in downloading and/or decompressing the specified application image and feeding it back to the storage node, and the storage node determining, according to the feedback information, whether to construct a virtual disk snapshot based on the specified application image.
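By way of illustration only, the following sketch shows how a computing node might measure and package the download and decompression times described in claim 5; the report format and the mechanism for delivering it to the storage node are assumptions.

# Illustrative sketch only: the report format and how it would reach the
# storage node are assumptions.
import time
from typing import Callable


def timed(step: Callable[[], None]) -> float:
    """Run a step and return the wall-clock seconds it took."""
    start = time.monotonic()
    step()
    return time.monotonic() - start


def collect_timing_report(image_ref: str,
                          download: Callable[[], None],
                          decompress: Callable[[], None]) -> dict:
    """Measure download and decompression time for one image; in a real system
    the resulting report would be fed back to the storage node."""
    return {
        "image": image_ref,
        "download_seconds": timed(download),
        "decompress_seconds": timed(decompress),
    }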
6. The method of claim 1, wherein constructing the virtual disk snapshot comprises:
creating a virtual disk;
copying one or more application images to the virtual disk; and
constructing the virtual disk snapshot for the virtual disk.
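By way of illustration only, an in-memory Python sketch of the three construction steps in claim 6, with the snapshot registry also recording the snapshot-to-image correspondence of claim 7; the VirtualDisk class, the registry, and the identifier format are hypothetical stand-ins for the storage service's own disk and snapshot facilities.

# Illustrative sketch only: VirtualDisk and the snapshot registry are in-memory
# stand-ins for the storage service's disk and snapshot facilities.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VirtualDisk:
    name: str
    contents: List[str] = field(default_factory=list)   # copied application images


SNAPSHOTS: Dict[str, List[str]] = {}                     # snapshot id -> images it holds


def build_snapshot(images: List[str], disk_name: str) -> str:
    disk = VirtualDisk(disk_name)                  # 1. create a virtual disk
    disk.contents.extend(images)                   # 2. copy one or more application images to it
    snapshot_id = f"snap-{len(SNAPSHOTS) + 1:04d}"
    SNAPSHOTS[snapshot_id] = list(disk.contents)   # 3. construct the snapshot and record
    return snapshot_id                             #    the image correspondence (claim 7)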
7. The method of claim 6, wherein the storage node stores correspondence data of the virtual disk snapshot and the application image.
8. The method of claim 1, wherein the storage node determines whether to construct a virtual disk snapshot for the application image based on one or more of the following criteria:
whether a user indication is received;
whether the size of the application image exceeds a set threshold.
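By way of illustration only, a minimal sketch of the claim-8 decision; the 500 MiB threshold is an assumed example value, not a figure given in the disclosure.

# Illustrative sketch only: the threshold is an assumed example value.
SIZE_THRESHOLD_BYTES = 500 * 1024 * 1024


def should_build_snapshot(image_size_bytes: int, user_requested: bool) -> bool:
    """Build a snapshot if the user asked for one, or if the image is large
    enough that caching it as a snapshot is likely to pay off."""
    return user_requested or image_size_bytes > SIZE_THRESHOLD_BYTES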
9. The method of claim 1, wherein mounting the virtual disk onto the computing node comprises:
reading partition information from the virtual disk snapshot;
mounting the partitions to a specified directory according to the partition information.
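By way of illustration only, a Python sketch of the mounting step in claim 9, assuming the partition information has already been turned into (partition index, target directory) pairs; the use of the standard mount command, which requires privileges on a real host, is an assumption about the environment.

# Illustrative sketch only: the partition-list format and the use of the
# standard mount(8) command are assumptions; running it requires privileges.
import subprocess
from typing import List, Tuple


def mount_partitions(device: str, partitions: List[Tuple[int, str]]) -> None:
    """partitions: (partition index, target directory) pairs derived from the
    partition information read from the virtual disk snapshot."""
    for index, target in partitions:
        # e.g. /dev/vdb1 mounted under /mnt/app
        subprocess.run(["mount", f"{device}{index}", target], check=True)

For example, mount_partitions("/dev/vdb", [(1, "/mnt/app")]) would mount the first partition of the virtual disk device under /mnt/app.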
10. The method of claim 9, wherein the container mounts the virtual disk under a specified directory of a virtual machine operating system or under a specified directory of an operating system of a physical host.
11. The method of claim 1, wherein the computing node further performs the following: unmounting the virtual disk.
12. A cloud service system, comprising a storage node and a computing node, wherein the storage node constructs a virtual disk snapshot based on an application image and the computing node creates a container, the container performing the following operations:
determining, according to a specified application image, whether a corresponding virtual disk snapshot exists;
if the virtual disk snapshot exists, constructing a virtual disk based on the corresponding virtual disk snapshot, mounting the virtual disk on the computing node, and reading application configuration information from a specified directory of the virtual disk so as to start the application in the container.
13. The cloud service system of claim 12, wherein the compute nodes are deployed with a plurality of virtual machines, the containers being deployed in the virtual machines.
14. The cloud service system of claim 12, wherein the container further performs the following: when the corresponding virtual disk snapshot does not exist, downloading the specified application image from the image repository server, and reading application configuration information from the downloaded application image so as to start the application in the container.
15. The cloud service system of claim 12, wherein constructing the virtual disk snapshot comprises:
creating a virtual disk;
copying one or more application images to the virtual disk; and
constructing the virtual disk snapshot by using the snapshot system of the storage node.
16. The cloud service system of claim 12, wherein the storage node determines whether to construct a virtual disk snapshot for the application image based on one or more of the following criteria:
whether a user indication is received;
whether the size of the application image exceeds a set threshold.
17. The cloud service system of claim 12, wherein mounting the virtual disk onto the computing node comprises:
reading partition information from the virtual disk snapshot;
mounting the partitions to a specified directory according to the partition information.
18. The cloud service system of claim 17, wherein the container mounts the corresponding virtual disk under a specified directory of a virtual machine operating system or under a specified directory of an operating system of a physical host.
19. The cloud service system of claim 12, wherein the container further performs the following: when no virtual disk snapshot corresponding to the specified application image exists, downloading the specified application image from an image repository server, and reading application configuration information from the downloaded application image so as to start the application in the container.
20. The cloud service system of claim 19, wherein the computing node collects the time consumed in downloading and/or decompressing the specified application image and feeds it back to the storage node, and the storage node determines, according to the feedback information, whether to construct the virtual disk snapshot based on the specified application image.
21. A computer readable medium having stored thereon computer instructions which, when executed, implement the method of any one of claims 1 to 11.
CN202010240992.0A 2020-03-31 2020-03-31 Method and system for deploying containers Pending CN113467882A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010240992.0A CN113467882A (en) 2020-03-31 2020-03-31 Method and system for deploying containers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010240992.0A CN113467882A (en) 2020-03-31 2020-03-31 Method and system for deploying containers

Publications (1)

Publication Number Publication Date
CN113467882A true CN113467882A (en) 2021-10-01

Family

ID=77865136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010240992.0A Pending CN113467882A (en) 2020-03-31 2020-03-31 Method and system for deploying containers

Country Status (1)

Country Link
CN (1) CN113467882A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115061701A (en) * 2022-08-16 2022-09-16 新华三信息技术有限公司 Out-of-band installation method and device for server
CN115665172A (en) * 2022-10-31 2023-01-31 北京凯思昊鹏软件工程技术有限公司 Management system and management method of embedded terminal equipment
CN117493271A (en) * 2023-12-28 2024-02-02 苏州元脑智能科技有限公司 Container node management device, method, equipment and medium of cloud operating system
CN117493271B (en) * 2023-12-28 2024-03-01 苏州元脑智能科技有限公司 Container node management device, method, equipment and medium of cloud operating system

Similar Documents

Publication Publication Date Title
US11567755B2 (en) Integration of containers with external elements
US10908892B2 (en) Generating and deploying object code files compiled on build machines
US10379967B2 (en) Live rollback for a computing environment
US9495193B2 (en) Monitoring hypervisor and provisioned instances of hosted virtual machines using monitoring templates
US10083092B2 (en) Block level backup of virtual machines for file name level based file search and restoration
CN112099918A (en) Live migration of clusters in containerized environments
US10007533B2 (en) Virtual machine migration
US11243758B2 (en) Cognitively determining updates for container based solutions
US9886284B2 (en) Identification of bootable devices
US10585760B2 (en) File name level based file search and restoration from block level backups of virtual machines
US20220129355A1 (en) Creation of virtual machine packages using incremental state updates
CN113467882A (en) Method and system for deploying containers
US11334372B2 (en) Distributed job manager for stateful microservices
US20180232249A1 (en) Virtual machine migration between software defined storage systems
US11768740B2 (en) Restoring operation of data storage systems at disaster recovery sites
CN113296891B (en) Platform-based multi-scene knowledge graph processing method and device
CN115336237A (en) Predictive provisioning of remotely stored files
US11262932B2 (en) Host-aware discovery and backup configuration for storage assets within a data protection environment
US11226743B2 (en) Predicting and preventing events in a storage system using copy services
US20210157507A1 (en) Storage alteration monitoring
JP6700848B2 (en) Management system, control method
US11650809B2 (en) Autonomous and optimized cloning, reinstating, and archiving of an application in a containerized platform
US11410082B2 (en) Data loss machine learning model update
US20220342686A1 (en) Virtual machine file management using file-level snapshots
US11204940B2 (en) Data replication conflict processing after structural changes to a database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40063923

Country of ref document: HK