WO2022052898A1 - A computer system, container management method, and apparatus (一种计算机系统、容器管理方法及装置)

Info

Publication number: WO2022052898A1
Authority: WO (WIPO/PCT)
Prior art keywords: container, virtual device, network, computing node, node
Application number: PCT/CN2021/116842
Other languages: English (en), French (fr)
Inventors: 陈现, 冯绍宝, 张永明
Original Assignee: 华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Priority claimed from CN202011618590.6A
Application filed by 华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Priority to EP21865952.2A (published as EP4202668A4)
Publication of WO2022052898A1
Priority to US18/179,644 (published as US20230205505A1)

Classifications

    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533: Hypervisors; Virtual machine monitors
    • G06F 8/63: Image based installation; Cloning; Build to order
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5055: Allocation of resources to service a request, the resource being a machine, considering software capabilities, i.e. software resources associated or available to the machine
    • G06F 9/5077: Logical partitioning of resources; Management or configuration of virtualized resources

Definitions

  • The present application relates to the field of cloud computing, and in particular, to a computer system, a container management method, and a container management apparatus.
  • Existing container management architectures are mostly built on the infrastructure-as-a-service (IaaS) layer: service proxy functions related to the IaaS layer and container management components are deployed on computing nodes to manage the containers running there.
  • However, the container management components deployed on a computing node occupy resources on that node, causing resource consumption on the computing node.
  • The present application provides a computer system, a container management method, and an apparatus, which are used to reduce the resource consumption incurred when implementing container management on a computing node.
  • An embodiment of the present application provides a container management method.
  • The method is applied to an offload card and can be executed by the offload card.
  • The offload card is inserted into a computing node, a communication channel is established between the offload card and the computing node, and the offload card is connected to the container cluster management node (also referred to as the management node for short) through a network.
  • The management node can send a container creation request to the offload card.
  • After receiving the container creation request, the offload card can obtain a container image according to the container creation request.
  • For example, the offload card can obtain the container image from a container image repository and save it on a storage resource that the offload card can access.
  • The storage resource can be the local storage of the offload card, or storage connected to the offload card.
  • After obtaining the container image, the offload card can notify the computing node through the communication channel to create a container on the computing node according to the container image.
  • In this way, the computing node no longer needs to interact directly with the management node; that is, the computing node no longer needs to manage the container. Instead, the offload card inserted in the computing node creates and manages the container, so the computing node no longer needs to consume resources to support the container management function, which improves the resource utilization of the computing node.
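To make the division of labor concrete, the following Go sketch shows the flow just described from the offload card's point of view: receive a container creation request from the management node, pull the image onto storage the offload card can access, and notify the computing node over the communication channel. It is a minimal sketch, not the patent's implementation: the HTTP transport, the JSON request shape, and the names CreateRequest, pullImage, and notifyComputeNode are all illustrative assumptions, and the pull and notify steps are stubbed.

```go
// Hypothetical offload-card control loop for the first-aspect flow.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// CreateRequest is an assumed shape for the management node's request.
type CreateRequest struct {
	ContainerID string `json:"container_id"`
	Image       string `json:"image"` // reference into the container image repository
}

// pullImage would fetch the image from the repository onto storage the
// offload card can access (its local storage or attached storage).
func pullImage(ref string) (path string, err error) {
	// ... fetch layers and assemble the image on offload-card storage ...
	return "/var/lib/offload/images/" + ref, nil
}

// notifyComputeNode would tell the computing node, over the communication
// channel, to create the container from the image-backed virtual device.
func notifyComputeNode(containerID, imagePath string) error {
	// ... send a message on the offload-card/computing-node channel ...
	return nil
}

func main() {
	// The offload card listens for requests from the container cluster
	// management node over the network.
	http.HandleFunc("/create", func(w http.ResponseWriter, r *http.Request) {
		var req CreateRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		path, err := pullImage(req.Image)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if err := notifyComputeNode(req.ContainerID, path); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```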
  • The offload card may create a virtual device when notifying the computing node to create a container according to the container image.
  • The virtual device here is referred to as the first virtual device.
  • The offload card can associate the container image with the first virtual device, notify the computing node through the communication channel to create the container running environment, and mount the first virtual device to the root directory of the container.
  • In this way, the offload card provides the container image to the computing node in the form of a virtual device, which ensures that the computing node can use the container image to create a container; this container creation method is relatively simple and convenient.
  • The offload card may also be connected through the network to a storage service node on which a storage service is deployed, and the offload card may provide storage resources to the container on the computing node.
  • Specifically, the offload card first applies to the storage service node for storage resources; then it sets up a virtual device according to the storage resources.
  • The virtual device here is called the second virtual device. After the second virtual device is set up, the offload card can mount the second virtual device to a directory of the container through the communication channel.
  • The storage resource of the container is thus provided to the computing node in the form of the second virtual device, so that the container on the computing node can access the storage resource through the second virtual device and store data on it. This makes it possible for the offload card to provide storage resources for the container on the computing node, further reducing the resource consumption on the computing node.
  • When the offload card sets up the second virtual device according to the storage resource, the second virtual device may be created first, and after it is created, the storage resource may be associated with it.
  • In this way, the offload card can create a virtual device locally and provide it to the container on the computing node, so that the container can obtain storage resources.
  • The storage resource may be an object storage resource or a block storage resource.
  • For a file storage resource, the offload card can directly provide it to the computing node in the form of a network file system and notify the computing node to mount the network file system to the directory of the container; that is, the file storage resource need not be associated with the second virtual device.
  • In this way, the offload card can provide different types of storage resources to the container on the computing node, which suits object storage, file storage, and block storage scenarios and effectively expands the scope of application, as sketched below.
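The following sketch, under the same illustrative assumptions (all names hypothetical, stubs in place of real provisioning), shows the branching the bullets above describe: object and block storage resources are associated with a second virtual device that the computing node mounts, while a file storage resource can bypass the virtual device and be exported directly as a network file system.

```go
// Hypothetical storage-provisioning choice on the offload card.
package main

import "fmt"

type StorageType int

const (
	BlockStorage StorageType = iota
	ObjectStorage
	FileStorage
)

// provisionStorage applies for a storage resource of the given type and
// presents it to the container on the computing node.
func provisionStorage(t StorageType, containerDir string) error {
	switch t {
	case BlockStorage, ObjectStorage:
		// Apply to the storage service node, create the second virtual
		// device, associate the resource with it, then ask the computing
		// node to mount the device to the container's directory.
		dev := createVirtualDevice() // e.g. an SR-IOV VF on the offload card
		associateResource(dev, t)
		return notifyMountDevice(dev, containerDir)
	case FileStorage:
		// File storage can skip the virtual device: export it as a network
		// file system and have the computing node mount that instead.
		export := "nfs.example.internal:/exports/container" // placeholder
		return notifyMountNFS(export, containerDir)
	}
	return fmt.Errorf("unknown storage type %d", t)
}

// Stubs standing in for real offload-card operations.
func createVirtualDevice() string                 { return "vf-2" }
func associateResource(dev string, t StorageType) {}
func notifyMountDevice(dev, dir string) error     { return nil }
func notifyMountNFS(export, dir string) error     { return nil }

func main() {
	_ = provisionStorage(FileStorage, "/containers/c1/storage")
}
```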
  • When the offload card mounts the second virtual device to the directory of the container through the communication channel, different mounting methods may be used for different types of containers.
  • If the container is an ordinary container, the offload card can directly mount the second virtual device to a directory (for example, a storage directory) of the container through the communication channel.
  • If the container is a secure container, the offload card passes the second virtual device through the communication channel to the secure-container virtual machine used to deploy the container, and the secure-container virtual machine mounts the second virtual device to the directory of the container.
  • The offload card can be connected to a network service node through a network, and the offload card can provide not only storage resources but also network resources to the container on the computing node.
  • Specifically, the offload card can first apply to the network service node for network resources; after applying, the offload card can set up a virtual device according to the network resources, referred to here as the third virtual device. After the third virtual device is set up, the offload card can set it in the container through the communication channel.
  • The network resources of the container are thus provided to the computing node in the form of the third virtual device, so that the container on the computing node can obtain network resources through the third virtual device and thereby gain network capabilities.
  • This makes it possible for the offload card to provide network resources for the container on the computing node, ensures that the offload card can realize the container management function, and further reduces the resource consumption on the computing node.
  • When setting up the third virtual device according to the network resource, the offload card may first create the third virtual device; after it is created, the network resource may be associated with it.
  • In this way, the offload card can create a virtual device locally and provide it to the container on the computing node, so that the container can obtain network resources and have network capabilities.
  • The offload card can also set network processing rules for the third virtual device. The network processing rules include some or all of the following: load balancing policy, security group policy, quality of service (QoS), routing rules, and address mapping rules.
  • For example, the security group policy may include access control lists (ACLs).
  • The address mapping rules include network address translation (NAT) and full NAT, where NAT includes but is not limited to destination network address translation (DNAT), source network address translation (SNAT), and port network address translation (PNAT).
  • Through these rules, the container can have service discovery capabilities, network policy capabilities, and so on, giving the container strong network capabilities.
  • When the offload card sets the third virtual device in the container through the communication channel, different setting methods may be adopted for different types of containers. If the container is an ordinary container, the offload card can add the third virtual device to the namespace of the container through the communication channel. If the container is a secure container, the offload card can directly pass the third virtual device through the communication channel to the secure-container virtual machine used to deploy the container, as sketched below.
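A sketch of the two attachment paths for the third virtual device. For an ordinary container the interface is moved into the container's network namespace, shown here by shelling out to the standard Linux ip tool; for a secure container the device would instead be passed through to the secure-container virtual machine, which is hypervisor-specific and therefore stubbed. The interface name and PID are placeholders.

```go
// Hypothetical attachment of the third virtual device to a container.
package main

import (
	"fmt"
	"os/exec"
)

// attachToContainerNetns moves a network interface into the network
// namespace of the container's init process (identified by pid).
func attachToContainerNetns(ifname string, pid int) error {
	cmd := exec.Command("ip", "link", "set", ifname, "netns", fmt.Sprint(pid))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("moving %s to netns of pid %d: %v: %s", ifname, pid, err, out)
	}
	return nil
}

// passthroughToSecureVM would hand the VF device to the secure-container
// virtual machine, e.g. via hypervisor-managed PCI device assignment.
func passthroughToSecureVM(pciAddr, vmID string) error {
	// ... hypervisor-specific device passthrough ...
	return nil
}

func main() {
	// Ordinary-container path; 4321 is a placeholder container PID.
	if err := attachToContainerNetns("eth1", 4321); err != nil {
		fmt.Println(err)
	}
}
```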
  • The communication channel may be a peripheral component interconnect express (PCIe) channel.
  • The offload card can efficiently exchange information with the computing node through the PCIe channel, which further ensures that the offload card can manage the containers on the computing node.
  • An embodiment of the present application further provides a container management apparatus. The container management apparatus is located in the offload card and has the function of implementing the offload-card behavior in the method example of the first aspect.
  • The functions can be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more units corresponding to the above functions.
  • In one structure, the apparatus includes a transmission unit, an acquisition unit, a notification unit, and, optionally, a first setting unit and a second setting unit. These units can perform the corresponding functions of the method in the first aspect above; for details, refer to the detailed description in the method example, which will not be repeated here.
  • An embodiment of the present application also provides a device, which may be an offload card, having the function of implementing the offload-card behavior in the method example of the first aspect.
  • In one structure, the device includes a processor and a memory, and the processor is configured to support the offload card in performing the corresponding functions of the method of the first aspect.
  • The memory is coupled to the processor and holds the program instructions and data necessary for the device.
  • The structure of the device further includes a communication interface for communicating with other devices.
  • An embodiment of the present application further provides a computer system; for the beneficial effects, refer to the relevant description of the first aspect, which will not be repeated here.
  • The computer system includes an offload card and a computing node. The offload card is inserted in the computing node, a communication channel is established between the offload card and the computing node, and the offload card is also connected to the container cluster management node through a network.
  • The offload card is used to receive the container creation request sent by the container cluster management node and to obtain a container image according to the container creation request.
  • The computing node is used to obtain the container image through the communication channel and create a container based on the container image.
  • The offload card may create a first virtual device, associate the container image with the first virtual device, and provide the first virtual device to the computing node through the communication channel.
  • The computing node can obtain the first virtual device through the communication channel and, after obtaining it, create the container running environment and mount the first virtual device to the root directory of the container.
  • The offload card can also be connected to the storage service node through a network, and the offload card and the computing node can cooperate to configure storage resources for the container on the computing node.
  • The offload card can first apply to the storage service node for storage resources; after applying, it can set up a second virtual device according to the storage resources and provide the second virtual device to the computing node through the communication channel. After acquiring the second virtual device through the communication channel, the computing node may mount it to the directory of the container.
  • When the offload card sets up the second virtual device according to the storage resource, the second virtual device may be created first; after it is created, the storage resource may be associated with it.
  • The storage resource may be an object storage resource or a block storage resource.
  • For a file storage resource, the offload card can directly provide it to the computing node in the form of a network file system and notify the computing node to mount the network file system to the directory of the container; that is, the file storage resource need not be associated with the second virtual device.
  • The computing node can mount the network file system to the container's directory as notified by the offload card.
  • When the computing node mounts the second virtual device to the directory of the container, different mounting methods may be used for different types of containers.
  • For an ordinary container (that is, a container different from a secure container), the computing node can directly mount the second virtual device to the directory of the container. For a secure container, the computing node can directly pass the second virtual device through to the secure-container virtual machine used to deploy the container, and the secure-container virtual machine mounts the second virtual device to the directory of the container.
  • The offload card is further connected to the network service node through a network, and the offload card cooperates with the computing node to configure network resources for the container on the computing node.
  • The offload card can first apply to the network service node for network resources; after applying, a third virtual device can be set up according to the network resources and provided to the computing node through the communication channel.
  • The computing node can obtain the third virtual device through the communication channel and set the third virtual device in the container.
  • When the offload card sets up the third virtual device according to the network resource, the third virtual device may be created, and the network resource may then be associated with it.
  • When setting up the third virtual device according to the network resources, the offload card may also set network processing rules for the third virtual device. The network processing rules include some or all of the following: load balancing policy, security group policy, routing rules, address mapping rules, and quality of service (QoS).
  • When the computing node sets the third virtual device in the container, different methods may be used for different types of containers.
  • For an ordinary container, the computing node can add the third virtual device to the container's namespace.
  • For a secure container, the computing node passes the third virtual device through to the secure-container virtual machine used to deploy the container.
  • The communication channel is a peripheral component interconnect express (PCIe) channel.
  • An embodiment of the present application further provides a container management method, executed cooperatively by an offload card and a computing node; for the beneficial effects, refer to the relevant description of the first aspect, which will not be repeated here.
  • The offload card is inserted in the computing node, a communication channel is established between the offload card and the computing node, and the offload card is also connected to the container cluster management node through the network.
  • The offload card receives the container creation request sent by the container cluster management node and obtains a container image according to the container creation request.
  • The computing node obtains the container image through the communication channel and creates a container based on the container image.
  • The offload card can create a first virtual device, associate the container image with the first virtual device, and provide the first virtual device to the computing node through the communication channel.
  • The computing node can acquire the first virtual device through the communication channel, create the container running environment, and mount the first virtual device to the root directory of the container.
  • The offload card is further connected to the storage service node through the network, and the offload card and the computing node can configure storage resources for the container.
  • The offload card may first apply to the storage service node for storage resources and then set up the second virtual device according to the storage resources. After the second virtual device is set up, it may be provided to the computing node through the communication channel.
  • The computing node may acquire the second virtual device through the communication channel and, after acquiring it, mount the second virtual device to the directory of the container.
  • When setting up the second virtual device according to the storage resource, the offload card may first create the second virtual device and then associate the storage resource with it.
  • The storage resource may be an object storage resource or a block storage resource.
  • For a file storage resource, the offload card can directly provide it to the computing node in the form of a network file system and notify the computing node to mount the network file system to the directory of the container.
  • In this case, the file storage resource need not be associated with the second virtual device.
  • When the computing node mounts the second virtual device to the directory of the container, for an ordinary container (that is, a container different from a secure container), the computing node can directly mount the second virtual device to the container's directory.
  • For a secure container, the computing node may directly pass the second virtual device through to the secure-container virtual machine used to deploy the container, and the secure-container virtual machine mounts the second virtual device to the directory of the container.
  • The offload card is further connected to the network service node through the network, and the offload card can also cooperate with the computing node to configure network resources for the container, so that the container has network capabilities.
  • The offload card can first apply to the network service node for network resources, then set up the third virtual device according to the network resources, and provide the third virtual device to the computing node through the communication channel.
  • The computing node can obtain the third virtual device through the communication channel and set the third virtual device in the container.
  • When setting up the third virtual device according to the network resources, the offload card may also set network processing rules for the third virtual device. The network processing rules include some or all of the following: load balancing policy, security group policy, routing rules, address mapping rules, and quality of service.
  • When the computing node sets the third virtual device in the container, for an ordinary container, the third virtual device may be added to the namespace of the container; for a secure container, the computing node passes the third virtual device through to the secure-container virtual machine used to deploy the container.
  • The communication channel is a peripheral component interconnect express (PCIe) channel.
  • The present application further provides a computer-readable storage medium in which instructions are stored. When the instructions are run on a computer, the computer executes the method described in the first aspect and each possible implementation of the first aspect, or the method described in the fifth aspect and each possible implementation of the fifth aspect.
  • The present application further provides a computer program product comprising instructions which, when run on a computer, cause the computer to execute the method described in the first aspect and each possible implementation of the first aspect, or the method described in the fifth aspect and each possible implementation of the fifth aspect.
  • The present application further provides a computer chip. The chip is connected to a memory and is used to read and execute a software program stored in the memory, performing the method described in the first aspect and each possible implementation of the first aspect, or the method described in the fifth aspect and each possible implementation of the fifth aspect.
  • FIG. 1 is a schematic diagram of the architecture of a system provided by the application.
  • FIG. 2 is a schematic diagram of the architecture of another system provided by the application.
  • FIG. 3 is a schematic diagram of a method for creating a container according to the present application.
  • FIG. 4 is a flow chart of container creation provided by the application.
  • FIG. 5 is a schematic diagram of a method for deleting a container provided by the present application.
  • FIG. 6 is a schematic diagram of a method for configuring container storage resources provided by the present application.
  • FIG. 7 is a schematic diagram of a method for configuring container network resources provided by the present application.
  • FIG. 8 is a schematic structural diagram of a container management device provided by the present application.
  • FIG. 9 is a schematic structural diagram of a device provided by the present application.
  • FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application
  • The system includes a container management cluster 100 and a computing node cluster 200.
  • The container management cluster 100 sits between the user and the computing node cluster 200 and can interact with both.
  • the user interacts with the container management cluster 100 to manage the containers on the computing nodes 210 rented or owned by the user.
  • the management here includes but is not limited to: creating containers, deleting containers, and querying containers.
  • The container management cluster 100 may include one or more management nodes 110, and each management node 110 can manage containers on one or more computing nodes 210 in the computing node cluster 200.
  • The embodiment of the present application does not limit the location where the management node 110 is deployed or the specific form of the management node 110.
  • For example, the management node 110 may be a computing device deployed in a cloud computing device system or an edge computing device system, or a terminal computing device close to the user side.
  • Different management nodes 110 may be deployed in the same system or in different systems.
  • For example, when there are multiple management nodes 110, they may all be deployed in a cloud computing device system or an edge computing system, or they may be deployed in a distributed manner across cloud computing device systems, edge computing systems, and terminal computing devices.
  • The computing node cluster 200 includes one or more computing nodes 210, and an offload card 220 may be inserted in each computing node 210.
  • The embodiment of the present application does not limit the architecture type of the computing node 210; the computing node 210 may be of the X86 architecture or of the ARM architecture.
  • The offload card 220 inserted in the computing node 210 is a hardware device with a certain data processing capability. The offload card 220 may include components such as a processor, a memory, a hardware acceleration device, and a network card, or the offload card 220 may be connected to a network card.
  • The offload card 220 can interact with the management node 110 and, according to instructions issued by the management node 110, notify the computing node 210 where it is located to create containers and otherwise manage the containers on that computing node 210.
  • The management here includes but is not limited to: container creation, container deletion, and container query.
  • The user can send a container creation request to the management node 110 in the container management cluster 100 through a client.
  • The container creation request can carry resource configuration information of the container, which indicates the resources required by the container.
  • After receiving the request, the management node 110 can record the resource configuration information of the container locally, select a target computing node for the container according to the resource status of the one or more computing nodes 210 under its management, and then schedule the container to the target computing node.
  • The offload card 220 inserted in the target computing node monitors the scheduling operation of the management node 110; when it detects that the management node 110 has scheduled a container to the target computing node, the offload card 220 prepares the corresponding resources for the container and notifies the computing node 210 where it is located to use those resources to create the container.
  • The target computing node creates the container as notified by the offload card 220.
  • When the offload card 220 detects that the container has been created, it may report the status information of the container (including but not limited to the running state of the container and the resource usage of the container) to the management node 110, and the management node 110 can display the status information of the container to the user through the client. The user can also query the status information of the container through the client.
  • Deleting a container and querying a container are similar to the above process, except that the information exchanged among the user, the container management cluster 100, and the computing node cluster 200 differs; for details, refer to the foregoing description, which will not be repeated here.
  • In this architecture, the function of container management is offloaded to the offload card 220, and the offload card 220 implements container management for the computing node 210.
  • The computing node 210 only needs to run the container and no longer carries the container management function. This reduces the resources occupied on the computing node 210 to implement container management, so that the resources on the computing node 210 can be used effectively.
  • The offload card 220 can be connected to, and interact with, one or more management nodes 110 in the container management cluster 100.
  • The offload card 220 can also interact with the computing node 210 where it is located.
  • The offload card 220 can also interact with the virtual network service node 300 (also referred to as a network service node), to which it is connected through the network.
  • A virtual network service is deployed on the virtual network service node 300.
  • The virtual network service node 300 can provide the virtual network service for the computing node 210 and the containers on the computing node 210.
  • The virtual network service is an external service that containers rely on; it can provide network resources for containers so that containers on different computing nodes 210 can communicate with each other over the network, giving the containers network capabilities.
  • The offload card 220 can also interact with the storage service node 400, to which it is connected through the network.
  • The storage service node 400 can be deployed with storage services such as a block storage service, a file storage service, or an object storage service. Block storage services, file storage services, and object storage services all belong to distributed storage services, in which storage resources can be deployed on different storage nodes in a distributed manner. The storage service node 400 can provide storage resources for the computing node 210 and the containers on the computing node 210, so that data in the computing node 210 or in its containers can be stored on the storage nodes.
  • A network proxy module 221 and a storage agent module 222 can be deployed in the offload card 220.
  • The network proxy module 221 is used to interact with the virtual network service node 300 and apply to it for network resources for the containers on the computing node 210.
  • The storage agent module 222 is configured to interact with the storage service node 400 and apply to it for storage resources for the containers on the computing node 210.
  • The offload card 220 can manage containers on the computing node 210 under the instruction of the management node 110.
  • The offload card 220 includes a management agent module 223, a container runtime module 224, a container storage module 225, and a container network module 226.
  • The management agent module 223 is the central coordinator of container management on the offload card 220.
  • The management agent module 223 can trigger the container runtime module 224 to prepare a container image and a running environment for the container, trigger the container storage module 225 to prepare storage resources for the container, and trigger the container network module 226 to prepare network resources for the container.
  • The management agent module 223 may also interact with the management node 110 to report the status information of the containers on the computing node 210 and the resource status of the computing node 210 to the management node 110.
  • For example, the management agent module 223 can communicate with the computing node 210 (such as the front-end proxy module 211 on the computing node 210), obtain the resource status of the computing node 210 through the front-end proxy module 211, and then report that resource status to the management node 110.
  • When the management node 110 needs to schedule a container, it can use the resource status of the computing node 210 as a reference to determine the target computing node to which the container should be scheduled.
  • The container runtime module 224 can create a container image for the container and build its running environment.
  • The embodiment of the present application does not limit the specific representation of the container image.
  • The container runtime module 224 acquires the container image from the container image repository and loads the container image onto storage that the offload card 220 (and thus the container runtime module 224) can access.
  • The storage resource may be a local storage resource of the offload card 220, or a storage resource connected to the offload card 220, such as a disk.
  • The container runtime module 224 presents the container image to the computing node 210 in the form of a virtual function (VF) device (such as the first VF device in the embodiment of the present application); the virtual device may be presented to the computing node 210 as a single-root I/O virtualization (SR-IOV) device.
  • The protocol supported by the SR-IOV device is not limited here; it can be virtio-blk, virtio-scsi, virtio-fs, virtio-9p, etc.
  • The container in this embodiment of the present application may be an ordinary container, that is, a container that does not require high security and does not need to be strongly isolated from other containers; the container runtime module 224 can build a lightweight isolation environment in which to run such a container.
  • Building the lightweight isolation environment requires configuring namespaces and control groups (cgroups), among other things.
  • A namespace is used to isolate the resources required by the container, such as inter-process communication (IPC), network resources, and file systems.
  • Through namespaces, the resources required by the container are separated from those required by other containers, achieving the effect of exclusive use of resources.
  • Cgroups are used to limit the resources isolated by namespaces; for example, weights (representing priorities) can be set for these resources, resource usage limits can be configured, and so on, as sketched below.
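A minimal, Linux-only sketch of this lightweight isolation environment: a process is started in fresh mount, IPC, network, PID, and UTS namespaces, then placed in a cgroup with a memory limit. The cgroup v1 path layout is an assumption, root privileges are required, and a real container runtime does considerably more.

```go
// Sketch: namespaces isolate the process; a cgroup limits its resources.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// New namespaces isolate the process's view of mounts, IPC, network,
	// PIDs, and hostname, separating its resources from other containers.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWNS | syscall.CLONE_NEWIPC |
			syscall.CLONE_NEWNET | syscall.CLONE_NEWPID | syscall.CLONE_NEWUTS,
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// A cgroup then limits what the namespaced process may consume; here a
	// 256 MiB memory cap (cgroup v1 paths, an assumption for illustration).
	cg := "/sys/fs/cgroup/memory/demo-container"
	if err := os.MkdirAll(cg, 0755); err != nil {
		log.Fatal(err)
	}
	limit := []byte(strconv.Itoa(256 * 1024 * 1024))
	if err := os.WriteFile(filepath.Join(cg, "memory.limit_in_bytes"), limit, 0644); err != nil {
		log.Fatal(err)
	}
	pid := []byte(strconv.Itoa(cmd.Process.Pid))
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0644); err != nil {
		log.Fatal(err)
	}
	_ = cmd.Wait()
}
```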
  • The container can also be a secure container. Compared with ordinary containers, secure containers have higher security requirements and need to be isolated from other containers.
  • The container runtime module 224 can, through the front-end proxy module 211, first build a secure-container virtual machine dedicated to the secure container, and then notify the front-end proxy module 211 to create the container inside that virtual machine, thereby obtaining a container with strong security isolation, that is, a secure container. The secure-container virtual machine is a virtual machine used specifically to deploy secure containers.
  • The container network module 226 is used to prepare network resources for the containers on the computing node 210. Through these network resources, the containers on the computing node 210 can interconnect with other containers, giving them network capabilities, which can include network interoperability between containers, service discovery capabilities, and network policy control capabilities.
  • Specifically, the container network module 226 applies to the virtual network service node 300 for network resources through the network proxy module 221; the network resources may include network port resources and other network resources. It then establishes an association between the requested network resources and a virtual device (such as the fourth VF device in the embodiment of the present application) and presents the network resources to the computing node 210 in the form of that virtual device for use by the container; the containers on the computing node 210 can interact with other containers through this virtual device.
  • The container storage module 225 can prepare storage resources, such as block storage resources, file storage resources, and object storage resources, for the containers on the computing node 210.
  • Specifically, the container storage module 225 can interact with the storage service node 400 through the storage agent module 222 in the offload card 220 to apply for storage resources (such as block storage resources and object storage resources), mount the storage resources to the offload card 220, and then present the storage resources to the containers on the computing node 210 through virtual devices (such as the second VF device and the third VF device in the embodiment of the present application).
  • The offload card 220 and the computing node 210 may communicate through an interconnection protocol, which may be the peripheral component interconnect express (PCIe) protocol; that is, the communication channel between the offload card 220 and the computing node 210 is a PCIe channel.
  • The embodiment of the present application does not limit the specific form in which the offload card 220 communicates with the computing node 210 through the PCIe channel.
  • For example, the offload card 220 may present itself to the computing node 210 as a PCIe-based network card that supports a network protocol stack, or it may communicate with the computing node 210 as a virtio-vsock device based on the PCIe protocol and the virtio architecture.
  • Here, the communication channel between the offload card 220 and the computing node 210 is described as a PCIe channel for illustration; the embodiment of the present application does not limit the specific type of the communication channel, and any communication channel that enables communication between the offload card 220 and the computing node 210 is applicable to the embodiment of the present application.
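To illustrate what travels over such a channel, the following sketch exchanges one JSON-framed message between an "offload card" side and a "front-end proxy" side. A Unix socket stands in for the channel endpoint purely so the example is self-contained and runnable; in the architecture above the same exchange would ride on the PCIe channel, for example as a virtio-vsock device. The message fields are illustrative assumptions.

```go
// Sketch of a channel message exchange; a Unix socket stands in for the
// PCIe-based channel endpoint.
package main

import (
	"encoding/json"
	"log"
	"net"
	"os"
)

// ChannelMsg is a hypothetical message from the offload card to the
// front-end proxy on the computing node.
type ChannelMsg struct {
	Op          string `json:"op"` // e.g. "create_container", "mount_device"
	ContainerID string `json:"container_id"`
	Device      string `json:"device,omitempty"` // VF device to mount, if any
}

func main() {
	sock := "/tmp/offload-channel.sock"
	os.Remove(sock) // clear any stale endpoint from a previous run
	// Front-end proxy side: listen on the channel endpoint.
	l, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()
	go func() { // offload-card side: send a notification
		c, err := net.Dial("unix", sock)
		if err != nil {
			log.Fatal(err)
		}
		defer c.Close()
		json.NewEncoder(c).Encode(ChannelMsg{Op: "create_container", ContainerID: "c1", Device: "vf-1"})
	}()
	conn, err := l.Accept()
	if err != nil {
		log.Fatal(err)
	}
	var msg ChannelMsg
	if err := json.NewDecoder(conn).Decode(&msg); err != nil {
		log.Fatal(err)
	}
	log.Printf("front-end proxy received: %+v", msg)
}
```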
  • Container management, the configuration of storage resources, and the configuration of network resources are described below with reference to the accompanying drawings.
  • Container management includes container creation, container deletion, and so on, covering the entire life cycle of a container. The processes of container creation and container deletion are described here.
  • FIG. 3 is a schematic diagram of a container creation method provided by an embodiment of the present application; the method includes the following steps:
  • Step 301: The user sends a container creation request to the management node 110 in the container management cluster 100. The container creation request carries the resource configuration information of the container, which indicates the resources that the container needs to occupy.
  • The resources include but are not limited to: processors, memory space, storage resources, and network resources (e.g., a host network or an independent network).
  • The resource configuration information of the container can describe information such as the type and size of the resources occupied by the container.
  • For example, the container resource configuration information may indicate the number of processors and the size of the memory space; it may also indicate whether the type of the storage resource is a block storage resource, a file storage resource, or an object storage resource, and the size of the storage resource.
  • The container resource configuration information may also indicate whether the network resource is a host network (that is, a network that reuses the network of the computing node 210) or an independent network (that is, a network configured solely for the container, independent of the computing node 210).
  • It may further indicate whether the network resource needs to support the service discovery (service) capability and the network policy (network policy) capability, and the number of network ports.
  • The embodiments of the present application do not limit the manner in which the user interacts with the management node 110.
  • In one manner, the user selects or inputs the resource configuration information of the container to be created through a client deployed on the user side; after detecting this, the client sends a container creation request to the management node 110 under the user's trigger (for example, the user clicks the "Create" option on the interface provided by the client).
  • In another manner, the user interacts directly with the management node 110: the management node 110 provides the user with a container creation interface on which the user selects or inputs the configuration information of the container to be created, and the user's trigger (for example, clicking the "Create" option on the interface provided by the management node 110) generates the container creation request.
  • Step 302: After receiving the container creation request, the management node 110 schedules the container according to the container configuration information and the resource status of each managed computing node 210, and sends the container creation request to the target computing node.
  • The resource status of each managed computing node 210 may be collected in advance by the management node 110.
  • For example, the management node 110 can obtain the resource status of a computing node 210 from the management agent module 223 on its offload card 220; the resource status indicates the idle resources of the computing node 210, including but not limited to memory space, processors, and storage resources.
  • The management node 110 may actively send a resource status acquisition request to the management agent module 223 on the offload card 220, requesting the management agent module 223 to report the resource status of the computing node 210, and thereby obtain that resource status from the management agent module 223.
  • The management agent module 223 on the offload card 220 can also actively report the resource status of the computing node 210 to the management node 110; for example, after the offload card 220 starts, it can periodically report the resource status of the computing node 210 to the management node 110, as sketched below.
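A sketch of the active-reporting variant: after startup, the management agent on the offload card periodically pushes the computing node's idle resources to the management node. The endpoint URL, payload shape, and 30-second period are illustrative assumptions, and collectStatus stands in for querying the front-end proxy over the communication channel.

```go
// Hypothetical periodic resource-status reporter on the offload card.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// ResourceStatus is an assumed report payload describing idle resources.
type ResourceStatus struct {
	NodeID     string `json:"node_id"`
	FreeCPUs   int    `json:"free_cpus"`
	FreeMemMiB int    `json:"free_mem_mib"`
}

// collectStatus would query the front-end proxy on the computing node for
// its idle resources over the communication channel; values are stubbed.
func collectStatus() ResourceStatus {
	return ResourceStatus{NodeID: "node-1", FreeCPUs: 8, FreeMemMiB: 16384}
}

func main() {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		body, _ := json.Marshal(collectStatus())
		// Hypothetical management-node endpoint.
		resp, err := http.Post("http://management-node/status", "application/json", bytes.NewReader(body))
		if err != nil {
			log.Printf("report failed: %v", err)
			continue
		}
		resp.Body.Close()
	}
}
```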
  • Scheduling the container by the management node 110 means determining the target computing node on which the container is to be deployed and sending a container creation request to it. There are many ways for the management node 110 to send the container creation request; two of them are described below.
  • In the first way, the management node 110 schedules the container through the container resource database.
  • The container resource database is a database jointly maintained by the management nodes 110 in the container management cluster 100.
  • The container resource database records the relevant information of each created container (such as its resource configuration information, identification information, and status information) and the computing node 210 where the container is located; that is, the container resource database includes the correspondence between containers and computing nodes 210.
  • The management node 110 may select, from among the computing nodes 210, a computing node 210 whose idle resources can support the container as the target computing node. After determining the target computing node, the management node 110 updates the scheduling result to the container resource database, where the scheduling result indicates the target computing node on which the container is to be deployed.
  • For example, the management node 110 can write the correspondence between the container and the target computing node to the container resource database.
  • Here, the container resource database recording the container and the computing node 210 means recording the resource configuration information of the container, the identification information of the container, and the identification information of the computing node 210.
  • The specific type of the container's identification information is not limited here; for example, it may be an identifier configured for the container when the container is created, or the container name. Any method that can uniquely identify the container is applicable to the embodiments of the present application.
  • The specific type of the identification information of the computing node 210 is likewise not limited; for example, it may be the identifier of the computing node 210 in the computing node cluster 200, or the name of the computing node 210. Any method that can uniquely identify the computing node 210 is applicable to the embodiments of the present application.
  • In the second way, the management node 110 directly sends the container creation request to the target computing node (for example, to a container management module in the target computing node).
  • In this case, the management node 110 may also save the scheduling result in the container resource database.
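The scheduling decision itself can be summarized in a few lines. The sketch below picks the first managed node whose idle resources can support the container; the patent does not prescribe a placement policy, so the structures and the first-fit strategy here are illustrative only.

```go
// Hypothetical first-fit scheduling on the management node.
package main

import (
	"errors"
	"fmt"
)

type NodeStatus struct {
	ID         string
	FreeCPUs   int
	FreeMemMiB int
}

type ContainerSpec struct {
	ID     string
	CPUs   int
	MemMiB int
}

// schedule returns the first node with enough idle resources for the container.
func schedule(nodes []NodeStatus, c ContainerSpec) (string, error) {
	for _, n := range nodes {
		if n.FreeCPUs >= c.CPUs && n.FreeMemMiB >= c.MemMiB {
			return n.ID, nil
		}
	}
	return "", errors.New("no computing node can support the container")
}

func main() {
	nodes := []NodeStatus{{"node-1", 2, 4096}, {"node-2", 16, 65536}}
	target, err := schedule(nodes, ContainerSpec{ID: "c1", CPUs: 4, MemMiB: 8192})
	if err != nil {
		fmt.Println(err)
		return
	}
	// Record the container-to-node mapping in the container resource
	// database, which the offload card on the target node monitors.
	fmt.Printf("schedule container c1 to %s\n", target)
}
```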
  • After detecting that the container has been scheduled to the target computing node, the management agent module 223 can start to create the container.
  • Creating a container mainly includes two operations: one is to configure the container image (refer to step 303), and the other is to prepare the container running environment (refer to step 304).
  • Step 303: The management agent module 223 in the offload card 220 triggers the container runtime module 224 to create a container image for the container and provides the container image to the target computing node through the first VF device.
  • The container image is a collection of the configuration files and tool libraries required for the container to run, such as the required library files, system configuration files, and system tools.
  • The management agent module 223 can detect the scheduling operation of the management node 110 in the following two ways.
  • In the first way, the management agent module 223 monitors the container resource database in real time and determines, from the information newly added to the database, whether a new container needs to be deployed on the target computing node.
  • In the second way, the management agent module 223 receives the container creation request sent by the management node 110.
  • In either way, the management agent module 223 determines that the management node 110 has scheduled the container to the target computing node.
  • When the container runtime module 224 configures a container image for the container, it can obtain the container image from a remotely deployed container image repository and load the container image onto a storage resource accessible to the offload card 220. It then creates a first VF device and binds (also referred to as associates) the container image to the first VF device. The container runtime module 224 provides the container image to the target computing node through the first VF device.
  • The first VF device may be an SR-IOV device.
  • When the container runtime module 224 obtains the container image from the container image repository, it can also obtain the container image on demand; that is, only part of the data of the container image is acquired at first, associated with the first VF device, and provided to the target computing node. During the subsequent starting or running of the container, the container runtime module 224 obtains the other data of the container image from the container image repository as needed and provides it to the target computing node through the first VF device, as sketched below.
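On-demand acquisition can be illustrated with standard HTTP range requests against the image repository: only the bytes needed to start the container are fetched up front, and later reads trigger further fetches. The repository URL is a placeholder, and range-based blob access is an assumption for illustration.

```go
// Hypothetical on-demand (lazy) image fetch using HTTP Range requests.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// fetchRange pulls bytes [off, off+n) of an image blob from the repository.
func fetchRange(url string, off, n int64) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", off, off+n-1))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusPartialContent {
		return nil, fmt.Errorf("unexpected status %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// First pull only the data needed to start the container; the rest is
	// fetched later, when the running container reads it via the VF device.
	blob := "https://registry.example.internal/v2/app/blobs/sha256-..." // placeholder
	head, err := fetchRange(blob, 0, 1<<20)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched %d bytes on demand", len(head))
}
```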
  • Step 304: The container runtime module 224 in the offload card 220 prepares the running environment for the container on the target computing node through the front-end proxy module 211 in the target computing node.
  • For an ordinary container, the container runtime module 224 creates an ordinary container running environment for the container on the target computing node through the front-end proxy module 211, including configuring the namespaces and cgroups.
  • For a secure container, the container runtime module 224 creates a secure-container virtual machine for the container on the target computing node through the front-end proxy module 211; after the secure-container virtual machine is started, the container runtime module 224 creates the corresponding running environment for the container inside the secure-container virtual machine.
  • Step 305: The front-end proxy module 211 mounts the first VF device to the root directory (rootfs) of the container under the instruction of the management agent module 223.
  • The front-end proxy module 211 can access the first VF device based on protocols such as virtio-scsi and virtio-blk. After detecting the first VF device, for an ordinary container the front-end proxy module 211 can directly mount the first VF device to the root directory of the container, as sketched below; for a secure container, the front-end proxy module 211 can pass the first VF device through to the secure-container virtual machine, and the secure-container virtual machine mounts the first VF device to the root directory of the container.
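For the ordinary-container path of step 305, the mount itself reduces to a single system call on the computing node. The sketch assumes the first VF device appears as a virtio-blk block device (e.g. /dev/vdb) carrying an ext4 image; both are assumptions, as is the rootfs path.

```go
// Hypothetical rootfs mount performed by the front-end proxy (Linux only).
package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	rootfs := "/run/containers/c1/rootfs"
	if err := os.MkdirAll(rootfs, 0755); err != nil {
		log.Fatal(err)
	}
	// Mount the image-backed VF device as the container's root directory.
	// For a secure container, the device would instead be passed through to
	// the secure-container virtual machine, which performs this mount inside.
	if err := syscall.Mount("/dev/vdb", rootfs, "ext4", syscall.MS_RDONLY, ""); err != nil {
		log.Fatal(err)
	}
	log.Printf("container image mounted at %s", rootfs)
}
```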
  • Step 306: After the container is successfully created, the management agent module 223 synchronizes the status information of the container to the container resource database, so that the management node 110 can learn the status of the container.
  • In the above manner, the management node 110 can schedule the container to the computing node 210; the management agent module 223 in the offload card 220 can trigger the container runtime module 224 to prepare a container image for the container and provide it to the target computing node through the first VF device, and can also call the container runtime module 224 to create the running environment for the container, so that the offload card 220 completes the creation of the container.
  • FIG. 5 is a schematic diagram of a container deletion method provided by an embodiment of the present application; the method includes the following steps:
  • Step 501: The user sends a container deletion request to the management node 110 in the container management cluster 100, where the container deletion request includes the identification information of the container.
  • The manner in which the user sends the container deletion request to the management node 110 is similar to the manner in which the user sends the container creation request, and is not repeated here.
  • Step 502: After receiving the container deletion request, the management node 110 instructs the management agent module 223 on the offload card 220 of the target computing node to delete the container.
  • There are two ways in which the management node 110 can instruct the management agent module 223 to delete the container.
  • In the first way, the management node 110 marks the status of the container in the container resource database as deleted, and the management agent module 223 determines that the container needs to be deleted by monitoring the resource database.
  • In the second way, the management node 110 sends a container deletion instruction to the management agent module 223, instructing it to delete the container.
  • Step 503 The management agent module 223 instructs the container runtime module 224 to delete the container.
  • the container runtime module 224 can release the running environment of the container by invoking the front-end proxy module 211 in the target computing node, and the ways of releasing the running environment of the container are different for different types of containers.
  • If the container is a common container, the front-end proxy module 211 can send an end signal to the process of the container; after the process of the container ends, it releases the namespace and cgroup occupied by the container, and can also unmount the first VF device bound to the container image.
  • If the container is a secure container, the front-end proxy module 211 can first issue an end signal to the container process through the secure container virtual machine; after the container process ends, it clears the resources occupied by the container in the secure container virtual machine and unmounts the first VF device bound to the container image. After the resources occupied by the container in the virtual machine are cleared, the secure container virtual machine process can be terminated.
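  • A minimal Go sketch of the common-container deletion path described above: send the end signal, wait for the process to exit, then unmount the VF device and remove the cgroup. The PID, paths, and ten-second grace period are illustrative assumptions, not values from the embodiments.

```go
// Minimal sketch of step 503 for a common container: signal the container
// process to end, then release its mount and cgroup.
package main

import (
	"log"
	"time"

	"golang.org/x/sys/unix"
)

func deleteContainer(pid int, rootfs, cgroupDir string) {
	// End signal to the container process (SIGKILL after a grace period).
	unix.Kill(pid, unix.SIGTERM)
	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		if err := unix.Kill(pid, 0); err == unix.ESRCH {
			break // process has exited
		}
		time.Sleep(100 * time.Millisecond)
	}
	unix.Kill(pid, unix.SIGKILL)

	// Unmount the first VF device bound to the container image...
	if err := unix.Unmount(rootfs, unix.MNT_DETACH); err != nil {
		log.Printf("unmount %s: %v", rootfs, err)
	}
	// ...and release the cgroup (the namespaces vanish with the last
	// process that used them).
	if err := unix.Rmdir(cgroupDir); err != nil {
		log.Printf("rmdir %s: %v", cgroupDir, err)
	}
}

func main() {
	deleteContainer(12345, "/run/containers/c1/rootfs", "/sys/fs/cgroup/demo-container")
}
```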
  • (2) Storage resource configuration. After a container is created, storage resources can also be configured for the container. FIG. 6 is a schematic flowchart of configuring storage resources for containers.
  • When the management agent module 223 detects that the management node 110 has scheduled a container to the target computing node, for example when it detects an update of the resource database or receives a container creation request, it can determine from the updated resource database or the container creation request the storage resources that need to be configured for the container, and the management agent module 223 triggers the container storage module 225 to configure storage resources for the container. The container storage module 225 can apply for storage resources from different types of storage service nodes 400 (such as a block storage service node, an object storage service node and a file storage service node) through the storage agent module 222. After that, the container storage module 225 establishes an association relationship between a virtual device and the storage resource through the storage service proxy module and provides the virtual device to the computing node 210. The front-end proxy module 211 in the computing node 210 can then mount the virtual device to the storage directory of the container.
  • the storage resources include but are not limited to block storage resources, object storage resources, file storage resources or local storage resources.
  • the following describes how to configure storage resources for containers for different types of storage resources.
  • 1) Block storage resources, which can be presented in the form of block devices.
  • The block storage resource may be applied for in advance by the storage agent module 222 in the offload card 220, or the storage agent module 222 may be triggered to apply to the storage service node 400 when the management agent module 223 determines that the container is scheduled to the target computing node. That is to say, whether applied for in advance or in real time, the block storage resource is applied for by the storage agent module 222. After the storage agent module 222 applies for the block storage resource, it can mount the block storage resource to the offload card 220, that is, the block storage resource is presented to the offload card 220 in the form of a device for the offload card 220 to use.
  • the container storage module 225 may create a second VF device, and associate the block storage resource with the second VF device, that is, establish an association relationship between the block storage resource and the second VF device.
  • During the process in which the management agent module 223 triggers the container runtime module 224 to create the container, the container storage module 225 can notify the front-end proxy module 211 in the target computing node to mount the second VF device into the storage directory of the container.
  • The second VF device may be a virtual device supporting the virtio-blk or virtio-scsi protocol.
  • For a common container, the front-end proxy module 211 on the target computing node can directly mount the second VF device into the storage directory of the container (the management agent module 223 can instruct the front-end proxy module 211 to mount the second VF device into the storage directory of the container).
  • For a secure container, the front-end proxy module 211 on the computing node 210 can pass the second VF device through to the secure container virtual machine, and the secure container virtual machine mounts the second VF device into the storage directory of the container.
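  • The embodiments do not specify how the offload card materializes a VF device. On Linux, one common mechanism is SR-IOV, where virtual functions are instantiated by writing to sysfs; the sketch below assumes that mechanism and an illustrative PCI address for the offload card's physical function.

```go
// Sketch of creating VF devices via the Linux SR-IOV sysfs interface.
// The PCI address of the physical function is an assumption.
package main

import (
	"fmt"
	"os"
	"strconv"
)

// enableVFs asks the kernel to instantiate n virtual functions under the
// given physical function; each VF then appears as its own PCI device that
// a storage (or network) resource can be associated with.
func enableVFs(pfAddr string, n int) error {
	path := fmt.Sprintf("/sys/bus/pci/devices/%s/sriov_numvfs", pfAddr)
	// Writing 0 first is required when changing a non-zero VF count.
	if err := os.WriteFile(path, []byte("0"), 0o644); err != nil {
		return err
	}
	return os.WriteFile(path, []byte(strconv.Itoa(n)), 0o644)
}

func main() {
	if err := enableVFs("0000:03:00.0", 4); err != nil {
		fmt.Fprintln(os.Stderr, "enable VFs:", err)
		os.Exit(1)
	}
	fmt.Println("4 VFs created; each can be associated with a storage volume")
}
```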
  • 2) Object storage resources, which can be presented in the form of buckets. The object storage resource may be applied for in advance by the storage agent module 222 in the offload card 220, or the storage agent module 222 may be triggered to apply to the storage service node 400 when the management agent module 223 determines that the container is scheduled to the target computing node. That is to say, whether applied for in advance or in real time, the object storage resource is applied for by the storage agent module 222. After the storage agent module 222 applies for the object storage resource, it can mount the object storage resource to the offload card 220, that is, the object storage resource is presented to the offload card 220 in the form of a device for the offload card 220 to use.
  • the container storage module 225 may create a third VF device, and associate the object storage resource with the third VF device, that is, establish an association relationship between the object storage resource and the third VF device.
  • During the process in which the management agent module 223 triggers the container runtime module 224 to create the container, the container storage module 225 may notify the computing node 210 to mount the third VF device into the storage directory of the container.
  • the third VF device may be a virtual device supporting the virtio-fs or virtio-9p protocol.
  • For a common container, the proxy module on the computing node 210 can directly mount the third VF device into the storage directory of the container.
  • For a secure container, the proxy module on the computing node 210 can pass the third VF device through to the secure container virtual machine, and the secure container virtual machine mounts the third VF device into the storage directory of the container.
  • When the container needs to read data from or store data in the bucket, it can access the bucket through the portable operating system interface (POSIX).
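  • Assuming the third VF device is exposed via virtio-fs, the common-container mount could look like the following Go sketch; the filesystem tag name and the storage directory are illustrative assumptions.

```go
// Minimal sketch of mounting the third VF device for a common container,
// assuming virtio-fs: the device is addressed by a filesystem "tag" rather
// than a block device node, and is mounted into the storage directory.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const (
		tag      = "obj-bucket"                 // assumed virtio-fs tag of the third VF device
		storeDir = "/run/containers/c1/storage" // assumed container storage directory
	)
	if err := os.MkdirAll(storeDir, 0o755); err != nil {
		log.Fatal(err)
	}
	// After this mount the container reads and writes the bucket through
	// ordinary POSIX file operations, as the embodiment describes.
	if err := unix.Mount(tag, storeDir, "virtiofs", 0, ""); err != nil {
		log.Fatal(err)
	}
	log.Printf("object storage mounted at %s", storeDir)
}
```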
  • 3) Local storage resources, which refer to storage resources in the computing node 210.
  • The container storage module 225 allocates local storage resources for the container through the front-end proxy module 211 in the target computing node. The local storage resource can be a subdirectory of a storage partition in the computing node 210, or an independent storage partition.
  • After the local storage resource is allocated, during the process in which the management agent module 223 triggers the container runtime module 224 to create the container, the management agent module 223 may instruct the front-end proxy module 211 to mount the local storage resource under the storage directory of the container.
  • For a common container, the front-end proxy module 211 can directly mount the local storage resource under the storage directory of the container.
  • For a secure container, the front-end proxy module 211 can use a file sharing protocol (such as the virtio-9p or virtio-fs protocol) to share the local storage resource with the secure container virtual machine; inside the secure container virtual machine, the secure container virtual machine mounts the local storage resource under the storage directory of the container.
  • 4) File storage resources. The file storage resource may be applied for in advance by the storage agent module 222 in the offload card 220, or the storage agent module 222 may be triggered to apply to the storage service node 400 when the management agent module 223 determines that the container is scheduled to the target computing node. That is, the file storage resource is applied for by the storage agent module 222 regardless of whether it is applied for in advance or in real time.
  • the container storage module 225 mounts the file storage resource in the form of a network file system on the target computing node or the secure container virtual machine through the front-end proxy module 211 for the container to use.
  • the front-end proxy module 211 on the computing node 210 can directly mount the network file system into the storage directory of the container.
  • the front-end proxy module 211 on the computing node 210 can mount the network file system to the storage directory of the container in the secure container virtual machine.
  • After storage resources are configured for the container, the data generated by the container during operation can be stored in those storage resources. For local storage resources, the container can directly store the generated data in the local storage resource.
  • For block storage resources and object storage resources, the container can send the generated data through the VF device associated with the storage resource (such as the second VF device or the third VF device) to the storage proxy module 222 in the offload card 220 (specifically, to the storage backend driver in the storage proxy module 222). The storage proxy module 222 sends the generated data to the storage service node 400, and the storage service node 400 stores the generated data in the storage resources allocated for the container.
  • Because the file storage service is a storage service based on network attached storage (NAS), it is attached to the network. Therefore, in the process of storing the generated data in the corresponding file storage resources, the network resources configured for the container need to be used.
  • The container can send the generated data to the network proxy module 221 in the offload card 220, and the network proxy module 221 sends the generated data to the storage node through the network resources (such as ports) configured for the container; the storage node then stores the generated data in the storage resources allocated for the container.
  • When the container is deleted, the management agent module 223 can unmount the VF device associated with the storage resource (such as a block storage resource or an object storage resource) through the front-end proxy module 211 in the target computing node.
  • the management agent module 223 can instruct the container storage module 225 to disassociate the storage resource from the VF device.
  • The container storage module 225 may instruct the storage proxy module 222 to unmount the storage resource from the offload card 220.
  • For file storage resources, the management agent module 223 may unmount them through the front-end proxy module 211 in the target computing node.
  • In addition to storage resources, network resources can also be configured for the container. Based on the network resources, data interaction can be realized between containers, and the network capabilities required by the container can be realized.
  • The network capabilities that containers need to have include network interoperability between containers, service discovery capability, and network policy control capability. These three aspects are explained separately below.
  • Inter-container network interoperability is the most basic network capability that containers need to have; it requires that containers be able to exchange data with each other.
  • When the management agent module 223 on the target computing node detects that a container is scheduled to the target computing node, for example when it detects an update of the resource database or receives a container creation request, it can determine from the updated resource database or the container creation request the network resources that need to be configured for the container, and the management agent module 223 can trigger the container network module 226 to prepare network resources for the container.
  • The container network module 226 applies for network resources from the virtual network service node 300 through the network proxy module 221, such as applying for a network port, and obtains information about the network port (such as the port identifier and the number of ports) as well as information such as the internet protocol (IP) address.
  • the container network module 226 creates a fourth VF device, and establishes an association relationship between the fourth VF device and network resources.
  • The fourth VF device may be a virtual device abstracted from the network card provided on the offload card 220 itself, or may be a virtual device supporting the virtio-net protocol.
  • After applying for the network resources, during the process in which the management agent module 223 triggers the container runtime module 224 to create the container, the container network module 226 provides the fourth VF device to the target computing node and notifies the front-end proxy module 211 in the target computing node to allocate the fourth VF device to the container; under this notification, the front-end proxy module 211 allocates the fourth VF device to the container.
  • For a common container, the front-end proxy module 211 adds the fourth VF device to the namespace of the container. For a secure container, the front-end proxy module 211 may pass the fourth VF device through to the secure container virtual machine so that the fourth VF device can be used by the container.
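  • For the common-container case, "adding the fourth VF device to the namespace of the container" corresponds on Linux to moving the VF's network interface into the container's network namespace. The sketch below assumes the interface name and the container's init PID, and uses the third-party github.com/vishvananda/netlink package.

```go
// Minimal sketch of allocating the fourth VF device to a common container:
// the VF's network interface is moved into the container's network namespace.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	const (
		vfIface      = "eth2" // assumed host-side name of the fourth VF device
		containerPID = 12345  // assumed PID of the container's init process
	)
	link, err := netlink.LinkByName(vfIface)
	if err != nil {
		log.Fatal(err)
	}
	// Move the interface into the network namespace of the container process;
	// from then on only that container can see and configure it.
	if err := netlink.LinkSetNsPid(link, containerPID); err != nil {
		log.Fatal(err)
	}
	log.Printf("%s handed to container network namespace (pid %d)", vfIface, containerPID)
}
```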
  • containers can be divided into back-end servers (servers) and front-end applications (applications).
  • Front-end applications are usually user-oriented; users can operate a front-end application to achieve their own needs, for example by clicking options such as query or run in the front-end application.
  • The back-end server provides computation and data for the front-end application, so that the front-end application can display the final result to the user.
  • Any front-end application can be connected to multiple different back-end servers, that is, information from one front-end application can be received by any one of the multiple different back-end servers.
  • To distribute the information from front-end applications among the multiple back-end servers, a service discovery instance may be added.
  • The service discovery instance can be connected to the multiple back-end servers. The service discovery instance may be deployed on the computing nodes 210 where the multiple back-end servers are located, or it may be deployed in a distributed manner in the offload cards 220 inserted in the computing nodes 210 where the multiple back-end servers are located, with the offload cards 220 cooperating to perform the function of the service discovery instance.
  • The service discovery instance distributed on each computing node 210 or offload card 220 may also be configured on the fourth VF device associated with the network resources, that is, the fourth VF device is configured with a load balancing policy.
  • The service discovery instance can receive information from the front-end application whose destination address is the address of the service discovery instance. After receiving the information, it transmits the information to the multiple back-end servers based on the load balancing policy.
  • The load balancing policy indicates the rules to be followed when selecting a back-end server from the multiple back-end servers. For example, the load balancing policy may indicate selecting an idle back-end server, or the back-end server with the strongest data processing capability; as another example, the load balancing policy may indicate the proportion of information that can be received by each of the multiple back-end servers.
  • The service discovery instance updates the destination address in the information to the address of one of the back-end servers and sends the information with the updated destination address to that back-end server, so that the back-end server performs data processing according to the information.
  • the service discovery instance can also feed back the processing result of the back-end server to the front-end application.
  • the service discovery instance is used to implement load balancing and distribute the information from the front-end application to the back-end server, and the load balancing strategy and the corresponding relationship between the service discovery instance and the back-end server are configured by the user.
  • The management node 110 can configure a discovery service (service) under the operation of the user; the configuration operations performed by the management node 110 include configuring the address of the service discovery instance that supports the discovery service, the correspondence between the service discovery instance and the back-end servers, and the load balancing policy.
  • the container network module 226 installed in the offload card 220 on the computing node 210 can monitor the configuration operations of the management node 110 and create service discovery instances.
  • The process of creating the service discovery instance by the container network module 226 in the offload card 220 is mainly a process of configuring service access rules. The service access rules include the load balancing policy and the correspondence between the address of the service discovery instance (the service access address) and the addresses of the back-end servers. That is to say, the service discovery instance includes the load balancing policy and the correspondence between the service access address and the addresses of the corresponding containers.
  • The container network module 226 determines, by interacting with the management node 110, the addresses of the containers on each computing node 210 that correspond to the address of the service discovery instance; a container address may include the container's internet protocol (IP) address and network port. The container network module 226 then configures the load balancing policy and the correspondence between the service access address and the addresses of the containers.
  • the embodiments of the present application do not limit the specific type and deployment location of the service discovery instance.
  • For example, the service discovery instance may be centrally deployed on one computing node 210; or the service discovery instance may be deployed in a distributed manner on multiple computing nodes 210, specifically in the offload cards 220 inserted in the computing nodes 210. Any instance that can achieve load balancing is applicable to the embodiments of the present application.
  • the container network module 226 may also update the service discovery instance according to the address of the changed container.
  • The container network module 226 can monitor changes to the containers on the computing node 210. Container changes include but are not limited to creation (a container is newly created on the computing node 210), deletion (an existing container on the computing node 210 is deleted), and migration (the services of one container on the computing node 210 are migrated to another container). When the container network module 226 determines the address of the changed container, it updates the correspondence between the service access address and the address of the changed container.
  • the container network module 226 may add a corresponding relationship between the service access address and the address of the newly created container.
  • the container network module 226 may delete the corresponding relationship between the service access address and the address of the deleted container.
  • the container network module 226 may update the correspondence between the service access address and the address of the container before migration to the correspondence between the service access address and the address of the migrated container.
  • After the service discovery instance is created or updated, when it receives information whose destination address is the address of the service discovery instance, it can convert the destination address in the information based on the load balancing policy and the correspondence between the service access address and the container addresses, and forward the information to a back-end server.
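  • A service discovery instance of this kind can be pictured as a small forwarder: it listens on the service access address, picks a back-end container according to the load balancing policy, rewrites the destination by dialing that back end, and relays the result back. The sketch below assumes a round-robin policy and fixed example addresses; a real instance would also keep the backend list in sync with the container changes described above.

```go
// Minimal sketch of a service discovery instance: traffic addressed to the
// service access address is forwarded to one of several back-end containers
// chosen by a round-robin load balancing policy. Addresses are assumptions.
package main

import (
	"io"
	"log"
	"net"
	"sync/atomic"
)

var backends = []string{"10.0.0.11:8080", "10.0.0.12:8080"} // back-end container addresses
var next uint64

// pick implements the load balancing policy (here: round robin).
func pick() string {
	return backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
}

func main() {
	ln, err := net.Listen("tcp", "10.0.0.100:80") // service access address
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// "Update the destination address": open a connection to the
			// chosen back end and relay bytes both ways.
			backend, err := net.Dial("tcp", pick())
			if err != nil {
				return
			}
			defer backend.Close()
			go io.Copy(backend, c)
			io.Copy(c, backend) // feeds the processing result back to the front end
		}(client)
	}
}
```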
  • Network resources provide the possibility for network intercommunication between containers, and the network policy control capability further restricts the way of network intercommunication between containers.
  • The network policy control capability is implemented based on a security group policy, which specifies the containers that are allowed to communicate with each other and the containers that are not allowed to communicate with each other.
  • The security group policy may include access control lists (ACLs), which can indicate the containers from which a container can accept information and the containers from which it must reject information.
  • To implement this capability, a policy control instance can be added.
  • The policy control instance can be connected to multiple containers and centrally deployed on one device that is connected to the computing nodes where the multiple containers are located. The policy control instance can also be deployed in a distributed manner on the computing nodes 210 where the multiple containers are located, or in the offload cards 220 inserted in those computing nodes 210, with the offload cards 220 cooperating to execute the function of the policy control instance.
  • The policy control instance distributed on each computing node 210 or offload card 220 may also be configured on the fourth VF device associated with the network resources, that is, a security group policy is configured on the fourth VF device.
  • The policy control instance can receive information from different containers and forward the information. Take the policy control instance receiving information from container 1 as an example, where the destination address of the information is the address of container 2. After receiving the information, the policy control instance determines based on the security group policy whether the information can be sent to container 2. If it determines that the information can be sent to container 2, the policy control instance forwards the information to container 2; if it determines that the information cannot be sent to container 2, it refuses to forward the information.
  • The security group policy is configured by the user. For example, the user configures on the client the containers that are allowed to communicate with each other and the containers that are not allowed to communicate with each other. After detecting the user's configuration, the client can send it to the management node 110, that is, send the identification information of the containers that are allowed to communicate and the identification information of the containers that are not allowed to communicate to the management node 110. After receiving the user's configuration, the management node 110 can send an instruction to the container network module 226 through the management agent module 223, where the instruction indicates the identification information of the containers that are allowed to communicate and the identification information of the containers that are not allowed to communicate.
  • the management node 110 can configure the security group policy under the operation of the user, and the configuration operation performed by the management node 110 includes configuring the corresponding relationship of the containers that are allowed to communicate and the corresponding relationship of the containers that are not allowed to communicate with each other.
  • the container network module 226 installed in the offload card 220 of each computing node 210 can monitor the configuration operation of the management node 110 and create a policy control instance.
  • The process of creating the policy control instance by the container network module 226 in the offload card 220 mainly consists of configuring the security group policy, which indicates the correspondence between the addresses of the containers that are allowed to communicate with each other and the correspondence between the addresses of the containers that are not allowed to communicate with each other. The address of a container may include the internet protocol (IP) address and network port of the container, and the security group policy is set according to the configuration operation of the management node 110.
  • For example, the policy control instance may be centrally deployed on one computing node 210; or the policy control instance may be deployed in a distributed manner on multiple computing nodes 210, specifically in the offload cards 220 inserted in the computing nodes 210. Any instance capable of implementing the policy control is applicable to the embodiments of the present application.
  • the container network module 226 may also update the policy control instance according to the address of the changed container.
  • The container network module 226 can monitor changes to the containers on the computing node 210; container changes include but are not limited to creation (a container is newly created on the computing node 210), deletion (an existing container on the computing node 210 is deleted), and migration. When the container network module 226 determines the address of the changed container, it updates the correspondence between the addresses of the containers that are allowed to communicate with each other and the correspondence between the addresses of the containers that are not allowed to communicate with each other.
  • For a newly created container, the container network module 226 can add to the security group policy the correspondence between the address of the newly created container and the addresses of the other containers.
  • For a deleted container, the container network module 226 may delete from the security group policy the correspondence between the address of the deleted container and the addresses of the other containers.
  • For a migrated container, the container network module 226 may update the correspondence in the security group policy between the address of the container before migration and the addresses of the other containers to the correspondence between the address of the container after migration and the addresses of the other containers.
  • After a policy control instance is created or updated, when the policy control instance receives information, it determines according to the security group policy whether the information can be forwarded; if forwarding is allowed, it forwards the information, otherwise it refuses to forward the information.
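  • The decision step of a policy control instance reduces to a lookup against the security group policy. The following Go sketch assumes the policy is held as an allow-list of (source, destination) address pairs; entries would be added, deleted, or updated as containers are created, deleted, or migrated, as described above.

```go
// Minimal sketch of a policy control instance's decision step: information
// is forwarded only if the (source, destination) pair is allowed by the
// security group policy. Addresses are illustrative assumptions.
package main

import "fmt"

type pair struct{ src, dst string }

// securityGroup: container pairs allowed to communicate. Absent pairs are denied.
var securityGroup = map[pair]bool{
	{"10.0.0.11", "10.0.0.12"}: true, // container 1 -> container 2 allowed
}

func forwardAllowed(src, dst string) bool {
	return securityGroup[pair{src, dst}]
}

func main() {
	fmt.Println(forwardAllowed("10.0.0.11", "10.0.0.12")) // true: forward
	fmt.Println(forwardAllowed("10.0.0.13", "10.0.0.12")) // false: refuse to forward
}
```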
  • FIG. 7 is a schematic flowchart of container network configuration.
  • In FIG. 7, the management node 110 can trigger, through the management agent module 223 in the offload card 220, the container network module 226 to configure network resources for the container. The container network module 226 can apply for network resources from the network service node through the network service proxy module; after applying for the network resources, the container network module 226 establishes an association relationship between the virtual device and the network resources through the network service proxy module and provides the virtual device to the computing node 210.
  • The front-end proxy module 211 in the computing node 210 can assign the virtual device to the container.
  • the management node 110 can trigger the container network module 226 to configure service access rules (such as load balancing policies) and security group policies for the container through the management agent module 223 in the offload card 220 .
  • In addition to the inter-container network interworking capability, the service discovery capability, and the network policy control capability, quality of service (QoS), routing rules, and address mapping rules can also be configured for containers.
  • QoS regulates the quality of the container's traffic, for example its delay, congestion, monitoring, and rate limits.
  • the routing rule is used to indicate the gateway to which the information sent by the container needs to be sent, that is, the information sent by the container can be routed to the gateway based on the routing rule.
  • the address mapping rules are used to implement the conversion between local area network addresses and public network addresses.
  • The address mapping rules include network address translation (NAT) and full address translation (FULL NAT), where NAT includes some or all of the following: source address translation (SNAT), destination address translation (DNAT), and port address translation (PNAT).
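  • As one concrete example of a QoS rule, rate limiting ("speed limit") is commonly implemented as a token bucket. The Go sketch below is illustrative, with assumed rate and burst values, and is not tied to any particular mechanism in the embodiments.

```go
// Minimal sketch of rate limiting as a token bucket: each packet consumes
// tokens, and the bucket refills at the configured sustained rate.
package main

import (
	"fmt"
	"time"
)

type tokenBucket struct {
	tokens     float64   // currently available tokens (bytes)
	burst      float64   // maximum burst size in bytes
	ratePerSec float64   // sustained rate in bytes per second
	last       time.Time // time of the last refill
}

func (b *tokenBucket) allow(bytes float64) bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.ratePerSec
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens < bytes {
		return false // over the speed limit: delay or drop per policy
	}
	b.tokens -= bytes
	return true
}

func main() {
	// Roughly 1 Mbit/s with a one-packet burst (assumed values).
	tb := &tokenBucket{tokens: 1500, burst: 1500, ratePerSec: 125000, last: time.Now()}
	fmt.Println(tb.allow(1500)) // first full-size packet passes
	fmt.Println(tb.allow(1500)) // immediate second packet is limited
}
```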
  • The configuration of QoS, routing rules, and address mapping rules is similar to the configuration of service discovery instances. Users can configure some or all of the QoS, routing rules, and address mapping rules through the client. After the management node detects the user's configuration, it can trigger the container network module 226 to create an instance that implements some or all of the above rules (the instance can be deployed in a distributed manner or centrally deployed on one device); for example, the container network module 226 can configure some or all of the above rules on the fourth VF device.
  • An embodiment of the present application further provides a container management apparatus for executing the method executed by the offload card in any of the above method embodiments. As shown in FIG. 8, the container management apparatus 800 may be located on an offload card, the offload card is inserted into a computing node, and a communication channel is established between the container management apparatus 800 and the computing node; the container management apparatus 800 is also connected to the container cluster management node through the network. The container management apparatus 800 is used to manage the containers on the computing node, and includes a transmission unit 801, an acquisition unit 802 and a notification unit 803; optionally, it further includes a first setting unit 804 and a second setting unit 805.
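  • To make the division of labor among the units (detailed below) concrete, the following Go sketch models apparatus 800 as a struct of interfaces. The method names and signatures are assumptions derived from the functions the text assigns to each unit, not an API defined by the application.

```go
// Sketch of apparatus 800 as a composition of units; interface shapes are
// illustrative assumptions only.
package main

// TransmissionUnit (unit 801) talks to the container cluster management node
// and applies for storage/network resources from the service nodes.
type TransmissionUnit interface {
	ReceiveCreateRequest() (containerID string, err error)
	ApplyStorageResource(spec string) (resourceID string, err error)
	ApplyNetworkResource(spec string) (resourceID string, err error)
}

// AcquisitionUnit (unit 802) obtains the container image for a request.
type AcquisitionUnit interface {
	ObtainImage(containerID string) (imageRef string, err error)
}

// NotificationUnit (unit 803) drives the computing node over the
// communication channel: create the container, mount or set virtual devices.
type NotificationUnit interface {
	NotifyCreateContainer(imageRef string) error
	MountVirtualDevice(device, dir string) error
	SetVirtualDeviceInContainer(device string) error
}

// SettingUnit covers units 804/805: bind a resource to a virtual device.
type SettingUnit interface {
	SetVirtualDevice(resourceID string) (device string, err error)
}

// ContainerManagementApparatus mirrors apparatus 800 on the offload card.
type ContainerManagementApparatus struct {
	Transmission TransmissionUnit
	Acquisition  AcquisitionUnit
	Notification NotificationUnit
	StorageSet   SettingUnit // first setting unit 804
	NetworkSet   SettingUnit // second setting unit 805
}

func main() {} // the units would be wired up by the offload card's software
```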
  • the transmission unit 801 is configured to receive a container creation request sent by a container cluster management node.
  • the transmission unit 801 may be configured to implement the method in which the management agent module 223 receives the container creation request in the above method embodiment.
  • The acquisition unit 802 is configured to obtain the container image according to the container creation request; the acquisition unit 802 may be configured to implement the method for obtaining the container image by the container runtime module 224 in the above method embodiments.
  • the notification unit 803 is configured to notify the computing node to create a container in the computing node according to the container image through the communication channel.
  • the notification unit 803 may be configured to implement the method in which the container runtime module 224 notifies the computing node to create a container in the above method embodiment.
  • Specifically, the notification unit 803 may also create a first virtual device and associate the container image with the first virtual device; after that, it notifies the computing node to create the container running environment of the container and to mount the first virtual device to the root directory of the container.
  • the container management apparatus 800 is also connected with the storage service node through a network.
  • the transmission unit 801 may apply to the storage service node for storage resources; that is, the transmission unit 801 executes the method executed by the storage proxy module 222 in the above method embodiments.
  • the first setting unit 804 can set the second virtual device according to the storage resource; the first setting unit 804 can execute the method for configuring the virtual device by the container storage module 225 in the above-mentioned embodiments.
  • the notification unit 803 is configured to mount the second virtual device to the directory of the container through the communication channel.
  • The notification unit 803 may execute the method for mounting the virtual device by the container storage module 225 in the above method embodiments.
  • the first setting unit 804 may create the second virtual device; and associate the storage resource with the second virtual device.
  • the storage resource may be an object storage resource or a block storage resource.
  • When the storage resource is a file storage resource, the notification unit 803 may provide the file storage resource to the container on the computing node in the form of a network file system, and notify the computing node to mount the network file system to a directory of the container.
  • When the notification unit 803 mounts the second virtual device to the directory of the container through the communication channel: if the container is a common container, the notification unit 803 can mount the second virtual device into the storage directory of the container through the communication channel; if the container is a secure container, the notification unit 803 can pass the second virtual device through the communication channel directly to the secure container virtual machine used for deploying the container, and the secure container virtual machine mounts the second virtual device into the storage directory of the container.
  • the container management apparatus is further connected with the network service node through a network.
  • the transmission unit 801 may apply to the network service node for network resources; that is, the transmission unit 801 executes the method executed by the network proxy module 221 in the above method embodiments.
  • The second setting unit 805 can set the third virtual device according to the network resources; the second setting unit 805 can execute the method for configuring the virtual device by the container network module 226 in the above method embodiments.
  • the notification unit 803 is configured to set the third virtual device in the container through the communication channel.
  • The notification unit 803 may execute the method for the container network module 226 to set the virtual device in the above method embodiments.
  • the second setting unit 805 may create a third virtual device when setting the third virtual device according to the network resource, and then associate the network resource with the third virtual device.
  • When setting the third virtual device according to the network resources, the second setting unit 805 may set network processing rules for the third virtual device; the network processing rules include some or all of the following: load balancing policy, security group policy, quality of service, routing rules, and address mapping rules.
  • When the notification unit 803 sets the third virtual device in the container through the communication channel: if the container is a common container, the notification unit 803 adds the third virtual device to the namespace of the container through the communication channel; if the container is a secure container, the notification unit 803 passes the third virtual device through the communication channel directly to the secure container virtual machine used for deploying the container.
  • The communication channel is a peripheral component interconnect express (PCIe) channel.
  • each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware or any other combination.
  • the above-described embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired (for example, coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (for example, infrared, radio, microwave) manner.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that contains one or more sets of available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media.
  • the semiconductor medium may be a solid state drive (SSD).
  • The offload card or the container management apparatus in the above embodiments may take the form shown in FIG. 9.
  • the apparatus 900 shown in FIG. 9 includes at least one processor 901 , a memory 902 , and optionally, a communication interface 903 .
  • The memory 902 may be a volatile memory, such as a random access memory; the memory may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 902 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 902 may also be a combination of the above memories.
  • The connection medium between the above-mentioned processor 901 and the memory 902 is not limited in this embodiment of the present application.
  • The processor 901 may be a central processing unit (CPU); the processor 901 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an artificial intelligence chip, a system on a chip, or the like.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • The processor 901 in FIG. 9 may call the computer-executable instructions stored in the memory 902, so that the container management apparatus may execute the method executed by the offload card in any of the above method embodiments.
  • The functions/implementation processes of the transmission unit 801, the acquisition unit 802, the notification unit 803, the first setting unit 804 and the second setting unit 805 in FIG. 8 can all be implemented by the processor 901 in FIG. 9 calling the computer-executable instructions stored in the memory 902. Alternatively, the functions/implementation processes of the acquisition unit 802, the notification unit 803, the first setting unit 804 and the second setting unit 805 in FIG. 8 may be implemented by the processor 901 in FIG. 9 calling the computer-executable instructions stored in the memory 902, while the function/implementation process of the transmission unit 801 in FIG. 8 is implemented through the communication interface 903 in FIG. 9.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

A computer system, and a container management method and apparatus. The method is applied to an offload card (220); the offload card (220) is inserted in a computing node (210), a communication channel is established between the offload card (220) and the computing node (210), and the offload card (220) is connected to a management node (110) through a network. In the present application, the management node (110) sends a container creation request to the offload card (220); after receiving the container creation request, the offload card (220) obtains a container image according to the container creation request and saves the container image locally. The offload card (220) then notifies the computing node (210) through the communication channel to create a container in the computing node (210) according to the container image. The computing node (210) no longer needs to manage containers; instead, the offload card (220) inserted in the computing node (210) creates and manages the containers, so the computing node (210) no longer needs to consume resources to support container management functions, which improves the resource utilization of the computing node (210).

Description

一种计算机系统、容器管理方法及装置 技术领域
本申请涉及云计算领域,特别涉及一种基于服务器机柜的虚拟机管理方法及装置。
背景技术
随着云端技术的不断发展,鉴于容器轻便、高速等优点,数据中心中部署的实例从虚拟机,逐渐向容器过渡。目前容器管理架构大都是基于已有的基础设施即服务(infrastructure as a service,IAAS)层来构建的。计算节点上除了部署容器,还需要部署与IAAS层相关的服务代理功能,以及容器管理组件,以实现对计算节点上容器的管理。
计算节点上部署的容器管理组件会占用计算节点上的资源,造成计算节点的资源消耗。
发明内容
本申请提供一种计算机系统、容器管理方法及装置,用以减少计算节点上用于实现容器管理时的资源消耗。
第一方面,本申请实施例提供了一种容器管理方法,方法应用于卸载卡,可以由卸载卡执行,卸载卡插置于计算节点,卸载卡与计算节点之间建立有通信通道,卸载卡与容器集群管理节点(也可以简称为管理节点)通过网络连接,在该方法中,管理节点可以向卸载卡发送容器创建请求,卸载卡在接收到管理节点发送的容器创建请求后,可以根据容器创建请求获取容器镜像,如卸载卡可以从容器镜像仓库获取容器镜像,将容器镜像保存卸载卡能够访问的存储资源上,该存储资源可以为卸载卡本地的存储器,也可以为与卸载卡连接的存储器,之后,卸载卡可以通过通信通道通知计算节点根据容器镜像在计算节点中创建容器。
通过上述方法,计算节点不再需要与管理节点直接交互,也就是说,计算节点不再需要对容器进行管理,而是由插置在计算节点上的卸载卡创建容器、管理容器,计算节点不再需消耗资源去支持容器管理功能,提高了计算节点的资源利用率。
在一种可能的实施方式中,卸载卡在通知计算节点根据容器镜像在计算节点中创建容器时,可以创建虚拟设备,为了区分,将此处的虚拟设备称为第一虚拟设备。卸载卡在创建了第一虚拟设备后,可以将容器镜像关联到第一虚拟设备,通过通信通道通知计算节点创建容器的容器运行环境并将第一虚拟设备挂载至容器的根目录。
通过上述方法,卸载卡通过虚拟设备的形式向计算节点提供容器镜像,保证计算节点能够利用该容器镜像创建容器,容器创建的方式较为简单、方便。
在一种可能的实施方式中,卸载卡还可以通过网络与部署有存储服务的存储服务节点连接,卸载卡可以向计算节点上的容器提供存储资源。具体的,卸载卡先向存储 服务节点申请存储资源;之后,再根据存储资源设置虚拟设备,为了方便区分,这里的虚拟设备称为第二虚拟设备;在对第二虚拟设备设置完成后,卸载卡可以通过通信通道将第二虚拟设备挂载到容器的目录下。
通过上述方法,以第二虚拟设备的形式向计算节点提供容器的存储资源,可以使得计算节点上的容器可以通过该第二虚拟设备访问到存储资源,将数据存储到该存储资源上,使得卸载卡为计算节点上的容器提供存储资源成为可能,进一步减少计算节点上资源消耗。
在一种可能的实施方式中,卸载卡根据存储资源设置第二虚拟设备时,可以先创建该第二虚拟设备,在创建了该第二虚拟设备之后,可以将存储资源关联到第二虚拟设备。
通过上述方法,卸载卡能够在本地创建虚拟设备,并提供给计算节点上的容器,以便计算节点上的容器获得存储资源。
在一种可能的实施方式中,存储资源可以为对象存储资源,也可以为块存储资源。当存储资源为文件存储资源时,卸载卡可以直接以网络文件系统的形式向计算节点提供该文件存储资源,通知计算节点将网络文件系统挂载到容器的目录中,也即文件存储资源可以不与第二虚拟设备关联。
通过上述方法,卸载卡能够向计算节点中的容器提供不同类型的存储资源,适用于对象存储、文件存储以及块存储场景中,有效的扩展了适用范围。
在一种可能的实施方式中,卸载卡在通过通信通道将第二虚拟设备挂载到容器的目录时,对于不同类型的容器,可以采用不同的挂载方式。若容器为普通容器,卸载卡可以通过通信通道直接将第二虚拟设备挂载到容器的目录(如存储目录)中。若容器为安全容器,卸载卡将通过通信通道第二虚拟设备直通给用于部署容器的安全容器虚拟机,由安全容器虚拟机将第二虚拟设备挂载到容器的目录中。
通过上述方法,对于不同类型的容器,采用不同的挂载方式,以保证容器能够获得该存储资源,便于后续容器能够将数据存储到对应的存储资源上。
在一种可能的实施方式中,卸载卡可以通过网络与网络服务节点连接,卸载卡除了能够向计算节点的容器提供存储资源,还可以向计算节点的容器提供网络资源。卸载卡可以先向网络服务节点申请网络资源;在申请到网络资源后,卸载卡可以根据网络资源设置虚拟设备,这里为了方便区分,此处将虚拟设备称为第三虚拟设备,在第三虚拟设备设置完成后,卸载卡可以通过通信通道将第三虚拟设备设置于容器。
通过上述方法,以第三虚拟设备的形式向计算节点提供容器的网络资源,可以使得计算节点上的容器可以通过该第三虚拟设备获得网络资源,使得容器具备网络能力,使得卸载卡能够为计算节点上的容器提供网络资源称为可能性,保证卸载卡能够实现容器管理功能,进一步减少计算节点上资源消耗。
在一种可能的实施方式中,卸载卡在根据网络资源设置第三虚拟设备时,可以先创建第三虚拟设备;在创建了第三虚拟设备后,可以将网络资源关联到该第三虚拟设备上。
通过上述方法,卸载卡能够在本地创建虚拟设备,并提供给计算节点上的容器,以便计算节点上的容器获得网络资源,使得容器具备网络能力。
在一种可能的实施方式中,卸载卡为第三虚拟设备设置网络处理规则,网络处理规则包括下列的部分或全部:负载均衡策略、安全组策略、服务质量、路由规则(routing)、地址映射规则。其中,安全组策略可以包括访问控制列表(access control lists,ACL),地址映射规则包括地址转换(net address trancelate,NAT)和全地址转换(FULL NAT),其中,NAT包括但不限于目标地址转换(destination net address trancelate,DNAT)、源地址转换(source net address trancelate,SNAT)、端口转换(port net address trancelate,PNAT)。
通过上述方法,通过为虚拟网卡了设备设置网络处理规则,使得容器能够具备服务发现能力以及网络策略能力等,使得容器具备较强的网络能力。
在一种可能的实施方式中,卸载卡在通过通信通道将第三虚拟设备设置于容器时,对于不同类型的容器,可以采用不同设置方式。若容器为普通容器,卸载卡可以通过通信通道将第三虚拟设备加入到容器的命名空间中。若容器为安全容器,卸载卡可以通过通信通道将第三虚拟设备直通给用于部署容器的安全容器虚拟机。
通过上述方法,对于不同类型的容器,采用不同的设置方式,以保证容器能够获得该网络资源,便于后续容器能够具备网络能力。
在一种可能的实施方式中,通信通道的类型有很多,本申请实施例并不限定通信通道的具体类型,例如,该通信通道可以为PCIe通道。
通过上述方法,卸载卡可以与计算节点之间可以通过PCIe通道进行较为高效的信息交互,进一步确保了卸载卡能够对计算节点上的容器进行管理。
第二方面,本申请实施例还提供了一种容器管理装置,该容器管理装置位于卸载卡中,具有实现上述第一方面的方法实例中卸载卡行为的功能,有益效果可以参见第一方面的描述此处不再赘述。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的单元。在一个可能的设计中,所述装置的结构中包括传输单元、获取单元、通知单元,可选的,还包括第一设置单元和第二设置单元,这些单元可以执行上述第一方面方法示例中的相应功能,具体参见方法示例中的详细描述,此处不做赘述。
第三方面,本申请实施例还提供了一种装置,该装置可以为卸载卡,具有实现上述第一方面的方法实例中卸载卡行为的功能,有益效果可以参见第一方面的描述此处不再赘述。所述装置的结构中包括处理器和存储器,所述处理器被配置为支持所述卸载卡执行上述第一方面方法中相应的功能。所述存储器与所述处理器耦合,其保存所述通信装置必要的程序指令和数据。所述通信装置的结构中还包括通信接口,用于与其他设备进行通信。
第四方面,本申请实施例还提供了一种计算机系统,有益效果可以参见第一方面的相关描述,此处不再赘述。该计算系统包括卸载卡和计算节点,卸载卡插置于计算节点,卸载卡与计算节点之间建立有通信通道,卸载卡还通过网络与容器集群管理节点连接。
卸载卡,用于接收容器集群管理节点发送的容器创建请求,根据容器创建请求获取容器镜像;
计算节点,用于通过通信通道通获取容器镜像,并根据容器镜像创建容器。
在一种可能的实施方式中,卸载卡在获取了容器镜像后,可以创建第一虚拟设备,关联容器镜像与第一虚拟设备,还可以通过通信通道向计算节点提供该第一虚拟设备。计算节点在获得容器镜像时,可以通过该通信通道获取第一虚拟设备,在获取了第一虚拟设备之后,可以创建容器的容器运行环境并将第一虚拟设备挂载至容器的根目录。
在一种可能的实施方式中,卸载卡还可以通过网络与存储服务节点连接,卸载卡与计算节点配合可以为计算节点上的容器配置存储资源。卸载卡可以先向存储服务节点申请存储资源;在申请到存储资源之后,卸载卡可以根据存储资源设置第二虚拟设备,通过通信通道向计算节点提供第二虚拟设备。计算节点通过通信通道获取第二虚拟设备后,可以将第二虚拟设备挂载到容器的目录下。
在一种可能的实施方式中,卸载卡在根据存储资源设置第二虚拟设备时,可以先创建第二虚拟设备;在创建了该第二虚拟设备之后,可以关联存储资源和第二虚拟设备。
在一种可能的实施方式中,存储资源可以为对象存储资源,也可以为块存储资源。当存储资源为文件存储资源时,卸载卡可以直接以网络文件系统的形式向计算节点提供该文件存储资源,通知计算节点将网络文件系统挂载到容器的目录中,也即文件存储资源可以不与第二虚拟设备关联。计算节点可以在卸载卡的通知下将网络文件系统挂载到容器的目录中。
在一种可能的实施方式中,计算节点在将第二虚拟设备挂载到容器的目录下时,对于不同的类型的容器,可以采用不同的挂载方法。对于普通容器,普通容器为不同于安全容器的容器,计算节点可以直接将第二虚拟设备挂载到容器的目录下;对安全容器,计算节点可以将第二虚拟设备直通给用于部署容器的安全容器虚拟机,由安全容器虚拟机将第二虚拟设备挂载到容器的目录中。
在一种可能的实施方式中,卸载卡还通过网络与网络服务节点连接,卸载卡与计算节点配合可以为计算节点上的容器配置网络资源。卸载卡可以先向网络服务节点申请网络资源;在申请到网络资源后,可以根据网络资源设置第三虚拟设备,通过通信通道向计算节点提供该第三虚拟设备。计算节点可以通过通信通道获取第三虚拟设备,将第三虚拟设备设置于容器。
在一种可能的实施方式中,卸载卡在根据网络资源设置第三虚拟设备时,可以创建第三虚拟设备;关联网络资源和第三虚拟设备。
在一种可能的实施方式中,卸载卡在根据网络资源设置第三虚拟设备时,可以为第三虚拟设备设置网络处理规则,网络处理规则包括下列的部分或全部:负载均衡策略、安全组策略、路由规则routing、地址映射规则、服务质量Qos。
在一种可能的实施方式中,计算节点在将第三虚拟设备设置于容器时,对于不同的类型的容器,可以采用不同的挂载方法。对于普通容器,计算节点可以将第三虚拟设备加入到容器的命名空间中。对于安全容器,计算节点将第三虚拟设备直通给用于部署容器的安全容器虚拟机。
在一种可能的实施方式中,通信通道为高速外部设备互联PCIe通道。
第五方面,本申请实施例还提供了一种容器管理方法,该方法由卸载卡和计算节点配合执行,有益效果可以参见第一方面的相关描述,此处不再赘述。卸载卡插置于 计算节点,卸载卡与计算节点之间建立有通信通道,卸载卡还通过网络与容器集群管理节点连接。
卸载卡接收容器集群管理节点发送的容器创建请求,根据容器创建请求获取容器镜像;
计算节点通过通信通道通获取容器镜像,并根据容器镜像创建容器。
在一种可能的实施方式中,卸载卡在获取容器镜像后,可以创建第一虚拟设备;还可以对容器镜像与第一虚拟设备进行关联,通过通信通道向计算节点提供该第一虚拟设备;计算节点在获取容器镜像时,可以通过通信通道通获取第一虚拟设备,创建容器的容器运行环境并将第一虚拟设备挂载至容器的根目录。
在一种可能的实施方式中,卸载卡还通过网络与存储服务节点连接,卸载卡与计算节点配合可以为容器配置存储资源。卸载卡可以先向存储服务节点申请存储资源;之后,根据存储资源对第二虚拟设备进行设置。在设置了第二虚拟设备之后,可以通过通信通道向计算节点提供第二虚拟设备。计算节点可以通过通信通道获取第二虚拟设备,在获取第二虚拟设备后,将第二虚拟设备挂载到容器的目录下。
在一种可能的实施方式中,卸载卡在根据存储资源设置第二虚拟设备时,可以先创建第二虚拟设备;在创建了第二虚拟设备之后,对存储资源与第二虚拟设备进行关联。
在一种可能的实施方式中,存储资源可以为对象存储资源,也可以为块存储资源。当存储资源为文件存储资源时,卸载卡可以直接以网络文件系统的形式向计算节点提供该文件存储资源,通知计算节点将网络文件系统挂载到容器的目录中,计算节点在卸载卡的通知下可以将网络文件系统挂载到容器的目录中。这种情况下爱文件存储资源可以不与第二虚拟设备关联。
在一种可能的实施方式中,计算节点在将第二虚拟设备挂载到容器的目录下时,对于普通容器,也即不同于安全容器的容器,计算节点可以直接将第二虚拟设备挂载到容器的目录下。对于安全容器,计算节点可以将第二虚拟设备直通给用于部署容器的安全容器虚拟机,由安全容器虚拟机将第二虚拟设备挂载到容器的目录中。
在一种可能的实施方式中,卸载卡还通过网络与网络服务节点连接,卸载卡还可以与计算节点配合为容器配置网络资源,使得容器具备网络能力。卸载卡可以先向网络服务节点申请网络资源;之后,再根据网络资源设置第三虚拟设备;通过通信通道向计算节点提供第三虚拟设备。计算节点可以通过通信通道获取第三虚拟设备,将第三虚拟设备设置于容器。
在一种可能的实施方式中,卸载卡在根据网络资源设置第三虚拟设备时,可以先创建第三虚拟设备,在创建了第三虚拟设备之后,可以对网络资源和第三虚拟设备进行关联。
在一种可能的实施方式中,卸载卡在根据网络资源设置第三虚拟设备时,还可以为第三虚拟设备设置网络处理规则,网络处理规则包括下列的部分或全部:负载均衡策略、安全组策略、路由规则、地址映射规则、服务质量。
在一种可能的实施方式中,计算节点在将第三虚拟设备设置于容器时,对于普通容器,可以将第三虚拟设备加入到容器的命名空间中。对于安全容器,计算节点将第 三虚拟设备直通给用于部署容器的安全容器虚拟机。
在一种可能的实施方式中,通信通道为高速外部设备互联PCIe通道。
第六方面,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述第一方面以及第一方面的各个可能的实施方式中所述的方法或执行上述第五方面以及第五方面的各个可能的实施方式中所述的方法。
第七方面,本申请还提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面以及第一方面的各个可能的实施方式中所述的方法或执行上述第五方面以及第五方面的各个可能的实施方式中所述的方法。
第八方面,本申请还提供一种计算机芯片,所述芯片与存储器相连,所述芯片用于读取并执行所述存储器中存储的软件程序,执行上述第一方面以及第一方面的各个可能的实施方式中所述的方法或执行上述第五方面以及第五方面的各个可能的实施方式中所述的方法。
附图说明
图1为本申请提供的一种系统的架构示意图;
图2为本申请提供的另一种系统的架构示意图;
图3为本申请提供的一种容器创建的方法示意图;
图4为本申请提供的一种容器创建的流程图;
图5为本申请提供的一种容器删除的方法示意图;
图6为本申请提供的一种配置容器存储资源的方法示意图;
图7为本申请提供的一种配置容器网络资源的方法示意图;
图8为本申请提供的一种容器管理装置的结构示意图;
图9为本申请提供的一种装置的结构示意图。
具体实施方式
如图1所示,为本申请实施例提供的一种系统架构示意图,该系统中包括容器管理集群100、以及计算节点集群200。
容器管理集群100介于用户和计算节点集群200之间,能够与用户、以及计算节点集群200进行交互。用户通过与容器管理集群100交互对该用户所租用或所有的计算节点210上容器的管理,这里的管理包括但不限于:创建容器、删除容器、查询容器等。
容器管理集群100中可以包括一个或多个管理节点110,每个管理节点110能够对计算集群中的一个或多个计算节点210上的容器进行管理。
本申请实施例并不限定该管理节点110所部署的位置以及该管理节点110的具体形态。例如,该管理节点110可以是部署在云计算设备系统、或边缘计算设备系统中的计算节点210,也可以是靠近用户侧的终端计算设备。不同的管理节点110可以部署在相同系统中,也可以部署在不同系统中,如该多个管理节点110可以均部署在云计算设备系统、或边缘计算系统中,该多个管理节点110也可以分布地部署在云计算 设备系统、边缘计算系统以及终端计算设备中。
计算节点集群200包括一个或多个计算节点210,每个计算节点210上还可以插置有卸载卡220。本申请实施例并不限定该计算节点210的架构类型,该计算节点210可以是X86架构的计算节点210,也可以是ARM架构的计算节点210。
插置在计算节点210上的卸载卡220具备一定的数据处理能力的硬件装置,该卸载卡220上可以包括有处理器、内存、硬件加速装置、网卡等组件,或该卸载卡220可以与网卡连接。在本申请实施例中该卸载卡220能够与管理节点110交互,根据管理节点110下发的指令通知所在的计算节点210创建容器,也可以根据管理节点110下发的指令对所在的计算接收210上的容器进行管理,这里的管理包括但不限于:容器创建、删除容器、查询容器。
下面以创建容器为例,对用户、容器管理集群100以及计算节点集群200之间的交互过程进行说明:
用户可以通过客户端向容器管理集群100中的管理节点110发送容器创建请求,该容器创建请求中可以携带容器的资源配置信息,该容器的资源配置信息可以指示该容器所需占用的资源。管理节点110在接收到该容器创建请求后,可以在本地记录该容器的资源配置信息,之后根据所管理的一个或多个计算节点210的资源状态,为该容器选择目标计算节点,将该容器调度到该目标计算节点上。该目标计算节点上插置的卸载卡220会监控该管理节点110的调度操作,当检测到该管理节点110将容器调度到该目标计算节点上时,卸载卡220会为该容器准备相应的资源,通知所在的计算节点210利用该资源创建容器。
该目标计算节点在该卸载卡220的通知下创建容器,卸载卡220检测到该容器创建完成后,可以将该容器的状态信息(该状态信息包括但不限于容器的运行状态、容器上业务的运行状态、容器资源使用量等)上报给管理节点110,该管理节点110可以通过客户端向用户展示该容器的状态信息。用户也可以通过客户端查询该容器的状态信息。
删除容器、容器查询与上述过程类似,区别在于用户、容器管理集群100以及计算节点集群200之间交互的信息不同,具体可以参见前述说明,此处不再赘述。
从上述说明可以看出,容器管理的功能卸载到卸载卡220中,由卸载卡220实现对计算节点210的容器管理,计算节点210仅需运行容器,计算节点210不再具备容器管理功能,也就减少了计算节点210上为实现容器管理功能所需占用资源,使得计算节点210上的资源能够有效利用。
下面具体到一个计算节点210以及该计算节点210上插置的卸载卡220,对计算节点210以及该计算节点210上插置的卸载卡220的结构进行说明,参见图2,卸载卡220能够与容器管理集群100中的一个或多个管理节点110交互。卸载卡220还能够与所在的计算节点210交互。
另外,为了给计算节点210上的容器配置或更新网络资源,卸载卡220还能够与虚拟网络服务节点300(也可以简称为网络服务节点)交互,通过网络与虚拟网络服务节点300节点连接,该虚拟网络服务节点300上部署有虚拟网络服务,该虚拟网络服务节点300能够为计算节点210以及计算节点210上的容器提供虚拟网络服务,该 虚拟网络服务器是容器所需依赖的外置服务,能够为容器提供网络资源,使得该不同计算节点210上的容器可以实现网络互通,使得容器具备网络能力。
为了给计算节点210上的容器配置或更新存储资源,卸载卡220还能够与存储服务节点400交互,通过网络与存储服务节点400节点连接。该存储服务节点400可以部署有块存储服务、文件存储服务、或对象存储服务等存储服务,块存储服务、文件存储服务、以及对象存储服务均属于分布式存储服务,分布式存储服务是指将存储资源能够分布式的部署不同的存储节点上,存储服务节点400能够为计算节点210以及计算节点210上的容器提供存储资源,以使得计算节点210或计算节点上容器中的数据可以存储在存储节点上。
卸载卡220中可以部署有网络代理模块221以及存储代理模块222,网络代理模块221用于与虚拟网络服务节点300交互,为该计算节点210上的容器向虚拟网络服务节点300申请网络资源。存储代理模块222用于与存储服务节点400交互,为该计算节点210上的容器从存储服务节点400申请存储资源。
卸载卡220能够在管理节点110的指示下对计算节点210上的容器进行管理,具体的,该卸载卡220包括管理代理模块223、容器运行时模块224、容器存储模块225、容器网络模块226。
其中,管理代理模块223是卸载卡220上实现容器管理的总指挥,该管理代理模块223能够触发容器运行时模块224为容器准备容器镜像、搭载运行环境,能够触发容器存储模块225为容器准备存储资源,触发容器网络模块226为容器准备网络资源。管理代理模块223还可以与管理节点110交互,向管理节点110上报该计算节点210上容器的状态信息以及该计算节点210上的资源状态。管理代理模块223在卸载卡220启动后,能够与计算节点210(如计算节点210上的前端代理模块211)通信,通过前端代理模块211获取计算节点210上的资源状态,在获取了该计算节点210上的资源状态后,将该计算节点210上的资源状态上报给管理节点110,管理节点110需要调度容器时,可以以计算节点210上的资源状态为参考,确定容器所需调度到的目标计算节点。
容器运行时模块224,能够为容器创建容器镜像,搭建运行环境。本申请实施例并不限定容器镜像的具体表现形式,例如,容器运行时模块224在从容器镜像仓库获取容器镜像,并将容器镜像加载到卸载卡220(容器运行时模块224)可以访问的存储资源上,例如该存储资源为卸载卡220本地的存储资源,也可以为与卸载卡220连接的存储资源,如磁盘等存储器。之后,容器运行时模块224将该容器镜像进行通过虚拟(virtual function,VF)设备(如本申请实施例中的第一VF设备)的形式呈现给计算节点210,该虚拟设备可以为单根输入输出虚拟(single root i/o virtualization,SRIOV)设备形式呈现给计算节点210,这里并不限定SRIOV设备支持的协议,该协议可以为virtio-blk,virtio-scsi,virtio-fs或者virtio-9p等。
本申请实施例中的容器可以为普通容器,也即该容器不需要较高的安全性,无需与其他容器隔离,容器运行时模块224可以为容器搭建容器运行所需的轻量级隔离环境,该轻量级隔离环境的搭建需要配置命名空间(namespace)、控制组(control group,cgroup)等。
其中,命名空间用于隔离容器所需的资源,如进程间通信(inter-process communication,IPC)、网络资源、文件系统等,通过命名空间可以将该容器所需的资源与其他容器所需的资源隔离起来,达到资源独享的效果。cgroup用于限制被命名空间隔离起来的资源。例如可以为这些资源设置权重(表征优先级),配置资源的使用量等。
容器也可以为安全容器,安全容器相较于普通容器,对安全性的要求更高,需要与其他容器保持隔离。容器运行时模块224可以通过前端代理模块211先为安全容器搭建该安全容器独占的安全容器虚拟机,然后通知前端代理模块211在安全容器虚拟机中创建容器,以获得具备较高安全隔离性的容器,也即安全容器,其中,安全容器虚拟机为专门部署安全容器的虚拟机。
容器网络模块226,用于为计算节点210上的容器准备网络资源,通过该网络资源使得计算节点210上的容器能够与其他容器之间实现网络互通,使得容器具备网络能力,容器的网络能力可以包括容器间网络互通能力、服务发现能力以及网络策略控制能力。
在创建容器时,容器网络模块226通过网络代理模块221向虚拟网络服务节点300申请网络资源,该网络资源可以包括网络端口资源,还可以包括其他网络资源。之后将申请到的网络资源与虚拟设备(如本申请实施例中的第四VF设备)建立关联关系,将该网络资源通过虚拟设备的形式呈现给计算节点210,以供容器使用,计算节点210上的容器通过该虚拟设备能与其他容器进行交互。
容器存储模块225,可以为计算节点210上的容器准备存储资源,如块存储资源、文件存储资源和对象存储资源。容器存储模块225可以通过卸载卡220中存储代理模块222与存储服务节点400交互,申请存储资源,如块存储资源、对象存储资源,将该存储资源挂载到卸载卡220,然后,容器存储模块225将该存储资源通过虚拟设备(如本申请实施例中的第二VF设备以及第三VF设备)呈现给计算节点210中的容器。关于虚拟设备的描述具体可以参见前述说明,此处不再赘述。
在本申请实施例中,卸载卡220与计算节点210可以通过互联网协议进行通信,该互联协议为可以为高速外部设备互联(peripheral component interconnect express,PCIe)协议,也即卸载卡220与计算节点210之间的通信通道为PCIe通道。本申请实施例并不限定卸载卡220与计算节点210通过PCIe通道进行通信的具体形式,例如卸载卡220可以以基于PCIe协议的、支持网络协议栈的网卡的形态连接到计算节点210,与计算节点210进行通信,也可以是以基于PCIe协议的、virtio架构的virtio-vsock设备的形态连接到计算节点210,与计算节点210进行通信。在本申请实施例中仅是以卸载卡220与计算节点210之间的通信通道为PCIe通道为例进行说明,本申请实施例并不限定卸载卡220与计算节点210之间的通信通道的具体类型,凡是能够使得卸载卡220与计算节点210之间进行通信的通信通道均适用于本申请实施例。
下面结合附图分别对容器管理、存储资源的配置以及网络资源的配置方式进行说明。
(1)、容器管理。
容器管理包括容器创建、容器删除等,覆盖了容器的整个生命周期。这里分别对 容器创建的流程以及容器删除的流程进行说明:
一、容器创建。
如图3所示,为本申请实施例提供的一种容器创建方法示意图,该方法包括:
步骤301:用户向容器管理集群100中的管理节点110发送容器创建请求,该容器创建请求中携带了该容器的资源配置信息,该容器的资源配置信息可以指示该容器所需占用的资源,在资源包括但不限于:处理器、内存空间、存储资源、网络资源(如该网络资源可以为宿主机网络、或独立网络等)。
容器的资源配置信息能够描述容器所需占用的资源的类型、大小等信息。例如,容器资源配置信息可以指示处理器的数量,内存空间的大小;容器资源配置信息也可以指示该存储资源的类型为块存储资源、文件存储资源或对象存储资源,以及该存储资源的大小。容器资源配置信息还可以指示该网络资源为宿主机网络(也即需要复用计算节点210的网络)或独立网络(也即单独为容器配置的、与计算节点210独立的网络),也可以指示该网络资源需要支持的服务发现能力(service)以及网络控制策略(network policy)能力、以及该网络端口的数量。
本申请实施例并不限定用户与管理节点110交互的方式,例如用户可以通过部署在用户侧的客户端选择或输入需要创建的容器的资源配置信息。客户端检测到用户选择或输入的容器的资源配置信息之后,在用户的触发下(如用户在客户端提供的界面上点击“创建”选项)向该管理节点110发送容器创建请求。
又例如,用户可以直接与该管理节点110交互,管理节点110可以向用户提供容器创建界面,在该界面上用户可以选择或输入需要创建的容器的配置信息,用户触发(如用户在管理节点110提供的界面上点击“创建”选项)该容器创建请求。
步骤302:管理节点110接收到该容器创建请求后,管理节点110会根据容器配置信息以及所管理的各个计算节点210的资源状态,对该容器进行调度,向目标计算节点发送容器创建请求。
所管理的各个计算节点210的资源状态可以是管理节点110预先收集的,对于该管理节点110所管理的任一计算节点210,管理节点110可以通过该计算节点210上插置的卸载卡220上的管理代理模块223获取该计算节点210上的资源状态,该计算节点210上的资源状态可以指示该计算节点210的空闲资源,这里的资源包括但不限于:内存空间、处理器、存储资源等。
管理节点110可以主动的向卸载卡220上的管理代理模块223发送资源状态获取请求,以请求该管理代理模块223上报计算节点210的资源状态。进而,从该管理代理模块223获取该计算节点210上的资源状态。
卸载卡220上的管理代理模块223也可以主动地向管理节点110上报计算节点210的资源状态,如在卸载卡220启动之后,周期性地向管理节点110上报计算节点210的资源状态。
管理节点110对该容器进行调度是指确定该容器所需部署的目标计算节点,向目标计算节点发送容器创建请求,管理节点110发送容器创建请求的方式有许多种,下面列举其中两种:
第一种、管理节点110通过容器资源数据库对该容器进行调度。容器资源数据库 是容器管理集群100中的各个管理节点110共同维护的一个数据库,该容器资源数据库记录了该计算节点集群200中各个计算节点210上的容器的相关信息(如资源配置信息、容器的标识信息、容器的状态信息)以及该容器所在的计算节点210,也就是说,该容器资源数据库包括了容器与计算节点210的对应关系。
管理节点110可以从该各个计算节点210中选择空闲资源能够支持该容器的计算节点210作为目标计算节点。管理节点110在确定了该目标计算节点后,将该调度结果更新到容器资源数据库中,该调度结果指示该容器所需部署的目标计算节点。
当需要创建新的容器时,管理节点110在确定了目标计算节点后,可以将该容器与该目标计算节点的对应关系更新到该容器资源数据库中。
需要说明的是,该容器资源数据库记录容器以及计算节点210是指记录该容器的资源配置信息、容器的标识信息以及该计算节点210的标识信息。这里并不限定容器的标识信息的具体类型,例如可以是在创建容器是为容器配置的标识,也可以是容器名称,凡是能够唯一标识该容器的方式均适用于本申请实施例。这里并不限定计算节点210的标识信息的具体类型,例如可以是计算节点集群200中该计算节点210的标识,也可以是计算节点210的名称,凡是能够唯一标识该计算节点210的方式均适用于本申请实施例。
第二种、管理节点110从该各个计算节点210中选择能够部署该容器的目标计算节点后,可以直接向该目标计算节点(如该目标计算节点中的容器管理模块)发送容器创建请求。
可选的,管理节点110也可以将调度结果保存在容器资源数据库中。
在检测到该管理节点110将该容器调度到该目标计算节点后,管理代理模块223可以开始创建容器,创建容器主要包括两方面的操作,其一为配置容器镜像(可以参见步骤303),其二为搭载容器运行环境(可以参见步骤304)。
步骤303:卸载卡220中的管理代理模块223触发容器运行时模块224为该容器创建容器镜像,通过第一虚拟设备向目标计算节点提供该容器镜像。其中,容器镜像是容器运行所需的配置文件以及工具库的集合,例如所需的库文件、系统配置文件、系统工具等。
针对在步骤302中列举的两种管理节点110将该容器调度到目标计算节点的方式,管理代理模块223可以通过如下两种方式检测该管理节点110的调度操作。
针对第一种方式,管理代理模块223可以实时监控该容器资源数据库,通过监控该容器资源数据确定目标计算节点上需部署的容器,当监控到该容器资源数据库更新时,根据该容器资源数据库中增加的信息确定是否有新的容器需要部署到该目标计算节点上。
针对第二种方式,管理代理模块223在接收到容器创建请求时,确定该管理节点110将该容器调度到该目标计算节点。
容器运行时模块224为该容器配置容器镜像时,可以从部署在远端的容器镜像仓库中获取该容器镜像,将该容器镜像加载到卸载卡220可以访问的存储资源上。之后,创建第一VF设备,将该容器镜像绑定(也可以称为关联)到该第一VF设备上。容器运行时模块224通过该第一VF设备将该容器镜像提供给目标计算节点。该第一VF 设备可以为SRIOV设备。
需要说明的是,容器运行时模块224在从容器镜像仓库中获取该容器镜像时,也可以按需获取容器镜像。也即只获取容器镜像的部分数据,将该部分数据与第一VF设备关联,提供给目标计算节点。在后续容器在启动或运行的过程中,容器运行时模块224再从容器镜像仓库获取容器镜像的所需要其他数据,在通过第一VF设备向目标计算节点提供该其他数据。
步骤304:卸载卡220中的容器运行时模块224通过目标计算节点中的前端代理模块211在该目标计算节点中为容器搭载运行环境。
容器的类型不同,搭建的运行环境也不同。
若容器为普通容器,容器运行时模块224通过前端代理模块211在目标计算节点上为容器创建普通容器运行环境,其中包括配置namespace和cgroup。
若容器为安全容器,容器运行时模块224通过前端代理模块211在目标计算节点上为容器创建安全容器虚拟机,安全容器虚拟机启动后,容器运行时模块224在该安全容器虚拟机内部为容器创建对应的运行环境。
Step 305: Under the instruction of the management agent module 223, the front-end agent module 211 mounts the first VF device to the container's root directory (rootfs).

When the first VF device is an SR-IOV device, the front-end agent module 211 can access the first VF device based on protocols such as virtio-scsi and virtio-blk. After probing the first VF device, the front-end agent module 211 can, for an ordinary container, mount the first VF device directly to the container's root directory (a mount sketch follows). For a secure container, the front-end agent module 211 can pass the first VF device through to the secure container virtual machine, and the secure container virtual machine mounts the first virtual device to the container's root directory.
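A sketch of the ordinary-container mount step; the device path /dev/vdb and the ext4 filesystem type are assumptions standing in for however the virtio-blk device backed by the first VF appears on the node:

```go
package main

import "golang.org/x/sys/unix"

func main() {
	// /dev/vdb stands in for the block device exposed by the first VF device;
	// the target path and filesystem type are illustrative only.
	err := unix.Mount("/dev/vdb", "/var/lib/containers/demo/rootfs", "ext4", 0, "")
	if err != nil {
		panic(err)
	}
}
```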
Step 306: After the container is created successfully, the management agent module 223 synchronizes the container's state information to the container cluster resource database.

FIG. 4 is a schematic flowchart of container creation. In FIG. 4, the management node 110 can schedule a container to the computing node 210. When the offload card 220 detects that a container has been scheduled to the computing node 210, the management agent module 223 can trigger the container runtime module 224 to prepare a container image for the container and provide the container image to the target computing node through the first VF device. The management agent module 223 can then invoke the container runtime module 224 to create the runtime environment for the container on the target computing node.
II. Container deletion.

FIG. 5 is a schematic diagram of a container deletion method according to an embodiment of this application. The method includes:

Step 501: A user sends a container deletion request to the management node 110 in the container management cluster 100. The container deletion request includes the container's identification information.

The way the user sends the container deletion request to the management node 110 in the container management cluster 100 is similar to the way the user sends the container creation request to the management node 110 in the container management cluster 100; refer to the foregoing description, which is not repeated here.

Step 502: After receiving the container deletion request, the management node 110 instructs the management agent module 223 on the target computing node to delete the container.

The management node 110 can instruct the management agent module 223 on the target computing node to delete the container in two ways:

First: the management node 110 marks the container's state as deleted in the resource database. The management agent module 223 on the target computing node determines, by monitoring the resource database, that the container needs to be deleted.

Second: the management node 110 sends a container deletion instruction to the management agent module 223 on the target computing node, instructing the management agent module 223 to delete the container.

Step 503: The management agent module 223 instructs the container runtime module 224 to delete the container.

The container runtime module 224 can release the container's runtime environment by invoking the front-end agent module 211 in the target computing node; the way the runtime environment is released differs by container type (a teardown sketch for the ordinary case follows the two branches below).

If the container is an ordinary container, the front-end agent module 211 can send a termination signal to the container's process; after the container's process ends, it releases the namespaces and cgroups occupied by the container, and can also unmount the first VF device bound to the container image.

If the container is a secure container, the front-end agent module 211 can first send a termination signal to the container process through the secure container virtual machine; after the container's process ends, it clears the resources occupied by the container in the secure container virtual machine and unmounts the first VF device bound to the container image. Once the container's resources in the virtual machine are cleared, the secure container virtual machine process can be terminated.
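For the ordinary-container branch, a teardown sketch follows; PID discovery and the namespace/cgroup release are assumed to happen elsewhere, and the placeholder PID is invented:

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

// stopContainer signals the container's init process and waits for it to exit,
// roughly the "send termination signal, then release resources" sequence above.
func stopContainer(pid int) error {
	if err := syscall.Kill(pid, syscall.SIGTERM); err != nil {
		return err
	}
	// Signal 0 probes liveness without delivering a signal; poll until exit.
	for syscall.Kill(pid, 0) == nil {
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("container process exited; namespaces and cgroups can be released")
	return nil
}

func main() {
	_ = stopContainer(12345) // 12345 is a placeholder PID
}
```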
(2) Storage resource configuration.

After a container is created, storage resources can also be configured for it. FIG. 6 is a schematic flowchart of configuring storage resources for a container. When the management agent module 223 detects that the management node 110 has scheduled a container to the target computing node, for example by detecting an update to the resource database or by receiving a container creation request, it can determine from the updated resource database or the container creation request the storage resources to be configured for the container. The management agent module 223 triggers the container storage module 225 to configure storage resources for the container. The container storage module 225 can apply for storage resources from different types of storage service nodes 400 (such as block storage service nodes, object storage service nodes, and file storage service nodes) through the storage agent module 222. Afterwards, the container storage module 225 establishes, through the storage agent module, the association between a virtual device and the storage resources, and provides the virtual device to the computing node 210. The front-end agent module 211 in the computing node 210 can mount the virtual device to the container's storage directory.

The storage resources include but are not limited to block storage resources, object storage resources, file storage resources, and local storage resources.

The following describes, for each type of storage resource, how storage resources are configured for a container.

1) Block storage resources, which can be presented in the form of block devices.

The block storage resource may be applied for in advance by the storage agent module 222 in the offload card 220, or the storage agent module 222 may be triggered to apply for it from the storage service node 400 when the management agent module 223 determines that a container has been scheduled to the target computing node. In other words, whether applied for in advance or on the fly, the block storage resource is applied for by the storage agent module 222. After obtaining the block storage resource, the storage agent module 222 can mount it to the offload card 220, that is, present the block storage resource to the offload card 220 in the form of a device for the offload card 220 to use.

The container storage module 225 can create a second VF device and associate the block storage resource with the second VF device, that is, establish the association between the block storage resource and the second VF device. While the management agent module 223 is triggering the container runtime module 224 to create the container, the container storage module 225 can notify the front-end agent module 211 in the target computing node to mount the second VF device to the container's storage directory. The second VF device may be a virtual device supporting the virtio-blk or virtio-scsi protocol.

For an ordinary container, the front-end agent module 211 on the target computing node can mount the second VF device directly to the container's storage directory (the management agent module 223 can instruct the front-end agent module 211 to do so). For a secure container, the front-end agent module 211 on the computing node 210 can pass the second VF device through to the secure container virtual machine, and the secure container virtual machine mounts the second VF device to the container's storage directory.

2) Object storage resources, which can be presented in the form of buckets.

The object storage resource may be applied for in advance by the storage agent module 222 in the offload card 220, or the storage agent module 222 may be triggered to apply for it from the storage service node 400 when the management agent module 223 determines that a container has been scheduled to the target computing node. In other words, whether applied for in advance or on the fly, the object storage resource is applied for by the storage agent module 222. After obtaining the object storage resource, the storage agent module 222 can mount it to the offload card 220, that is, present the object storage resource to the offload card 220 in the form of a device for the offload card 220 to use.

The container storage module 225 can create a third VF device and associate the object storage resource with the third VF device, that is, establish the association between the object storage resource and the third VF device. While the management agent module 223 is triggering the container runtime module 224 to create the container, the container storage module 225 can notify the computing node 210 to mount the third VF device to the container's storage directory. The third VF device may be a virtual device supporting the virtio-fs or virtio-9p protocol.

For an ordinary container, the agent module on the computing node 210 can mount the third VF device directly to the container's storage directory. For a secure container, the agent module on the computing node 210 can pass the third VF device through to the secure container virtual machine, and the secure container virtual machine mounts the third VF device to the container's storage directory.

When the container needs to read data from or store data in the bucket, it can access the bucket through the portable operating system interface (POSIX), as sketched below.
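Since the mounted bucket appears as a directory, POSIX-style access reduces to ordinary file operations; the mount path below is an assumption for the example:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// /mnt/bucket stands in for wherever the third VF device's bucket is mounted.
	path := "/mnt/bucket/object.txt"
	if err := os.WriteFile(path, []byte("hello"), 0o644); err != nil {
		panic(err)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```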
3) Local storage resources, meaning storage resources within the computing node 210.

The container storage module 225 allocates local storage resources for the container through the front-end agent module 211 in the target computing node. The local storage resource may be a subdirectory of a storage partition in the computing node 210, or an independent storage partition.

After the local storage resource is allocated, while the management agent module 223 is triggering the container runtime module 224 to create the container, the management agent module 223 can instruct the front-end agent module 211 to mount the local storage resource to the container's storage directory.

For an ordinary container, the front-end agent module 211 can mount the local storage resource directly to the container's storage directory. For a secure container, the front-end agent module 211 can share the local storage resource with the secure container virtual machine using a file sharing protocol (such as the virtio-9p or virtio-fs protocol); inside the secure container virtual machine, the virtual machine mounts the local storage resource to the container's storage directory.

4) File storage resources.

The file storage resource may be applied for in advance by the storage agent module 222 in the offload card 220, or the storage agent module 222 may be triggered to apply for it from the storage service node 400 when the management agent module 223 determines that a container has been scheduled to the target computing node. In other words, whether applied for in advance or on the fly, the file storage resource is applied for by the storage agent module 222.

The container storage module 225 mounts the file storage resource, in the form of a network file system, on the target computing node or on the secure container virtual machine through the front-end agent module 211, for use by the container.

For an ordinary container, the front-end agent module 211 on the computing node 210 can mount the network file system directly to the container's storage directory. For a secure container, the front-end agent module 211 on the computing node 210 can mount the network file system to the container's storage directory inside the secure container virtual machine.

After storage resources are configured for the container, the data generated while the container runs can be stored in those storage resources. For local storage resources, the container can store the generated data directly in the local storage resource.

For block storage resources and object storage resources, the container can send the generated data through the VF device associated with the storage resource (such as the second VF device or the third VF device) to the storage agent module 222 in the offload card 220 (specifically, to the storage back-end driver in the storage agent module 222). The storage agent module 222 sends the generated data to the storage service node 400, and the storage service node 400 stores the data in the storage resources allocated for the container.

For file storage resources, because the file storage service is a storage service built on network attached storage (NAS) and depends on the network, storing the generated data in the corresponding file storage resource requires the network resources configured for the container. Specifically, the container can send the generated data to the network agent module 221 in the offload card 220; the network agent module 221 sends the generated data through the network resources (such as ports) configured for the container to the storage node, and the storage node stores the data in the storage resources allocated for the container.

In the embodiments of this application, after storage resources are configured for a container, they may also be reclaimed. The reclamation procedure is the reverse of the configuration procedure: the management agent module 223 can unmount, through the front-end agent module in the target computing node, the VF device associated with the storage resource (such as a block storage resource or an object storage resource); after the VF device is unmounted, the management agent module 223 can instruct the container storage module 225 to remove the association between the storage resource and the VF device; after the association is removed, the container storage module 225 can instruct the storage agent module 222 to unmount the storage resource from the offload card 220.

For file storage resources, when reclaiming the storage resource, the management agent module 223 simply unmounts the file storage resource through the front-end agent module in the target computing node.
(3) Container network configuration.

After a container is created, network resources can also be configured for it. Based on these network resources, containers can exchange data with one another, realizing the network capabilities the container needs.

The network capabilities a container needs include: inter-container network connectivity, service discovery, and network policy control. These three aspects are described separately below:

1) Inter-container network connectivity.

Inter-container network connectivity is the most basic network capability a container needs; it requires that containers be able to exchange data with one another.

For a container to have inter-container network connectivity, network resources, such as a network port, must first be configured for it, so that the container can exchange data with other containers through that network port.

When the management agent module 223 on the target computing node detects that a container has been scheduled to the target computing node, for example by detecting an update to the resource database or by receiving a container creation request, it can determine from the updated resource database or the container creation request the network resources to be configured for the container, and the management agent module 223 can trigger the container network module 226 to prepare network resources for the container. The container network module 226 applies for network resources from the virtual network service node 300 through the network agent module 221, for example, applying for a network port and obtaining information such as the port information (such as port identifier and port count) and the internet protocol (IP) address.

The container network module 226 creates a fourth VF device and establishes the association between the fourth VF device and the network resources. The fourth VF device may be a virtual device abstracted from the network interface card the offload card 220 itself has, or a virtual device supporting the virtio-net protocol.

After the network resources are obtained, while the management agent module 223 is triggering the container runtime module 224 to create the container, the container network module 226 provides the fourth VF device to the target computing node and can notify the front-end agent module 211 in the target computing node to allocate the fourth VF device to the container; upon this notification, the front-end agent module 211 in the target computing node allocates the fourth VF device to the container. For an ordinary container, the front-end agent module 211 adds the fourth VF device to the container's namespace. For a secure container, the front-end agent module 211 can pass the fourth VF device through to the secure container so that the fourth VF device can be used by the container.

2) Container service discovery.

Functionally, containers can be divided into back-end servers and front-end applications. Front-end applications are typically user-facing: users operate the front-end application to fulfill their needs, for example, clicking options such as "query" or "run" in the front-end application. Back-end servers provide computation and data for front-end applications, working with the front-end applications to present final results to users.

Any front-end application may connect to multiple different back-end servers; that is, information from one front-end application may be received by any one of those multiple back-end servers. To further control the information exchange between back-end servers and front-end applications, a service discovery instance can be added. The service discovery instance can connect to the multiple back-end servers; it can be deployed in a distributed manner on the computing nodes 210 where the multiple back-end servers reside, or deployed in a distributed manner in the offload cards 220 inserted in those computing nodes 210, with the offload card 220 cooperating to perform the functions of the service discovery instance. As a possible implementation, the service discovery instances distributed across the computing nodes 210 or offload cards 220 can also be configured on the fourth VF device associated with the network resources; that is, the load balancing policy is configured on the fourth VF device.

The service discovery instance can receive information from a front-end application whose destination address is the address of the service discovery instance and, after receiving the information, transmit it to one of the multiple back-end servers based on a load balancing policy. The load balancing policy indicates the rule to follow when selecting one back-end server from the multiple back-end servers; for example, the load balancing policy may indicate selecting an idle back-end server or the back-end server with the strongest data processing capability, or it may indicate the proportion of information each of the multiple back-end servers can receive.

The service discovery instance updates the destination address in the information to the address of one of the back-end servers and sends the information with the updated destination address to that back-end server, so that the back-end server processes the data according to the information (a forwarding sketch follows). The service discovery instance can also feed the back-end server's processing result back to the front-end application.
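A sketch of that forwarding behavior; round-robin stands in for whichever load balancing policy the user configured, and all addresses are invented for the example:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// ServiceInstance models a service discovery instance: a service address
// fronting several back-end server addresses.
type ServiceInstance struct {
	backends []string
	next     uint64
}

// Forward picks a back-end (round-robin here) and rewrites the destination.
func (s *ServiceInstance) Forward(msg string) {
	i := atomic.AddUint64(&s.next, 1) % uint64(len(s.backends))
	dst := s.backends[i]
	fmt.Printf("rewriting destination to %s and forwarding %q\n", dst, msg)
}

func main() {
	svc := &ServiceInstance{backends: []string{"10.0.0.2:80", "10.0.0.3:80"}}
	svc.Forward("query A")
	svc.Forward("query B")
}
```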
As can be seen from the above, the service discovery instance implements load balancing, distributing information from front-end applications to back-end servers, and the load balancing policy and the correspondence between the service discovery instance and the back-end servers are configured by the user. For example, the management node 110 can configure the discovery service (service) under the user's operation; the configuration operations performed by the management node 110 include configuring the address of the service discovery instance supporting the discovery service, the correspondence between the service discovery instance and the back-end servers, and the load balancing policy. The container network module 226 in the offload card 220 inserted in the computing node 210 can monitor the configuration operations of the management node 110 and create the service discovery instance.

The process by which the container network module 226 in the offload card 220 creates the service discovery instance is mainly a process of configuring service access rules, which include the load balancing policy and the correspondence between the address of the service discovery instance and the addresses of the back-end servers. In other words, the service discovery instance includes the load balancing policy and the correspondence between the service access address and the addresses of the corresponding containers.

The container network module 226, through interaction with the management node 110, determines the addresses of the containers on each computing node 210 corresponding to the address of the service discovery instance (a container's address may include the container's internet protocol (IP) address and network port), and configures the load balancing policy and the correspondence between the service access address and the container's address.

The embodiments of this application do not limit the specific type or deployment location of the service discovery instance. For example, the service discovery instance may be deployed centrally on a single computing node 210, or deployed in a distributed manner across multiple computing nodes 210; specifically, the service discovery instance may be deployed in a distributed manner in the offload cards 220 inserted in the computing nodes 210. Any instance capable of implementing load balancing is applicable to the embodiments of this application.

When the container network module 226 determines, through interaction with the management node 110, that a relevant container has changed, the container network module 226 can also update the service discovery instance according to the changed container's address. Specifically, the container network module 226 can monitor container changes on the computing node 210, including but not limited to creation (a new container is created on the computing node 210), deletion (an existing container on the computing node 210 is deleted), and migration (the workload of one container on the computing node 210 is migrated to another container). When the container network module 226 determines the changed container's address, it updates the correspondence between the service access address and the changed container's address.

When the changed container is a newly created container, the container network module 226 can add a correspondence between the service access address and the newly created container's address.

When the changed container is a deleted container, the container network module 226 can delete the correspondence between the service access address and the deleted container's address.

When the changed container is a migrated container, the container network module 226 can update the correspondence between the service access address and the pre-migration container's address to a correspondence between the service access address and the post-migration container's address.

After the service discovery instance is created or updated, when the service discovery instance receives information whose destination address is the address of the service discovery instance, it can translate the destination address in the information according to the load balancing policy and the correspondence between the service access address and the container's address, and forward the information to a back-end server.

3) Network policy control.

Network resources make inter-container connectivity possible, while network policy control further restricts how containers may communicate with one another. Network policy control is implemented based on security group policies. A security group policy specifies the containers that are allowed to communicate with one another as well as the containers that are not; the security group policy includes access control lists (ACLs), which can indicate the containers from which information is accepted and the containers from which information is rejected.

To implement network policy control, a policy control instance can be added. The policy control instance can connect to multiple containers, deployed centrally on a single device and connected to the computing nodes where those containers reside. The policy control instance can also be deployed in a distributed manner on the computing nodes 210 where those containers reside, or deployed in a distributed manner in the offload cards of those computing nodes 210, with the offload card 220 cooperating to perform the functions of the policy control instance. As a possible implementation, the policy control instances distributed across the computing nodes 210 or offload cards 220 can also be configured on the fourth VF device associated with the network resources; that is, the security group policy is configured on the fourth VF device.

The policy control instance can receive information from different containers and forward that information. Take the policy control instance receiving information from container 1 as an example, where the destination address of the information is the address of container 2: after receiving the information, the policy control instance determines, based on the security group policy, whether the information can be sent to container 2 (a check sketch follows). If it can be sent to container 2, the policy control instance forwards the information to container 2; if it cannot, the policy control instance refuses to forward the information.
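A sketch of that check; the rule representation (a set of permitted source/destination pairs) is an assumption, not the ACL format of this application:

```go
package main

import "fmt"

// SecurityGroup holds permitted (source, destination) container pairs.
type SecurityGroup struct {
	allow map[[2]string]bool
}

// Forward applies the security group policy before delivering a message.
func (sg *SecurityGroup) Forward(src, dst, msg string) {
	if sg.allow[[2]string{src, dst}] {
		fmt.Printf("forwarding %q from %s to %s\n", msg, src, dst)
		return
	}
	fmt.Printf("dropping %q: %s to %s is not permitted\n", msg, src, dst)
}

func main() {
	sg := &SecurityGroup{allow: map[[2]string]bool{
		{"container1", "container2"}: true,
	}}
	sg.Forward("container1", "container2", "hello") // permitted
	sg.Forward("container3", "container2", "hello") // rejected
}
```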
The security group policy is configured by the user. For example, the user configures on a client the containers allowed to communicate and the containers not allowed to communicate; after detecting the user's configuration, the client can send the user's configuration to the management node 110, that is, send the identification information of the containers allowed to communicate and the identification information of the containers not allowed to communicate to the management node 110. After receiving the user's configuration, the management node 110 can send an indication to the container network module 226 through the management agent module 223, the indication indicating the identification information of the containers allowed to communicate and the identification information of the containers not allowed to communicate. The management node 110 can configure the security group policy under the user's operation; the configuration operations performed by the management node 110 include configuring the correspondences of the containers allowed to communicate and the correspondences of the containers not allowed to communicate. The container network module 226 in the offload card 220 inserted in each computing node 210 can monitor the configuration operations of the management node 110 and create the policy control instance.

The process by which the container network module 226 in the offload card 220 creates the policy control instance is mainly a matter of configuring the security group policy, which indicates the correspondences between the addresses of containers allowed to communicate and the correspondences between the addresses of containers not allowed to communicate.

The container network module 226 obtains the addresses of the relevant containers through interaction with the management node 110 (a container's address may include the container's internet protocol (IP) address and network port) and sets the security group policy according to the configuration operations of the management node 110.

The embodiments of this application do not limit the specific type or deployment location of the policy control instance. For example, the policy control instance may be deployed centrally on a single computing node 210, or deployed in a distributed manner across multiple computing nodes 210; specifically, the policy control instance may be deployed in a distributed manner in the offload cards 220 inserted in the computing nodes 210. Any instance capable of implementing policy control is applicable to the embodiments of this application.

When the container network module 226 determines, through interaction with the management node 110, that a relevant container has changed, the container network module 226 can also update the policy control instance according to the changed container's address. Specifically, the container network module 226 can monitor container changes on the computing node 210, including but not limited to creation (a new container is created on the computing node 210), deletion (an existing container on the computing node 210 is deleted), and migration (the workload of one container on the computing node 210 is migrated to another container). When the container network module 226 determines the changed container's address, it updates the correspondences between the addresses of containers allowed to communicate and the correspondences between the addresses of containers not allowed to communicate.

When the changed container is a newly created container, the container network module 226 can add, to the addresses of containers allowed to communicate, the correspondences between the newly created container's address and the addresses of other containers.

When the changed container is a deleted container, the container network module 226 can delete from the security group policy the correspondences between the deleted container's address and the addresses of other containers.

When the changed container is a migrated container, the container network module 226 can update, in the security group policy, the correspondences between the pre-migration container's address and the addresses of other containers to correspondences between the post-migration container's address and the addresses of other containers.

After the policy control instance is created or updated, when the policy control instance receives information whose destination address is the address of the policy control instance, it can determine, according to the security group policy, whether the information can be forwarded; if it determines that the information can be forwarded, it forwards the information, otherwise it refuses to forward the information.

FIG. 7 is a schematic flowchart of container network configuration. In FIG. 7, the management node 110 can trigger, through the management agent module 223 in the offload card 220, the container network module 226 to configure network resources for a container. The container network module 226 can apply for network resources from the network service node through the network agent module; afterwards, the container network module 226 establishes, through the network agent module, the association between a virtual device and the network resources and provides the virtual device to the computing node 210. The front-end agent module 211 in the computing node 210 can allocate the virtual device to the container. The management node 110 can also trigger, through the management agent module 223 in the offload card 220, the container network module 226 to configure service access rules (such as load balancing policies) and security group policies for the container.

Besides inter-container network connectivity, service discovery, and container network policy control, quality of service (QoS), routing rules, and address mapping rules can also be configured for containers. QoS regulates the service quality of information sent by the container, for example, regulating the latency, congestion, monitoring, and rate limiting of that information. Routing rules indicate the gateway to which information sent by the container should be directed; that is, based on the routing rules, information sent by the container can be routed to the gateway. Address mapping rules implement translation between local network addresses and public network addresses; the address mapping rules include NAT and FULL NAT, where NAT includes some or all of the following: SNAT, DNAT, and PNAT. An address-mapping sketch follows.
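An illustrative SNAT-style mapping; the addresses and the one-to-one lookup table are assumptions made for the example, not a mechanism defined by this application:

```go
package main

import "fmt"

func main() {
	// snat maps a private source address to the public address it is rewritten to.
	snat := map[string]string{"192.168.1.10": "203.0.113.5"}
	src := "192.168.1.10"
	if pub, ok := snat[src]; ok {
		fmt.Printf("SNAT: rewriting source %s to %s\n", src, pub)
	}
}
```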
QoS, routing rules, and address mapping rules are configured in a manner similar to the service discovery instance. The user can configure some or all of the QoS, routing rules, and address mapping rules through the client. After detecting the user's configuration, the management node can trigger the container network module 226 to create an instance capable of implementing some or all of those rules (the instance may be deployed in a distributed manner, or deployed centrally on a single device; for example, the container network module 226 can configure some or all of those rules on the fourth VF device). Refer to the foregoing description for details, which are not repeated here.
Based on the same inventive concept as the method embodiments, an embodiment of this application further provides a container management apparatus for performing the method performed by the offload card in any of the foregoing method embodiments; for related features, refer to the foregoing method embodiments, which are not repeated here. As shown in FIG. 8, which illustrates a container management apparatus according to an embodiment of this application, the container management apparatus 800 may be located on an offload card inserted in a computing node. A communication channel is established between the container management apparatus 800 and the computing node, and the container management apparatus 800 is further connected to a container cluster management node through a network. The container management apparatus 800 is configured to manage containers on the computing node and includes a transmission unit 801, an acquisition unit 802, and a notification unit 803, and optionally further includes a first setting unit 804 and a second setting unit 805.

The transmission unit 801 is configured to receive a container creation request sent by the container cluster management node. The transmission unit 801 can be used to implement the method by which the management agent module 223 receives the container creation request in the foregoing method embodiments.

The acquisition unit 802 is configured to obtain a container image according to the container creation request. The acquisition unit 802 can be used to implement the method by which the container runtime module 224 obtains the container image in the foregoing method embodiments.

The notification unit 803 is configured to notify, through the communication channel, the computing node to create a container in the computing node according to the container image. The notification unit 803 can be used to implement the method by which the container runtime module 224 notifies the computing node to create the container in the foregoing method embodiments.

As a possible implementation, when notifying the computing node to create a container in the computing node according to the container image, the notification unit 803 can also create a first virtual device and associate the container image with the first virtual device, and then notify the computing node to create the container runtime environment of the container and mount the first virtual device to the container's root directory.

As a possible implementation, the container management apparatus 800 is further connected to a storage service node through a network. The transmission unit 801 can apply to the storage service node for storage resources; that is, the transmission unit 801 performs the method performed by the storage agent module 222 in the foregoing method embodiments.

The first setting unit 804 can set a second virtual device according to the storage resources; the first setting unit 804 can perform the method by which the container storage module 225 configures a virtual device in the foregoing embodiments.

The notification unit 803 is configured to mount the second virtual device to the container's directory through the communication channel. The notification unit 803 can perform the method by which the container storage module 225 mounts a virtual device in the foregoing embodiments.

As a possible implementation, when setting the second virtual device according to the storage resources, the first setting unit 804 can create the second virtual device and associate the storage resources with the second virtual device.

As a possible implementation, the storage resources may be object storage resources or block storage resources. When the storage resource is a file storage resource, the notification unit 803 can provide the file storage resource to the container on the computing node in the form of a network file system and notify the computing node to mount the network file system to the container's directory.

As a possible implementation, when mounting the second virtual device to the container's directory through the communication channel: if the container is an ordinary container, the notification unit 803 can mount the second virtual device to the container's storage directory through the communication channel; if the container is a secure container, the notification unit 803 can pass the second virtual device through, via the communication channel, to the secure container virtual machine used to deploy the container, and the secure container virtual machine mounts the second virtual device to the container's storage directory.

As a possible implementation, the container management apparatus is further connected to a network service node through a network. The transmission unit 801 can apply to the network service node for network resources; that is, the transmission unit 801 performs the method performed by the network agent module 221 in the foregoing method embodiments.

The second setting unit 805 can set a third virtual device according to the network resources; the second setting unit 805 can perform the method by which the container network module 226 configures a virtual device in the foregoing embodiments.

The notification unit 803 is configured to provision the third virtual device to the container through the communication channel. The notification unit 803 can perform the method by which the container network module 226 attaches a virtual device in the foregoing embodiments.

As a possible implementation, when setting the third virtual device according to the network resources, the second setting unit 805 can create the third virtual device and then associate the network resources with the third virtual device.

As a possible implementation, when setting the third virtual device according to the network resources, the second setting unit 805 can set network processing rules for the third virtual device, the network processing rules including some or all of the following: load balancing policies, security group policies, quality of service, routing rules, and address mapping rules.

As a possible implementation, when provisioning the third virtual device to the container through the communication channel: if the container is an ordinary container, the notification unit 803 adds the third virtual device to the container's namespace through the communication channel; if the container is a secure container, the notification unit 803 passes the third virtual device through, via the communication channel, to the secure container virtual machine used to deploy the container.

As a possible implementation, the communication channel is a peripheral component interconnect express (PCIe) channel.
It should be noted that the division of units in the embodiments of this application is illustrative and is merely a division of logical functions; other division manners may exist in actual implementations. The functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, the foregoing embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center containing one or more collections of usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium. The semiconductor medium may be a solid state drive (SSD).

In a simple embodiment, those skilled in the art may appreciate that the offload card or container management apparatus in the foregoing embodiments may take the form shown in FIG. 9.

The apparatus 900 shown in FIG. 9 includes at least one processor 901 and a memory 902, and optionally may further include a communication interface 903.

The memory 902 may be a volatile memory, such as a random access memory; the memory may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 902 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 902 may also be a combination of the foregoing memories.

The embodiments of this application do not limit the specific connection medium between the processor 901 and the memory 902.

The processor 901 may be a central processing unit (CPU); the processor 901 may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an artificial intelligence chip, a system on a chip, or the like. The general-purpose processor may be a microprocessor or any conventional processor. When communicating with other devices, the processor 901 can transmit data through the communication interface 903, such as receiving container creation requests, applying for storage resources, and applying for network resources.

When the container management apparatus takes the form shown in FIG. 9, the processor 901 in FIG. 9 can invoke the computer-executable instructions stored in the memory 902 so that the container management apparatus can perform the method performed by the offload card in any of the foregoing method embodiments.

Specifically, the functions/implementation processes of the transmission unit 801, the acquisition unit 802, the notification unit 803, the first setting unit 804, and the second setting unit 805 in FIG. 8 may all be implemented by the processor 901 in FIG. 9 invoking the computer-executable instructions stored in the memory 902. Alternatively, the functions/implementation processes of the acquisition unit 802, the notification unit 803, the first setting unit 804, and the second setting unit 805 in FIG. 8 may be implemented by the processor 901 in FIG. 9 invoking the computer-executable instructions stored in the memory 902, while the function/implementation process of the transmission unit 801 in FIG. 8 may be implemented through the communication interface 903 in FIG. 9.

Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

This application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to this application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.

Obviously, those skilled in the art can make various modifications and variations to this application without departing from the scope of this application. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include these modifications and variations.

Claims (30)

  1. A computer system, comprising an offload card and a computing node, wherein the offload card is inserted in the computing node, a communication channel is established between the offload card and the computing node, and the offload card is further connected to a container cluster management node through a network, wherein:
    the offload card is configured to receive a container creation request sent by the container cluster management node and obtain a container image according to the container creation request;
    the computing node is configured to obtain the container image through the communication channel and create a container according to the container image.
  2. The system according to claim 1, wherein the offload card is further connected to a storage service node through a network, and the offload card is further configured to: apply to the storage service node for storage resources; and set a second virtual device according to the storage resources;
    the computing node is further configured to: obtain the second virtual device through the communication channel, and mount the second virtual device to a directory of the container.
  3. The system according to claim 1 or 2, wherein the offload card is further connected to a network service node through a network, and the offload card is further configured to: apply to the network service node for network resources; and set a third virtual device according to the network resources;
    the computing node is further configured to obtain the third virtual device through the communication channel and provision the third virtual device to the container.
  4. The system according to any one of claims 1 to 3, wherein the communication channel is a peripheral component interconnect express (PCIe) channel.
  5. A container management method, wherein the method is applied to an offload card, the offload card is inserted in a computing node, a communication channel is established between the offload card and the computing node, and the offload card is further connected to a container cluster management node through a network, the method comprising:
    receiving a container creation request sent by the container cluster management node;
    obtaining a container image according to the container creation request;
    notifying, through the communication channel, the computing node to create a container in the computing node according to the container image.
  6. The method according to claim 5, wherein the notifying the computing node to create a container in the computing node according to the container image comprises:
    creating a first virtual device;
    associating the container image with the first virtual device;
    notifying the computing node to create a container runtime environment of the container and to mount the first virtual device to a root directory of the container.
  7. The method according to claim 5 or 6, wherein the offload card is further connected to a storage service node through a network, and the method further comprises:
    applying to the storage service node for storage resources;
    setting a second virtual device according to the storage resources;
    mounting the second virtual device to a directory of the container through the communication channel.
  8. The method according to claim 7, wherein the setting a second virtual device according to the storage resources comprises:
    creating the second virtual device;
    associating the storage resources with the second virtual device.
  9. The method according to claim 7 or 8, wherein the storage resources comprise some or all of the following:
    object storage resources and block storage resources.
  10. The method according to any one of claims 7 to 9, wherein the mounting the second virtual device to a directory of the container through the communication channel comprises:
    if the container is a secure container, passing the second virtual device through to a secure container virtual machine used to deploy the container, wherein the secure container virtual machine mounts the second virtual device to the directory of the container.
  11. The method according to any one of claims 5 to 10, wherein the offload card is further connected to a network service node through a network, and the method further comprises:
    applying to the network service node for network resources;
    setting a third virtual device according to the network resources;
    provisioning the third virtual device to the container through the communication channel.
  12. The method according to claim 11, wherein the setting a third virtual device according to the network resources comprises:
    creating the third virtual device;
    associating the network resources with the third virtual device.
  13. The method according to claim 11 or 12, wherein the setting a third virtual device according to the network resources further comprises:
    setting network processing rules for the third virtual device, the network processing rules comprising some or all of the following: load balancing policies, security group policies, routing rules, address mapping rules, and quality of service.
  14. The method according to any one of claims 11 to 13, wherein the provisioning the third virtual device to the container through the communication channel comprises:
    adding the third virtual device to a namespace of the container.
  15. The method according to any one of claims 11 to 13, wherein the provisioning the third virtual device to the container through the communication channel comprises:
    if the container is a secure container, passing the third virtual device through to a secure container virtual machine used to deploy the container.
  16. The method according to any one of claims 5 to 15, wherein the communication channel is a peripheral component interconnect express (PCIe) channel.
  17. A container management apparatus, wherein the apparatus is applied to an offload card, the offload card is inserted in a computing node, a communication channel is established between the apparatus and the computing node, and the apparatus is further connected to a container cluster management node through a network, the apparatus comprising a transmission unit, an acquisition unit, and a notification unit, wherein:
    the transmission unit is configured to receive a container creation request sent by the container cluster management node;
    the acquisition unit is configured to obtain a container image according to the container creation request;
    the notification unit is configured to notify, through the communication channel, the computing node to create a container in the computing node according to the container image.
  18. The apparatus according to claim 17, wherein when notifying the computing node to create a container in the computing node according to the container image, the notification unit is specifically configured to:
    create a first virtual device;
    associate the container image with the first virtual device;
    notify the computing node to create a container runtime environment of the container and to mount the first virtual device to a root directory of the container.
  19. The apparatus according to claim 17 or 18, wherein the apparatus is further connected to a storage service node through a network, and the apparatus further comprises a first setting unit;
    the transmission unit is configured to apply to the storage service node for storage resources;
    the first setting unit is configured to set a second virtual device according to the storage resources;
    the notification unit is configured to mount the second virtual device to a directory of the container through the communication channel.
  20. The apparatus according to claim 19, wherein when setting the second virtual device according to the storage resources, the first setting unit is further configured to:
    create the second virtual device;
    associate the storage resources with the second virtual device.
  21. The apparatus according to claim 19 or 20, wherein the storage resources comprise some or all of the following:
    object storage resources and block storage resources.
  22. The apparatus according to any one of claims 19 to 21, wherein when mounting the second virtual device to the directory of the container through the communication channel, the notification unit is specifically configured to:
    if the container is a secure container, pass the second virtual device through to a secure container virtual machine used to deploy the container, wherein the secure container virtual machine mounts the second virtual device to the directory of the container.
  23. The apparatus according to any one of claims 17 to 22, wherein the apparatus is further connected to a network service node through a network, and the apparatus further comprises a second setting unit;
    the transmission unit is configured to apply to the network service node for network resources;
    the second setting unit is configured to set a third virtual device according to the network resources;
    the notification unit is configured to provision the third virtual device to the container through the communication channel.
  24. The apparatus according to claim 23, wherein when setting the third virtual device according to the network resources, the second setting unit is specifically configured to:
    create the third virtual device;
    associate the network resources with the third virtual device.
  25. The apparatus according to claim 23 or 24, wherein when setting the third virtual device according to the network resources, the second setting unit is further configured to:
    set network processing rules for the third virtual device, the network processing rules comprising some or all of the following: load balancing policies, security group policies, routing rules, address mapping rules, and quality of service.
  26. The apparatus according to any one of claims 23 to 25, wherein when provisioning the third virtual device to the container through the communication channel, the notification unit is specifically configured to:
    add the third virtual device to a namespace of the container.
  27. The apparatus according to any one of claims 23 to 25, wherein when provisioning the third virtual device to the container through the communication channel, the notification unit is specifically configured to:
    if the container is a secure container, pass the third virtual device through to a secure container virtual machine used to deploy the container.
  28. The apparatus according to any one of claims 17 to 27, wherein the communication channel is a peripheral component interconnect express (PCIe) channel.
  29. An apparatus, comprising a memory and a processor, wherein the memory stores program instructions, and the processor runs the program instructions to perform the method according to any one of claims 5 to 16.
  30. A computer-readable storage medium, wherein the computer-readable storage medium stores instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 5 to 16.
PCT/CN2021/116842 2020-09-08 2021-09-07 Computer system, container management method, and apparatus WO2022052898A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21865952.2A EP4202668A4 (en) 2020-09-08 2021-09-07 COMPUTER SYSTEM AND CONTAINER MANAGEMENT METHOD AND DEVICE
US18/179,644 US20230205505A1 (en) 2020-09-08 2023-03-07 Computer system, container management method, and apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010932403.5 2020-09-08
CN202010932403 2020-09-08
CN202011618590.6A CN114237809A (zh) Computer system, container management method, and apparatus
CN202011618590.6 2020-12-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/179,644 Continuation US20230205505A1 (en) 2020-09-08 2023-03-07 Computer system, container management method, and apparatus

Publications (1)

Publication Number Publication Date
WO2022052898A1 true WO2022052898A1 (zh) 2022-03-17

Family

ID=80632637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116842 WO2022052898A1 (zh) Computer system, container management method, and apparatus

Country Status (3)

Country Link
US (1) US20230205505A1 (zh)
EP (1) EP4202668A4 (zh)
WO (1) WO2022052898A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160170785A1 (en) * 2014-12-11 2016-06-16 Amazon Technologies, Inc. Managing virtual machine instances utilizing an offload device
CN109564524A (zh) Secure booting of virtualization managers
CN109564514A (zh) Memory allocation techniques in partially offloaded virtualization managers
CN109564523A (zh) Reducing performance variability using an opportunistic hypervisor
US10474825B1 (en) * 2017-09-27 2019-11-12 Amazon Technologies, Inc. Configurable compute instance secure resets
US20190392150A1 (en) * 2018-06-25 2019-12-26 Amazon Technologies, Inc. Network-accessible computing service for micro virtual machines
US20200186600A1 (en) * 2018-12-11 2020-06-11 Amazon Technologies, Inc. Mirroring network traffic of virtual networks at a service provider network
US20200183724A1 (en) * 2018-12-11 2020-06-11 Amazon Technologies, Inc. Computing service with configurable virtualization control levels and accelerated launches


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4202668A4 *

Also Published As

Publication number Publication date
EP4202668A4 (en) 2024-01-31
US20230205505A1 (en) 2023-06-29
EP4202668A1 (en) 2023-06-28

Similar Documents

Publication Publication Date Title
US11218364B2 (en) Network-accessible computing service for micro virtual machines
US8656355B2 (en) Application-based specialization for computing nodes within a distributed processing system
US8010651B2 (en) Executing programs based on user-specified constraints
US10713071B2 (en) Method and apparatus for network function virtualization
US10158579B2 (en) Resource silos at network-accessible services
CN112532675B (zh) Creation method, apparatus, and medium for a network edge computing system
US10397132B2 (en) System and method for granting virtualized network function life cycle management
US10146848B2 (en) Systems and methods for autonomous, scalable, and distributed database management
RU2653292C2 (ru) Перенос служб через границы кластеров
CN114237809A (zh) Computer system, container management method, and apparatus
US20060041644A1 (en) Unified system services layer for a distributed processing system
US20230006944A1 (en) Interoperable cloud based media processing using dynamic network interface
CN107209642B (zh) Method and entity for controlling resources in a cloud environment
CN111722906A (zh) Method and apparatus for deploying virtual machines and containers
US11966768B2 (en) Apparatus and method for multi-cloud service platform
US20060015505A1 (en) Role-based node specialization within a distributed processing system
US20220206832A1 (en) Configuring virtualization system images for a computing cluster
WO2024016624A1 (zh) Multi-cluster access method and system
US11609777B2 (en) System and method for multi-cluster storage
CN106911741B (zh) Load balancing method for virtualized network management file downloads, and network management server
CN116382585A (zh) Temporary volume storage method, containerized cloud platform, and computer-readable medium
US11726684B1 (en) Cluster rebalance using user defined rules
WO2022052898A1 (zh) Computer system, container management method, and apparatus
WO2022140945A1 (zh) Container cluster management method and apparatus
CN112015515B (zh) Instantiation method and apparatus for a virtual network function

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21865952; Country of ref document: EP; Kind code of ref document: A1)

ENP Entry into the national phase (Ref document number: 2021865952; Country of ref document: EP; Effective date: 20230323)

NENP Non-entry into the national phase (Ref country code: DE)