WO2020151306A1 - Container adaptive scaling method, server and storage medium - Google Patents

Container adaptive scaling method, server and storage medium

Info

Publication number
WO2020151306A1
WO2020151306A1 (PCT/CN2019/116556)
Authority
WO
WIPO (PCT)
Prior art keywords
container
containers
resource usage
usage rate
expansion
Prior art date
Application number
PCT/CN2019/116556
Other languages
English (en)
French (fr)
Inventor
刘洪晔
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020151306A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G06F 9/5088 Techniques for rebalancing the load in a distributed system involving task migration

Definitions

  • This application relates to the computer field, and in particular to a container adaptive scaling method, server and storage medium.
  • Docker is an open source application container engine that allows developers to package their applications and dependencies into a portable container and then publish it to any popular Linux machine; it can also be used for virtualization. Containers are fully sandboxed and have no interfaces to one another.
  • A complete Docker deployment consists of the following parts: the client (Docker Client), the daemon (Docker Daemon), images (Docker Image) and containers (Docker Container).
  • Container technology is a new type of virtualization technology. As a logical abstraction of physical resources, containers occupy few resources and can be provisioned quickly, which makes them suitable for Internet application patterns with sudden workload changes.
  • Because Docker images are loaded in layers, the uppermost layer is a readable and writable layer.
  • For Docker, a container can be regarded as an image that has run or is running, that is, an image with a change layer added on top.
  • In current container technology, when a container is actually started, Docker must perform for this layer a series of operations such as creating a virtual disk device, creating and mounting a file system, writing the runtime configuration, and starting the process.
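  • As a rough illustration of where this startup cost sits (not part of the original disclosure), the sketch below drives the standard docker CLI from Python to separate container preparation (writable layer, filesystem, runtime configuration) from process startup; the image name and the timing approach are placeholders.

```python
import subprocess
import time

IMAGE = "nginx:latest"  # placeholder image, not taken from the application

def timed(cmd):
    """Run a docker CLI command and return (stdout, elapsed seconds)."""
    start = time.time()
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.strip(), time.time() - start

# 'docker create' prepares the writable layer and runtime configuration
# without starting the application process.
container_id, create_secs = timed(["docker", "create", IMAGE])

# 'docker start' then only has to launch the process inside the prepared container.
_, start_secs = timed(["docker", "start", container_id])

print(f"create: {create_secs:.2f}s, start: {start_secs:.2f}s")
```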
  • The main purpose of this application is to provide a container adaptive scaling method, server and storage medium, aiming to solve the technical problem that container expansion or shrinking takes so long that the container expansion speed cannot meet business requirements and the normal operation of the business is affected.
  • A container adaptive scaling method provided in this application is applied to a server, and the method includes:
  • Setting step: set a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
  • Monitoring step: obtain the resource usage rate of each currently executing container of the server;
  • Creation step: create preheated containers according to the resource usage rates and a preset creation rule;
  • Expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activate one of the created preheated containers as an executing container to perform expansion; and
  • Shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, convert that currently executing container into a preheated container to perform shrinking.
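  • The following sketch is an illustration only, showing how the five steps above could be wired into a single control loop; the helper methods on the hypothetical cluster object (running_containers, usage_of, activate_preheated, convert_to_preheated, create_preheated_as_needed) and the threshold values are assumptions, not values given in the application.

```python
import time

# Setting step: example limits and thresholds (illustrative values only).
MIN_CONTAINERS = 2
MAX_CONTAINERS = 10
EXPAND_THRESHOLD = 0.80   # expansion resource usage threshold
SHRINK_THRESHOLD = 0.20   # shrink resource usage threshold
SHRINK_WINDOW = 10 * 60   # preset time, in seconds

def control_loop(cluster):
    low_since = {}  # container id -> time its usage first dropped below SHRINK_THRESHOLD
    while True:
        running = cluster.running_containers()              # monitoring step
        for c in running:
            usage = cluster.usage_of(c)                     # CPU/memory usage rate, 0..1
            if usage > EXPAND_THRESHOLD and len(running) < MAX_CONTAINERS:
                cluster.activate_preheated()                # expansion step
                low_since.pop(c, None)
            elif usage < SHRINK_THRESHOLD:
                low_since.setdefault(c, time.time())        # start the shrink timer
                if (time.time() - low_since[c] >= SHRINK_WINDOW
                        and len(running) > MIN_CONTAINERS):
                    cluster.convert_to_preheated(c)         # shrinking step
                    low_since.pop(c, None)
            else:
                low_since.pop(c, None)
        cluster.create_preheated_as_needed()                # creation step (see the rule below)
        time.sleep(30)
```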
  • The present application further provides a server. The server includes a memory and a processor; the memory stores a container adaptive scaling program which, when executed by the processor, implements the following steps:
  • Setting step: set a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
  • Monitoring step: obtain the resource usage rate of each currently executing container of the server;
  • Creation step: create preheated containers according to the resource usage rates and a preset creation rule;
  • Expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activate one of the created preheated containers as an executing container to perform expansion; and
  • Shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, convert that currently executing container into a preheated container to perform shrinking.
  • The present application further provides a computer-readable storage medium on which a container adaptive scaling program is stored; the container adaptive scaling program can be executed by one or more processors to implement the steps of the container adaptive scaling method described above.
  • The container adaptive scaling method, server and storage medium proposed in this application monitor in real time the resource usage rate of the containers currently executing in the system, create preheated containers in advance according to the resource usage rates and a preset creation rule, and then decide, from the relationship between the resource usage rate of a currently executing container and the preset thresholds, whether to trigger container expansion or shrinking.
  • When expansion is needed, a created preheated container is activated as an executing container.
  • When shrinking is needed, the currently executing container is converted into a preheated container.
  • This application can warm up in advance the containers that may need to be started, saving container startup time and speeding up container startup during expansion, while dynamically reducing the number of containers according to system demand to ensure the stability and reliability of the server.
  • Figure 1 is a schematic diagram of an embodiment of a server in this application.
  • FIG. 2 is a schematic diagram of the program modules of the embodiment of the container adaptive scaling program in FIG. 1;
  • FIG. 3 is a schematic flowchart of the embodiment of the container adaptive scaling method in FIG. 1.
  • This application provides a server 1. FIG. 1 is a schematic diagram of an embodiment of the server 1 of this application.
  • In this embodiment, the server 1 sets the expansion resource usage threshold and the shrink resource usage threshold and monitors the resource usage of each container, creating preheated containers that prepare in advance for containers that may need to be started. When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a previously created preheated container is activated as an executing container to perform expansion; when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking.
  • This application can prepare in advance the containers that may need to be started, saving container startup time and speeding up container startup during expansion, while dynamically reducing the number of containers according to system demand to ensure the stability and reliability of the server.
  • The server 1 may be one or more of a rack server, a blade server, a tower server, or a cabinet server.
  • the server 1 includes, but is not limited to, a memory 11, a processor 12, and a network interface 13.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • In some embodiments the memory 11 may be an internal storage unit of the server 1, such as a hard disk of the server 1. In other embodiments the memory 11 may also be an external storage device of the server 1, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the server 1.
  • The memory 11 may also include both an internal storage unit of the server 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the server 1, such as the code of the container adaptive scaling program 10, etc., but also to temporarily store data that has been output or will be output.
  • In some embodiments the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor or other data processing chip, and is used to run the program code stored in the memory 11 or to process data, for example, to execute the container adaptive scaling program 10.
  • the network interface 13 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), and is generally used to establish a communication connection between the server 1 and other electronic devices.
  • Optionally, the server 1 may further include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the server 1 and to display a visualized user interface.
  • FIG. 1 only shows the server 1 with components 11-13 and the container adaptive scaling program 10. Those skilled in the art will understand that the structure shown in FIG. 1 does not constitute a limitation on the server 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • In one embodiment, when the container adaptive scaling program 10 of FIG. 1 is executed by the processor 12, the following steps are implemented:
  • Setting step: set a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
  • Monitoring step: obtain the resource usage rate of each currently executing container of the server;
  • Creation step: create preheated containers according to the resource usage rates and a preset creation rule;
  • Expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activate one of the created preheated containers as an executing container to perform expansion; and
  • Shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, convert that currently executing container into a preheated container to perform shrinking.
  • Optionally, in another embodiment, when the processor runs the container adaptive scaling program, the following step is further executed: when the resource usage rate exceeds the expansion resource usage threshold but the number of containers has already reached the maximum container quantity, container expansion is stopped.
  • Optionally, in another embodiment, the creation step further includes: at every preset period, creating preheated containers according to the resource usage rates and the creation rule.
  • Referring to FIG. 2, which is a schematic diagram of the program modules of the embodiment of the container adaptive scaling program 10 in FIG. 1.
  • the container adaptive scaling program 10 is divided into multiple modules, and the multiple modules are stored in the memory 11 and executed by the processor 12 to complete the application.
  • the module referred to in this application refers to a series of computer program instruction segments that can complete specific functions.
  • the container adaptive scaling program 10 includes a setting module 110, a monitoring module 120, a creation module 130, a capacity expansion module 140, and a capacity reduction module 150.
  • The setting module 110 is used to set the minimum container quantity and the maximum container quantity so as to limit the range of adaptive container scaling, and at the same time to set the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling, i.e. the critical values that trigger container expansion or shrinking.
  • Specifically, the resource usage rate includes usage rates of resources such as CPU and memory.
  • the monitoring module 120 is configured to obtain the resource usage rate of each current execution container of the server, so as to realize the monitoring of the resource usage rate of each container.
  • Specifically, in one embodiment, a data collector is used to collect each container's load and the usage rates of resources such as CPU and memory in real time; in other embodiments, the data collector may also collect each container's load and resource usage rates at fixed intervals.
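  • For reference, such a data collector can be sketched on top of the standard docker stats CLI; the sketch below is illustrative only, the output template fields used are those provided by docker stats, and the function name is an assumption rather than an interface defined by the application.

```python
import subprocess

def collect_usage():
    """Return {container_name: (cpu_fraction, mem_fraction)} for all running containers."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.Name}} {{.CPUPerc}} {{.MemPerc}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    usage = {}
    for line in out.splitlines():
        name, cpu, mem = line.split()
        usage[name] = (float(cpu.rstrip("%")) / 100, float(mem.rstrip("%")) / 100)
    return usage
```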
  • The creation module 130 is configured to create preheated containers according to the aforementioned resource usage rates and a preset creation rule.
  • A preheated container is created for each base image before a request to start a container is accepted, so that the corresponding base image is already mounted with a writable-layer file system.
  • The base image is an image packaged from a Linux-based system plus the execution environment required for running and the programs required for control and monitoring.
  • Specifically, in the open-source application container engine Docker, preheating a container means using the pause command to convert a running container into one that occupies no computing resources but has completed state loading, with the application already loaded into memory.
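  • A minimal sketch of this preheat/activate cycle, assuming the standard docker pause and unpause commands and a placeholder image and naming scheme, might look as follows.

```python
import subprocess

IMAGE = "my-service:latest"  # placeholder image name

def run(*cmd):
    return subprocess.run(list(cmd), capture_output=True, text=True, check=True).stdout.strip()

def create_preheated(name):
    # Start the container so its state and application are loaded into memory,
    # then pause it so it no longer consumes CPU.
    run("docker", "run", "-d", "--name", name, IMAGE)
    run("docker", "pause", name)

def activate(name):
    # Activation is just unpausing: the already-loaded container resumes immediately.
    run("docker", "unpause", name)

def convert_to_preheated(name):
    # Shrinking: pause a running container instead of destroying it.
    run("docker", "pause", name)
```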
  • In this embodiment, the creation rule for calculating the number of preheated containers that currently need to be created is:
  • N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
  • Further, the creation rule also includes: when N > 0, rounding N up to the nearest integer (for example, when N = 1.2, N is taken as 2) as the number of preheated containers the server currently needs to create; and when N ≤ 0, maintaining the current number of executing containers.
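  • A direct transcription of this rule into code could look as follows; the function and variable names are illustrative, n is the empirical exponent described above, and the sample values in the comment are only a worked example.

```python
import math

def preheated_containers_needed(max_containers, running, preheated, hot, n=2.0):
    """Creation rule: hot is the number of running containers whose usage
    exceeds the expansion threshold; n is an empirical value with n > 1."""
    if running == 0:
        return 0
    n_raw = (max_containers - running - preheated) * (hot / running) ** n - preheated
    if n_raw > 0:
        return math.ceil(n_raw)   # e.g. N = 1.2 is rounded up to 2
    return 0                      # N <= 0: keep the current number of containers

# Worked example: 10 max, 5 running (2 of them above the threshold), 1 preheated, n = 2:
# (10 - 5 - 1) * (2/5)**2 - 1 = -0.36, so no new preheated container is created.
```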
  • In another embodiment, the creation module 130 may also create preheated containers at every preset period (for example, 10 minutes) according to the resource usage rates and the creation rule, and any preheated container that has not been activated within the preset period (for example, 10 minutes) is forcibly closed.
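  • The periodic variant could be scheduled as in the sketch below; the 10-minute period matches the example above, while the pool-tracking helpers (create_preheated_as_needed, preheated_containers, created_at, remove) are assumptions for illustration.

```python
import time

PERIOD = 10 * 60  # preset period from the example above: 10 minutes

def periodic_preheat_maintenance(cluster):
    while True:
        cluster.create_preheated_as_needed()   # apply the creation rule (hypothetical helper)
        now = time.time()
        for c in cluster.preheated_containers():
            # Force-close preheated containers that were never activated within the period.
            if now - cluster.created_at(c) >= PERIOD:
                cluster.remove(c)
        time.sleep(PERIOD)
```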
  • The expansion module 140 is used to perform container expansion. When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a created preheated container is activated as an executing container to perform expansion.
  • Further, the expansion module also stops container expansion when the number of containers has reached the maximum container quantity.
  • The shrinking module 150 is used to perform container shrinking.
  • When the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time (for example, 10 minutes) and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking.
  • Further, before performing the container expansion action, the expansion module 140 also executes the following: when the resource usage rate is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, it determines whether a preheated container exists; if a preheated container exists, the preheated container is directly switched to the activated state to provide service; and if no preheated container exists, a container in the non-running state is cold-started to provide service.
  • In addition, this application also provides a container adaptive scaling method.
  • Referring to FIG. 3, which is a schematic flowchart of an embodiment of the container adaptive scaling method of this application.
  • the processor 12 of the server 1 executes the container adaptive scaling program 10 stored in the memory 11 to implement the following steps of the container adaptive scaling method:
  • Setting step S100: the setting module 110 is used to set the minimum container quantity and the maximum container quantity so as to limit the range of adaptive container scaling, and at the same time to set the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling, i.e. the critical values that trigger container expansion or shrinking.
  • Specifically, the resource usage rate includes usage rates of resources such as CPU and memory.
  • Monitoring step S110: the monitoring module 120 is used to obtain the resource usage rate of each currently executing container of the server, thereby monitoring the resource usage of each container.
  • In one embodiment, a data collector is used to collect each container's load and the usage rates of resources such as CPU and memory in real time; in other embodiments, the data collector may also collect them at fixed intervals.
  • Creation step S120: the creation module 130 is used to create preheated containers according to the aforementioned resource usage rates and a preset creation rule.
  • A preheated container is created for each base image before a request to start a container is accepted, so that the corresponding base image is already mounted with a writable-layer file system. The base image is an image packaged from a Linux-based system plus the execution environment required for running and the programs required for control and monitoring.
  • Specifically, in the open-source application container engine Docker, preheating a container means using the pause command to convert a running container into one that occupies no computing resources but has completed state loading, with the application already loaded into memory.
  • In this embodiment, the creation rule for calculating the number of preheated containers that currently need to be created is:
  • N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
  • Further, the creation rule also includes: when N > 0, rounding N up to the nearest integer as the number of preheated containers the server currently needs to create; and when N ≤ 0, maintaining the current number of executing containers.
  • In another embodiment, in step S120 the creation module 130 may also create preheated containers at every preset period (for example, 10 minutes) according to the resource usage rates and the creation rule, and any preheated container that has not been activated within the preset period (for example, 10 minutes) is forcibly closed.
  • Expansion step S130: the expansion module 140 is used to perform container expansion.
  • When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a created preheated container is activated as an executing container to perform expansion.
  • Further, the expansion module also stops container expansion when the number of containers has reached the maximum container quantity.
  • Shrinking step S140: the shrinking module 150 is used to perform container shrinking.
  • When the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time (for example, 10 minutes) and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking.
  • Further, the expansion step S130 also includes: when the resource usage rate is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, determining whether a preheated container exists; if a preheated container exists, directly switching it to the activated state to provide service; and if no preheated container exists, cold-starting a container in the non-running state to provide service.
  • An embodiment of the present application also proposes a computer-readable storage medium.
  • The computer-readable storage medium may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like.
  • the computer-readable storage medium includes a container adaptive scaling program 10, which implements the following operations when executed by the processor 12:
  • Setting step S100: the setting module 110 is used to set the minimum container quantity and the maximum container quantity so as to limit the range of adaptive container scaling, and at the same time to set the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling, i.e. the critical values that trigger container expansion or shrinking.
  • Specifically, the resource usage rate includes usage rates of resources such as CPU and memory.
  • Monitoring step S110: the monitoring module 120 is used to obtain the resource usage rate of each currently executing container of the server, thereby monitoring the resource usage of each container.
  • In one embodiment, a data collector is used to collect each container's load and the usage rates of resources such as CPU and memory in real time; in other embodiments, the data collector may also collect them at fixed intervals.
  • Creation step S120: the creation module 130 is used to create preheated containers according to the aforementioned resource usage rates and a preset creation rule.
  • A preheated container is created for each base image before a request to start a container is accepted, so that the corresponding base image is already mounted with a writable-layer file system. The base image is an image packaged from a Linux-based system plus the execution environment required for running and the programs required for control and monitoring.
  • Specifically, in the open-source application container engine Docker, preheating a container means using the pause command to convert a running container into one that occupies no computing resources but has completed state loading, with the application already loaded into memory.
  • In this embodiment, the creation rule for calculating the number of preheated containers that currently need to be created is:
  • N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
  • Further, the creation rule also includes: when N > 0, rounding N up to the nearest integer as the number of preheated containers the server currently needs to create; and when N ≤ 0, maintaining the current number of executing containers.
  • In another embodiment, in step S120 the creation module 130 may also create preheated containers at every preset period (for example, 10 minutes) according to the resource usage rates and the creation rule, and any preheated container that has not been activated within the preset period (for example, 10 minutes) is forcibly closed.
  • Expansion step S130: the expansion module 140 is used to perform container expansion.
  • When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a created preheated container is activated as an executing container to perform expansion.
  • Further, the expansion module also stops container expansion when the number of containers has reached the maximum container quantity.
  • Shrinking step S140: the shrinking module 150 is used to perform container shrinking.
  • When the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time (for example, 10 minutes) and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking.
  • Further, the expansion step S130 also includes: when the resource usage rate is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, determining whether a preheated container exists; if a preheated container exists, directly switching it to the activated state to provide service; and if no preheated container exists, cold-starting a container in the non-running state to provide service.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

This application discloses a container adaptive scaling method applied to a server. The method includes: setting a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling; obtaining the resource usage rate of each currently executing container of the server; creating preheated containers; when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activating a preheated container as an executing container to perform expansion; and when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, converting that currently executing container into a preheated container to perform shrinking. This application can warm up in advance the containers that may need to be started, speeding up container startup during expansion, and can dynamically reduce the number of containers according to system demand, ensuring the stability and reliability of the server.

Description

Container adaptive scaling method, server and storage medium
This application claims, under the Paris Convention, priority to the Chinese patent application No. CN201910063715.4, filed on January 23, 2019 and entitled "Container adaptive scaling method, server and storage medium", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a container adaptive scaling method, server and storage medium.
Background
Docker is an open source application container engine that allows developers to package their applications and dependencies into a portable container and then publish it to any popular Linux machine; it can also be used for virtualization. Containers are fully sandboxed and have no interfaces to one another. A complete Docker deployment consists of the following parts: the client (Docker Client), the daemon (Docker Daemon), images (Docker Image) and containers (Docker Container). Container technology is a new type of virtualization technology; as a logical abstraction of physical resources, containers occupy few resources and can be provisioned quickly, which makes them suitable for Internet application patterns with sudden workload changes.
Because Docker images are loaded in layers, the uppermost layer is a readable and writable layer. For Docker, a container can be regarded as an image that has run or is running, that is, an image with a change layer added on top. In current container technology, when a container is actually started, Docker must perform for this layer a series of operations such as creating a virtual disk device, creating and mounting a file system, writing the runtime configuration, and starting the process.
However, for relatively complex applications, this series of operations slows down container startup, and the time needed to expand or shrink containers also becomes longer, so the container expansion speed may fail to meet business requirements and the normal operation of the business is affected.
Summary
The main purpose of this application is to provide a container adaptive scaling method, server and storage medium, aiming to solve the technical problem that container expansion or shrinking takes so long that the container expansion speed cannot meet business requirements and the normal operation of the business is affected.
To achieve the above purpose, this application provides a container adaptive scaling method applied to a server, the method including:
a setting step: setting a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
a monitoring step: obtaining the resource usage rate of each currently executing container of the server;
a creation step: creating preheated containers according to the resource usage rates and a preset creation rule;
an expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activating one of the created preheated containers as an executing container to perform expansion; and
a shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, converting that currently executing container into a preheated container to perform shrinking.
To achieve the above purpose, this application further provides a server. The server includes a memory and a processor; the memory stores a container adaptive scaling program which, when executed by the processor, implements the following steps:
a setting step: setting a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
a monitoring step: obtaining the resource usage rate of each currently executing container of the server;
a creation step: creating preheated containers according to the resource usage rates and a preset creation rule;
an expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activating one of the created preheated containers as an executing container to perform expansion; and
a shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, converting that currently executing container into a preheated container to perform shrinking.
To achieve the above purpose, this application further provides a computer-readable storage medium on which a container adaptive scaling program is stored; the container adaptive scaling program can be executed by one or more processors to implement the steps of the container adaptive scaling method described above.
The container adaptive scaling method, server and storage medium proposed in this application monitor in real time the resource usage rate of the containers currently executing in the system, create preheated containers in advance according to the resource usage rates and a preset creation rule, and then decide, from the relationship between the resource usage rate of a currently executing container and the preset thresholds, whether to trigger container expansion or shrinking: when expansion is needed, a created preheated container is activated as an executing container; when shrinking is needed, the currently executing container is converted into a preheated container. This application can warm up in advance the containers that may need to be started, saving container startup time and speeding up container startup during expansion, while dynamically reducing the number of containers according to system demand to ensure the stability and reliability of the server.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an embodiment of the server of this application;
FIG. 2 is a schematic diagram of the program modules of the embodiment of the container adaptive scaling program in FIG. 1;
FIG. 3 is a schematic flowchart of the embodiment of the container adaptive scaling method in FIG. 1.
The realization of the purpose, functional features and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
In order to make the purpose, technical solutions and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative work fall within the scope of protection of this application.
It should be noted that descriptions involving "first", "second", etc. in this application are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with each other, but only on the basis that they can be implemented by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, such a combination should be considered not to exist and not to fall within the scope of protection claimed by this application.
This application provides a server 1. Referring to FIG. 1, which is a schematic diagram of an embodiment of the server 1 of this application, in this embodiment the server 1 sets the expansion resource usage threshold and the shrink resource usage threshold, monitors the resource usage of each container, and creates preheated containers that prepare in advance for containers that may need to be started. When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a previously created preheated container is activated as an executing container to perform expansion; when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking. This application can prepare in advance the containers that may need to be started, saving container startup time and speeding up container startup during expansion, while dynamically reducing the number of containers according to system demand to ensure the stability and reliability of the server.
The server 1 may be one or more of a rack server, a blade server, a tower server, or a cabinet server. The server 1 includes, but is not limited to, a memory 11, a processor 12 and a network interface 13.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disk, optical disc, and the like. In some embodiments the memory 11 may be an internal storage unit of the server 1, such as a hard disk of the server 1. In other embodiments the memory 11 may also be an external storage device of the server 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the server 1.
Further, the memory 11 may also include both an internal storage unit of the server 1 and an external storage device. The memory 11 can be used not only to store application software installed on the server 1 and various kinds of data, such as the code of the container adaptive scaling program 10, but also to temporarily store data that has been or will be output.
In some embodiments the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor or other data processing chip, and is used to run the program code stored in the memory 11 or to process data, for example, to execute the container adaptive scaling program 10.
The network interface 13 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), and is generally used to establish a communication connection between the server 1 and other electronic devices.
Optionally, the server 1 may further include a user interface, which may include a display (Display) and an input unit such as a keyboard (Keyboard); the optional user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, and the like. The display may also appropriately be called a display screen or display unit, and is used to display information processed in the server 1 and to display a visualized user interface.
FIG. 1 only shows the server 1 with components 11-13 and the container adaptive scaling program 10. Those skilled in the art will understand that the structure shown in FIG. 1 does not constitute a limitation on the server 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
In one embodiment, when the container adaptive scaling program 10 of FIG. 1 is executed by the processor 12, the following steps are implemented:
a setting step: setting a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
a monitoring step: obtaining the resource usage rate of each currently executing container of the server;
a creation step: creating preheated containers according to the resource usage rates and a preset creation rule;
an expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activating one of the created preheated containers as an executing container to perform expansion; and
a shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, converting that currently executing container into a preheated container to perform shrinking.
Optionally, in another embodiment, when the processor runs the container adaptive scaling program, the following step is further executed: when the resource usage rate exceeds the expansion resource usage threshold but the number of containers has already reached the maximum container quantity, container expansion is stopped.
Optionally, in another embodiment, the creation step further includes: at every preset period, creating preheated containers according to the resource usage rates and the creation rule.
For a detailed description of the above steps, please refer to the following description of FIG. 2, a schematic diagram of the program modules of the embodiment of the container adaptive scaling program 10, and FIG. 3, a schematic flowchart of the embodiment of the container adaptive scaling method.
Referring to FIG. 2, which is a schematic diagram of the program modules of the embodiment of the container adaptive scaling program 10 in FIG. 1. The container adaptive scaling program 10 is divided into multiple modules, which are stored in the memory 11 and executed by the processor 12 to complete this application. A module referred to in this application is a series of computer program instruction segments capable of completing a specific function.
In this embodiment, the container adaptive scaling program 10 includes a setting module 110, a monitoring module 120, a creation module 130, an expansion module 140 and a shrinking module 150.
The setting module 110 is used to set the minimum container quantity and the maximum container quantity so as to limit the range of adaptive container scaling, and at the same time to set the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling, i.e. the critical values that trigger container expansion or shrinking. Specifically, the resource usage rate includes usage rates of resources such as CPU and memory.
The monitoring module 120 is used to obtain the resource usage rate of each currently executing container of the server, thereby monitoring the resource usage of each container. Specifically, in one embodiment a data collector collects each container's load and the usage rates of resources such as CPU and memory in real time; in other embodiments the data collector may also collect them at fixed intervals.
The creation module 130 is used to create preheated containers according to the aforementioned resource usage rates and a preset creation rule. A preheated container is created for each base image before a request to start a container is accepted, so that the corresponding base image is already mounted with a writable-layer file system. The base image is an image packaged from a Linux-based system plus the execution environment required for running and the programs required for control and monitoring. Specifically, in the open-source application container engine Docker, preheating a container means using the pause command to convert a running container into one that occupies no computing resources but has completed state loading, with the application already loaded into memory. By setting up preheated containers, the containers that may need to be started can be prepared in advance, with the relevant configuration loaded and the memory initialized ahead of time, which saves container startup time, speeds up container startup and achieves rapid container expansion.
In this embodiment, the creation rule for calculating the number of preheated containers that currently need to be created is:
N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
Further, the creation rule also includes:
when N > 0, rounding N up to the nearest integer (for example, when N = 1.2, N is taken as 2) as the number of preheated containers the server 1 currently needs to create; and
when N ≤ 0, maintaining the current number of executing containers.
In another embodiment, the creation module 130 may also create preheated containers at every preset period (for example, 10 minutes) according to the resource usage rates and the creation rule, and any preheated container that has not been activated within the preset period (for example, 10 minutes) is forcibly closed.
The expansion module 140 is used to perform container expansion. When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a created preheated container is activated as an executing container to perform expansion.
Further, the expansion module also stops container expansion when the number of containers has reached the maximum container quantity.
The shrinking module 150 is used to perform container shrinking. When the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time (for example, 10 minutes) and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking.
Further, before performing the container expansion action, the expansion module 140 also executes:
when the resource usage rate is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, determining whether a preheated container exists;
if a preheated container exists, directly switching the preheated container to the activated state to provide service; and
if no preheated container exists, cold-starting a container in the non-running state to provide service.
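As an illustrative sketch only, this pre-check can be expressed as follows; the docker unpause and docker run commands are standard CLI calls, while the preheated-pool tracking and the image name are assumptions, not details given in the application.

```python
import subprocess

def expand(preheated_pool, image="my-service:latest"):
    """Activate a preheated container if one exists, otherwise cold-start a new one."""
    if preheated_pool:
        name = preheated_pool.pop()
        subprocess.run(["docker", "unpause", name], check=True)     # warm activation
    else:
        subprocess.run(["docker", "run", "-d", image], check=True)  # cold-start fallback
    # In either case the newly executing container is then put into service (not shown).
```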
In addition, this application also provides a container adaptive scaling method. Referring to FIG. 3, which is a schematic flowchart of an embodiment of the container adaptive scaling method of this application, when the processor 12 of the server 1 executes the container adaptive scaling program 10 stored in the memory 11, the following steps of the container adaptive scaling method are implemented:
Setting step S100: the setting module 110 is used to set the minimum container quantity and the maximum container quantity so as to limit the range of adaptive container scaling, and at the same time to set the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling, i.e. the critical values that trigger container expansion or shrinking. Specifically, the resource usage rate includes usage rates of resources such as CPU and memory.
Monitoring step S110: the monitoring module 120 is used to obtain the resource usage rate of each currently executing container of the server, thereby monitoring the resource usage of each container. In one embodiment a data collector collects each container's load and the usage rates of resources such as CPU and memory in real time; in other embodiments the data collector may also collect them at fixed intervals.
Creation step S120: the creation module 130 is used to create preheated containers according to the aforementioned resource usage rates and a preset creation rule. A preheated container is created for each base image before a request to start a container is accepted, so that the corresponding base image is already mounted with a writable-layer file system. The base image is an image packaged from a Linux-based system plus the execution environment required for running and the programs required for control and monitoring. Specifically, in the open-source application container engine Docker, preheating a container means using the pause command to convert a running container into one that occupies no computing resources but has completed state loading, with the application already loaded into memory. By setting up preheated containers, the containers that may need to be started can be prepared in advance, with the relevant configuration loaded and the memory initialized ahead of time, which saves container startup time, speeds up container startup and achieves rapid container expansion.
In this embodiment, the creation rule for calculating the number of preheated containers that currently need to be created is:
N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
Further, the creation rule also includes:
when N > 0, rounding N up to the nearest integer (for example, when N = 1.2, N is taken as 2) as the number of preheated containers the server 1 currently needs to create; and
when N ≤ 0, maintaining the current number of executing containers.
In another embodiment, in step S120 the creation module 130 may also create preheated containers at every preset period (for example, 10 minutes) according to the resource usage rates and the creation rule, and any preheated container that has not been activated within the preset period (for example, 10 minutes) is forcibly closed.
Expansion step S130: the expansion module 140 is used to perform container expansion. When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a created preheated container is activated as an executing container to perform expansion.
Further, the expansion module also stops container expansion when the number of containers has reached the maximum container quantity.
Shrinking step S140: the shrinking module 150 is used to perform container shrinking. When the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time (for example, 10 minutes) and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking.
Further, the expansion step S130 also includes:
when the resource usage rate is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, determining whether a preheated container exists;
if a preheated container exists, directly switching the preheated container to the activated state to provide service; and
if no preheated container exists, cold-starting a container in the non-running state to provide service.
In addition, an embodiment of this application also provides a computer-readable storage medium, which may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like. The computer-readable storage medium includes the container adaptive scaling program 10, and when the container adaptive scaling program 10 is executed by the processor 12 the following operations are implemented:
Setting step S100: the setting module 110 is used to set the minimum container quantity and the maximum container quantity so as to limit the range of adaptive container scaling, and at the same time to set the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling, i.e. the critical values that trigger container expansion or shrinking. Specifically, the resource usage rate includes usage rates of resources such as CPU and memory.
Monitoring step S110: the monitoring module 120 is used to obtain the resource usage rate of each currently executing container of the server, thereby monitoring the resource usage of each container. In one embodiment a data collector collects each container's load and the usage rates of resources such as CPU and memory in real time; in other embodiments the data collector may also collect them at fixed intervals.
Creation step S120: the creation module 130 is used to create preheated containers according to the aforementioned resource usage rates and a preset creation rule. A preheated container is created for each base image before a request to start a container is accepted, so that the corresponding base image is already mounted with a writable-layer file system. The base image is an image packaged from a Linux-based system plus the execution environment required for running and the programs required for control and monitoring. Specifically, in the open-source application container engine Docker, preheating a container means using the pause command to convert a running container into one that occupies no computing resources but has completed state loading, with the application already loaded into memory. By setting up preheated containers, the containers that may need to be started can be prepared in advance, with the relevant configuration loaded and the memory initialized ahead of time, which saves container startup time, speeds up container startup and achieves rapid container expansion.
In this embodiment, the creation rule for calculating the number of preheated containers that currently need to be created is:
N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
Further, the creation rule also includes:
when N > 0, rounding N up to the nearest integer (for example, when N = 1.2, N is taken as 2) as the number of preheated containers the server 1 currently needs to create; and
when N ≤ 0, maintaining the current number of executing containers.
In another embodiment, in step S120 the creation module 130 may also create preheated containers at every preset period (for example, 10 minutes) according to the resource usage rates and the creation rule, and any preheated container that has not been activated within the preset period (for example, 10 minutes) is forcibly closed.
Expansion step S130: the expansion module 140 is used to perform container expansion. When the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, a created preheated container is activated as an executing container to perform expansion.
Further, the expansion module also stops container expansion when the number of containers has reached the maximum container quantity.
Shrinking step S140: the shrinking module 150 is used to perform container shrinking. When the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time (for example, 10 minutes) and the number of containers is greater than the minimum container quantity, that currently executing container is converted into a preheated container to perform shrinking.
Further, the expansion step S130 also includes:
when the resource usage rate is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, determining whether a preheated container exists;
if a preheated container exists, directly switching the preheated container to the activated state to provide service; and
if no preheated container exists, cold-starting a container in the non-running state to provide service.
It should be noted that the serial numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments. Moreover, the terms "include", "comprise" or any other variants thereof herein are intended to cover non-exclusive inclusion, so that a process, apparatus, article or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article or method that includes that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, magnetic disk or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the various embodiments of this application.
The above are only preferred embodiments of this application and do not therefore limit the patent scope of this application. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A container adaptive scaling method applied to a server, characterized in that the method comprises:
    a setting step: setting a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
    a monitoring step: obtaining the resource usage rate of each currently executing container of the server;
    a creation step: creating preheated containers according to the resource usage rates and a preset creation rule;
    an expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activating one of the created preheated containers as an executing container to perform expansion; and
    a shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, converting that currently executing container into a preheated container to perform shrinking.
  2. The container adaptive scaling method according to claim 1, characterized in that the expansion step further comprises:
    stopping container expansion when the number of containers has reached the maximum container quantity.
  3. The container adaptive scaling method according to claim 1, characterized in that the formula of the creation rule is:
    N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
  4. The container adaptive scaling method according to claim 3, characterized in that the creation rule further comprises:
    when N > 0, rounding N up to the nearest integer as the number of preheated containers the server currently needs to create; and
    when N ≤ 0, maintaining the current number of executing containers.
  5. The container adaptive scaling method according to any one of claims 1-4, characterized in that the creation step further comprises: at every preset period, creating preheated containers according to the resource usage rates and the creation rule.
  6. The container adaptive scaling method according to claim 5, characterized in that the creation step further comprises:
    if a preheated container is not activated within the preset period, closing that preheated container.
  7. The container adaptive scaling method according to claim 1, characterized in that the expansion step further comprises:
    if there is currently no preheated container, cold-starting a non-running container to perform expansion.
  8. A server, characterized in that the server comprises a memory and a processor, the memory storing a container adaptive scaling program which, when executed by the processor, implements the following steps:
    a setting step: setting a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
    a monitoring step: obtaining the resource usage rate of each currently executing container of the server;
    a creation step: creating preheated containers according to the resource usage rates and a preset creation rule;
    an expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activating one of the created preheated containers as an executing container to perform expansion; and
    a shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, converting that currently executing container into a preheated container to perform shrinking.
  9. The server according to claim 8, characterized in that, when running the container adaptive scaling program, the processor further executes the following step:
    stopping container expansion when the number of containers has reached the maximum container quantity.
  10. The server according to claim 8, characterized in that the formula of the creation rule is:
    N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
  11. The server according to claim 10, characterized in that the creation rule further comprises:
    when N > 0, rounding N up to the nearest integer as the number of preheated containers the server currently needs to create; and
    when N ≤ 0, maintaining the current number of executing containers.
  12. The server according to any one of claims 8-11, characterized in that the creation step further comprises:
    at every preset period, creating preheated containers according to the resource usage rates and the creation rule.
  13. The server according to claim 12, characterized in that the creation step further comprises:
    if a preheated container is not activated within the preset period, closing that preheated container.
  14. The server according to claim 8, characterized in that the expansion step further comprises:
    if there is currently no preheated container, cold-starting a non-running container to perform expansion.
  15. A computer-readable storage medium, characterized in that a container adaptive scaling program is stored on the computer-readable storage medium, and the container adaptive scaling program can be executed by one or more processors to implement the following steps:
    a setting step: setting a minimum container quantity, a maximum container quantity, and the expansion resource usage threshold and the shrink resource usage threshold that trigger container scaling;
    a monitoring step: obtaining the resource usage rate of each currently executing container of the server;
    a creation step: creating preheated containers according to the resource usage rates and a preset creation rule;
    an expansion step: when the resource usage rate of a currently executing container is greater than the expansion resource usage threshold and the number of containers is less than the maximum container quantity, activating one of the created preheated containers as an executing container to perform expansion; and
    a shrinking step: when the resource usage rate of a currently executing container remains below the shrink resource usage threshold for a preset time and the number of containers is greater than the minimum container quantity, converting that currently executing container into a preheated container to perform shrinking.
  16. The computer-readable storage medium according to claim 15, characterized in that the expansion step further comprises:
    stopping container expansion when the number of containers has reached the maximum container quantity.
  17. The computer-readable storage medium according to claim 15, characterized in that the formula of the creation rule is:
    N = (maximum container quantity − number of currently executing containers − current number of preheated containers) × (number of containers whose resource usage rate is greater than the expansion resource usage threshold / number of currently executing containers)^n − current number of preheated containers, where N is the number of preheated containers that currently need to be created, the number of currently executing containers is the number of containers in the running state, and n is an empirical value with n > 1.
  18. The computer-readable storage medium according to claim 17, characterized in that the creation rule further comprises:
    when N > 0, rounding N up to the nearest integer as the number of preheated containers the server currently needs to create; and
    when N ≤ 0, maintaining the current number of executing containers.
  19. The computer-readable storage medium according to any one of claims 15-18, characterized in that the creation step further comprises:
    at every preset period, creating preheated containers according to the resource usage rates and the creation rule.
  20. The computer-readable storage medium according to claim 19, characterized in that the creation step further comprises:
    if a preheated container is not activated within the preset period, closing that preheated container.
PCT/CN2019/116556 2019-01-23 2019-11-08 Container adaptive scaling method, server and storage medium WO2020151306A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910063715.4A CN109873718A (zh) 2019-01-23 2019-01-23 Container adaptive scaling method, server and storage medium
CN201910063715.4 2019-01-23

Publications (1)

Publication Number Publication Date
WO2020151306A1 true WO2020151306A1 (zh) 2020-07-30

Family

ID=66917968

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/116556 WO2020151306A1 (zh) 2019-01-23 2019-11-08 一种容器自适应伸缩方法、服务器及存储介质

Country Status (2)

Country Link
CN (1) CN109873718A (zh)
WO (1) WO2020151306A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109873718A (zh) * 2019-01-23 2019-06-11 平安科技(深圳)有限公司 一种容器自适应伸缩方法、服务器及存储介质
CN110287003B (zh) * 2019-06-28 2020-04-21 北京九章云极科技有限公司 资源的管理方法和管理系统
CN112711506A (zh) * 2019-10-24 2021-04-27 阿里巴巴集团控股有限公司 资源组内应用实例的调整方法、装置、存储介质和处理器
CN111274576B (zh) * 2020-01-17 2022-08-02 山东浪潮科学研究院有限公司 智能合约运行环境的控制方法及系统、设备、介质
CN111338752B (zh) * 2020-02-14 2022-04-08 聚好看科技股份有限公司 容器调整方法及装置
CN111464616A (zh) * 2020-03-30 2020-07-28 招商局金融科技有限公司 自动调节应用负载服务数量的方法、服务器及存储介质
CN111431769A (zh) * 2020-03-30 2020-07-17 招商局金融科技有限公司 数据监控方法、服务器及存储介质
CN112363825A (zh) * 2020-10-16 2021-02-12 北京五八信息技术有限公司 一种弹性伸缩方法及装置
CN112543354B (zh) * 2020-11-27 2023-05-09 鹏城实验室 业务感知的分布式视频集群高效伸缩方法和系统
CN113032153B (zh) * 2021-04-12 2023-04-28 深圳赛安特技术服务有限公司 容器服务资源动态扩容方法、系统、装置及存储介质
CN113407112B (zh) * 2021-05-11 2023-02-10 浙江大华技术股份有限公司 扩容方法、电子设备及计算机可读存储介质
CN114138357A (zh) * 2021-10-29 2022-03-04 北京达佳互联信息技术有限公司 一种请求处理方法、装置、电子设备、存储介质及产品
CN115017186A (zh) * 2022-04-21 2022-09-06 北京火山引擎科技有限公司 一种任务处理方法、装置、设备及介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810020A (zh) * 2014-02-14 2014-05-21 华为技术有限公司 虚拟机弹性伸缩方法及装置
CN105955662A (zh) * 2016-04-22 2016-09-21 浪潮(北京)电子信息产业有限公司 一种k-db数据表空间的扩容方法与系统
US20170223100A1 (en) * 2013-12-20 2017-08-03 Facebook, Inc. Self-adaptive control system for dynamic capacity management of latency-sensitive application servers
CN108769100A (zh) * 2018-04-03 2018-11-06 郑州云海信息技术有限公司 一种基于kubernetes容器数量弹性伸缩的实现方法及其装置
CN109067867A (zh) * 2018-07-30 2018-12-21 北京航空航天大学 面向数据中心负载监控的虚拟化容器服务弹性伸缩方法
CN109873718A (zh) * 2019-01-23 2019-06-11 平安科技(深圳)有限公司 一种容器自适应伸缩方法、服务器及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243285B (zh) * 2014-09-19 2018-02-23 广州华多网络科技有限公司 一种消息推送的方法以及服务器
CN106227582B (zh) * 2016-08-10 2019-06-11 华为技术有限公司 弹性伸缩方法及系统
CN108459905B (zh) * 2017-02-17 2022-01-14 华为技术有限公司 资源池容量规划方法及服务器
US10445117B2 (en) * 2017-02-24 2019-10-15 Genband Us Llc Predictive analytics for virtual network functions

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170223100A1 (en) * 2013-12-20 2017-08-03 Facebook, Inc. Self-adaptive control system for dynamic capacity management of latency-sensitive application servers
CN103810020A (zh) * 2014-02-14 2014-05-21 华为技术有限公司 虚拟机弹性伸缩方法及装置
CN105955662A (zh) * 2016-04-22 2016-09-21 浪潮(北京)电子信息产业有限公司 一种k-db数据表空间的扩容方法与系统
CN108769100A (zh) * 2018-04-03 2018-11-06 郑州云海信息技术有限公司 一种基于kubernetes容器数量弹性伸缩的实现方法及其装置
CN109067867A (zh) * 2018-07-30 2018-12-21 北京航空航天大学 面向数据中心负载监控的虚拟化容器服务弹性伸缩方法
CN109873718A (zh) * 2019-01-23 2019-06-11 平安科技(深圳)有限公司 一种容器自适应伸缩方法、服务器及存储介质

Also Published As

Publication number Publication date
CN109873718A (zh) 2019-06-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19910997

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19910997

Country of ref document: EP

Kind code of ref document: A1