WO2019134292A1 - Container allocation method, apparatus, server, and medium - Google Patents

Container allocation method, apparatus, server, and medium

Info

Publication number
WO2019134292A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
performance
container
utilization
score
Prior art date
Application number
PCT/CN2018/082356
Other languages
English (en)
French (fr)
Inventor
占帅兵
陈少杰
张文明
Original Assignee
武汉斗鱼网络科技有限公司
Priority date
Filing date
Publication date
Application filed by 武汉斗鱼网络科技有限公司
Publication of WO2019134292A1 publication Critical patent/WO2019134292A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a container allocation method, apparatus, server, and medium.
  • Containers can be used as a replacement for virtual machines to help developers build, migrate, deploy, and instantiate applications.
  • a container is a collection of processes that share an operating system instance but are independent of other processes in the server system. Containers do not require a full operating system, a feature that makes them lighter than virtual machines. Because the container can be started in seconds, the container can be expanded to meet the needs of the application with only a small amount of resources allocated.
  • Containers are often applied to microservices, each representing a service that is interconnected through a network. This architecture allows each module to be deployed and extended independently.
  • since a container is a collection of processes, a container also has three states: waiting, running, and exited.
  • once the container configuration has been defined and submitted to the container management tool, the container enters the waiting state.
  • when the container is assigned to a server by the scheduler and starts running, it enters the running state. Finally, when the container finishes, exits, is lost, or is terminated, it is considered ended; if scheduling fails or the container is never scheduled, its state is likewise treated as ended.
  • the existing container allocation methods include: random scheduling method, average allocation method and one-by-one allocation method.
  • the random scheduling method randomly selects a server from the schedulable server set as the starting node of the container to be scheduled;
  • the average allocation method allocates containers preferentially to the servers that currently hold the fewest containers, based on the number of containers already allocated to each server in the set, until allocation is balanced;
  • the one-by-one allocation method allocates the container to the server with the most resources already allocated, or equivalently the server with the fewest remaining allocatable resources; only after one server is full is the next server used, so as to maximize per-server resource utilization.
  • however, the random scheduling method is too arbitrary and takes neither energy saving nor reliability into account.
  • the average allocation method requires each server to be in operation, which will result in greater energy consumption.
  • the one-by-one allocation method does not consider the state of the server running, which is likely to cause the server to load too high, resulting in downtime, loss of the container, and ultimately affect the quality of service.
  • the embodiment of the present application solves the technical problem of large energy consumption and poor reliability existing in the existing container allocation method by providing a container allocation method, device, server and medium.
  • a container allocation method, comprising: acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization; calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container; determining a performance score of each server according to the predicted performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization; determining a target server from the server cluster according to the performance scores; and
  • the container is allocated to the target server to run.
  • the optimal performance indicator is a CPU utilization rate of 60%-70%, or a memory utilization rate of 60%-70%.
  • determining the performance score of each server according to the predicted performance parameter and the preset optimal performance indicator includes: calculating the absolute value of the difference between the predicted performance parameter and the optimal performance indicator; and determining the performance score based on the absolute difference, wherein the absolute difference is inversely related to the performance score.
  • determining the performance score of each server according to the predicted performance parameter and the preset optimal performance indicator includes: querying a preset score list based on the predicted performance parameter, the score list being associated with the optimal performance indicator; and determining, from the score list, the score corresponding to the predicted performance parameter as the performance score.
  • the determining, according to the performance score, the target server from the server cluster comprises: determining, as the target server, a server with the highest performance score in the server cluster.
  • the current performance parameter includes a current CPU usage and a current memory usage
  • the predicted performance parameter includes a utilization rate of the CPU after obtaining the container and a utilization rate of the memory after obtaining the container
  • the best performance indicator includes the optimal utilization of the CPU and the optimal utilization of the memory
  • determining the performance score of each server according to the predicted performance parameter and the preset optimal performance indicator includes: calculating, from the predicted performance parameter, the utilization difference between each server's CPU utilization after obtaining the container and its memory utilization after obtaining the container; and determining the performance score of each server according to the utilization difference, the predicted performance parameter, and the preset optimal performance indicator, the performance score being inversely related to the utilization difference.
  • a container allocation apparatus, comprising:
  • An obtaining module configured to acquire current performance parameters of each server in the server cluster; the current performance parameter represents a current CPU utilization rate or a current memory utilization rate;
  • a calculating module configured to calculate, according to the current performance parameter, an expected performance parameter after each server obtains the container
  • a scoring module configured to determine a performance score of each server according to the predicted performance parameter and a preset optimal performance indicator; wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;
  • a determining module configured to determine a target server from the server cluster according to the performance score
  • An allocation module for distributing the container to the target server for operation.
  • the optimal performance indicator is a CPU utilization rate of 60%-70%, or a memory utilization rate of 60%-70%.
  • a distribution server comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, the processor implementing the method of the first aspect when executing the program.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the method of the first aspect.
  • the method, apparatus, server, and medium provided by the embodiments of the present application acquire the current performance parameters of each server in the server cluster, calculate, based on the current performance parameters, the expected performance parameters of each server after it obtains the container, determine the performance score of each server according to the predicted performance parameters and the preset optimal performance indicator, determine a target server from the server cluster according to the performance scores, and allocate the container to the target server to run. That is, when the container is allocated, the relationship between the expected performance parameters and the preset optimal performance indicator is considered. On the one hand, this avoids the energy-consumption problem of distributing containers evenly across servers; on the other hand, considering the optimal performance indicator avoids container loss caused by overloaded servers, achieving energy savings and improved reliability.
  • FIG. 1 is a flow chart of a method for allocating containers in an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a container allocation apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a distribution server in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a computer readable storage medium 400 according to an embodiment of the present application.
  • the embodiment of the present application solves the technical problem of large energy consumption and poor reliability of the existing container allocation method by providing a container allocation method, device, server and medium, and achieves the effects of energy saving and reliability improvement.
  • a container allocation method, comprising: acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization; calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container; determining a performance score of each server according to the predicted performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization; determining a target server from the server cluster according to the performance scores; and
  • the container is allocated to the target server to run.
  • the current performance parameters of each server in the server cluster are obtained; based on the current performance parameters, the expected performance parameters of each server after it obtains the container are calculated; the performance score of each server is then determined according to the predicted performance parameters and the preset optimal performance indicator; a target server is determined from the server cluster according to the performance scores; and the container is allocated to the target server to run. That is, when the container is allocated, the relationship between the expected performance parameters and the preset optimal performance indicator is considered.
  • on the one hand, this avoids the energy-consumption problem of distributing containers evenly across servers; on the other hand, considering the optimal performance indicator avoids container loss caused by overloaded servers, achieving energy savings and improved reliability.
  • a container allocation method includes:
  • Step S101 Acquire current performance parameters of each server in the server cluster; the current performance parameter represents current CPU utilization or current memory utilization;
  • Step S102 calculating, according to the current performance parameter, an expected performance parameter after each server obtains the container;
  • Step S103 determining a performance score of each server according to the predicted performance parameter and a preset optimal performance indicator; wherein the optimal performance indicator represents an optimal utilization of the CPU or an optimal utilization of the memory;
  • Step S104 determining a target server from the server cluster according to the performance score
  • step S105 the container is allocated to the target server for operation.
  • the method may be applied to a distribution server, and the distribution server may be a computer device, a cloud, or a computer device group, which is not limited herein.
  • step S101 is executed to obtain current performance parameters of each server in the server cluster; the current performance parameter represents current CPU utilization or current memory utilization.
  • the distribution server that manages container allocation and scheduling may be configured to send a parameter acquisition instruction to the server cluster when it receives a request to allocate a container, so that each server in the server cluster uploads its own current performance parameters to the distribution server based on the parameter acquisition instruction.
  • the current performance parameter includes a current CPU utilization rate and/or a current memory utilization rate.
  • step S102 is performed to calculate an expected performance parameter after each server obtains the container based on the current performance parameter.
  • the expected performance parameter includes an estimated utilization rate of the CPU after obtaining the container and/or an estimated utilization rate of the memory after obtaining the container.
  • the expected performance parameters are calculated in order to obtain each server's performance after it receives the container, so as to guarantee that the server can still run normally once the container is allocated to it; this avoids excessive load after allocation and ensures reliability.
  • the first method is based on the size of the container and empirical data.
  • that is, the distribution server determines the resources the container will occupy when started and running, according to the size of the container to be allocated, combined with previously stored run logs or with correspondence data between container size and load consumption that technicians have stored in advance based on their own experience, and superimposes the occupied resources on the current performance parameters to obtain the expected performance parameters.
  • the second method is a trial run: the distribution server attempts to run the container to obtain its resource occupancy at run time, and superimposes the occupied resources on the current performance parameters to obtain the expected performance parameters.
  • the method for calculating the expected performance parameter after the container is obtained by the server is not limited to the above two types, and may be set as needed in the specific implementation process, and is not limited herein.
  • step S103 is performed to determine a performance score of each server according to the predicted performance parameter and a preset optimal performance indicator; wherein the optimal performance indicator represents optimal CPU utilization or memory Optimal utilization.
  • the inventors' research based on empirical data found that the optimal performance indicator is a CPU utilization of 60%-70%, or a memory utilization of 60%-70%. Specifically, when a server's CPU utilization is 60%-70%, or its memory utilization is 60%-70%, the server's frequency and throughput are optimal: a large amount of work can be done at relatively low power consumption, giving the highest relative resource utilization.
  • the first method: the score is computed from the predicted performance parameter and the preset optimal performance indicator.
  • that is, the absolute value of the difference between the predicted performance parameter and the optimal performance indicator may first be calculated, and the performance score is then determined according to the absolute difference, where the absolute difference is inversely related to the performance score.
  • the inverse correlation may be inversely proportional, or the performance score is equal to a preset constant minus the absolute value of the difference, which is not limited herein.
  • for example, taking the case where the performance score equals a preset constant minus the absolute difference and the preset constant is 100%: the predicted performance parameter of server A is a CPU utilization of 30%, the predicted performance parameter of server B is a CPU utilization of 50%, and the preset optimal performance indicator is 70%. Then the performance score of server A is 100% - |70% - 30%| = 0.6, and the performance score of server B is 100% - |70% - 50%| = 0.8.
  • the second method: look up the score in a table. That is, a preset score list is first queried based on the predicted performance parameter, the score list being associated with the optimal performance indicator; the score corresponding to the predicted performance parameter is then determined from the score list as the performance score.
  • specifically, the score list may be a list in which exact preset performance parameters correspond one-to-one with scores, or a list in which scores are set per range of predicted performance parameters, which is not limited herein.
  • for example, taking the score list as a list in which scores are set per range of predicted performance parameters, as shown in Table 1, the scores are set to:

    Resource utilization range    Score
    60%-70%                       10
    50%-60%                        9
    70%-80%                        8
    40%-50%                        7
    30%-40%                        6
    80%-90%                        5
    20%-30%                        4
    10%-20%                        3
    Other                          0

  • the predicted performance parameter of server A is a CPU utilization of 30%, and the predicted performance parameter of server B is a CPU utilization of 50%; looking up the table, the performance score of server A is 6 points and the performance score of server B is 9 points.
  • when the server runs at a CPU utilization of 60%-70%, or a memory utilization of 60%-70%, the server can not only run stably and efficiently and avoid container loss caused by excessive load, but its energy-saving ratio can also be kept between 12% and 13%, achieving a good energy-saving effect. Further, by setting the score to zero when CPU or memory utilization exceeds 90%, container loss can be avoided.
  • the current performance parameter includes the current CPU utilization rate and the current memory utilization rate
  • the estimated performance parameter includes the utilization rate of the CPU after obtaining the container and the utilization rate of the memory after obtaining the container
  • the preset optimal performance indicator includes optimal CPU utilization and optimal memory utilization; and determining the performance score of each server according to the predicted performance parameter and the preset optimal performance indicator includes: calculating, from the predicted performance parameter, the utilization difference between each server's CPU utilization after obtaining the container and its memory utilization after obtaining the container; and determining the performance score of each server according to the utilization difference, the predicted performance parameter, and the preset optimal performance indicator, the performance score being inversely related to the utilization difference.
  • research has found that servers whose CPU and memory utilization are close to each other are more energy-efficient. Therefore, when scheduling tasks, this embodiment prefers to allocate the container to the server with a small difference between its CPU utilization after obtaining the container and its memory utilization after obtaining the container; that is, when calculating the performance score, the utilization difference is taken into consideration.
  • the inverse correlation may be inversely proportional, or the performance score is equal to a preset constant minus the absolute value of the difference, which is not limited herein.
  • step S104 is performed to determine the target server from the server cluster according to the performance score.
  • determining the target server from the server cluster according to the performance score may be determining the server with the highest performance score in the server cluster as the target server, so that after obtaining the container the target server is closer to the preset optimal performance indicator. If multiple servers share the same highest score, one of them may be determined at random as the target server.
  • within a server cluster, reasonable container scheduling and management can effectively keep the servers running efficiently, reliably, and stably, and reduce the probability of server downtime, which in turn reduces the probability of container loss.
  • the goal is to improve server utilization in the container cloud platform, reduce the number of working servers, and reduce data-center energy consumption.
  • when scheduling containers, this embodiment therefore, on the one hand, tries to allocate containers to servers with high resource utilization, reducing the number of servers occupied by containers.
  • on the other hand, to avoid excessive server load or downtime, unusable resource fragments, degraded application service quality, and the extra overhead of rescheduling containers later, container scheduling must ensure that, after the container is allocated, the server stays close to the preset optimal performance indicator, so that it can run normally and is not overloaded.
  • according to research, when a server node's CPU or memory utilization is kept between 50% and 70%, the server can run stably and efficiently, and its energy-saving ratio can be kept between 12% and 13%.
  • in addition, preferring servers whose CPU and memory utilization are close to each other during container scheduling can save further energy.
  • the current performance parameters of each server in the server cluster are obtained; based on the current performance parameters, the expected performance parameters of each server after it obtains the container are calculated; the performance score of each server is then determined according to the predicted performance parameters and the preset optimal performance indicator; a target server is determined from the server cluster according to the performance scores; and the container is allocated to the target server to run. That is, when the container is allocated, the relationship between the expected performance parameters and the preset optimal performance indicator is considered.
  • on the one hand, this avoids the energy-consumption problem of distributing containers evenly across servers; on the other hand, considering the optimal performance indicator avoids container loss caused by overloaded servers, achieving energy savings and improved reliability.
  • based on the same inventive concept, the present application provides an apparatus corresponding to the first embodiment; see the second embodiment for details.
  • this embodiment provides a container allocation apparatus, as shown in FIG. 2, comprising:
  • the obtaining module 201 is configured to obtain current performance parameters of each server in the server cluster; the current performance parameter represents a current CPU utilization rate or a current memory utilization rate;
  • the calculating module 202 is configured to calculate, according to the current performance parameter, an expected performance parameter after each server obtains the container;
  • the scoring module 203 is configured to determine a performance score of each server according to the predicted performance parameter and a preset optimal performance indicator, where the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;
  • a determining module 204 configured to determine a target server from the server cluster according to the performance score
  • the distribution module 205 is configured to allocate the container to the target server for operation.
  • the method may be applied to a distribution server, and the distribution server may be a computer device, a cloud, or a computer device group, which is not limited herein.
  • the optimal performance indicator is that the CPU utilization rate is 60%-70%, or the memory utilization rate is 60%-70%.
  • since the apparatus described in this embodiment is the apparatus used to implement the container allocation method of the first embodiment of the present application, a person skilled in the art can, based on the method described in the first embodiment, understand the specific implementation of this apparatus and its various variations; therefore, how the apparatus implements the method of the embodiments of the present application is not described in detail here. Any device used by a person skilled in the art to implement the method of the embodiments of the present application falls within the scope of protection intended by the present application.
  • the present application provides a distribution server corresponding to the first embodiment. For details, see Embodiment 3.
  • the embodiment provides a distribution server, as shown in FIG. 3, including a memory 310, a processor 320, and a computer program 311 stored on the memory 310 and operable on the processor 320.
  • when the processor 320 executes the computer program 311, the following steps are implemented: acquiring current performance parameters of each server in the server cluster; calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container; determining a performance score of each server according to the predicted performance parameters and a preset optimal performance indicator; determining a target server from the server cluster according to the performance scores; and
  • the container is allocated to the target server to run.
  • in a specific implementation, when the processor 320 executes the computer program 311, any implementation of the first embodiment can be realized.
  • since the distribution server described in this embodiment is the device used to implement the container allocation method of the first embodiment of the present application, a person skilled in the art can, based on the method described in the first embodiment, understand the specific implementation of this distribution server and its various variations; therefore, how the server implements the method of the embodiments of the present application is not described in detail here. Any device used by a person skilled in the art to implement the method of the embodiments of the present application falls within the scope of protection intended by the present application.
  • the present application provides a storage medium corresponding to the first embodiment. For details, see Embodiment 4.
  • this embodiment provides a computer readable storage medium 400, as shown in FIG. 4, on which a computer program 411 is stored. When the computer program 411 is executed by a processor, the following steps are implemented: acquiring current performance parameters of each server in the server cluster; calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container; determining a performance score of each server according to the predicted performance parameters and a preset optimal performance indicator; determining a target server from the server cluster according to the performance scores; and
  • the container is allocated to the target server to run.
  • in a specific implementation, when the computer program 411 is executed by a processor, any implementation of the first embodiment can be realized.
  • embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A container allocation method, apparatus, server, and medium. The method includes: acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization (S101); calculating, based on the current performance parameters, expected performance parameters of each server after it obtains a container (S102); determining a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, the optimal performance indicator representing optimal CPU utilization or optimal memory utilization (S103); determining a target server from the server cluster according to the performance scores (S104); and allocating the container to the target server to run (S105). The method solves the technical problems of high energy consumption and poor reliability in existing container allocation methods, achieving energy savings and improved reliability.

Description

Container allocation method, apparatus, server, and medium

Technical Field

The present invention relates to the field of computer technology, and in particular to a container allocation method, apparatus, server, and medium.

Background

Containers can serve as a replacement for virtual machines and help developers build, migrate, deploy, and instantiate applications. A container is a collection of processes that share one operating system instance but are isolated from other processes in the server system. Containers do not require a full operating system, which makes them lighter than virtual machines. Because a container can start within seconds, it can be scaled to meet an application's needs with only a small amount of allocated resources.

Containers are often used for microservices, with each container representing one service, and these services are interconnected over a network. This architecture allows each module to be deployed and scaled independently. Since a container is a collection of processes, a container also has three states: waiting, running, and exited. Once the container configuration has been defined and submitted to the container management tool, the container enters the waiting state. When the container is assigned to a server by the scheduler and started, it enters the running state. Finally, when a container finishes, exits, is lost, or is terminated, it is considered ended; if scheduling fails or the container is never scheduled, its state is likewise treated as ended.

Existing container allocation methods include the random scheduling method, the average allocation method, and the one-by-one allocation method. The random scheduling method randomly selects a server from the set of schedulable servers as the start node of the container to be scheduled. The average allocation method allocates containers preferentially to the servers holding the fewest containers, based on the number of containers already allocated to each server in the set, until allocation is balanced. The one-by-one allocation method allocates containers to the server with the most resources already allocated (that is, the server with the fewest remaining allocatable resources); only after one server is full is the next server used, maximizing per-server resource utilization.

However, the random scheduling method is too arbitrary and considers neither energy saving nor reliability. The average allocation method requires every server to be running, which brings higher energy consumption. The one-by-one allocation method does not consider the servers' running state, which easily drives server load too high, causing downtime and container loss and ultimately degrading quality of service.

It can be seen that existing container allocation methods perform poorly in both energy saving and reliability, and suffer from the technical problems of high energy consumption and poor reliability.
Summary of the Invention

By providing a container allocation method, apparatus, server, and medium, the embodiments of the present application solve the technical problems of high energy consumption and poor reliability in existing container allocation methods.

In a first aspect, a container allocation method is provided, comprising:

acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;

calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container;

determining a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;

determining a target server from the server cluster according to the performance scores;

allocating the container to the target server to run.

Optionally, the optimal performance indicator is a CPU utilization of 60%-70%, or a memory utilization of 60%-70%.

Optionally, determining the performance score of each server according to the expected performance parameters and the preset optimal performance indicator comprises: calculating the absolute value of the difference between the expected performance parameter and the optimal performance indicator; and determining the performance score according to the absolute difference, wherein the absolute difference is inversely related to the performance score.

Optionally, determining the performance score of each server according to the expected performance parameters and the preset optimal performance indicator comprises: querying a preset score list based on the expected performance parameter, the score list being associated with the optimal performance indicator; and determining, from the score list, the score corresponding to the expected performance parameter as the performance score.

Optionally, determining a target server from the server cluster according to the performance scores comprises: determining the server with the highest performance score in the server cluster as the target server.

Optionally, the current performance parameters include current CPU utilization and current memory utilization; the expected performance parameters include CPU utilization after obtaining the container and memory utilization after obtaining the container; the preset optimal performance indicator includes optimal CPU utilization and optimal memory utilization; and determining the performance score of each server according to the expected performance parameters and the preset optimal performance indicator comprises: calculating, from the expected performance parameters, the utilization difference between each server's CPU utilization after obtaining the container and its memory utilization after obtaining the container; and determining the performance score of each server according to the utilization difference, the expected performance parameters, and the preset optimal performance indicator, the performance score being inversely related to the utilization difference.

In a second aspect, a container allocation apparatus is provided, comprising:

an acquisition module configured to acquire current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;

a calculation module configured to calculate, based on the current performance parameters, expected performance parameters of each server after it obtains the container;

a scoring module configured to determine a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;

a determination module configured to determine a target server from the server cluster according to the performance scores;

an allocation module configured to allocate the container to the target server to run.

Optionally, the optimal performance indicator is a CPU utilization of 60%-70%, or a memory utilization of 60%-70%.

In a third aspect, a distribution server is provided, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, the processor implementing the method of the first aspect when executing the program.

In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, the program implementing the method of the first aspect when executed by a processor.

One or more of the technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:

The method, apparatus, server, and medium provided by the embodiments of the present application acquire the current performance parameters of each server in a server cluster, calculate, based on the current performance parameters, the expected performance parameters of each server after it obtains the container, determine the performance score of each server according to the expected performance parameters and a preset optimal performance indicator, then determine a target server from the server cluster according to the performance scores, and allocate the container to the target server to run. In other words, when allocating the container, the relationship between the expected performance parameters and the preset optimal performance indicator is taken into account. On the one hand, this avoids the energy-consumption problem of distributing containers evenly across servers; on the other hand, considering the optimal performance indicator avoids container loss caused by overloaded servers, thus achieving energy savings and improved reliability.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a flowchart of the container allocation method in an embodiment of the present application;

FIG. 2 is a schematic structural diagram of the container allocation apparatus in an embodiment of the present application;

FIG. 3 is a schematic structural diagram of the distribution server in an embodiment of the present application;

FIG. 4 is a schematic structural diagram of the computer-readable storage medium 400 in an embodiment of the present application.
Detailed Description

By providing a container allocation method, apparatus, server, and medium, the embodiments of the present application solve the technical problems of high energy consumption and poor reliability in existing container allocation methods, achieving energy savings and improved reliability.

To solve the above technical problems, the general idea of the technical solutions in the embodiments of the present application is as follows:

A container allocation method, comprising:

acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;

calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container;

determining a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;

determining a target server from the server cluster according to the performance scores;

allocating the container to the target server to run.

Specifically, the current performance parameters of each server in the server cluster are acquired; based on the current performance parameters, the expected performance parameters of each server after it obtains the container are calculated; the performance score of each server is then determined according to the expected performance parameters and the preset optimal performance indicator; a target server is determined from the server cluster according to the performance scores; and the container is allocated to the target server to run. In other words, when allocating the container, the relationship between the expected performance parameters and the preset optimal performance indicator is taken into account. On the one hand, this avoids the energy-consumption problem of distributing containers evenly; on the other hand, considering the optimal performance indicator avoids container loss caused by overloaded servers, achieving energy savings and improved reliability.

To better understand the above technical solutions, they are described in detail below with reference to the accompanying drawings and specific implementations.
Embodiment 1

As shown in FIG. 1, a container allocation method comprises:

Step S101: acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;

Step S102: calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container;

Step S103: determining a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;

Step S104: determining a target server from the server cluster according to the performance scores;

Step S105: allocating the container to the target server to run.
In the embodiments of the present application, the method may be applied to a distribution server, and the distribution server may be a computer device, a cloud, or a group of computer devices, which is not limited here.

The specific implementation steps of the container allocation method provided in this embodiment are described in detail below with reference to FIG. 1.

First, step S101 is executed to acquire the current performance parameters of each server in the server cluster; the current performance parameters represent current CPU utilization or current memory utilization.

In the embodiments of the present application, the distribution server that manages container allocation and scheduling may be configured so that, upon receiving a request to allocate a container, it sends a parameter acquisition instruction to the server cluster, so that each server in the cluster uploads its own current performance parameters to the distribution server based on the parameter acquisition instruction.

The current performance parameters include current CPU utilization and/or current memory utilization.
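For illustration of this collection step only (not part of the original disclosure), here is a minimal Python sketch. The query callables stand in for whatever RPC or agent endpoint actually carries the parameter acquisition instruction, and the ServerStats type is an assumption that is reused by the later sketches.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class ServerStats:
        """Current performance parameters of one server (fractions in [0, 1])."""
        cpu_util: float
        mem_util: float

    def collect_current_params(
        servers: Dict[str, Callable[[], ServerStats]]
    ) -> Dict[str, ServerStats]:
        """Step S101: ask every server in the cluster for its current CPU and
        memory utilization in response to the parameter acquisition instruction."""
        return {name: fetch() for name, fetch in servers.items()}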
Then, step S102 is executed to calculate, based on the current performance parameters, the expected performance parameters of each server after it obtains the container.

In the embodiments of the present application, the expected performance parameters include the estimated CPU utilization after obtaining the container and/or the estimated memory utilization after obtaining the container.

Specifically, the expected performance parameters are calculated in order to obtain the server's performance after it receives the container, so as to guarantee that the server can still run normally once the container is allocated to it, to avoid excessive load after allocation, and to ensure reliability.

In a specific implementation, there are several ways to calculate, from the current performance parameters, the expected performance parameters of each server after it obtains the container; two are listed below as examples.

The first is based on the container's size and empirical data. That is, the distribution server determines the resources the container will occupy when started and running, according to the size of the container to be allocated, combined with previously stored run logs or with correspondence data between container size and load consumption that technicians have stored in advance on the distribution server based on their own experience; the occupied resources are then superimposed on the current performance parameters to obtain the expected performance parameters.

The second is a trial run. That is, the distribution server attempts to run the container to obtain the container's resource occupancy at run time, and then superimposes the occupied resources on the current performance parameters to obtain the expected performance parameters.

Of course, the methods for calculating the expected performance parameters of each server after it obtains the container are not limited to the above two; they may be set as needed in a specific implementation and are not limited here.
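As a sketch of the first estimation approach above (container size plus empirical data), the following assumes the container's CPU and memory demand have already been looked up from run logs; the demand figures are illustrative, not values from the patent, and ServerStats comes from the previous sketch.

    def expected_params(current: ServerStats,
                        container_cpu_demand: float,
                        container_mem_demand: float) -> ServerStats:
        """Step S102: superimpose the container's estimated resource usage on the
        server's current utilization to obtain the expected performance parameters."""
        return ServerStats(
            cpu_util=min(1.0, current.cpu_util + container_cpu_demand),
            mem_util=min(1.0, current.mem_util + container_mem_demand),
        )

    # Example: a server currently at 25% CPU / 40% memory, and a container
    # expected (from run logs) to add 5% CPU and 10% memory.
    print(expected_params(ServerStats(0.25, 0.40), 0.05, 0.10))
    # ServerStats(cpu_util=0.3, mem_util=0.5)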
Next, step S103 is executed to determine the performance score of each server according to the expected performance parameters and the preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization.

In the embodiments of the present application, the inventors' research based on empirical data found that the optimal performance indicator is a CPU utilization of 60%-70%, or a memory utilization of 60%-70%. Specifically, when a server's CPU utilization is 60%-70%, or its memory utilization is 60%-70%, the server's frequency and throughput are optimal: a large amount of work can be done at relatively low power consumption, giving the highest relative resource utilization.

In the embodiments of the present application, there are several ways to determine the performance score of each server according to the expected performance parameters and the preset optimal performance indicator; three are listed below as examples.

The first: compute the score from the expected performance parameters and the preset optimal performance indicator.

In the embodiments of the present application, the absolute value of the difference between the expected performance parameter and the optimal performance indicator may first be calculated, and the performance score is then determined according to the absolute difference, where the absolute difference is inversely related to the performance score.

In the embodiments of the present application, the inverse relation may be an inverse proportion, or the performance score may equal a preset constant minus the absolute difference, which is not limited here.

For example, taking the case where the performance score equals a preset constant minus the absolute difference and the preset constant is 100%: the expected performance parameter of server A is a CPU utilization of 30%, the expected performance parameter of server B is a CPU utilization of 50%, and the preset optimal performance indicator is 70%. Then the performance score of server A is 100% - |70% - 30%| = 0.6, and the performance score of server B is 100% - |70% - 50%| = 0.8.
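A minimal sketch of this first scoring rule, reproducing the server A / server B example above; the 100% constant and the 70% target come from the text, while the function name is ours.

    def score_by_difference(predicted_util: float,
                            optimal_util: float = 0.70,
                            constant: float = 1.00) -> float:
        """Performance score = preset constant minus |predicted - optimal|.
        The closer the predicted utilization is to the optimal indicator, the
        higher the score (the absolute difference is inversely related to it)."""
        return constant - abs(predicted_util - optimal_util)

    print(round(score_by_difference(0.30), 2))  # server A -> 0.6
    print(round(score_by_difference(0.50), 2))  # server B -> 0.8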
The second: look up the score in a table.

That is, a preset score list is first queried based on the expected performance parameter, the score list being associated with the optimal performance indicator; the score corresponding to the expected performance parameter is then determined from the score list as the performance score.

Specifically, the score list may be a list in which exact preset performance parameters correspond one-to-one with scores, or a list in which scores are set per range of expected performance parameters, which is not limited here.

For example, taking the score list as a list in which scores are set per range of expected performance parameters, as shown in Table 1, the scores are set as:

Resource utilization range    Score
60%-70%                       10
50%-60%                        9
70%-80%                        8
40%-50%                        7
30%-40%                        6
80%-90%                        5
20%-30%                        4
10%-20%                        3
Other                          0

Table 1  Scoring rules for expected performance parameters

For example, the expected performance parameter of server A is a CPU utilization of 30%, and the expected performance parameter of server B is a CPU utilization of 50%; looking up the table shows that the performance score of server A is 6 points and the performance score of server B is 9 points.
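A sketch of the table-lookup variant, implementing Table 1 with lower-bound-inclusive buckets so that the example above (30% -> 6 points, 50% -> 9 points) is reproduced; the bucket boundaries come from the table, while the inclusiveness convention is our assumption.

    # (lower bound inclusive, upper bound exclusive, score), per Table 1;
    # anything not covered scores 0 ("Other").
    SCORE_TABLE = [
        (0.60, 0.70, 10),
        (0.50, 0.60, 9),
        (0.70, 0.80, 8),
        (0.40, 0.50, 7),
        (0.30, 0.40, 6),
        (0.80, 0.90, 5),
        (0.20, 0.30, 4),
        (0.10, 0.20, 3),
    ]

    def score_by_table(predicted_util: float) -> int:
        """Look up the score for a predicted utilization in the preset score list."""
        for low, high, score in SCORE_TABLE:
            if low <= predicted_util < high:
                return score
        return 0  # "Other", e.g. utilization above 90%, which helps avoid container loss

    print(score_by_table(0.30))  # server A -> 6
    print(score_by_table(0.50))  # server B -> 9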
Specifically, when a server runs at a CPU utilization of 60%-70%, or a memory utilization of 60%-70%, the server can not only run stably and efficiently and avoid the problem of container loss caused by excessive load, but its energy-saving ratio can also be kept between 12% and 13%, achieving a good energy-saving effect. Further, by setting the score to zero when CPU or memory utilization exceeds 90%, container loss can be avoided.

The third: consider CPU utilization and memory utilization together.

That is, the current performance parameters include current CPU utilization and current memory utilization; the expected performance parameters include CPU utilization after obtaining the container and memory utilization after obtaining the container; the preset optimal performance indicator includes optimal CPU utilization and optimal memory utilization; and determining the performance score of each server according to the expected performance parameters and the preset optimal performance indicator comprises:

calculating, from the expected performance parameters, the utilization difference between each server's CPU utilization after obtaining the container and its memory utilization after obtaining the container;

determining the performance score of each server according to the utilization difference, the expected performance parameters, and the preset optimal performance indicator, the performance score being inversely related to the utilization difference.

Specifically, research has found that servers whose CPU and memory utilization are close to each other are more energy-efficient. Therefore, when scheduling tasks, this embodiment prefers to allocate the container to the server with a small difference between its CPU utilization after obtaining the container and its memory utilization after obtaining the container; that is, when calculating the performance score, the utilization difference is taken into consideration.

In the embodiments of the present application, the inverse relation may be an inverse proportion, or the performance score may equal a preset constant minus the absolute difference, which is not limited here.
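The patent does not fix exactly how the utilization difference is combined with the per-resource scores, so the following is only one possible sketch, under the assumption that the CPU/memory gap is subtracted as a penalty from the average of the two scores; it reuses ServerStats and score_by_difference from the earlier sketches.

    def score_cpu_and_memory(predicted: ServerStats,
                             optimal_util: float = 0.70) -> float:
        """Third scoring variant: score both resources against the optimal
        indicator and penalize a large gap between CPU and memory utilization,
        so the score is inversely related to the utilization difference."""
        cpu_score = score_by_difference(predicted.cpu_util, optimal_util)
        mem_score = score_by_difference(predicted.mem_util, optimal_util)
        utilization_gap = abs(predicted.cpu_util - predicted.mem_util)
        return (cpu_score + mem_score) / 2 - utilization_gap

    # A server expected to sit at 65% CPU / 60% memory beats one at 65% / 30%,
    # even though both have the same CPU score.
    print(round(score_cpu_and_memory(ServerStats(0.65, 0.60)), 3))  # 0.875
    print(round(score_cpu_and_memory(ServerStats(0.65, 0.30)), 3))  # 0.425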
Next, step S104 is executed to determine a target server from the server cluster according to the performance scores.

In the embodiments of the present application, determining the target server from the server cluster according to the performance scores may be determining the server with the highest performance score in the server cluster as the target server, so that after obtaining the container the target server is closer to the preset optimal performance indicator. If several servers share the same highest score, one of them may be chosen at random as the target server.
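Putting the pieces together, here is a small end-to-end sketch of steps S101-S105 under the same assumptions as above: scores come from the table lookup, ties among equally top-scored servers are broken at random as described, and deploy() is a hypothetical callback standing in for the actual allocation.

    import random

    def choose_target_server(scores: Dict[str, float]) -> str:
        """Step S104: pick the server with the highest performance score,
        breaking ties at random."""
        best = max(scores.values())
        candidates = [name for name, s in scores.items() if s == best]
        return random.choice(candidates)

    def allocate_container(servers, cpu_demand, mem_demand, deploy) -> str:
        """Steps S101-S105 chained together (helpers from the sketches above)."""
        current = collect_current_params(servers)                       # S101
        predicted = {name: expected_params(stats, cpu_demand, mem_demand)
                     for name, stats in current.items()}                # S102
        scores = {name: float(score_by_table(p.cpu_util))
                  for name, p in predicted.items()}                     # S103
        target = choose_target_server(scores)                           # S104
        deploy(target)                                                  # S105
        return target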
Specifically, within a server cluster, reasonable container scheduling and management can effectively keep the servers running efficiently, reliably, and stably, and reduce the probability of server downtime, which in turn reduces the probability of container loss. The goal is to improve server utilization in the container cloud platform, reduce the number of working servers, and reduce data-center energy consumption. When scheduling containers, this embodiment therefore, on the one hand, tries to allocate containers to servers with high resource utilization so as to reduce the number of servers occupied by containers. On the other hand, to avoid excessive server load or downtime, unusable resource fragments, degraded application service quality, and the extra resource overhead of rescheduling and rerunning containers later, container scheduling must ensure that, after the container is allocated, the server stays close to the preset optimal performance indicator, so that it can run normally and is not overloaded. Research shows that when a server node's CPU or memory utilization is kept between 50% and 70%, the server can run stably and efficiently, and its energy-saving ratio can be kept between 12% and 13%. In addition, this embodiment's preference, during container scheduling, for servers whose CPU and memory utilization are close to each other can save further energy.

That is, during container scheduling, when there are relatively few tasks in the server cluster, containers are allocated as much as possible to nodes with high CPU and memory utilization, reducing the number of servers in use in the cluster. When there are many tasks in the cluster, the resource utilization of each server is kept around the preset optimal performance indicator but preferably not above it, which helps guarantee the running quality of the servers in the cluster.

Specifically, the current performance parameters of each server in the server cluster are acquired; based on the current performance parameters, the expected performance parameters of each server after it obtains the container are calculated; the performance score of each server is then determined according to the expected performance parameters and the preset optimal performance indicator; a target server is determined from the server cluster according to the performance scores; and the container is allocated to the target server to run. In other words, when allocating the container, the relationship between the expected performance parameters and the preset optimal performance indicator is taken into account. On the one hand, this avoids the energy-consumption problem of distributing containers evenly; on the other hand, considering the optimal performance indicator avoids container loss caused by overloaded servers, achieving energy savings and improved reliability.
Based on the same inventive concept, the present application provides an apparatus corresponding to Embodiment 1; see Embodiment 2 for details.

Embodiment 2

This embodiment provides a container allocation apparatus, as shown in FIG. 2, comprising:

an acquisition module 201 configured to acquire current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;

a calculation module 202 configured to calculate, based on the current performance parameters, expected performance parameters of each server after it obtains the container;

a scoring module 203 configured to determine a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;

a determination module 204 configured to determine a target server from the server cluster according to the performance scores;

an allocation module 205 configured to allocate the container to the target server to run.

In the embodiments of the present application, the method may be applied to a distribution server, and the distribution server may be a computer device, a cloud, or a group of computer devices, which is not limited here.

In the embodiments of the present application, the optimal performance indicator is a CPU utilization of 60%-70%, or a memory utilization of 60%-70%.

Since the apparatus described in this embodiment is the apparatus used to implement the container allocation method of Embodiment 1 of the present application, a person skilled in the art can, based on the method described in Embodiment 1, understand the specific implementation of this apparatus and its various variations; therefore, how this apparatus implements the method of the embodiments of the present application is not described in detail here. Any device used by a person skilled in the art to implement the method of the embodiments of the present application falls within the scope of protection intended by the present application.
Based on the same inventive concept, the present application provides a distribution server corresponding to Embodiment 1; see Embodiment 3 for details.

Embodiment 3

This embodiment provides a distribution server, as shown in FIG. 3, comprising a memory 310, a processor 320, and a computer program 311 stored on the memory 310 and runnable on the processor 320. When the processor 320 executes the computer program 311, the following steps are implemented:

acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;

calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container;

determining a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;

determining a target server from the server cluster according to the performance scores;

allocating the container to the target server to run.

In a specific implementation, when the processor 320 executes the computer program 311, any implementation of Embodiment 1 can be realized.

Since the distribution server described in this embodiment is the device used to implement the container allocation method of Embodiment 1 of the present application, a person skilled in the art can, based on the method described in Embodiment 1, understand the specific implementation of this distribution server and its various variations; therefore, how this server implements the method of the embodiments of the present application is not described in detail here. Any device used by a person skilled in the art to implement the method of the embodiments of the present application falls within the scope of protection intended by the present application.
Based on the same inventive concept, the present application provides a storage medium corresponding to Embodiment 1; see Embodiment 4 for details.

Embodiment 4

This embodiment provides a computer-readable storage medium 400, as shown in FIG. 4, on which a computer program 411 is stored. When the computer program 411 is executed by a processor, the following steps are implemented:

acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;

calculating, based on the current performance parameters, expected performance parameters of each server after it obtains the container;

determining a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;

determining a target server from the server cluster according to the performance scores;

allocating the container to the target server to run.

In a specific implementation, when the computer program 411 is executed by a processor, any implementation of Embodiment 1 can be realized.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Although preferred embodiments of the present invention have been described, those skilled in the art may make additional changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.

Obviously, those skilled in the art may make various changes and variations to the present invention without departing from the spirit and scope of the present invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include such changes and variations.

Claims (10)

  1. A container allocation method, comprising:
    acquiring current performance parameters of each server in a server cluster, the current performance parameters representing current central processing unit (CPU) utilization or current memory utilization;
    calculating, based on the current performance parameters, expected performance parameters of each server after it obtains a container;
    determining a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;
    determining a target server from the server cluster according to the performance scores;
    allocating the container to the target server to run.
  2. The method according to claim 1, wherein the optimal performance indicator is a CPU utilization of 60%-70%, or a memory utilization of 60%-70%.
  3. The method according to claim 1 or 2, wherein determining the performance score of each server according to the expected performance parameters and the preset optimal performance indicator comprises:
    calculating the absolute value of the difference between the expected performance parameter and the optimal performance indicator;
    determining the performance score according to the absolute difference, wherein the absolute difference is inversely related to the performance score.
  4. The method according to claim 1 or 2, wherein determining the performance score of each server according to the expected performance parameters and the preset optimal performance indicator comprises:
    querying a preset score list based on the expected performance parameter, the score list being associated with the optimal performance indicator;
    determining, from the score list, the score corresponding to the expected performance parameter as the performance score.
  5. The method according to claim 1 or 2, wherein determining a target server from the server cluster according to the performance scores comprises:
    determining the server with the highest performance score in the server cluster as the target server.
  6. The method according to claim 1 or 2, wherein:
    the current performance parameters include current CPU utilization and current memory utilization;
    the expected performance parameters include CPU utilization after obtaining the container and memory utilization after obtaining the container;
    the preset optimal performance indicator includes optimal CPU utilization and optimal memory utilization;
    determining the performance score of each server according to the expected performance parameters and the preset optimal performance indicator comprises:
    calculating, from the expected performance parameters, the utilization difference between each server's CPU utilization after obtaining the container and its memory utilization after obtaining the container;
    determining the performance score of each server according to the utilization difference, the expected performance parameters, and the preset optimal performance indicator, the performance score being inversely related to the utilization difference.
  7. A container allocation apparatus, comprising:
    an acquisition module configured to acquire current performance parameters of each server in a server cluster, the current performance parameters representing current CPU utilization or current memory utilization;
    a calculation module configured to calculate, based on the current performance parameters, expected performance parameters of each server after it obtains a container;
    a scoring module configured to determine a performance score of each server according to the expected performance parameters and a preset optimal performance indicator, wherein the optimal performance indicator represents optimal CPU utilization or optimal memory utilization;
    a determination module configured to determine a target server from the server cluster according to the performance scores;
    an allocation module configured to allocate the container to the target server to run.
  8. The apparatus according to claim 7, wherein the optimal performance indicator is a CPU utilization of 60%-70%, or a memory utilization of 60%-70%.
  9. A distribution server, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the method of any one of claims 1-6 when executing the program.
  10. A computer-readable storage medium on which a computer program is stored, wherein the program implements the method of any one of claims 1-6 when executed by a processor.
PCT/CN2018/082356 2018-01-08 2018-04-09 一种容器分配方法、装置、服务器及介质 WO2019134292A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810015248.3A CN108170517A (zh) 2018-01-08 2018-01-08 一种容器分配方法、装置、服务器及介质
CN201810015248.3 2018-01-08

Publications (1)

Publication Number Publication Date
WO2019134292A1 true WO2019134292A1 (zh) 2019-07-11

Family

ID=62517604

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082356 WO2019134292A1 (zh) 2018-01-08 2018-04-09 一种容器分配方法、装置、服务器及介质

Country Status (2)

Country Link
CN (1) CN108170517A (zh)
WO (1) WO2019134292A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408230B (zh) * 2018-10-10 2021-07-20 中国科学院计算技术研究所 基于能耗优化的Docker容器部署方法及系统
TWI695329B (zh) * 2019-04-01 2020-06-01 中華電信股份有限公司 一種建置於容器平台的資料碎片管理系統及方法
CN110035079B (zh) * 2019-04-10 2021-10-29 创新先进技术有限公司 一种蜜罐生成方法、装置及设备
CN110474966B (zh) 2019-07-22 2022-04-19 腾讯科技(深圳)有限公司 处理云平台资源碎片的方法及相关设备
CN110730135B (zh) * 2019-09-06 2022-12-09 平安普惠企业管理有限公司 一种提升服务器性能的方法、装置、存储介质和服务器
CN111666130A (zh) * 2020-06-03 2020-09-15 百度在线网络技术(北京)有限公司 一种容器均衡部署的方法、装置、电子设备及存储介质
CN112073532B (zh) * 2020-09-15 2022-09-09 北京火山引擎科技有限公司 一种资源分配的方法及装置
CN114745392A (zh) * 2022-04-29 2022-07-12 阿里云计算有限公司 流量调度方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899126A (zh) * 2015-06-12 2015-09-09 北京奇虎科技有限公司 对宿主机中容器进行本地实时监控的方法、装置及系统
CN106557353A (zh) * 2016-11-04 2017-04-05 天津轻工职业技术学院 一种容器承载业务应用的服务器性能指标评价方法
CN107071002A (zh) * 2017-03-22 2017-08-18 山东中创软件商用中间件股份有限公司 一种应用服务器集群请求调度方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102185759A (zh) * 2011-04-12 2011-09-14 田文洪 一种满足需求特性的多物理服务器负载均衡的方法及装置
CN103677957B (zh) * 2013-12-13 2016-10-19 重庆邮电大学 云数据中心基于多资源的高能效虚拟机放置方法
CN105488134A (zh) * 2015-11-25 2016-04-13 用友网络科技股份有限公司 大数据处理方法及大数据处理装置
CN106445629B (zh) * 2016-07-22 2019-05-21 平安科技(深圳)有限公司 一种负载均衡的方法及其装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899126A (zh) * 2015-06-12 2015-09-09 北京奇虎科技有限公司 对宿主机中容器进行本地实时监控的方法、装置及系统
CN106557353A (zh) * 2016-11-04 2017-04-05 天津轻工职业技术学院 一种容器承载业务应用的服务器性能指标评价方法
CN107071002A (zh) * 2017-03-22 2017-08-18 山东中创软件商用中间件股份有限公司 一种应用服务器集群请求调度方法及装置

Also Published As

Publication number Publication date
CN108170517A (zh) 2018-06-15

Similar Documents

Publication Publication Date Title
WO2019134292A1 (zh) 一种容器分配方法、装置、服务器及介质
CN107580023B (zh) 一种动态调整任务分配的流处理作业调度方法及系统
CN105159769B (zh) 一种适用于计算能力异构集群的分布式作业调度方法
CN102902587B (zh) 分布式任务调度方法、系统和装置
CN109564528B (zh) 分布式计算中计算资源分配的系统和方法
US20150312167A1 (en) Maximizing server utilization within a datacenter
KR20170021864A (ko) 분산 컴퓨터 시스템에 전력 할당의 변화가 있을 때 중단될 수 있고 중단될 수 없는 작업들을 관리하는 방법들 및 장치
CN107066332A (zh) 分布式系统及其调度方法和调度装置
KR20130136449A (ko) 데이터-센터 서비스의 제어된 자동 힐링
CN103067524A (zh) 一种基于云计算环境的蚁群优化计算资源分配方法
CN111258746A (zh) 资源分配方法和服务设备
US10606650B2 (en) Methods and nodes for scheduling data processing
CN108121599A (zh) 一种资源管理方法、装置及系统
CN109257399A (zh) 云平台应用程序管理方法及管理平台、存储介质
US20140019624A1 (en) Resource management method and management server
CN115658311A (zh) 一种资源的调度方法、装置、设备和介质
CN111240824B (zh) 一种cpu资源调度方法及电子设备
CN104320433A (zh) 数据处理方法和分布式数据处理系统
CN108153583A (zh) 任务分配方法及装置、实时计算框架系统
CN116028193B (zh) 一种混部集群的大数据任务动态高能效调度方法和系统
CN110275777B (zh) 一种资源调度系统
JP2007052542A (ja) 負荷分散処理システム及び装置
US20140047454A1 (en) Load balancing in an sap system
CN109144709A (zh) 一种处理大数据平台yarn数据分配不均衡的方法
Thai et al. Algorithms for optimising heterogeneous Cloud virtual machine clusters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18897949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18897949

Country of ref document: EP

Kind code of ref document: A1