CN116360994A - Scheduling method, device, server and storage medium of distributed heterogeneous resource pool


Info

Publication number
CN116360994A
Authority
CN
China
Prior art keywords
resource
resource pool
score
pools
instance
Prior art date
Legal status
Pending
Application number
CN202310333537.9A
Other languages
Chinese (zh)
Inventor
岳海涛
叶奕洲
Current Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Unicom Cloud Data Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Unicom Cloud Data Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd, Unicom Digital Technology Co Ltd and Unicom Cloud Data Co Ltd
Priority to CN202310333537.9A
Publication of CN116360994A

Classifications

    • G06F 9/5072: Grid computing
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5077: Logical partitioning of resources; management or configuration of virtualized resources
    • G06F 2009/45562: Creating, deleting, cloning virtual machine instances
    • G06F 2009/4557: Distribution of virtual machine instances; migration and load balancing
    • G06F 2009/45583: Memory management, e.g. access or allocation
    • G06F 2009/45595: Network integration; enabling network access in virtual machine instances
    • G06F 2209/5011: Pool
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides a scheduling method, device, server and storage medium for a distributed heterogeneous resource pool. The method includes: receiving a resource creation request sent by a user terminal, the resource creation request including an instance specification; acquiring the instance specification family corresponding to the instance specification; screening resource pools according to the resource attributes of the instance specification family and generating a first resource pool list from the screened resource pools; screening the resource pools in the first resource pool list according to their resource inventory and generating a second resource pool list from the screened resource pools; scoring the resource pools in the second resource pool list and selecting a target resource pool from the second resource pool list according to the scoring result; and creating a resource instance on the target resource pool according to the resource creation request. The method selects a suitable resource pool from multiple resource pools in a distributed cloud scenario and enables scheduling across different resource pools within the same cloud pool.

Description

Scheduling method, device, server and storage medium of distributed heterogeneous resource pool
Technical Field
The present invention relates to the field of cloud computing technologies, and in particular, to a method, an apparatus, a server, and a storage medium for scheduling a distributed heterogeneous resource pool.
Background
In cloud computing, virtualizing computing, storage and network resources into resource pools improves resource utilization and reduces operation and maintenance costs. With the development of cloud computing, the CPU (central processing unit) architectures supported by servers have also diversified.
In the prior art, when a user creates a virtual machine, resources are scheduled based on a resource allocation policy and a resource scheduling policy as follows: information on available hosts in a resource pool is determined according to the resource allocation policy selected by the user, and a host list is generated; hosts that meet the requirements for creating the virtual machine resources are screened from the host list according to the resource scheduling policy selected by the user, yielding a remaining host list; an optimal host and optimal storage are calculated from the remaining host list and allocated to the virtual machine; when the cluster starts the distributed resource scheduling program, the cluster list and the remaining host list are combined into a comprehensive host list for selection; and an optimal host and optimal storage are calculated from the comprehensive host list and allocated to the virtual machine.
However, the inventors have found that the prior art suffers from at least the following drawbacks: the resource scheduling method cannot adapt to distributed cloud scenarios, because it cannot screen a suitable resource pool from multiple resource pools through a scheduling algorithm; in addition, it cannot solve the resource scheduling problem of heterogeneous clouds whose servers use different central processor architectures. As a result, resource creation cannot be reasonably scheduled to a suitable resource pool, and the problem of reasonably allocating resources across resource pools remains unsolved.
Disclosure of Invention
The invention provides a scheduling method, device, server and storage medium for distributed heterogeneous resource pools, which solve the prior-art problems that resource creation cannot be reasonably scheduled to a suitable resource pool and that resources cannot be reasonably allocated across resource pools.
In a first aspect, the present invention provides a method for scheduling a distributed heterogeneous resource pool, including:
receiving a resource creation request sent by a user side, wherein the resource creation request comprises an instance specification;
acquiring an instance specification family corresponding to the instance specification;
screening the resource pools according to the resource attributes of the instance specification family, and generating a first resource pool list from the screened resource pools;
screening the resource pools in the first resource pool list according to the resource inventory of the resource pools, and generating a second resource pool list from the screened resource pools;
scoring the resource pools in the second resource pool list and obtaining a scoring result, and obtaining a target resource pool from the resource pools in the second resource pool list according to the scoring result;
and creating a resource instance on the target resource pool according to the resource creation request.
In one possible design, screening the resource pools according to the resource attributes of the instance specification family and generating a first resource pool list from the screened resource pools includes: acquiring the type of the instance specification family, the central processor architecture attribute of the instance specification family and the disk type as the resource attributes of the instance specification family; screening, from the resource pools, the resource pools that satisfy the resource attributes of the instance specification family; and generating the first resource pool list from the screened resource pools.
In one possible design, screening the resource pools in the first resource pool list according to the resource inventory of the resource pools and generating a second resource pool list from the screened resource pools includes: obtaining the resource inventory amount of each resource pool from its inventory; screening resource pools from the first resource pool list according to the resource inventory amount and a preset screening criterion, the preset screening criterion being that the resource inventory of a resource pool is sufficient to create at least one resource instance; and generating the second resource pool list from the screened resource pools.
In one possible design, there are multiple resource pools, and each resource pool contains multiple special areas. Correspondingly, scoring the resource pools in the second resource pool list and obtaining a target resource pool from the second resource pool list according to the scoring result includes the following steps: filtering the special areas contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification, and scoring the weights of the filtered special areas to obtain a first score of each resource pool; statistically scoring the remaining central processor cores and memory of each resource pool to obtain a second score of each resource pool; statistically scoring the network delay time of each resource pool to obtain a third score of each resource pool; and obtaining a final score of each resource pool from the first score, the second score and the third score, and determining the target resource pool according to the final scores.
In one possible design, the resource creation request further includes a weight value for each special area. Correspondingly, filtering the special areas contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification, and scoring the weights of the filtered special areas to obtain a first score of each resource pool, includes: filtering the special areas contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification to obtain the target special areas that satisfy the instance specification; and acquiring the weight values of the target special areas, and taking the maximum of these weight values as the first score.
In one possible design, statistically scoring the remaining central processor cores and memory of each resource pool to obtain a second score of each resource pool includes: obtaining the number of virtual central processors and the amount of memory required by the instance specification, and the virtual central processor inventory and memory inventory of the resource pool; calculating a first ratio of the resource pool's virtual central processor inventory to the number of virtual central processors required by the instance specification; calculating a second ratio of the memory inventory to the amount of memory required by the instance specification; and determining the minimum of the first ratio and the second ratio as the second score.
In one possible design, statistically scoring the network delay of each resource pool to obtain a third score of each resource pool includes: obtaining the network delay time of the resource pool, and subtracting the ratio of the network delay time to a second time value from a first time value to obtain a second network delay time; if the second network delay time is greater than 0, taking it as the third score of the resource pool; and if the second network delay time is less than or equal to 0, setting the third score of the resource pool to 0.
In one possible design, the resource creation request further includes a weight ratio for the first score, a weight ratio for the second score and a weight ratio for the third score. Correspondingly, obtaining the final score of each resource pool from the first score, the second score and the third score, and determining the target resource pool according to the final scores, includes: adding the first score, the second score and the third score to obtain the final score; or acquiring the weight ratios of the first, second and third scores from the resource creation request and summing the products of each score and its weight ratio to obtain the final score; and determining the resource pool with the highest final score as the target resource pool.
In a second aspect, the present invention provides a scheduling apparatus for a distributed heterogeneous resource pool, including:
the receiving module is used for receiving a resource creation request sent by a user side, wherein the resource creation request comprises an instance specification;
the acquisition module is used for acquiring an instance specification family corresponding to the instance specification;
the first screening module is used for screening the resource pools according to the resource attributes of the instance specification family, and generating a first resource pool list from the resource pools obtained by screening;
the second screening module is used for screening the resource pools in the first resource pool list according to the resource inventory quantity of the resource pools, and generating a second resource pool list from the screened resource pools;
the scoring module is used for scoring the resource pools in the second resource pool list to obtain a scoring result, and for obtaining a target resource pool from the resource pools in the second resource pool list according to the scoring result;
and the creation module is used for creating a resource instance on the target resource pool according to the resource creation request.
In a third aspect, the present invention provides a server comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored by the memory, such that the at least one processor performs the scheduling method of the distributed heterogeneous resource pool as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer storage medium, where computer executable instructions are stored, when executed by a processor, to implement the scheduling method of the distributed heterogeneous resource pool according to the first aspect and the various possible designs of the first aspect.
According to the scheduling method, device, server and storage medium for distributed heterogeneous resource pools provided by the invention, a resource creation request sent by the user terminal is received; the instance specification family corresponding to the instance specification in the request is obtained; the resource pools are screened a first time according to the resource attributes of the instance specification family and a second time according to their resource inventory; the target resource pool is then obtained by scoring the screened resource pools; and a resource instance is created on the target resource pool according to the resource creation request. A suitable resource pool is thereby selected from multiple resource pools in a distributed cloud scenario, the problem of reasonably allocating resources across resource pools is solved, scheduling across different resource pools within the same cloud pool is achieved, and the resource scheduling problem of heterogeneous clouds is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is an application scenario schematic diagram of a scheduling method of a distributed heterogeneous resource pool provided by an embodiment of the present invention;
fig. 2 is a schematic system architecture diagram of a scheduling method of a distributed heterogeneous resource pool according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a scheduling method of a distributed heterogeneous resource pool according to an embodiment of the present invention;
fig. 4 is a second schematic flow chart of a scheduling method of a distributed heterogeneous resource pool according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a scheduling apparatus for a distributed heterogeneous resource pool according to an embodiment of the present invention;
fig. 6 is a schematic hardware structure of a server according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In cloud computing, virtualizing computing, storage and network resources into resource pools improves resource utilization and reduces operation and maintenance costs. With the development of cloud computing, the CPU architectures supported by servers have diversified: in addition to x86_64 (Intel, AMD64, Hygon, Zhaoxin), architectures such as ARM64 (Kunpeng, Phytium), LoongArch/MIPS (Loongson) and SW64 (Sunway) are also supported. In the prior art, when a user creates a virtual machine, resources are scheduled based on a resource allocation policy and a resource scheduling policy as follows: information on available hosts in a resource pool is determined according to the resource allocation policy selected by the user, and a host list is generated; hosts that meet the requirements for creating the virtual machine resources are screened from the host list according to the resource scheduling policy selected by the user, yielding a remaining host list; an optimal host and optimal storage are calculated from the remaining host list and allocated to the virtual machine; when the cluster starts the distributed resource scheduling program, the cluster list and the remaining host list are combined into a comprehensive host list for selection; and an optimal host and optimal storage are calculated from the comprehensive host list and allocated to the virtual machine. However, the inventors have found that the prior art has at least the following drawbacks. It cannot adapt to distributed cloud scenarios: in a distributed cloud there is not just one resource pool but potentially many, and a suitable resource pool must be screened from all resource pools through a scheduling algorithm rather than by screening a host list. It cannot solve the resource scheduling problem of heterogeneous clouds: servers with different CPU architectures may exist within one resource pool, and the CPU architectures of servers in different resource pools may differ. It also does not take resource inventory into account, even though resource scheduling in cloud computing typically involves inventory constraints, which is a further limitation of the algorithm.
To solve the above technical problems, embodiments of the invention provide the following technical solution: a resource creation request sent by a user terminal is received; the instance specification family corresponding to the instance specification in the request is obtained; the resource pools are screened a first time according to the resource attributes of the instance specification family and a second time according to their resource inventory; the screened resource pools are scored and screened again according to the scoring result to obtain a target resource pool; and a resource instance is created on the target resource pool according to the resource creation request.
Fig. 1 is an application scenario schematic diagram of a scheduling method of a distributed heterogeneous resource pool provided by an embodiment of the present invention. As shown in fig. 1, a client 101 sends a resource creation request to a server 102, the server 102 receives the resource creation request and screens a target resource pool from a resource pool 103 according to the resource creation request, and then the server 102 creates a resource instance on the target resource pool according to the resource creation request.
Fig. 2 is a schematic system architecture diagram of a scheduling method of a distributed heterogeneous resource pool according to an embodiment of the present invention. As shown in fig. 2, in cloud computing, computing resources are classified by the specification type of their instance specifications to obtain instance specification families, which may include a Kunpeng-type instance specification family, a computing-type instance specification family, a bare-metal-type instance specification family, and the like; other family types are not listed one by one in fig. 2, and only a few specification types are described. Each instance specification family has several instance specifications: the Kunpeng-type family has, for example, a Kunpeng general-purpose (KS) instance specification and a Kunpeng memory-optimized (KM) instance specification; the computing-type family has a general-purpose (S) instance specification and a memory-optimized (M) instance specification; and the bare-metal-type family has a GPU passthrough (G) instance specification and a GPU virtualized rendering (VG) instance specification. Further instance specification families include network-enhanced (EN), local-storage (L) and bare metal (B) instance specifications, among others. Each instance specification carries information about the computing resources it requires; s1.large4, for example, represents 2 virtual CPUs and 4 GB of memory. The specification type of an instance specification family may further carry CPU architecture attributes; the Kunpeng-type instance specification family, for example, can only create instances on servers with the ARM architecture, and creating such instances on servers with other, non-ARM architectures would cause the creation to fail. The resource pool and the cloud region in fig. 2 are the same concept: one cloud region is one resource pool, and one resource pool comprises multiple servers. Because the CPU architectures of different servers differ and need to be distinguished, the CPU architectures of the servers under each resource pool are recorded in the resource pool to facilitate this distinction during resource scheduling. Each resource pool comprises multiple special areas, and the resources of each resource pool are managed through the resource inventory, which tracks the used and remaining amounts.
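For illustration only, the following Python sketch (all class and field names are assumptions introduced here, not terms defined in the patent) models the entities described above: instance specifications, instance specification families carrying CPU-architecture and disk-type attributes, and resource pools containing special areas and an inventory.

    from dataclasses import dataclass, field

    @dataclass
    class InstanceSpec:
        name: str          # e.g. "s1.large4"
        family: str        # family name, e.g. "computing" or "kunpeng"
        vcpus: int         # virtual CPUs required by the specification
        memory_gb: int     # memory required by the specification, in GB

    @dataclass
    class SpecFamily:
        name: str          # e.g. "kunpeng"
        cpu_arch: str      # e.g. "arm64" or "x86_64"
        disk_type: str     # e.g. "ssd"

    @dataclass
    class SpecialArea:
        name: str
        families: set      # spec families this special area can host (assumed representation)

    @dataclass
    class ResourcePool:
        name: str
        cpu_archs: set               # CPU architectures of the servers in this pool
        disk_types: set
        vcpu_inventory: int          # remaining virtual CPUs
        memory_inventory_gb: int     # remaining memory, in GB
        network_delay_ms: float
        areas: list = field(default_factory=list)   # special areas in this pool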
Fig. 3 is a schematic flow chart of a scheduling method of a distributed heterogeneous resource pool according to an embodiment of the present invention, where an execution body of the embodiment may be a server or other computer devices, and the embodiment is not limited herein. As shown in fig. 3, the method includes:
s301: and receiving a resource creation request sent by the user terminal, wherein the resource creation request comprises an instance specification.
In this embodiment, the resource creation request sent by the user side includes an instance specification that meets the user's requirements. Each instance specification contains information about the computing resources required; for example, s1.large4 represents 2 virtual CPUs and 4 GB of memory. As shown in fig. 2, instance specifications come in various types, such as s1.large4, ks1.large4, km1.2xlarge8 and m1.2xlarge8. Classifying instance specifications by specification type yields the instance specification families of fig. 2.
S302: an instance specification family corresponding to the instance specification is obtained.
In this embodiment, as shown in fig. 2, each instance specification corresponds to an instance specification family. For example, if the instance specification in the resource creation request sent by the user side is s1.large4 or m1.2xlarge8, the corresponding family is the computing-type instance specification family; if it is ks1.large4 or km1.2xlarge8, the corresponding family is the Kunpeng-type instance specification family; and if it is g1.large4 or vg1.2xlarge4, the corresponding family is the bare-metal-type instance specification family.
When a user creates a virtual machine resource, the relationship between an instance specification family and its instance specifications is one-to-many, so the instance specification family of the resource to be created can be obtained from the specific instance specification selected by the user.
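As a hedged illustration of step S302, the family lookup can be a simple mapping from the letter prefix of the specification name to its family; the prefix-to-family table below is an assumption made for the sketch, not taken from the patent.

    # hypothetical prefix -> family mapping, assumed for illustration only
    SPEC_FAMILY_BY_PREFIX = {
        "ks": "kunpeng", "km": "kunpeng",
        "s": "computing", "m": "computing",
        "g": "bare-metal", "vg": "bare-metal",
    }

    def family_of(spec_name: str) -> str:
        """Return the instance specification family name for a spec such as 's1.large4'."""
        prefix = spec_name.split(".")[0].rstrip("0123456789").lower()  # "km1.2xlarge8" -> "km"
        return SPEC_FAMILY_BY_PREFIX[prefix]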
S303: and screening the resource pools according to the resource attributes of the instance specification family, and generating a first resource pool list from the screened resource pools.
In this embodiment, the type of the instance specification family, the central processor architecture attribute of the instance specification family and the disk type are obtained as the resource attributes of the instance specification family; the resource pools that satisfy these resource attributes are screened from the resource pools; and the first resource pool list is generated from the screened resource pools.
Specifically, once the instance specification family has been determined in step S302, its CPU architecture attribute can be obtained: for example, the CPU architecture of the Kunpeng-type instance specification family (KS, KM) is the ARM architecture, the CPU architecture of the computing-type instance specification family (S, M) is the x86 architecture, and the CPU architecture of the bare-metal-type instance specification family (G, VG) is the GPU architecture. The CPU architecture attribute of an instance specification family can be determined from its type; for example, the Kunpeng-type specifications can only be instantiated on servers with the ARM architecture, and creating such instances on servers with other, non-ARM architectures would cause the creation to fail. The resource pools are screened according to the type of the instance specification family, its central processor architecture attribute and the disk type; the available resource pools, i.e. those that can satisfy the resources the user wants to create, are thereby determined, and a resource pool list is generated.
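A minimal sketch of this first screening step, reusing the assumed data model above (the field names are illustrative, not the patent's):

    def filter_by_attributes(pools, spec_family):
        """First screening (S303): keep pools whose servers offer the family's
        CPU architecture and disk type."""
        return [
            p for p in pools
            if spec_family.cpu_arch in p.cpu_archs
            and spec_family.disk_type in p.disk_types
        ]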
S304: and screening the resource pools in the first resource pool list according to the resource inventory of the resource pools, and generating a second resource pool list from the screened resource pools.
In this embodiment, the resource inventory amount of each resource pool is obtained from its inventory; resource pools are screened from the first resource pool list according to the resource inventory amount and a preset screening criterion, the preset screening criterion being that the resource inventory of a resource pool is sufficient to create at least one resource instance; and the second resource pool list is generated from the screened resource pools.
Specifically, the resources of each resource pool are limited, and in the embodiments of the application their used and remaining amounts are managed through the resource inventory. The preset screening criterion is based on hard resource requirements such as the CPU core count, memory size and disk size of the instance specification in the resource creation request sent by the user terminal; according to these hard requirements, the resource pools that can support the creation of at least one instance are filtered out of the first resource pool list.
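A similar sketch of the second screening step: a pool passes if its inventory can hold at least one instance of the requested specification.

    def filter_by_inventory(pools, spec):
        """Second screening (S304): keep pools whose inventory fits at least one instance."""
        return [
            p for p in pools
            if p.vcpu_inventory >= spec.vcpus
            and p.memory_inventory_gb >= spec.memory_gb
        ]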
S305: and scoring the resource pools in the second resource pool list, obtaining a scoring result, and obtaining a target resource pool from the resource pools in the second resource pool list according to the scoring result.
In this embodiment, the resource pools in the second resource pool list are scored and screened by combining a scheduling policy, a computing-power priority policy, a delay priority policy, and the like. The user can select a suitable scheduling method according to their own business requirements to control which resource pool the resource instance is created on. The system scores the pre-selected resource pool list according to the scheduling policy, computing-power priority policy and delay priority policy selected by the user, and calculates which resource pool is most suitable for creating the resources.
S306: a resource instance is created on the target resource pool according to the resource creation request.
In this embodiment, after determining the target resource pool in step S305, the resource instance may be created on the selected target resource pool according to the resource creation request sent by the user side.
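Putting steps S301 to S306 together, a hedged end-to-end sketch of the scheduling flow follows; the helper functions are the ones sketched above, score_pool is sketched in the scoring section below, and create_instance stands in for whatever creation interface the resource pool exposes, which the patent does not specify.

    def schedule(request, all_pools, spec_catalog, family_catalog):
        """Select a target resource pool for a resource creation request and create the instance."""
        spec = spec_catalog[request["instance_spec"]]            # S301/S302: spec and its family
        family = family_catalog[spec.family]                     # family carries arch and disk attributes
        first_list = filter_by_attributes(all_pools, family)     # S303: screen by resource attributes
        second_list = filter_by_inventory(first_list, spec)      # S304: screen by resource inventory
        if not second_list:
            raise RuntimeError("no resource pool can satisfy the request")
        scored = [(score_pool(p, spec, request), p) for p in second_list]   # S305: score the candidates
        target = max(scored, key=lambda item: item[0])[1]        # highest final score wins
        return create_instance(target, request)                  # S306: creation API not specified here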
In summary, in the scheduling method for distributed heterogeneous resource pools provided by this embodiment, a resource creation request sent by the user terminal is received; the instance specification family corresponding to the instance specification in the request is obtained; the resource pools are screened a first time according to the resource attributes of the instance specification family and a second time according to their resource inventory; the target resource pool is finally obtained by scoring the screened resource pools; and a resource instance is created on the target resource pool according to the resource creation request. A suitable resource pool is thereby selected from multiple resource pools in a distributed cloud scenario, the problem of reasonably allocating resources across resource pools is solved, scheduling across different resource pools within the same cloud pool is achieved, and the resource scheduling problem of heterogeneous clouds is solved.
Fig. 4 is a second schematic flow chart of a scheduling method of a distributed heterogeneous resource pool according to an embodiment of the present invention. In this embodiment of the invention there are multiple resource pools, and each resource pool comprises multiple special areas. On the basis of the embodiment provided in fig. 3, a specific implementation of step S305 (scoring the resource pools in the second resource pool list, obtaining the scoring result, and obtaining the target resource pool from the second resource pool list according to the scoring result) is described in detail. As shown in fig. 4, the method includes:
s401: and filtering a plurality of special areas contained in each resource pool in the second resource pool list according to an example specification family corresponding to the example specification, and scoring the weight of the special areas obtained by filtering to obtain a first score of each resource pool.
In this embodiment, the resource creation request further includes a weight value for each special area. The special areas contained in each resource pool in the second resource pool list are filtered according to the instance specification family corresponding to the instance specification, yielding the target special areas that satisfy the instance specification; the weight values of these target special areas are acquired, and the maximum of these weight values is taken as the first score.
Specifically, the special areas under each resource pool are filtered according to the instance specification family corresponding to the instance specification to obtain the special areas that satisfy the user's instance specification, and scoring is then performed using the weights of these special areas: the score of a resource pool is the maximum of the weights of its filtered special areas. The specific scoring rule is as follows: suppose a resource pool has n special areas, of which i special areas satisfy the instance specification in the resource creation request sent by the user terminal after filtering; the weight of special area 1 is a1, the weight of special area 2 is a2, ..., and the weight of special area i is ai. The score w1 obtained by the resource pool under the default scheduling policy is the maximum of a1 to ai, expressed by the formula:
w1=MAX(a1,a2,...,ai)。
For example, suppose a resource pool has three special areas, with special area 1 having weight 20, special area 2 having weight 30 and special area 3 having weight 10. If only special areas 1 and 3 satisfy the instance specification in the resource creation request sent by the user terminal, the score of the resource pool is 20.
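A sketch of this default (area-weight) scoring rule, assuming each special area records the specification families it supports and that the per-area weights come from the resource creation request, as described above:

    def area_weight_score(pool, family_name, area_weights):
        """w1: the maximum weight among the pool's special areas that support the spec family.
        `area_weights` maps special-area name -> weight from the resource creation request."""
        eligible = [a for a in pool.areas if family_name in a.families]
        return max((area_weights.get(a.name, 0) for a in eligible), default=0)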
S402: and carrying out statistical scoring according to the number of cores of the central processing unit and the memory remained in each resource pool to obtain a second score of each resource pool.
In this embodiment, the number of virtual central processors and the amount of memory required by the instance specification are obtained, together with the virtual central processor inventory and memory inventory of the resource pool; a first ratio of the resource pool's virtual central processor inventory to the number of virtual central processors required by the instance specification is calculated; a second ratio of the memory inventory to the amount of memory required by the instance specification is calculated; and the minimum of the first ratio and the second ratio is determined as the second score.
Specifically, the resource pools are statistically scored according to how many CPU cores and how much memory remain in each resource pool: the more instance resources remain, the higher the score. The specific scoring rule is as follows: the resource pool is scored against the CPU core count and memory size of the instance specification, earning one point for each instance of that specification its remaining resources can satisfy. Let the number of virtual CPUs required by the instance specification in the resource creation request sent by the user terminal be m and its amount of memory be n, and let the virtual CPU inventory of the resource pool be M and its memory inventory be N. The score w2 obtained by the resource pool under the computing-power priority policy is the minimum of M/m and N/n, expressed by the formula:
w2=MIN(M/m,N/n)。
for example, the instance specification in the resource creation request sent by the user side is s1.Large4, the number of virtual CPU cores is 2, and the memory size is 4G. When the stock of the resource pool is 300 virtual CPUs and the memory size is 4TB, at the moment, the resource pool can be provided with at most 150 s1.Large4 examples, and one more example, the stock of the virtual CPUs is insufficient, even if the memory is left, so the resource pool can be divided into 150 points.
S403: and carrying out statistical scoring according to the network delay time of each resource pool to obtain a third score of each resource pool.
In this embodiment, the network delay time of the resource pool is obtained, and the ratio of the network delay time to a second time value is subtracted from a first time value to obtain a second network delay time; if the second network delay time is greater than 0, it is taken as the third score of the resource pool; and if the second network delay time is less than or equal to 0, the third score of the resource pool is set to 0.
Specifically, the third scoring method is the delay priority policy: statistical scoring is performed according to the network delay of each resource pool, and the lower the network delay, the higher the score. The specific scoring rule is as follows: the resource pool is scored on its network delay time; a resource pool with a network delay of 0 ms scores 100 points, and each additional 50 ms of delay reduces the score by 1 point, down to a minimum of 0 points.
For example, let the network delay time of the resource pool be T. The score w3 obtained by the resource pool under the delay priority policy is the maximum of (100 minus T/50) and 0, expressed by the formula:
w3=MAX(100-T/50,0)。
s404: and obtaining a final score of the resource pool according to the first score, the second score and the third score, and determining a target resource pool according to the final score.
In this embodiment, the resource creation request further includes a weight ratio for the first score, a weight ratio for the second score and a weight ratio for the third score. Correspondingly, either the first score, the second score and the third score are added to obtain the final score; or the weight ratios of the first, second and third scores are obtained from the resource creation request, and the products of each score and its weight ratio are added to obtain the final score. The resource pool with the largest final score is determined as the target resource pool.
Specifically, the sum of the first score, the second score, and the third score may be directly taken as the final score. Let the total score of the resource pool be s, then the total score of the resource pool is:
s=w1+w2+w3,
where w1 is a first score, w2 is a second score, and w3 is a third score.
Alternatively, respective weight ratios may be set for the first score, the second score and the third score according to the actual situation of the user's project. The weight ratios may be set as follows: the weight ratios of the first, second and third scores are all less than 1, and their sum equals 1. Assuming the weight ratio of the first score is v1, the weight ratio of the second score is v2 and the weight ratio of the third score is v3, the total score s of the resource pool is calculated as:
s=v1*w1+v2*w2+v3*w3,
where w1 is the first score, w2 is the second score and w3 is the third score. The resource pools are ranked by total score, and the resource pool ranked first is the target resource pool.
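Finally, a sketch that combines the three scores into the final score; the weight ratios v1, v2 and v3 are read from the request when present (the request field name is an assumption), otherwise the plain sum is used, matching the two alternatives described above. The target pool is then the pool with the largest final score, as in the schedule sketch earlier.

    def score_pool(pool, spec, request):
        """Final score s = v1*w1 + v2*w2 + v3*w3, or w1 + w2 + w3 when no weight ratios are given."""
        w1 = area_weight_score(pool, spec.family, request.get("area_weights", {}))
        w2 = compute_power_score(pool, spec)
        w3 = delay_score(pool)
        if "score_weights" in request:        # assumed field carrying (v1, v2, v3)
            v1, v2, v3 = request["score_weights"]
            return v1 * w1 + v2 * w2 + v3 * w3
        return w1 + w2 + w3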
In summary, in the scheduling method for distributed heterogeneous resource pools provided by this embodiment, the special areas contained in each resource pool are scored according to the instance specification family corresponding to the instance specification, each resource pool is statistically scored according to its remaining central processor cores and memory, statistical scoring is also performed according to the network delay time of each resource pool, the final score of each resource pool is determined from these scores, and the target resource pool is determined according to the final scores. Specified instance specifications can thus be scheduled preferentially to designated resource pools and to special areas within them, satisfying scenarios such as the east-data-west-computing initiative and the preferential sale of certain cloud resource pools.
Fig. 5 is a schematic structural diagram of a scheduling apparatus for a distributed heterogeneous resource pool according to an embodiment of the present invention. As shown in fig. 5, the scheduling apparatus for a distributed heterogeneous resource pool includes: a receiving module 501, an obtaining module 502, a first screening module 503, a second screening module 504, a scoring module 505, and a creating module 506.
The receiving module 501 is configured to receive a resource creation request sent by a user side, where the resource creation request includes an instance specification.
An obtaining module 502, configured to obtain an instance specification family corresponding to the instance specification.
A first screening module 503, configured to screen the resource pool according to the resource attribute of the instance specification family, and generate a first resource pool list from the resource pool obtained by screening.
And a second screening module 504, configured to screen the resource pools in the first resource pool list according to the resource inventory amounts of the resource pools, and generate a second resource pool list from the resource pools obtained by screening.
And the scoring module 505 is configured to score the resource pools in the second resource pool list and obtain a scoring result, and obtain a target resource pool from the resource pools in the second resource pool list according to the scoring result.
A creation module 506 is configured to create a resource instance on the target resource pool according to the resource creation request.
In one possible implementation, the first screening module 503 is specifically configured to obtain, as the resource attributes of the instance specification family, the type of the instance specification family, the central processor architecture attribute of the instance specification family and the disk type; screen, from the resource pools, the resource pools that satisfy the resource attributes of the instance specification family; and generate the first resource pool list from the screened resource pools.
In one possible implementation, the second screening module 504 is specifically configured to obtain the resource inventory amount of each resource pool from its inventory; screen resource pools from the first resource pool list according to the resource inventory amount and a preset screening criterion, the preset screening criterion being that the resource inventory of a resource pool is sufficient to create at least one resource instance; and generate the second resource pool list from the screened resource pools.
In one possible implementation, there are multiple resource pools, and each resource pool contains multiple special areas. Correspondingly, the scoring module 505 is specifically configured to filter the special areas contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification, and score the weights of the filtered special areas to obtain a first score of each resource pool; statistically score the remaining central processor cores and memory of each resource pool to obtain a second score of each resource pool; statistically score the network delay time of each resource pool to obtain a third score of each resource pool; and obtain a final score of each resource pool from the first score, the second score and the third score, and determine the target resource pool according to the final scores.
In one possible implementation, the resource creation request further includes a weight value for each special area. Correspondingly, the scoring module 505 is further specifically configured to filter the special areas contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification to obtain the target special areas that satisfy the instance specification; and acquire the weight values of the target special areas and take the maximum of these weight values as the first score.
In one possible implementation, the scoring module 505 is further specifically configured to obtain the number of virtual central processors and the amount of memory required by the instance specification, and the virtual central processor inventory and memory inventory of the resource pool; calculate a first ratio of the resource pool's virtual central processor inventory to the number of virtual central processors required by the instance specification; calculate a second ratio of the memory inventory to the amount of memory required by the instance specification; and determine the minimum of the first ratio and the second ratio as the second score.
In one possible implementation, the scoring module 505 is further specifically configured to obtain the network delay time of the resource pool, and subtract the ratio of the network delay time to a second time value from a first time value to obtain a second network delay time; if the second network delay time is greater than 0, take it as the third score of the resource pool; and if the second network delay time is less than or equal to 0, set the third score of the resource pool to 0.
In one possible implementation, the resource creation request further includes a weight ratio for the first score, a weight ratio for the second score and a weight ratio for the third score. Correspondingly, the scoring module 505 is further specifically configured either to add the first score, the second score and the third score to obtain the final score, or to obtain the weight ratios of the first, second and third scores from the resource creation request and add the products of each score and its weight ratio to obtain the final score; and to determine the resource pool with the largest final score as the target resource pool.
The device provided in this embodiment may be used to implement the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
Fig. 6 is a schematic diagram of the hardware structure of a server according to an embodiment of the present invention. As shown in fig. 6, the server of this embodiment includes at least one processor 601 and a memory 602, where:
A memory 602 for storing computer-executable instructions;
A processor 601 for executing the computer-executable instructions stored in the memory, so as to implement the steps performed by the server in the above embodiments; reference may be made in particular to the relevant description of the foregoing method embodiments.
Alternatively, the memory 602 may be separate or integrated with the processor 601.
When the memory 602 is provided separately, the server further comprises a bus 603 for connecting the memory 602 and the processor 601.
An embodiment of the present invention further provides a computer storage medium, in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the scheduling method of the distributed heterogeneous resource pool described above is implemented.
An embodiment of the present invention further provides a computer program product, comprising a computer program which, when executed by a processor, implements the scheduling method of the distributed heterogeneous resource pool described above.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into modules is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices or modules, and may be in electrical, mechanical or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to implement the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each module may exist alone physically, or two or more modules may be integrated in one unit. The units formed by the modules can be realized in a form of hardware or a form of hardware and software functional units.
The integrated modules, which are implemented in the form of software functional modules, may be stored in a computer readable storage medium. The software functional module is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform some of the steps of the methods described in the embodiments of the present application.
It should be understood that the above processor may be a central processing unit (Central Processing Unit, abbreviated as CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, abbreviated as DSP), application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in a processor for execution.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile memory NVM, such as at least one magnetic disk memory, and may also be a U-disk, a removable hard disk, a read-only memory, a magnetic disk or optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). It is also possible that the processor and the storage medium reside as discrete components in an electronic device or a master device.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (11)

1. A scheduling method for a distributed heterogeneous resource pool, characterized by comprising the following steps:
receiving a resource creation request sent by a user side, wherein the resource creation request comprises an instance specification;
acquiring an instance specification family corresponding to the instance specification;
screening the resource pools according to the resource attributes of the instance specification family, and generating a first resource pool list from the screened resource pools;
screening the resource pools in the first resource pool list according to the resource inventory of the resource pools, and generating a second resource pool list from the screened resource pools;
scoring the resource pools in the second resource pool list and obtaining a scoring result, and obtaining a target resource pool from the resource pools in the second resource pool list according to the scoring result;
and creating a resource instance on the target resource pool according to the resource creation request.
2. The method of claim 1, wherein the screening the resource pools according to the resource attributes of the instance specification family and generating the first resource pool list from the screened resource pools comprises:
acquiring the type of the instance specification family, the central processing unit architecture attribute of the instance specification family and the disk type as the resource attribute of the instance specification family;
screening a resource pool meeting the resource attributes of the instance specification family from the resource pools according to the resource attributes;
and generating a first resource pool list from the screened resource pools.
3. The method of claim 1, wherein the screening the resource pools in the first resource pool list according to the resource inventory amounts of the resource pools, and generating a second resource pool list from the screened resource pools, comprises:
obtaining the resource inventory quantity of a resource pool through inventory checking;
screening resource pools from the resource pools in the first resource pool list according to the resource inventory quantity and a preset screening criterion, wherein the preset screening criterion is that the resource inventory quantity of a resource pool is sufficient to create at least one resource instance;
and generating a second resource pool list from the screened resource pools.
4. The method according to any one of claims 1 to 3, wherein there are a plurality of resource pools, and each resource pool comprises a plurality of zones;
correspondingly, the scoring the resource pools in the second resource pool list, and obtaining the target resource pool from the resource pools in the second resource pool list according to the scoring result comprises the following steps:
filtering the plurality of zones contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification, and scoring the weights of the zones obtained by filtering to obtain a first score of each resource pool;
performing statistical scoring according to the number of central processing unit cores and the amount of memory remaining in each resource pool to obtain a second score of each resource pool;
performing statistical scoring according to the network delay time of each resource pool to obtain a third score of each resource pool;
and obtaining a final score of the resource pool according to the first score, the second score and the third score, and determining a target resource pool according to the final score.
5. The method of claim 4, wherein the resource creation request further includes a weight value for each zone;
correspondingly, the filtering the plurality of zones contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification, and scoring the weights of the zones obtained by filtering to obtain a first score of each resource pool comprises:
filtering the plurality of zones contained in each resource pool in the second resource pool list according to the instance specification family corresponding to the instance specification to obtain a plurality of target zones meeting the instance specification;
and acquiring the weight values of the plurality of target zones, and setting the maximum value among the weight values of the target zones as the first score.
6. The method of claim 4, wherein the performing statistical scoring according to the number of central processing unit cores and the amount of memory remaining in each resource pool to obtain a second score of each resource pool comprises:
acquiring the number of virtual central processing units and the amount of memory required by the instance specification, and acquiring the inventory number of virtual central processing units and the inventory amount of memory of the resource pool;
calculating a first ratio of the inventory number of virtual central processing units of the resource pool to the number of virtual central processing units required by the instance specification, and calculating a second ratio of the inventory amount of memory of the resource pool to the amount of memory required by the instance specification;
and determining the minimum value of the first ratio and the second ratio as the second score.
7. The method of claim 4, wherein the performing statistical scoring according to the network delay time of each resource pool to obtain a third score of each resource pool comprises:
obtaining the network delay time of a resource pool, and subtracting the ratio of the network delay time to a second time value from a first time value to obtain a second network delay time;
if the second network delay time is greater than 0, setting the second network delay time as the third score of the resource pool;
and if the second network delay time is less than or equal to 0, setting the third score of the resource pool to 0.
8. The method of claim 4, wherein the resource creation request further comprises a weight ratio of the first score, a weight ratio of the second score, and a weight ratio of the third score;
Correspondingly, the obtaining the final score of the resource pool according to the first score, the second score and the third score, and determining the target resource pool according to the final score comprises the following steps:
adding the first score, the second score and the third score to obtain the final score;
or, acquiring the weight ratio of the first score, the weight ratio of the second score and the weight ratio of the third score from the resource creation request;
adding the product of the first score and its weight ratio, the product of the second score and its weight ratio, and the product of the third score and its weight ratio to obtain the final score;
and determining the resource pool corresponding to the maximum final score as a target resource pool.
9. A scheduling apparatus for a distributed heterogeneous resource pool, comprising:
the receiving module is used for receiving a resource creation request sent by a user side, wherein the resource creation request comprises an instance specification;
the acquisition module is used for acquiring an instance specification family corresponding to the instance specification;
the first screening module is used for screening the resource pools according to the resource attributes of the instance specification family, and generating a first resource pool list from the resource pools obtained by screening;
the second screening module is used for screening the resource pools in the first resource pool list according to the resource inventory quantity of the resource pools, and generating a second resource pool list from the screened resource pools;
the scoring module is used for scoring the resource pools in the second resource pool list and obtaining a scoring result, and a target resource pool is obtained from the resource pools in the second resource pool list according to the scoring result;
and the creation module is used for creating a resource instance on the target resource pool according to the resource creation request.
10. A server, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the scheduling method of the distributed heterogeneous resource pool according to any one of claims 1 to 8.
11. A computer storage medium having stored therein computer executable instructions which, when executed by a processor, implement the method of scheduling a distributed heterogeneous resource pool according to any of claims 1 to 8.
CN202310333537.9A 2023-03-30 2023-03-30 Scheduling method, device, server and storage medium of distributed heterogeneous resource pool Pending CN116360994A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310333537.9A CN116360994A (en) 2023-03-30 2023-03-30 Scheduling method, device, server and storage medium of distributed heterogeneous resource pool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310333537.9A CN116360994A (en) 2023-03-30 2023-03-30 Scheduling method, device, server and storage medium of distributed heterogeneous resource pool

Publications (1)

Publication Number Publication Date
CN116360994A true CN116360994A (en) 2023-06-30

Family

ID=86906625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310333537.9A Pending CN116360994A (en) 2023-03-30 2023-03-30 Scheduling method, device, server and storage medium of distributed heterogeneous resource pool

Country Status (1)

Country Link
CN (1) CN116360994A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117331678A (en) * 2023-12-01 2024-01-02 之江实验室 Heterogeneous computing power federation-oriented multi-cluster job resource specification computing method and system
CN117331678B (en) * 2023-12-01 2024-04-19 之江实验室 Heterogeneous computing power federation-oriented multi-cluster job resource specification computing method and system

Similar Documents

Publication Publication Date Title
CN111966500B (en) Resource scheduling method and device, electronic equipment and storage medium
CN111506404A (en) Kubernetes-based shared GPU (graphics processing Unit) scheduling method
CN112269641B (en) Scheduling method, scheduling device, electronic equipment and storage medium
CN107239329A (en) Unified resource dispatching method and system under cloud environment
CN111866054A (en) Cloud host building method and device, electronic equipment and readable storage medium
CN111603765B (en) Server distribution method, system and storage medium
CN114416352A (en) Computing resource allocation method and device, electronic equipment and storage medium
CN116360994A (en) Scheduling method, device, server and storage medium of distributed heterogeneous resource pool
CN116361010B (en) CPU resource allocation and scheduling optimization method for cloud S2500
WO2023226743A1 (en) Cloud service deployment method and apparatus, electronic device and storage medium
CN109471725A (en) Resource allocation methods, device and server
CN116881009A (en) GPU resource scheduling method and device, electronic equipment and readable storage medium
CN108228350A (en) A kind of resource allocation methods and device
CN108897858B (en) Distributed cluster index fragmentation evaluation method and device and electronic equipment
CN104657216B (en) The resource allocation methods and device of a kind of resource pool
CN113626173A (en) Scheduling method, device and storage medium
CN112860383A (en) Cluster resource scheduling method, device, equipment and storage medium
CN116483547A (en) Resource scheduling method, device, computer equipment and storage medium
US20150178115A1 (en) Optimal assignment of virtual machines and virtual disks using multiary tree
CN116339989A (en) Mixed part server, resource management method and device of mixed part server
CN113535087B (en) Data processing method, server and storage system in data migration process
CN111580975B (en) Memory optimization method and system for speech synthesis
CN116841720A (en) Resource allocation method, apparatus, computer device, storage medium and program product
CN112132425A (en) Performance distribution processing method, device, medium and terminal equipment
CN116431327B (en) Task current limiting processing method and fort machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination