WO2021259246A1 - Resource scheduling method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Resource scheduling method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021259246A1
Authority
WO
WIPO (PCT)
Prior art keywords
resource
node
weight
virtual
path
Prior art date
Application number
PCT/CN2021/101501
Other languages
English (en)
French (fr)
Inventor
童遥
王海新
程希
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to US18/012,701 priority Critical patent/US20230267015A1/en
Priority to EP21829551.7A priority patent/EP4170491A4/en
Publication of WO2021259246A1 publication Critical patent/WO2021259246A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/501Performance criteria
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5022Workload threshold
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/503Resource availability
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The embodiments of this application relate to the field of computer technology, and in particular to a resource scheduling method and apparatus, an electronic device, and a computer-readable storage medium.
  • Resource scheduling is the process of allocating and temporarily transferring resources from resource providers to users. Optimizing resource scheduling management and dynamically reallocating resources makes more efficient use of the available resources in a data center and helps reduce energy consumption.
  • Traditional resource scheduling methods cannot meet the differentiated needs of tenants and tasks in multi-tenant scenarios, and they are prone to jitter after scheduling completes, which leads to secondary scheduling.
  • An embodiment of the present application provides a resource scheduling method, including: selecting an optimal path from a pre-constructed resource tag forest according to the weights of the paths in the resource tag trees of the resource tag forest, where the resource tag forest includes at least one resource tag tree.
  • Each path of a resource tag tree includes, in order from the root node to a leaf node, a first node, a second node, and a third node.
  • The first node is the node corresponding to a physical resource corresponding to a tenant, the second node is the node corresponding to a user belonging to the tenant, and the third node is the node corresponding to a virtual resource deployed on the physical resource and managed by the user; the method further includes scheduling a task to the third node through which the optimal path passes.
  • An embodiment of the application provides an electronic device, including: at least one processor; and a memory storing at least one program which, when executed by the at least one processor, causes the at least one processor to implement the resource scheduling method according to the application.
  • An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the resource scheduling method according to the present application is implemented.
  • Figure 1 is a flowchart of a resource scheduling method provided by this application.
  • Figure 2 is a schematic diagram of the resource tag trees in the resource tag forest provided by this application.
  • FIG. 3 is a schematic diagram of assigning corresponding weights to the first node and the third node in this application;
  • Figure 4 is a schematic diagram of selecting the optimal path for this application.
  • FIG. 5 is a block diagram of a resource scheduling device provided by this application.
  • FIG. 6 is a schematic diagram of the application of a resource scheduling device provided by this application.
  • Fig. 1 is a flowchart of a resource scheduling method provided by this application.
  • the resource scheduling method provided by the present application includes steps 100 to 101.
  • In step 100, the optimal path is selected from the pre-constructed resource tag forest according to the weights of the paths in the resource tag trees of the resource tag forest.
  • The resource tag forest includes at least one resource tag tree, and each path of a resource tag tree includes, in order from the root node to a leaf node, a first node, a second node, and a third node.
  • The first node is the node corresponding to a physical resource of the tenant,
  • the second node is the node corresponding to a user belonging to the tenant, and
  • the third node is the node corresponding to a virtual resource deployed on the physical resource and managed by the user.
  • In step 101, the task is scheduled to the third node through which the optimal path passes.
  • In order to meet the differentiated needs of tasks, the unscheduled task with the highest priority may be scheduled to the third node through which the optimal path passes.
  • The priority of a task may be determined according to the importance of the task.
  • Before step 100, the method may further include: constructing the resource tag forest according to the relationships among tenants, physical resources, users, and virtual resources; and assigning corresponding weights to each first node and each third node in each resource tag tree in the resource tag forest.
  • resource scheduling is realized based on a pre-constructed resource tag forest.
  • Different tenants correspond to different physical resources, users, and virtual resources.
  • the resource tag forest reflects the relationship between tenants, physical resources, users, and virtual resources.
  • That is, which physical resources a tenant corresponds to, which users are under the tenant, and which virtual resources deployed on the physical resources are managed by the users under the tenant; the differentiated needs of tenants in multi-tenant scenarios are thus satisfied, which improves the reasonableness of resource scheduling.
  • It should be noted that each path in a resource tag tree reflects the relationships among tenants, physical resources, users, and virtual resources.
  • The relationships mentioned here refer to: which physical resources and which virtual resources a tenant corresponds to, i.e., which physical resources and virtual resources belong to the tenant; which users are under the tenant, i.e., which users belong to the tenant; which virtual resources are deployed on which physical resources; and which virtual resources are managed by which users.
  • the first node can be represented by a node with a hierarchical structure according to the hierarchical distribution of physical resources.
  • physical resources have two hierarchical structures, namely, a rack and a physical machine.
  • A physical machine belongs to a rack; therefore, two levels of first nodes can be set, where the first node at the first level is the node corresponding to the rack and the first node at the second level is the node corresponding to the physical machine.
  • Physical resources may refer to racks, physical machines, etc.
  • virtual resources may refer to virtual machines, etc.
  • the resource tag forest includes two resource tag trees, namely resource tag tree 1 and resource tag tree 2.
  • Resource tag tree 1 corresponds to tenant 1
  • resource tag tree 2 corresponds to tenant 2.
  • the physical resources of tenant 1 include rack 1, physical machine 1, and physical machine 2.
  • a virtual machine 1 and a virtual machine 2 are deployed on the physical machine 1, and a virtual machine 3 is deployed on the physical machine 2.
  • User 1 manages virtual machine 1 and virtual machine 2, and user 2 manages virtual machine 3.
  • the physical resources of tenant 2 include rack 2, physical machine 3, and physical machine 4.
  • a virtual machine 4 is deployed on the physical machine 3, and a virtual machine 5 and a virtual machine 6 are deployed on the physical machine 4.
  • User 3 manages virtual machine 4, and user 4 manages virtual machine 5 and virtual machine 6.
  • the root node of the resource tag tree 1 is the node corresponding to the rack 1, and the leaf nodes include the node corresponding to the virtual machine 1, the node corresponding to the virtual machine 2, and the node corresponding to the virtual machine 3.
  • the first node includes a node corresponding to rack 1, a node corresponding to physical machine 1, and a node corresponding to physical machine 2.
  • the second node includes a node corresponding to user 1 and a node corresponding to user 2.
  • the third node includes a node corresponding to virtual machine 1, a node corresponding to virtual machine 2, and a node corresponding to virtual machine 3.
  • the root node of the resource tag tree 2 is the node corresponding to the rack 2, and the leaf nodes include the node corresponding to the virtual machine 4, the node corresponding to the virtual machine 5, and the node corresponding to the virtual machine 6.
  • the first node includes a node corresponding to rack 2, a node corresponding to physical machine 3, and a node corresponding to physical machine 4.
  • the second node includes a node corresponding to user 3 and a node corresponding to user 4.
  • the third node includes a node corresponding to virtual machine 4, a node corresponding to virtual machine 5, and a node corresponding to virtual machine 6.
  • the weight of the path can be determined according to the weights of the first node and the third node that the path passes.
  • the weight of the path may be the sum of the weights of the first node and the third node that the path traverses, or the weight of the path may be the weighted average of the weights of the first node and the third node that the path traverses.
  • other methods may also be used to calculate the weight of the path, and the specific calculation method is not used to limit the protection scope of the embodiments of the present application.
  • It should be noted that, when the nodes corresponding to physical resources are organized in multiple levels, the weight of the first node refers to the weight of the node at the lowest level.
  • For example, in Figure 2, the node corresponding to physical machine 1 and the node corresponding to physical machine 2 are one level below the node corresponding to rack 1.
  • In this case, the weight of the first node refers to the weight of the node corresponding to physical machine 1 and the weight of the node corresponding to physical machine 2.
  • For example, the weight of the path that includes the node corresponding to rack 1, the node corresponding to physical machine 1, the node corresponding to user 1, and the node corresponding to virtual machine 1 is the sum of the weight of the node corresponding to physical machine 1 and the weight of the node corresponding to virtual machine 1; the other paths follow by analogy.
  • The weight of the first node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within a specified time period, and the weight of the third node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
  • For example, the weight of the first node may be the sum of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within the specified time period,
  • and the weight of the third node may be the sum of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
  • Alternatively, the weight of the first node may be a weighted average of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within the specified time period,
  • and the weight of the third node may be a weighted average of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
  • As another example, the weight of the first node may be the sum of a first score corresponding to the CPU occupancy rate, a second score corresponding to the memory occupancy rate, and a third score corresponding to the storage occupancy rate of the first node within the specified time period,
  • and the weight of the third node may be the sum of the first score corresponding to the CPU occupancy rate, the second score corresponding to the memory occupancy rate, and the third score corresponding to the storage occupancy rate of the third node within the specified time period.
  • As yet another example, the weight of the first node may be a weighted average of the first score corresponding to the CPU occupancy rate, the second score corresponding to the memory occupancy rate, and the third score corresponding to the storage occupancy rate of the first node within the specified time period,
  • and the weight of the third node may be a weighted average of the first score corresponding to the CPU occupancy rate, the second score corresponding to the memory occupancy rate, and the third score corresponding to the storage occupancy rate of the third node within the specified time period.
  • For example, when the CPU occupancy rate is between 0% and 20%, the first score is 1; between 20% and 40%, the first score is 2; between 40% and 60%, the first score is 3; between 60% and 80%, the first score is 4; and between 80% and 100%, the first score is 5.
  • Of course, the CPU occupancy rate and the first score may also have other correspondences, and the specific correspondence is not used to limit the protection scope of the embodiments of the present application.
  • Similarly, when the memory occupancy rate is between 0% and 20%, the second score is 1; between 20% and 40%, the second score is 2; between 40% and 60%, the second score is 3; between 60% and 80%, the second score is 4; and between 80% and 100%, the second score is 5.
  • The memory occupancy rate and the second score may also have other correspondences, and the specific correspondence is not used to limit the protection scope of the embodiments of the present application.
  • Likewise, when the storage occupancy rate is between 0% and 20%, the third score is 1; between 20% and 40%, the third score is 2; between 40% and 60%, the third score is 3; between 60% and 80%, the third score is 4; and between 80% and 100%, the third score is 5.
  • The storage occupancy rate and the third score may also have other correspondences, and the specific correspondence is not used to limit the protection scope of the embodiments of the present application.
  • For example, if within the specified time period the first score corresponding to the CPU occupancy rate of the node corresponding to physical machine 1 is 1, the second score corresponding to its memory occupancy rate is 3, and the third score corresponding to its storage occupancy rate is 4, then the weight of the node corresponding to physical machine 1 is 8; the weights of the other nodes follow by analogy and are not repeated here.
  • The step of selecting the optimal path from the resource tag forest according to the weights of the paths in the resource tag trees of the pre-constructed resource tag forest may include: traversing each path in each resource tag tree in the resource tag forest and selecting the path with the smallest weight as the optimal path, where the weight of a path is determined according to the resource occupancy rates of the nodes that the path passes through.
  • the specific optimal path selection method is not used to limit the protection scope of the embodiments of this application.
  • What this application emphasizes is that the resource scheduling method is based on the resource tag forest.
  • The resource tag forest reflects the relationships among tenants, physical resources, users, and virtual resources, and realizes resource scheduling on the basis of tenant differentiation.
  • the weight of the path is determined based on the resource occupancy rate of the nodes through which the path passes, and the path with the smallest weight is selected as the optimal path.
  • the smallest weight of the path means that the resource occupancy rate of the nodes that the path passes through is the smallest.
  • the path with the smallest weight is selected as the optimal path, and then the task is scheduled to the third node through which the optimal path passes, thereby improving resource utilization.
  • To avoid secondary scheduling when a physical resource cannot meet the service requirements, the resource scheduling method of the present application may further include: migrating the target user on the physical resource corresponding to a first node for which at least one of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate is greater than a preset threshold,
  • together with the virtual resources managed by the target user, to the physical resource corresponding to the first node with the smallest weight, and reconstructing the resource tag forest according to the relationships among the tenants, physical resources, users, and virtual resources after the migration, where the target user is the user with the largest weight or the user with the second largest weight.
  • The preset threshold may be selected according to the actual situation; for example, it may be set to 80%, i.e., at least one of the CPU occupancy rate, the memory occupancy rate, and the storage occupancy rate exceeds 80%.
  • The user with the largest weight may be selected as the target user first; if the physical resource corresponding to the first node with the smallest weight is not sufficient to support the virtual resources managed by the user with the largest weight, the user with the second largest weight may be selected as the target user.
  • the user's weight can be determined according to the weight of the node corresponding to the virtual resource managed by the user.
  • the weight of the user may be the sum of the weights of the nodes corresponding to the virtual resources managed by the user, or the weight of the users may be the weighted average of the weights of the nodes corresponding to the virtual resources managed by the user.
  • other methods may also be used to calculate the user's weight, and the specific calculation method is not used to limit the protection scope of the embodiments of the present application.
  • According to the resource scheduling method of the present application, when the weight of a first node is greater than the preset threshold, the target user on the physical resource corresponding to that first node and the virtual resources managed by the target user are migrated to the physical resource corresponding to the first node with the smallest weight.
  • This lowers the resource occupancy rate of the first node whose weight exceeds the preset threshold and balances the resource occupancy rates across different first nodes,
  • so that resource scheduling does not trigger secondary scheduling because a physical resource cannot meet the service requirements, which reduces or avoids secondary scheduling of resources.
  • In practice, the following situation may also occur: the resource occupancy rate of a physical resource is not high and neither are the resource occupancy rates of the virtual resources on it, but the load of a virtual resource has tidal characteristics, and a tidal phenomenon occurs at a certain point in time,
  • that is, the resource occupancy rate of the virtual resource may become very high.
  • To handle this, the resource scheduling method of this application may further include: before the tidal phenomenon occurs, cloning, on the physical resource where the virtual resource with tidal characteristics is located, a virtual resource identical to the virtual resource with tidal characteristics, where the cloned virtual resource
  • has a communication address different from that of the virtual resource with tidal characteristics and shares the tasks with it; and after the tidal phenomenon of the virtual resource with tidal characteristics ends, reclaiming the cloned virtual resource.
  • the communication address may include at least one of the following: an Internet Protocol (IP, Internet Protocol) address, and a Media Access Control (MAC, Media Access Control) address.
  • the time period during which the tidal phenomenon of the virtual resource occurs can be obtained through historical operation and maintenance data analysis, and the peak value of the physical resource required by the virtual resource in the tidal scene can be calculated.
  • For a virtual resource with tidal characteristics, an identical virtual resource is cloned before the tidal phenomenon occurs, and the cloned virtual resource and the original virtual resource jointly handle the tasks scheduled to the virtual resource with tidal characteristics.
  • This avoids the secondary scheduling that would otherwise be triggered when the virtual resource cannot meet the service requirements during the tidal phenomenon, that is, secondary scheduling of resources is avoided.
  • the cloned virtual resources are recovered to avoid the waste of resource occupation.
  • The present application provides an electronic device, including: at least one processor; and a memory storing at least one program which, when executed by the at least one processor, causes the at least one processor to implement any one of the resource scheduling methods according to the present application.
  • The processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
  • the processor and the memory can be connected to each other through a bus, and then connected to other components of the electronic device.
  • the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, any resource scheduling method according to the present application is implemented.
  • Figure 5 is a block diagram of a resource scheduling device provided by this application.
  • the resource scheduling device provided by this application includes an optimal path selection module 501 and a resource scheduling module 502.
  • the optimal path selection module 501 is configured to select the optimal path from the resource tag forest according to the weight of the path in the resource tag tree in the pre-constructed resource tag forest.
  • The resource tag forest includes at least one resource tag tree, and
  • each path of a resource tag tree includes, in order from the root node to a leaf node, a first node, a second node, and a third node.
  • the first node is the node corresponding to the physical resource corresponding to the tenant
  • the second node is the node corresponding to the user belonging to the tenant
  • the third node is a node corresponding to a virtual resource deployed on a physical resource and managed by a user.
  • the resource scheduling module 502 is used to schedule the task to the third node through which the optimal path passes.
  • The optimal path selection module 501 may be configured to traverse each path in each resource tag tree in the resource tag forest and select the path with the smallest weight as the optimal path, where the weight of a path is determined according to the resource occupancy rates of the nodes that the path passes through.
  • the weight of the path can be determined according to the weights of the first node and the third node that the path passes.
  • The weight of the first node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within a specified time period, and the weight of the third node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
  • The resource scheduling device may further include a resource tag forest construction module 503, configured to migrate the target user on the physical resource corresponding to a first node for which at least one of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate is greater than a preset threshold, together with
  • the virtual resources managed by the target user, to the physical resource corresponding to the first node with the smallest weight, and to reconstruct the resource tag forest according to the relationships among the tenants, physical resources, users, and virtual resources after the migration, where the target user is the user with the largest weight or the user with the second largest weight.
  • The weight of a user may be determined according to the weights of the nodes corresponding to the virtual resources managed by the user.
  • The resource tag forest construction module 503 may also be configured to clone, before a tidal phenomenon occurs in a virtual resource with tidal characteristics, a virtual resource identical to that virtual resource on the physical resource where it is located, where the cloned
  • virtual resource has a communication address different from that of the virtual resource with tidal characteristics and shares the tasks with it, and to reclaim the cloned virtual resource after the tidal phenomenon ends.
  • The resource tag forest construction module 503 may also be configured to construct the resource tag forest according to the relationships among tenants, physical resources, users, and virtual resources, and to assign corresponding weights to each first node and each third node in each resource tag tree in the resource tag forest.
  • FIG. 6 is a schematic diagram of the application of a resource scheduling device provided by this application.
  • the system is divided into three layers: cloud computing basic platform layer, global resource scheduling layer, and application environment layer.
  • the bottom layer is the cloud computing basic platform layer, including: physical machines and virtual machines on physical machines.
  • the global resource scheduling layer includes the resource scheduling device according to the present invention.
  • the application environment layer includes: performance modules, system applications, utility functions and application extensions.
  • the performance module is used to monitor system performance indicators, such as the number of input and output operations per second (IOPS, Input Output Operations Per Second), the number of concurrent connections, and so on.
  • System applications refer to applications at the global level of the system, such as network management applications.
  • the utility function refers to the overall evaluation and scoring function of the application usage.
  • Application extension refers to the provision of cache, load balancing and other components for application use.
  • During resource scheduling, the resource tag forest construction module first tags the resource nodes and user nodes under all tenants and computes weights for all resource nodes; the optimal path selection module then traverses all resource tag trees in the forest to find the path with the smallest weight, and the resource scheduling module schedules the task to the corresponding physical machine and virtual machine on that path. Finally, the resource tag forest construction module evaluates the resource tag forest as a whole; if it finds a physical machine resource node for which at least one of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate is greater than 80%, or a virtual machine with tidal characteristics, scheduling is performed according to the preset rules.
  • Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
  • As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • A communication medium usually contains computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Debugging And Monitoring (AREA)

Abstract

This application provides a resource scheduling method and apparatus, an electronic device, and a computer-readable storage medium. The resource scheduling method includes: selecting an optimal path from a pre-constructed resource tag forest according to the weights of the paths in the resource tag trees of the resource tag forest, where the resource tag forest includes at least one resource tag tree, each path of a resource tag tree includes, in order from the root node to a leaf node, a first node, a second node, and a third node, the first node is the node corresponding to a physical resource corresponding to a tenant, the second node is the node corresponding to a user belonging to the tenant, and the third node is the node corresponding to a virtual resource deployed on the physical resource and managed by the user; and scheduling a task to the third node through which the optimal path passes.

Description

Resource scheduling method and apparatus, electronic device, and computer-readable storage medium
Technical Field
The embodiments of this application relate to the field of computer technology, and in particular to a resource scheduling method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Resource scheduling is the process of allocating and temporarily transferring resources from resource providers to users. Optimizing resource scheduling management and dynamically reallocating resources makes more efficient use of the available resources in a data center and helps reduce energy consumption. Traditional resource scheduling methods cannot meet the differentiated needs of tenants and tasks in multi-tenant scenarios, and they are prone to jitter after scheduling completes, which leads to secondary scheduling.
Summary
An embodiment of this application provides a resource scheduling method, including: selecting an optimal path from a pre-constructed resource tag forest according to the weights of the paths in the resource tag trees of the resource tag forest, where the resource tag forest includes at least one resource tag tree, each path of a resource tag tree includes, in order from the root node to a leaf node, a first node, a second node, and a third node, the first node is the node corresponding to a physical resource corresponding to a tenant, the second node is the node corresponding to a user belonging to the tenant, and the third node is the node corresponding to a virtual resource deployed on the physical resource and managed by the user; and scheduling a task to the third node through which the optimal path passes.
An embodiment of this application provides an electronic device, including: at least one processor; and a memory storing at least one program which, when executed by the at least one processor, causes the at least one processor to implement the resource scheduling method according to this application.
An embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the resource scheduling method according to this application is implemented.
Brief Description of the Drawings
Figure 1 is a flowchart of a resource scheduling method provided by this application;
Figure 2 is a schematic diagram of the resource tag trees in the resource tag forest provided by this application;
Figure 3 is a schematic diagram of assigning corresponding weights to the first nodes and the third nodes in this application;
Figure 4 is a schematic diagram of selecting the optimal path in this application;
Figure 5 is a block diagram of a resource scheduling device provided by this application; and
Figure 6 is a schematic diagram of an application of a resource scheduling device provided by this application.
Detailed Description
To enable those skilled in the art to better understand the technical solutions of this application, the resource scheduling method and apparatus, electronic device, and computer-readable storage medium provided by this application are described in detail below with reference to the accompanying drawings.
Example embodiments will be described more fully below with reference to the accompanying drawings, but the example embodiments may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this application will be thorough and complete, and will fully convey the scope of this application to those skilled in the art.
The embodiments of this application and the features in the embodiments may be combined with one another in the absence of conflict.
As used herein, the term "and/or" includes any and all combinations of at least one of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit this application. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that when the terms "comprise" and/or "made of" are used in this specification, they specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of at least one other feature, integer, step, operation, element, component and/or group thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and this application, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Figure 1 is a flowchart of a resource scheduling method provided by this application.
Referring to Figure 1, the resource scheduling method provided by this application includes steps 100 and 101.
In step 100, an optimal path is selected from a pre-constructed resource tag forest according to the weights of the paths in the resource tag trees of the resource tag forest. The resource tag forest includes at least one resource tag tree, and each path of a resource tag tree includes, in order from the root node to a leaf node, a first node, a second node, and a third node, where the first node is the node corresponding to a physical resource corresponding to a tenant, the second node is the node corresponding to a user belonging to the tenant, and the third node is the node corresponding to a virtual resource deployed on the physical resource and managed by the user.
In step 101, a task is scheduled to the third node through which the optimal path passes.
In order to meet the differentiated needs of tasks, the unscheduled task with the highest priority may be scheduled to the third node through which the optimal path passes. The priority of a task may be determined according to the importance of the task.
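As a minimal illustration of this priority rule (the Task structure, the field names, and the priority values below are assumptions made for the example, not something specified in the application), the dispatcher can simply pick the unscheduled task with the largest priority value:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int          # larger value = more important (illustrative convention)
    scheduled: bool = False

pending = [Task("batch-report", 2), Task("online-query", 5), Task("log-compaction", 3)]

# Step 101 dispatches the unscheduled task with the highest priority first.
candidates = [t for t in pending if not t.scheduled]
next_task = max(candidates, key=lambda t: t.priority)
print(next_task.name)   # -> online-query
```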
Before step 100 (selecting the optimal path from the resource tag forest according to the weights of the paths in the resource tag trees of the pre-constructed resource tag forest), the method may further include: constructing the resource tag forest according to the relationships among tenants, physical resources, users, and virtual resources; and assigning corresponding weights to each first node and each third node in each resource tag tree in the resource tag forest.
According to the resource scheduling method provided by this application, resource scheduling is realized on the basis of a pre-constructed resource tag forest. Different tenants correspond to different physical resources, users, and virtual resources, and the resource tag forest reflects the relationships among tenants, physical resources, users, and virtual resources, that is, which physical resources a tenant corresponds to, which users are under the tenant, and which virtual resources deployed on the physical resources are managed by the users under the tenant. The differentiated needs of tenants in multi-tenant scenarios are thus satisfied, which improves the reasonableness of resource scheduling.
It should be noted that each path in a resource tag tree reflects the relationships among tenants, physical resources, users, and virtual resources. The relationships here refer to: which physical resources and which virtual resources a tenant corresponds to, i.e., which physical resources and virtual resources belong to the tenant; which users are under the tenant, i.e., which users belong to the tenant; which virtual resources are deployed on which physical resources; and which virtual resources are managed by which users.
The first nodes may be represented by nodes with a hierarchical structure according to the hierarchical distribution of the physical resources. For example, as shown in Figure 2, the physical resources have two levels, namely racks and physical machines, and a physical machine belongs to a rack; therefore, two levels of first nodes can be set, where the first node at the first level is the node corresponding to a rack and the first node at the second level is the node corresponding to a physical machine. A physical resource may refer to a rack, a physical machine, or the like, and a virtual resource may refer to a virtual machine or the like.
For example, as shown in Figure 2, the resource tag forest includes two resource tag trees, resource tag tree 1 and resource tag tree 2; resource tag tree 1 corresponds to tenant 1, and resource tag tree 2 corresponds to tenant 2.
The physical resources of tenant 1 include rack 1, physical machine 1, and physical machine 2. Virtual machine 1 and virtual machine 2 are deployed on physical machine 1, and virtual machine 3 is deployed on physical machine 2. Tenant 1 has user 1 and user 2; user 1 manages virtual machine 1 and virtual machine 2, and user 2 manages virtual machine 3.
The physical resources of tenant 2 include rack 2, physical machine 3, and physical machine 4. Virtual machine 4 is deployed on physical machine 3, and virtual machine 5 and virtual machine 6 are deployed on physical machine 4. Tenant 2 has user 3 and user 4; user 3 manages virtual machine 4, and user 4 manages virtual machine 5 and virtual machine 6.
The root node of resource tag tree 1 is the node corresponding to rack 1, and its leaf nodes include the nodes corresponding to virtual machines 1, 2, and 3. Its first nodes include the nodes corresponding to rack 1, physical machine 1, and physical machine 2; its second nodes include the nodes corresponding to user 1 and user 2; and its third nodes include the nodes corresponding to virtual machines 1, 2, and 3.
The root node of resource tag tree 2 is the node corresponding to rack 2, and its leaf nodes include the nodes corresponding to virtual machines 4, 5, and 6. Its first nodes include the nodes corresponding to rack 2, physical machine 3, and physical machine 4; its second nodes include the nodes corresponding to user 3 and user 4; and its third nodes include the nodes corresponding to virtual machines 4, 5, and 6.
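The following sketch shows one way such a resource tag tree could be represented in code; it only mirrors the structure of Figure 2, and the class and field names (Node, kind, children, and so on) are assumptions for illustration, not names used in the application:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    label: str                        # e.g. "rack 1", "physical machine 1", "user 1", "VM 1"
    kind: str                         # "physical", "user" or "virtual"
    weight: int = 0                   # filled in later from occupancy scores
    children: List["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

# Resource tag tree 1 for tenant 1: rack -> physical machines -> users -> virtual machines.
rack1 = Node("rack 1", "physical")
pm1 = rack1.add(Node("physical machine 1", "physical"))
pm2 = rack1.add(Node("physical machine 2", "physical"))
user1 = pm1.add(Node("user 1", "user"))
user2 = pm2.add(Node("user 2", "user"))
user1.add(Node("virtual machine 1", "virtual"))
user1.add(Node("virtual machine 2", "virtual"))
user2.add(Node("virtual machine 3", "virtual"))

# The forest is just the collection of per-tenant trees (tree 2 would be built the same way).
forest: Dict[str, Node] = {"tenant 1": rack1}
```

Keeping the user level between a physical machine and its virtual machines is what lets the later steps reason about per-user weights and per-user migrations.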
The weight of a path may be determined according to the weights of the first node and the third node that the path passes through. For example, the weight of a path may be the sum of the weights of the first node and the third node that the path passes through, or it may be a weighted average of those weights. Of course, other methods may also be used to compute the weight of a path; the specific computation method is not used to limit the protection scope of the embodiments of this application.
It should be noted that, when the nodes corresponding to physical resources are organized in multiple levels, the weight of the first node refers to the weight of the node at the lowest level. For example, in Figure 2, the node corresponding to physical machine 1 and the node corresponding to physical machine 2 are one level below the node corresponding to rack 1; in this case, the weight of the first node refers to the weight of the node corresponding to physical machine 1 and the weight of the node corresponding to physical machine 2.
For example, as shown in Figure 2, the weight of the path that includes the node corresponding to rack 1, the node corresponding to physical machine 1, the node corresponding to user 1, and the node corresponding to virtual machine 1 is the sum of the weight of the node corresponding to physical machine 1 and the weight of the node corresponding to virtual machine 1; the other paths follow by analogy.
The weight of a first node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within a specified time period, and the weight of a third node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
For example, the weight of the first node may be the sum of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within the specified time period, and the weight of the third node may be the sum of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
As another example, the weight of the first node may be a weighted average of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within the specified time period, and the weight of the third node may be a weighted average of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
As another example, the weight of the first node may be the sum of a first score corresponding to the CPU occupancy rate, a second score corresponding to the memory occupancy rate, and a third score corresponding to the storage occupancy rate of the first node within the specified time period, and the weight of the third node may be the sum of the first score corresponding to the CPU occupancy rate, the second score corresponding to the memory occupancy rate, and the third score corresponding to the storage occupancy rate of the third node within the specified time period.
As yet another example, the weight of the first node may be a weighted average of the first score corresponding to the CPU occupancy rate, the second score corresponding to the memory occupancy rate, and the third score corresponding to the storage occupancy rate of the first node within the specified time period, and the weight of the third node may be a weighted average of the first score corresponding to the CPU occupancy rate, the second score corresponding to the memory occupancy rate, and the third score corresponding to the storage occupancy rate of the third node within the specified time period.
Of course, other methods may also be used to compute the weights of the first node and the third node; the specific computation method is not used to limit the protection scope of the embodiments of this application.
For example, when the CPU occupancy rate is between 0% and 20%, the first score is 1; between 20% and 40%, the first score is 2; between 40% and 60%, the first score is 3; between 60% and 80%, the first score is 4; and between 80% and 100%, the first score is 5. Of course, the CPU occupancy rate and the first score may also have other correspondences, and the specific correspondence is not used to limit the protection scope of the embodiments of this application.
For example, when the memory occupancy rate is between 0% and 20%, the second score is 1; between 20% and 40%, the second score is 2; between 40% and 60%, the second score is 3; between 60% and 80%, the second score is 4; and between 80% and 100%, the second score is 5. Of course, the memory occupancy rate and the second score may also have other correspondences, and the specific correspondence is not used to limit the protection scope of the embodiments of this application.
For example, when the storage occupancy rate is between 0% and 20%, the third score is 1; between 20% and 40%, the third score is 2; between 40% and 60%, the third score is 3; between 60% and 80%, the third score is 4; and between 80% and 100%, the third score is 5. Of course, the storage occupancy rate and the third score may also have other correspondences, and the specific correspondence is not used to limit the protection scope of the embodiments of this application.
For example, as shown in Figure 3, if within the specified time period the first score corresponding to the CPU occupancy rate of the node corresponding to physical machine 1 is 1, the second score corresponding to its memory occupancy rate is 3, and the third score corresponding to its storage occupancy rate is 4, then the weight of the node corresponding to physical machine 1 is 8; the weights of the other nodes follow by analogy and are not repeated here.
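A small sketch of this score-based weighting follows; the 20-percentage-point bands match the example above, while the function names and the handling of exact band boundaries are assumptions made for illustration:

```python
def occupancy_score(occupancy: float) -> int:
    """Map an occupancy rate in [0, 1] to a score of 1..5 in 20% bands."""
    if not 0.0 <= occupancy <= 1.0:
        raise ValueError("occupancy must be between 0 and 1")
    # 0-20% -> 1, 20-40% -> 2, 40-60% -> 3, 60-80% -> 4, 80-100% -> 5
    return min(int(occupancy * 5) + 1, 5)

def node_weight(cpu: float, memory: float, storage: float) -> int:
    """Weight of a node as the sum of its CPU, memory and storage scores."""
    return occupancy_score(cpu) + occupancy_score(memory) + occupancy_score(storage)

# Physical machine 1 in the example: scores 1 + 3 + 4 = 8.
print(node_weight(cpu=0.15, memory=0.55, storage=0.70))   # -> 8
```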
The step of selecting the optimal path from the resource tag forest according to the weights of the paths in the resource tag trees of the pre-constructed resource tag forest (i.e., step 100) may include: traversing each path in each resource tag tree in the resource tag forest and selecting the path with the smallest weight as the optimal path, where the weight of a path is determined according to the resource occupancy rates of the nodes that the path passes through.
For example, as shown in Figure 4, the path drawn with a dashed line, which includes the node corresponding to rack 2, the node corresponding to physical machine 4, the node corresponding to user 4, and the node corresponding to virtual machine 5, has a weight of 5 + 6 = 11, which is the smallest among all paths, so the task can be scheduled onto this path.
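The selection itself can be sketched as a traversal that scores every root-to-leaf path and keeps the minimum. Apart from the weight 5 for physical machine 4, the weight 6 for virtual machine 5, and the weight 8 for physical machine 1 taken from the examples above, the numbers below are made up for illustration:

```python
from typing import Dict, List, Tuple

Path = Tuple[str, str, str, str]      # (rack, physical machine, user, virtual machine)

# Node weights already computed from occupancy scores.
weights: Dict[str, int] = {
    "physical machine 1": 8, "physical machine 2": 9,
    "physical machine 3": 7, "physical machine 4": 5,
    "VM 1": 7, "VM 2": 9, "VM 3": 8, "VM 4": 9, "VM 5": 6, "VM 6": 10,
}

paths: List[Path] = [
    ("rack 1", "physical machine 1", "user 1", "VM 1"),
    ("rack 1", "physical machine 1", "user 1", "VM 2"),
    ("rack 1", "physical machine 2", "user 2", "VM 3"),
    ("rack 2", "physical machine 3", "user 3", "VM 4"),
    ("rack 2", "physical machine 4", "user 4", "VM 5"),
    ("rack 2", "physical machine 4", "user 4", "VM 6"),
]

def path_weight(path: Path) -> int:
    # Weight of a path = weight of its lowest-level physical node + weight of its virtual node.
    _, physical_machine, _, virtual_machine = path
    return weights[physical_machine] + weights[virtual_machine]

optimal = min(paths, key=path_weight)
print(optimal, path_weight(optimal))  # -> the rack 2 / physical machine 4 / user 4 / VM 5 path, 11
```

Because every root-to-leaf path ends at a virtual-resource node, taking the minimum over path weights picks a lightly loaded physical machine and a lightly loaded virtual machine in a single pass.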
Of course, other methods may also be used to select the optimal path; the specific selection method is not used to limit the protection scope of the embodiments of this application. What this application emphasizes is that resource scheduling is performed on the basis of the resource tag forest: the resource tag forest reflects the relationships among tenants, physical resources, users, and virtual resources, and resource scheduling is realized on the basis of tenant differentiation.
According to the resource scheduling method of this application, the weight of a path is determined based on the resource occupancy rates of the nodes that the path passes through, and the path with the smallest weight is selected as the optimal path. The smallest path weight means that the resource occupancy rates of the nodes that the path passes through are the lowest. Selecting the path with the smallest weight as the optimal path and then scheduling the task to the third node through which the optimal path passes therefore improves resource utilization.
In practice, the following situation may occur: the resource occupancy rate of a physical resource is very high while the resource occupancy rates of the virtual resources on that physical resource are very low. When resource scheduling is performed in this situation, secondary scheduling may be triggered because the physical resource cannot meet the service requirements. To avoid secondary scheduling, the resource scheduling method of this application may further include: migrating the target user on the physical resource corresponding to a first node for which at least one of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate is greater than a preset threshold, together with the virtual resources managed by the target user, to the physical resource corresponding to the first node with the smallest weight, and reconstructing the resource tag forest according to the relationships among the tenants, physical resources, users, and virtual resources after the migration, where the target user is the user with the largest weight or the user with the second largest weight.
The preset threshold may be selected according to the actual situation; for example, the preset threshold may be set to 80%, i.e., at least one of the CPU occupancy rate, the memory occupancy rate, and the storage occupancy rate exceeds 80%.
The user with the largest weight may be selected as the target user first; if the physical resource corresponding to the first node with the smallest weight is not sufficient to support the virtual resources managed by the user with the largest weight, the user with the second largest weight may be selected as the target user.
The weight of a user may be determined according to the weights of the nodes corresponding to the virtual resources managed by the user.
For example, the weight of a user may be the sum of the weights of the nodes corresponding to the virtual resources managed by the user, or it may be a weighted average of those weights. Of course, other methods may also be used to compute the weight of a user; the specific computation method is not used to limit the protection scope of the embodiments of this application.
According to the resource scheduling method of this application, when the weight of a first node is greater than the preset threshold, the target user on the physical resource corresponding to that first node and the virtual resources managed by the target user are migrated to the physical resource corresponding to the first node with the smallest weight. This lowers the resource occupancy rate of the first node whose weight exceeds the preset threshold and balances the resource occupancy rates across different first nodes, so that resource scheduling does not trigger secondary scheduling because a physical resource cannot meet the service requirements, which reduces or avoids secondary scheduling of resources.
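A sketch of choosing the target user for such a migration is given below; modelling "enough room on the destination" as a single capacity number, and the concrete weights, are simplifying assumptions for the example:

```python
from typing import Dict, List, Optional

def pick_target_user(vm_weights_by_user: Dict[str, List[int]],
                     destination_capacity: int) -> Optional[str]:
    """Choose which user to migrate off an overloaded physical node.

    A user's weight is the sum of the weights of the virtual machines the user
    manages.  The heaviest user is preferred; if the least-loaded destination
    cannot host that user's virtual machines, fall back to the second heaviest.
    """
    ranked = sorted(vm_weights_by_user.items(),
                    key=lambda item: sum(item[1]), reverse=True)
    for user, vm_weights in ranked[:2]:
        if sum(vm_weights) <= destination_capacity:
            return user
    return None

# Tenant 1 example: user 1 manages VM 1 and VM 2, user 2 manages VM 3.
print(pick_target_user({"user 1": [7, 9], "user 2": [8]}, destination_capacity=10))  # -> user 2
```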
In practice, the following situation may also occur: the resource occupancy rate of a physical resource is not high, and neither are the resource occupancy rates of the virtual resources on it, but the load of a virtual resource has tidal characteristics and a tidal phenomenon occurs at a certain point in time, i.e., the resource occupancy rate of the virtual resource may become very high. When the tidal phenomenon occurs, the virtual resource may be unable to meet the service requirements, which triggers secondary scheduling of resources. To avoid secondary scheduling, the resource scheduling method of this application may further include: before the tidal phenomenon occurs, cloning, on the physical resource where the virtual resource with tidal characteristics is located, a virtual resource identical to the virtual resource with tidal characteristics, where the cloned virtual resource has a communication address different from that of the virtual resource with tidal characteristics and the cloned virtual resource and the virtual resource with tidal characteristics jointly handle the tasks; and after the tidal phenomenon of the virtual resource with tidal characteristics ends, reclaiming the cloned virtual resource.
The communication address may include at least one of the following: an Internet Protocol (IP) address and a Media Access Control (MAC) address.
The time period during which the tidal phenomenon of a virtual resource occurs can be obtained by analyzing historical operation and maintenance data, and the peak amount of physical resources required by the virtual resource in the tidal scenario can be calculated.
According to the resource scheduling method of this application, for a virtual resource with tidal characteristics, an identical virtual resource is cloned before the tidal phenomenon occurs, and the cloned virtual resource and the original virtual resource jointly handle the tasks scheduled to the virtual resource with tidal characteristics. This avoids the secondary scheduling that would otherwise be triggered when the virtual resource cannot meet the service requirements during the tidal phenomenon, i.e., secondary scheduling of resources is avoided. After the tidal phenomenon ends, the cloned virtual resource is reclaimed, which avoids wasting resources.
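A rough sketch of the clone-before-peak / reclaim-after-peak idea follows; the tidal window, the addresses, and the in-memory bookkeeping are all assumptions made for illustration, since a real system would derive the window from historical operation and maintenance data and drive the cloning through the virtualization platform's own interfaces:

```python
from dataclasses import dataclass
from datetime import datetime, time
from typing import List

@dataclass
class VirtualMachine:
    name: str
    ip: str
    mac: str

@dataclass
class TidalWindow:
    start: time        # window derived from historical operation and maintenance data
    end: time

def in_window(now: datetime, window: TidalWindow) -> bool:
    return window.start <= now.time() <= window.end

def clone_for_tide(vm: VirtualMachine, new_ip: str, new_mac: str) -> VirtualMachine:
    # The clone lives on the same physical resource but uses its own IP and MAC
    # addresses, and shares the original machine's tasks during the peak.
    return VirtualMachine(name=vm.name + "-clone", ip=new_ip, mac=new_mac)

vm5 = VirtualMachine("VM 5", "10.0.0.5", "52:54:00:00:00:05")
evening_peak = TidalWindow(start=time(19, 0), end=time(23, 0))
clones: List[VirtualMachine] = []

now = datetime.now()
if in_window(now, evening_peak) and not clones:
    clones.append(clone_for_tide(vm5, "10.0.0.55", "52:54:00:00:00:55"))
elif not in_window(now, evening_peak) and clones:
    clones.clear()     # reclaim the clone once the tidal peak has passed
```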
This application provides an electronic device, including: at least one processor; and a memory storing at least one program which, when executed by the at least one processor, causes the at least one processor to implement any one of the resource scheduling methods according to this application.
The processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
The processor and the memory may be connected to each other through a bus and then connected to the other components of the electronic device.
This application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, any one of the resource scheduling methods according to this application is implemented.
Figure 5 is a block diagram of a resource scheduling device provided by this application.
Referring to Figure 5, the resource scheduling device provided by this application includes an optimal path selection module 501 and a resource scheduling module 502.
The optimal path selection module 501 is configured to select the optimal path from a pre-constructed resource tag forest according to the weights of the paths in the resource tag trees of the resource tag forest, where the resource tag forest includes at least one resource tag tree, each path of a resource tag tree includes, in order from the root node to a leaf node, a first node, a second node, and a third node, the first node is the node corresponding to a physical resource corresponding to a tenant, the second node is the node corresponding to a user belonging to the tenant, and the third node is the node corresponding to a virtual resource deployed on the physical resource and managed by the user.
The resource scheduling module 502 is configured to schedule a task to the third node through which the optimal path passes.
The optimal path selection module 501 may be configured to: traverse each path in each resource tag tree in the resource tag forest and select the path with the smallest weight as the optimal path, where the weight of a path is determined according to the resource occupancy rates of the nodes that the path passes through.
The weight of a path may be determined according to the weights of the first node and the third node that the path passes through.
The weight of a first node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the first node within a specified time period, and the weight of a third node may be determined according to the CPU occupancy rate, memory occupancy rate, and storage occupancy rate of the third node within the specified time period.
The resource scheduling device may further include a resource tag forest construction module 503, configured to migrate the target user on the physical resource corresponding to a first node for which at least one of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate is greater than a preset threshold, together with the virtual resources managed by the target user, to the physical resource corresponding to the first node with the smallest weight, and to reconstruct the resource tag forest according to the relationships among the tenants, physical resources, users, and virtual resources after the migration, where the target user is the user with the largest weight or the user with the second largest weight.
The weight of a user may be determined according to the weights of the nodes corresponding to the virtual resources managed by the user.
The resource tag forest construction module 503 may also be configured to clone, before a tidal phenomenon occurs in a virtual resource with tidal characteristics, a virtual resource identical to the virtual resource with tidal characteristics on the physical resource where it is located, where the cloned virtual resource has a communication address different from that of the virtual resource with tidal characteristics and the two jointly handle the tasks, and to reclaim the cloned virtual resource after the tidal phenomenon ends.
The resource tag forest construction module 503 may also be configured to construct the resource tag forest according to the relationships among tenants, physical resources, users, and virtual resources, and to assign corresponding weights to each first node and each third node in each resource tag tree in the resource tag forest.
Figure 6 is a schematic diagram of an application of a resource scheduling device provided by this application.
As shown in Figure 6, the system is divided into three layers: a cloud computing base platform layer, a global resource scheduling layer, and an application environment layer. The bottom layer is the cloud computing base platform layer, which includes the physical machines and the virtual machines on the physical machines. The global resource scheduling layer includes the resource scheduling device according to the present invention. The application environment layer includes performance modules, system applications, utility functions, and application extensions.
The performance module is used to monitor system performance indicators, such as the number of input/output operations per second (IOPS) and the number of concurrent connections.
System applications refer to applications at the global level of the system, such as network management applications.
The utility function refers to a function for the overall evaluation and scoring of application usage.
Application extension refers to providing components such as caching and load balancing for applications to use.
During resource scheduling, the resource tag forest construction module first tags the resource nodes and user nodes under all tenants and computes weights for all resource nodes; the optimal path selection module then traverses all resource tag trees in the forest to find the path with the smallest weight, and the resource scheduling module schedules the task to the corresponding physical machine and virtual machine on that path. Finally, the resource tag forest construction module evaluates the resource tag forest as a whole; if it finds a physical machine resource node for which at least one of the CPU occupancy rate, memory occupancy rate, and storage occupancy rate is greater than 80%, or a virtual machine with tidal characteristics, scheduling is performed according to the preset rules.
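Stitched together, the three modules could interact roughly as below; the stub functions stand in for the forest construction and monitoring described above, the numbers are illustrative only, and in a real system the weights would be refreshed between placements and the overload and tidal checks would run after each pass:

```python
from typing import Dict, List, Tuple

Path = Tuple[str, str, str, str]          # (rack, physical machine, user, virtual machine)

def build_forest() -> List[Path]:
    # Stand-in for the resource tag forest construction module; a real system
    # would derive the paths from the tenant / resource inventory.
    return [("rack 1", "physical machine 1", "user 1", "VM 1"),
            ("rack 2", "physical machine 4", "user 4", "VM 5")]

def compute_weights() -> Dict[str, int]:
    # Stand-in for weight assignment from monitored CPU / memory / storage occupancy.
    return {"physical machine 1": 8, "VM 1": 7, "physical machine 4": 5, "VM 5": 6}

def schedule(tasks: List[str]) -> Dict[str, Path]:
    paths, weights = build_forest(), compute_weights()
    placements: Dict[str, Path] = {}
    for task in tasks:                    # assumed already sorted by priority
        best = min(paths, key=lambda p: weights[p[1]] + weights[p[3]])
        placements[task] = best           # dispatch to the VM at the end of the path
    return placements

print(schedule(["online-query", "batch-report"]))
```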
Those of ordinary skill in the art will understand that all or some of the steps in the methods disclosed above and the functional modules/units in the systems and devices may be implemented as software, firmware, hardware, or an appropriate combination thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media usually contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as will be apparent to those skilled in the art, features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics, and/or elements described in connection with other embodiments, unless expressly stated otherwise. Accordingly, it will be understood by those skilled in the art that various changes in form and details may be made without departing from the scope of this application as set forth in the appended claims.

Claims (10)

  1. A resource scheduling method, comprising:
    selecting an optimal path from a pre-constructed resource tag forest according to weights of paths in resource tag trees of the resource tag forest, wherein the resource tag forest comprises at least one resource tag tree, each path of the resource tag tree comprises, in order from a root node to a leaf node, a first node, a second node, and a third node, the first node is a node corresponding to a physical resource corresponding to a tenant, the second node is a node corresponding to a user belonging to the tenant, and the third node is a node corresponding to a virtual resource deployed on the physical resource and managed by the user; and
    scheduling a task to the third node through which the optimal path passes.
  2. The method according to claim 1, wherein the step of selecting the optimal path from the resource tag forest according to the weights of the paths in the resource tag trees of the pre-constructed resource tag forest comprises:
    traversing each path in each resource tag tree in the resource tag forest, and selecting the path with the smallest weight as the optimal path,
    wherein the weight of a path is determined according to resource occupancy rates of the nodes that the path passes through.
  3. The method according to claim 1, wherein the weight of a path is determined according to the weights of the first node and the third node that the path passes through.
  4. The method according to claim 3, wherein the weight of the first node is determined according to a CPU occupancy rate, a memory occupancy rate, and a storage occupancy rate of the first node within a specified time period, and
    the weight of the third node is determined according to a CPU occupancy rate, a memory occupancy rate, and a storage occupancy rate of the third node within the specified time period.
  5. The method according to any one of claims 1-4, further comprising:
    migrating a target user on the physical resource corresponding to a first node for which at least one of the CPU occupancy rate, the memory occupancy rate, and the storage occupancy rate is greater than a preset threshold, together with the virtual resources managed by the target user, to the physical resource corresponding to the first node with the smallest weight, and reconstructing the resource tag forest according to the relationships among the tenants, physical resources, users, and virtual resources after the migration,
    wherein the target user is the user with the largest weight, or the user with the second largest weight.
  6. The method according to claim 5, wherein the weight of a user is determined according to the weights of the nodes corresponding to the virtual resources managed by the user.
  7. The method according to any one of claims 1-4, further comprising:
    before a tidal phenomenon occurs in a virtual resource with tidal characteristics, cloning, on the physical resource where the virtual resource with tidal characteristics is located, a virtual resource identical to the virtual resource with tidal characteristics, wherein the cloned virtual resource has a communication address different from that of the virtual resource with tidal characteristics, and the cloned virtual resource and the virtual resource with tidal characteristics jointly undertake the task; and
    after the tidal phenomenon of the virtual resource with tidal characteristics ends, reclaiming the cloned virtual resource.
  8. The method according to any one of claims 1-4, wherein before the step of selecting the optimal path from the resource tag forest according to the weights of the paths in the resource tag trees of the pre-constructed resource tag forest, the method further comprises:
    constructing the resource tag forest according to the relationships among tenants, physical resources, users, and virtual resources; and
    assigning corresponding weights to each first node and each third node in each resource tag tree in the resource tag forest.
  9. An electronic device, comprising:
    at least one processor; and
    a memory storing at least one program which, when executed by the at least one processor, causes the at least one processor to implement the resource scheduling method according to any one of claims 1-8.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the resource scheduling method according to any one of claims 1-8.
PCT/CN2021/101501 2020-06-23 2021-06-22 Resource scheduling method and apparatus, electronic device, and computer-readable storage medium WO2021259246A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/012,701 US20230267015A1 (en) 2020-06-23 2021-06-22 Resource scheduling method and apparatus, electronic device and computer readable storage medium
EP21829551.7A EP4170491A4 (en) 2020-06-23 2021-06-22 RESOURCE PLANNING METHOD AND APPARATUS, ELECTRONIC DEVICE AND COMPUTER READABLE STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010584041.5 2020-06-23
CN202010584041.5A CN113835823A (zh) 2020-06-23 2020-06-23 资源调度方法和装置、电子设备、计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021259246A1 true WO2021259246A1 (zh) 2021-12-30

Family

ID=78964210

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101501 WO2021259246A1 (zh) 2020-06-23 2021-06-22 资源调度方法和装置、电子设备、计算机可读存储介质

Country Status (4)

Country Link
US (1) US20230267015A1 (zh)
EP (1) EP4170491A4 (zh)
CN (1) CN113835823A (zh)
WO (1) WO2021259246A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174582B (zh) * 2022-09-06 2022-11-18 中国中金财富证券有限公司 数据调度方法及相关装置
CN117539594A (zh) * 2024-01-10 2024-02-09 中国电子科技集团公司信息科学研究院 一种面向像素流程序并发渲染的负载均衡方法
CN117726149B (zh) * 2024-02-08 2024-05-03 天津大学 一种基于人工智能的智能制造资源配置方法和系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8381220B2 (en) * 2007-10-31 2013-02-19 International Business Machines Corporation Job scheduling and distribution on a partitioned compute tree based on job priority and network utilization
CN103797463A (zh) * 2011-07-27 2014-05-14 阿尔卡特朗讯公司 用于在云环境中指派虚拟资源的方法和设备
US20180167487A1 (en) * 2016-12-13 2018-06-14 Red Hat, Inc. Container deployment scheduling with constant time rejection request filtering
CN110612705A (zh) * 2017-11-08 2019-12-24 华为技术有限公司 一种无服务器架构下业务部署的方法和函数管理平台
CN111274035A (zh) * 2020-01-20 2020-06-12 长沙市源本信息科技有限公司 边缘计算环境下的资源调度方法、装置和计算机设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10412012B2 (en) * 2015-09-22 2019-09-10 Arris Enterprises Llc Intelligent, load adaptive, and self optimizing master node selection in an extended bridge
US10491501B2 (en) * 2016-02-08 2019-11-26 Ciena Corporation Traffic-adaptive network control systems and methods
CN109039954B (zh) * 2018-07-25 2021-03-23 广东石油化工学院 多租户容器云平台虚拟计算资源自适应调度方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8381220B2 (en) * 2007-10-31 2013-02-19 International Business Machines Corporation Job scheduling and distribution on a partitioned compute tree based on job priority and network utilization
CN103797463A (zh) * 2011-07-27 2014-05-14 阿尔卡特朗讯公司 用于在云环境中指派虚拟资源的方法和设备
US20180167487A1 (en) * 2016-12-13 2018-06-14 Red Hat, Inc. Container deployment scheduling with constant time rejection request filtering
CN110612705A (zh) * 2017-11-08 2019-12-24 华为技术有限公司 一种无服务器架构下业务部署的方法和函数管理平台
CN111274035A (zh) * 2020-01-20 2020-06-12 长沙市源本信息科技有限公司 边缘计算环境下的资源调度方法、装置和计算机设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4170491A4 *

Also Published As

Publication number Publication date
EP4170491A4 (en) 2024-03-27
EP4170491A1 (en) 2023-04-26
US20230267015A1 (en) 2023-08-24
CN113835823A (zh) 2021-12-24

Similar Documents

Publication Publication Date Title
WO2021259246A1 (zh) 资源调度方法和装置、电子设备、计算机可读存储介质
US10657106B2 (en) Method, computing device, and distributed file system for placement of file blocks within a distributed file system
US10412021B2 (en) Optimizing placement of virtual machines
US10623481B2 (en) Balancing resources in distributed computing environments
CN106233276B (zh) 网络可访问块存储装置的协调准入控制
US10341208B2 (en) File block placement in a distributed network
CN108182105B (zh) 基于Docker容器技术的局部动态迁移方法及控制系统
Liu et al. An economical and SLO-guaranteed cloud storage service across multiple cloud service providers
CN104901989B (zh) 一种现场服务提供系统及方法
Gao et al. An energy-aware ant colony algorithm for network-aware virtual machine placement in cloud computing
CN105159775A (zh) 基于负载均衡器的云计算数据中心的管理系统和管理方法
CN108897606B (zh) 多租户容器云平台虚拟网络资源自适应调度方法及系统
US9817698B2 (en) Scheduling execution requests to allow partial results
CN108376103A (zh) 一种云平台的资源平衡控制方法及服务器
US20220300323A1 (en) Job Scheduling Method and Job Scheduling Apparatus
CN102932271A (zh) 负载均衡的实现方法和装置
CN111611076B (zh) 任务部署约束下移动边缘计算共享资源公平分配方法
CN106164888A (zh) 用于最小化工作负荷空闲时间和工作负荷间干扰的网络和存储i/o请求的排序方案
CN110597598B (zh) 一种云环境中的虚拟机迁移的控制方法
CN104702654B (zh) 基于视频云存储系统的存储与提取性能平衡的方法与装置
CN115658230A (zh) 一种云数据中心高效能容器编排方法及系统
WO2017045640A1 (zh) 一种数据中心内关联流的带宽调度方法及装置
Jung et al. Ostro: Scalable placement optimization of complex application topologies in large-scale data centers
Guo Ant colony optimization computing resource allocation algorithm based on cloud computing environment
CN110430236A (zh) 一种部署业务的方法以及调度装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21829551

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021829551

Country of ref document: EP

Effective date: 20230118

NENP Non-entry into the national phase

Ref country code: DE