CN116107689A - Cloud resource scheduling method, device, equipment and medium based on competition mechanism - Google Patents

Cloud resource scheduling method, device, equipment and medium based on competition mechanism

Info

Publication number
CN116107689A
CN116107689A
Authority
CN
China
Prior art keywords
task
competitive
inventory
cloud resource
resource scheduling
Prior art date
Legal status
Pending
Application number
CN202211265572.3A
Other languages
Chinese (zh)
Inventor
臧云峰
安柯
郭瑱
严锦洲
Current Assignee
Shanghai Yovole Computer Network Co ltd
Original Assignee
Shanghai Yovole Computer Network Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yovole Computer Network Co ltd
Priority to CN202211265572.3A
Publication of CN116107689A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Control By Computers (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a cloud resource scheduling method, device, equipment and medium based on a competition mechanism, applied to the technical field of cloud computing and comprising the following steps. Step 1: refresh the task states of the created competitive tasks every first preset time. Step 2: every second preset time, acquire the competitive tasks whose task state is in progress, and find, among them, a plurality of competitive tasks meeting the processing conditions. Step 3: obtain a target task according to the real-time weights of the plurality of competitive tasks. Step 4: perform cloud resource scheduling according to the identification information and inventory requirement of the target task. Performing cloud resource scheduling according to the real-time weights of competitive tasks enables intelligent scheduling and effectively avoids wasting system resources; tracking the competitive tasks whose task state is in progress enables more accurate scheduling.

Description

Cloud resource scheduling method, device, equipment and medium based on competition mechanism
Technical Field
The application relates to the technical field of cloud computing, in particular to a cloud resource scheduling method, device, equipment and medium based on a competition mechanism.
Background
Resource scheduling is one of the key technologies in cloud computing and can, to a great extent, determine the success or failure of a cloud computing system; creating a virtual machine or container on a cloud platform is essentially a process of cloud resource allocation or scheduling.
In the prior art, the usage and quota of system resources, such as the central processing unit (Central Processing Unit, CPU), memory and disk, are generally controlled directly through control groups, thereby realizing resource scheduling such as the creation of containers. For example, when a Docker container is started, resource control is achieved by setting a resource-usage weight and an upper limit. The Docker container monitors its own resource usage so that the usage stays below the resource quota. Multiple Docker containers run on one host, and each container reaches a dynamic balance under the control of its control group. However, because the quota is set in advance in an over-provisioned manner and cannot be modified at runtime, system resources are easily wasted.
In addition, in the prior art, only the number of running cloud resources is counted during cloud resource scheduling. However, creating a virtual machine or container takes time; for example, when the next virtual machine is being started, the previous one may still be in its pre-start verification. Therefore, when counting the virtual machines running on a cloud host, only the number of already-started virtual machines can be obtained, while the number of virtual machines still being started cannot, which easily leads to confusion in the startup results.
Therefore, a new technical solution for resource scheduling is needed.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a cloud resource scheduling method, apparatus, device, and medium based on a competition mechanism, so as to solve the technical problem in the prior art that resources are wasted because the quota cannot be changed during cloud resource scheduling by means of control groups, and the technical problem of startup confusion caused by counting only the number of running cloud resources.
The embodiment of the specification provides the following technical scheme:
the embodiment of the specification provides a cloud resource scheduling method based on a competition mechanism, which comprises the following steps:
step 1: refreshing task states of the created competitive tasks at intervals of a first preset time;
step 2: acquiring competitive tasks whose task state is in progress every second preset time, and finding, among these tasks, a plurality of competitive tasks meeting the processing conditions;
step 3: obtaining a target task according to the real-time weights of the multiple competing tasks;
step 4: and carrying out cloud resource scheduling according to the identification information and the inventory requirement of the target task.
Preferably, step 3 comprises:
step 31: for each competitive task, obtaining quantized values of a plurality of influence factors of the competitive task and a weighting value corresponding to each influence factor;
step 32: determining the real-time weight of each competitive task according to the quantized value and the corresponding weighting value of each influence factor;
step 33: and obtaining the real-time weights of the plurality of competing tasks according to the real-time weights of each competing task.
Preferably, step 3 further comprises:
step 34: and obtaining a preset number of competitive tasks as target tasks according to the real-time weights of the plurality of competitive tasks.
Preferably, the influencing factors include at least one of: user attribute information corresponding to the competitive task, priority information of an area corresponding to the competitive task, priority information of hardware configuration corresponding to the competitive task, and emergency degree information of the competitive task.
Preferably, the user attribute information includes a competitiveness value of the user, and the method further includes:
step 5: acquiring feature information of users and adding a label to each user to obtain sample data, the label marking the user's competitiveness value;
step 6: inputting the sample data into a neural network model for training to obtain a competitiveness evaluation model;
step 7: obtaining the competitiveness value according to the competitiveness evaluation model.
Preferably, the identification information includes: region identification information and hardware identification information;
step 4 comprises:
step 41: determining whether the inventory requirement is met according to the region identification information and the hardware identification information corresponding to the target task;
step 42: if not, cloud resource scheduling is not performed;
step 43: if yes, cloud resource scheduling is carried out according to the real-time weight corresponding to the target task, and resource records in the database are updated after scheduling is started and after scheduling is completed.
Preferably, step 4 further comprises:
step 44: and after the target task completes cloud resource scheduling, calling a cleaning interface to release cloud resources, and updating resource records in a database.
Preferably, the real-time weight is stored in a database and is modified according to actual conditions before the competitive task is created and before the competitive task is scheduled;
or alternatively,
the real-time weight is fixed when the competitive task is created and remains unchanged thereafter.
Preferably, step 41 comprises:
step 411: obtaining inventory information according to the query parameters, wherein the query parameters comprise area identification information, hardware identification information and competition type identification information, and the inventory information comprises remaining inventory and floating inventory;
step 412: and determining whether the inventory requirement is met according to the inventory information.
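The inventory check described in steps 411 and 412 can be sketched as follows. The record layout and all field names (`remaining`, `floating`, and a composite key of region, hardware and competition-type identifiers) are illustrative assumptions; the text only names the query parameters and the two inventory components:

```python
def meets_inventory(query, inventory_db):
    """Look up inventory by (region, hardware, competition type) and check it
    against the demand. Field names are assumptions for illustration."""
    key = (query["region_id"], query["hardware_id"], query["competition_type_id"])
    record = inventory_db.get(key)
    if record is None:
        # No inventory record for this combination: requirement cannot be met.
        return False
    # Both remaining and floating inventory count toward satisfying the demand.
    return record["remaining"] + record["floating"] >= query["demand"]
```

Whether floating inventory should be added to or subtracted from the remaining inventory is not specified in the text; the sum above is one plausible reading.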
Preferably, the area identification information is used for identifying areas, each area is provided with a plurality of nodes, and each area comprises a plurality of sub-areas;
step 4, further comprising:
step 45: if the target task designates a sub-region, acquiring inventory information of the sub-region according to sub-region identification information and hardware identification information corresponding to the target task, if the inventory requirement is not met, inquiring inventory information of other sub-regions belonging to the same region, and if the inventory requirement is met, performing cloud resource scheduling in the corresponding other sub-regions;
step 46: and if the target task designates a plurality of sub-areas, carrying out cloud resource scheduling according to the matching degree of the target task and each sub-area.
Preferably, step 46 comprises:
step 461: adding a label to each sub-region;
step 462: matching labels of a plurality of subareas corresponding to the area identification information of the target task with user information of the target task to obtain matching degree;
step 463: sequencing the corresponding subareas according to the matching degree from high to low, and inquiring the inventory information of the subareas according to the sequencing;
step 464: and carrying out cloud resource scheduling according to the inventory information and the inventory requirements.
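The sub-region matching of steps 461 to 464 can be sketched as follows. The tag-overlap matching score, the field names, and the dict-based records are assumptions for illustration; the text only states that sub-region labels are matched against the task's user information and sub-regions are tried in descending order of matching degree:

```python
def schedule_by_subregion_match(task, subregions, inventory):
    """Rank sub-regions by label overlap with the task's user info, then
    schedule in the first one whose inventory covers the demand."""
    def match_degree(sub):
        # Matching degree: number of sub-region labels shared with user info.
        return len(set(sub["labels"]) & set(task["user_info"]))

    # Step 463: sort sub-regions from highest to lowest matching degree.
    for sub in sorted(subregions, key=match_degree, reverse=True):
        # Step 464: schedule in the first sub-region meeting the inventory requirement.
        if inventory.get(sub["id"], 0) >= task["demand"]:
            return sub["id"]
    return None  # no sub-region meets the inventory requirement
```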
The embodiment of the specification also provides a cloud resource scheduling device based on a competition mechanism, which comprises:
refresh module M1: refreshing task states of the created competitive tasks at intervals of a first preset time;
the searching module M2: acquiring competitive tasks whose task state is in progress every second preset time, and finding, among these tasks, a plurality of competitive tasks meeting the processing conditions;
processing module M3: obtaining a target task according to the real-time weights of the multiple competing tasks;
scheduling module M4: and carrying out cloud resource scheduling according to the identification information and the inventory requirement of the target task.
Preferably, the processing module M3 comprises:
processing submodule M31: for each competitive task, obtaining quantized values of a plurality of influence factors of the competitive task and a weighting value corresponding to each influence factor;
processing submodule M32: determining the real-time weight of each competitive task according to the quantized value and the corresponding weighted value of each influencing factor;
processing submodule M33: and obtaining the real-time weights of the plurality of competing tasks according to the real-time weights of each competing task.
Preferably, the processing module M3 further comprises:
Processing submodule M34: and obtaining a preset number of competitive tasks as target tasks according to the real-time weights of the plurality of competitive tasks.
Preferably, the influencing factors include at least one of: user attribute information corresponding to the competitive task, priority information of an area corresponding to the competitive task, priority information of hardware configuration corresponding to the competitive task, and emergency degree information of the competitive task.
Preferably, the user attribute information includes a competitiveness value of the user, and the device further includes:
the acquisition module M5: acquiring feature information of users and adding a label to each user to obtain sample data, the label marking the user's competitiveness value;
training module M6: inputting the sample data into a neural network model for training to obtain a competitiveness evaluation model;
evaluation module M7: obtaining the competitiveness value according to the competitiveness evaluation model.
Preferably, the identification information includes: region identification information and hardware identification information;
the scheduling module M4 includes:
scheduling sub-module M41: determining whether the inventory requirement is met according to the region identification information and the hardware identification information corresponding to the target task;
scheduling sub-module M42: if not, cloud resource scheduling is not performed;
Scheduling sub-module M43: if yes, cloud resource scheduling is carried out according to the real-time weight corresponding to the target task, and resource records in the database are updated after scheduling is started and after scheduling is completed.
Preferably, the scheduling module M4 further comprises:
scheduling sub-module M44: and after the target task completes cloud resource scheduling, calling a cleaning interface to release cloud resources, and updating resource records in a database.
Preferably, the real-time weight is stored in a database and is modified according to actual conditions before the competitive task is created and before the competitive task is scheduled;
or alternatively,
the real-time weight is fixed when the competitive task is created and remains unchanged thereafter.
Preferably, the scheduling sub-module M41 comprises:
unit D411: obtaining inventory information according to the query parameters, wherein the query parameters comprise area identification information, hardware identification information and competition type identification information, and the inventory information comprises remaining inventory and floating inventory;
unit D412: and determining whether the inventory requirement is met according to the inventory information.
Preferably, the area identification information is used for identifying areas, each area is provided with a plurality of nodes, and each area comprises a plurality of sub-areas;
The scheduling module M4 further includes:
scheduling sub-module M45: if the target task designates a sub-region, acquiring inventory information of the sub-region according to sub-region identification information and hardware identification information corresponding to the target task, if the inventory requirement is not met, inquiring inventory information of other sub-regions belonging to the same region, and if the inventory requirement is met, performing cloud resource scheduling in the corresponding other sub-regions;
scheduling sub-module M46: and if the target task designates a plurality of sub-areas, carrying out cloud resource scheduling according to the matching degree of the target task and each sub-area.
Preferably, the scheduling sub-module M46 comprises:
unit D461: adding a label to each sub-region;
unit D462: matching labels of a plurality of subareas corresponding to the area identification information of the target task with user information of the target task to obtain matching degree;
unit D463: sequencing the corresponding subareas according to the matching degree from high to low, and inquiring the inventory information of the subareas according to the sequencing;
unit D464: and carrying out cloud resource scheduling according to the inventory information and the inventory requirements.
The embodiment of the specification also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the contention-based cloud resource scheduling method described above.
The embodiments of the present disclosure also provide a computer storage medium storing computer executable instructions that when executed by a processor perform the above-described cloud resource scheduling method based on a contention mechanism.
Compared with the prior art, the beneficial effects achievable by at least one of the above technical schemes adopted in the embodiments of the present specification include at least the following: according to the embodiments of the present specification, cloud resource scheduling is performed according to the real-time weights of competitive tasks, so that intelligent scheduling can be realized and system resource waste is effectively avoided; competitive tasks whose task state is in progress are tracked, so that more accurate scheduling can be realized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a cloud resource scheduling method based on a contention mechanism according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Other advantages and effects of the present application will become apparent to those skilled in the art from the present disclosure, when the following description of the embodiments is taken in conjunction with the accompanying drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. The present application may be embodied or carried out in other specific embodiments, and the details of the present application may be modified or changed from various points of view and applications without departing from the spirit of the present application. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present application, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, apparatus may be implemented and/or methods practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concepts of the application by way of illustration, and only the components related to the application are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details.
In the prior art, the usage and quota of system resources, such as the CPU, memory and disk, are controlled directly through control groups, thereby realizing resource scheduling such as the creation of containers. For example, when a Docker container is started, resource control is achieved by setting a resource-usage weight and an upper limit. The Docker container monitors its own resource usage so that the usage stays below the resource quota. Multiple Docker containers run on one host, and each container reaches a dynamic balance under the control of its control group. However, because the quota is set in advance in an over-provisioned manner and cannot be modified at runtime, system resources are easily wasted.
In addition, in the prior art, only the number of running cloud resources is counted during cloud resource scheduling. However, creating a virtual machine or container takes time; for example, when the next virtual machine is being started, the previous one may still be in its pre-start verification. Therefore, when counting the virtual machines running on a cloud host, only the number of already-started virtual machines can be obtained, while the number of virtual machines still being started cannot, which easily leads to confusion in the startup results.
Based on this, the embodiments of the present specification propose a processing scheme: the real-time weights of competitive tasks are calculated, and the in-progress competitive tasks are tracked, which effectively avoids resource waste and realizes more accurate scheduling.
The following describes the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present disclosure provides a cloud resource scheduling method based on a contention mechanism, including:
step 1: and refreshing the task state of the created competitive task at intervals of a first preset time.
The setting of the first preset time is not limited in the embodiments of the present specification and may be set according to actual situations; for example, it may be set to 5 minutes.
The task state of a competitive task may include new (NEW), in progress (IN_PROGRESS), completed (COMPLETED), and the like, and the competitive task includes: region identification information and hardware identification information.
In the embodiment of the present specification, the competing tasks are used for cloud resource allocation.
In practice, a competitive task for cloud resource allocation is first created based on user requirements, the task state corresponding to each competitive task is marked to obtain a corresponding task table, and then the task states of the competitive tasks in the task table are refreshed every first preset time.
Specifically, whether the task state corresponding to a competitive task needs to be updated is determined according to the current time and the start time and end time of each competitive task.
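A minimal sketch of this refresh rule, using the three states named above; the exact comparison against the start and end times is an assumption, since the text does not fix it:

```python
from dataclasses import dataclass
from datetime import datetime

# Task states named in the text.
NEW, IN_PROGRESS, COMPLETED = "NEW", "IN_PROGRESS", "COMPLETED"

@dataclass
class CompetitiveTask:
    start: datetime
    end: datetime
    state: str = NEW

def refresh_state(task, now):
    """Derive the task state from the current time (illustrative rule):
    before the start time the task is new, between start and end it is
    in progress, and after the end time it is completed."""
    if now < task.start:
        task.state = NEW
    elif now < task.end:
        task.state = IN_PROGRESS
    else:
        task.state = COMPLETED
    return task.state
```

In the described scheme this function would run over the whole task table every first preset time (e.g. every 5 minutes).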
Step 2: acquiring a task state as an ongoing competitive task at intervals of a second preset time, and finding out a plurality of competitive tasks meeting processing conditions from the task state as the ongoing competitive task.
The setting of the second preset time is not limited in the embodiments of the present specification and may be set according to actual conditions; for example, it may be set to 10 seconds.
The setting of the processing conditions is not limited in the embodiments of the present disclosure and may be set according to specific requirements; for example, the processing conditions may select all competitive tasks to be processed, or may select only a part of the competitive tasks from all those to be processed.
In some embodiments, the processing condition is instance_count - creating_vms - running_vms > 0, where instance_count is the number of instances, creating_vms is the number of virtual machines being created, and running_vms is the number of running virtual machines. Every second preset time (e.g., 10 s), the competitive tasks whose task state is in progress ("IN_PROGRESS") are acquired, and the competitive tasks satisfying instance_count - creating_vms - running_vms > 0 are found among them. For example, if 10 virtual machines need to be created (i.e., the number of instances is 10), 2 are currently being created and 2 are running, then 6 more virtual machines need to be created; that is, the competitive tasks meeting the processing conditions include 6 virtual-machine creation tasks.
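The processing condition above can be expressed directly as code; the dict-based task record is an assumption for illustration:

```python
def needs_more_instances(task):
    """Processing condition from the text:
    instance_count - creating_vms - running_vms > 0."""
    return task["instance_count"] - task["creating_vms"] - task["running_vms"] > 0

def pending_count(task):
    """Number of additional virtual machines that still need to be created."""
    return max(0, task["instance_count"] - task["creating_vms"] - task["running_vms"])
```

With the example from the text (10 instances, 2 being created, 2 running), `pending_count` returns 6.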
Step 3: and obtaining the target task according to the real-time weights of the multiple competing tasks.
Specifically, after a plurality of competing tasks satisfying the processing conditions are found, the real-time weights of the competing tasks may be calculated in real time based on the influence factors.
Influencing factors in embodiments of the present description include at least one of: user attribute information corresponding to the competitive task, priority information of an area corresponding to the competitive task, priority information of hardware configuration corresponding to the competitive task, and emergency degree information of the competitive task.
The user attribute information includes a user account level, a user credit value, a user competitiveness value, and the like. The competitiveness value indicates the user's ability to compete for preemptible cloud resources. Optionally, feature information of each enterprise user can be collected, including the enterprise type, business direction, enterprise scale, historical cloud resource usage and historical cloud resource purchase records; a competitiveness value is marked for each enterprise user, a competitiveness evaluation model is obtained through sample training, user data is input into the competitiveness evaluation model, and the competitiveness value corresponding to the user is output.
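One plausible reading of the real-time weight calculation (steps 31 to 33) is a weighted sum of quantized influence-factor values, with the highest-weighted tasks selected as target tasks; the factor names and the top-N selection below are illustrative assumptions:

```python
def real_time_weight(factors, weights):
    """Weighted sum of quantized influence-factor values: each factor's
    quantized value multiplied by its weighting value, then summed."""
    return sum(factors[name] * weights.get(name, 0.0) for name in factors)

def pick_target_tasks(tasks, factors_of, weights, top_n=1):
    """Rank competitive tasks by real-time weight and keep the top N as target tasks."""
    ranked = sorted(tasks, key=lambda t: real_time_weight(factors_of[t], weights),
                    reverse=True)
    return ranked[:top_n]
```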
In the embodiment of the present specification, the user attribute information includes the competitiveness value of a user, and the method further includes: Step 5: acquiring characteristic information of users and adding a label to each user to obtain sample data, the label marking the user's competitiveness value; Step 6: inputting the sample data into a neural network model for training to obtain a competitiveness evaluation model; Step 7: obtaining the competitiveness value from the competitiveness evaluation model.
Specifically, inputting the sample data into the neural network model for training includes: first, collecting the characteristic information of the enterprise users of a plurality of enterprises, such as the enterprise type, business direction, enterprise scale, historical cloud resource usage, and historical cloud resource purchase records; then adding a label to each enterprise, i.e., marking its competitiveness value; then feeding the characteristic information and labels, as sample data, into a pre-established initial neural network model for training, yielding a trained competitiveness evaluation model; finally, given the characteristic information of the enterprise user of an enterprise, the model outputs the predicted competitiveness value of that enterprise.
Alternatively, the initial neural network model may be created using algorithms such as random forests or convolutional neural networks.
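The training flow of steps 5 to 7 can be illustrated with a deliberately simplified stand-in. Instead of a neural network, the sketch below fits a linear model by stochastic gradient descent using only the standard library; the two features and the labeled competitiveness values are invented for the example.

```python
def fit_linear(samples, labels, lr=0.1, epochs=2000):
    """Fit y = w.x + b by per-sample gradient descent (toy stand-in
    for the competitiveness evaluation model described in the text)."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Invented features: (normalized enterprise scale, normalized cloud spend),
# labeled with marked competitiveness values, as in steps 5-6.
samples = [(0.2, 0.1), (0.5, 0.4), (0.9, 0.8)]
labels = [20.0, 47.0, 83.0]
w, b = fit_linear(samples, labels)
```

In the patented flow the learner would be a neural network (or, per the alternative above, a random forest); the point here is only the shape of the pipeline: labeled feature vectors in, a model out, predicted competitiveness values from the model.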
Further, emergency degree information can be set for a competitive task: for a particularly urgent task, the preset emergency degree can be set to the highest level when the task is created, so that the task is scheduled with the highest priority and sudden task demands can be met.
The above influencing factors can be considered together to determine the real-time weight of a competitive task, for example, by assigning each factor a weighting coefficient and summing the weighted factor values to obtain the real-time weight.
Specifically, step 3 includes: step 31: aiming at each competitive task, obtaining quantitative values of a plurality of influence factors of the competitive task and weighting values corresponding to each influence factor; step 32: determining the real-time weight of each competitive task according to the quantized value and the corresponding weighted value of each influencing factor; step 33: and obtaining the real-time weights of the plurality of competing tasks according to the real-time weights of each competing task.
Each influencing factor is quantifiable, so the weight value can be calculated by formula (1), which may be expressed as:
W_k = (w_1·X_1 + w_2·X_2 + ... + w_n·X_n) / n;    (1)
where W_k denotes the weight of the k-th competitive task; n denotes the number of quantifiable factors; w_i denotes the weighting coefficient of the i-th quantifiable factor; and X_i denotes the quantized value of the i-th quantifiable factor, i = 1, 2, ..., n. The quantifiable factors are the influencing factors described above; a value rule or quantization rule may be defined in advance for each quantifiable factor.
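Formula (1) maps directly to code. A minimal sketch, with the three factor values and coefficients invented for the example:

```python
def real_time_weight(quantized_values, coefficients):
    """Formula (1): W_k = (w_1*X_1 + ... + w_n*X_n) / n."""
    n = len(quantized_values)
    return sum(w * x for w, x in zip(coefficients, quantized_values)) / n

# e.g. three factors: user competitiveness, region priority, emergency degree
weight = real_time_weight([80.0, 50.0, 90.0], [0.5, 0.2, 0.3])
```

Here the weighted sum is 0.5*80 + 0.2*50 + 0.3*90 = 77, so the task's real-time weight is 77/3.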
Further, step 3 further includes: step 34: and obtaining a preset number of competitive tasks as target tasks according to the real-time weights of the plurality of competitive tasks.
The preset number is not limited in the embodiment of the present specification; for example, it may be 5.
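Steps 33 and 34, ranking the tasks by real-time weight and taking a preset number of them as targets, can be sketched as follows (task structure assumed):

```python
def pick_targets(tasks, preset_number=5):
    """Sort competing tasks by real-time weight, highest first, take top N."""
    return sorted(tasks, key=lambda t: t["weight"], reverse=True)[:preset_number]

tasks = [{"id": i, "weight": w} for i, w in enumerate([3, 9, 1, 7, 5, 8, 2])]
targets = pick_targets(tasks)
```

With seven candidate tasks and the default preset number of 5, the five highest-weight tasks are returned in descending weight order.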
Further, in the embodiment of the present disclosure, the real-time weights are stored in the database and may be modified according to the actual situation after the competitive task is created and before it is scheduled; alternatively, the real-time weight is fixed when the competitive task is created and does not change afterwards.
In particular, the data related to the above influencing factors may be stored in the database in advance and read when needed, and a manager may modify the data in the database as required, so the real-time weight can change after the competitive task is created and until it is scheduled. Alternatively, the real-time weight may be assigned directly when the task is created, in which case it is fixed after creation.
Step 4: and carrying out cloud resource scheduling according to the identification information and the inventory requirement of the target task.
Wherein the identification information includes: region identification information and hardware identification information.
Specifically, step 4 includes: step 41: determining whether the inventory requirement is met according to the region identification information and the hardware identification information corresponding to the target task; step 42: if not, cloud resource scheduling is not performed; step 43: if yes, cloud resource scheduling is carried out according to the real-time weight corresponding to the target task, and resource records in the database are updated after scheduling is started and after scheduling is completed.
Specifically, whether the inventory requirement is met is queried according to the region identification information and hardware identification information corresponding to the target task. If not, no cloud resource scheduling is performed; if so, scheduling proceeds in descending order of the real-time weights of the target tasks. Depending on the resource type, scheduling may be performed directly by the current device or by the current device instructing other devices. After scheduling starts, the resource records in the database are updated; after scheduling finishes, e.g., when virtual machine creation or container creation is complete, the resource records in the database are updated again. When the task ends, the cleaning interface is called to release the cloud resources, and the resource records in the database are updated after the release is complete.
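Steps 41 to 43 can be sketched as follows; the callback functions, dictionary fields, and record format are assumptions made for illustration:

```python
def schedule_targets(targets, query_inventory, start_vm, db):
    """Check inventory per target task (highest weight first); schedule
    only when inventory exists, and update the record at scheduling start."""
    for task in sorted(targets, key=lambda t: t["weight"], reverse=True):
        stock = query_inventory(task["region_id"], task["hardware_id"])
        if stock <= 0:
            continue  # inventory requirement not met: no scheduling
        start_vm(task)
        db[task["id"]] = "creating"  # update resource record after start

# Usage with stub callbacks: only hardware "h1" has inventory.
db = {}
started = []
schedule_targets(
    [{"id": 1, "weight": 9, "region_id": "r1", "hardware_id": "h1"},
     {"id": 2, "weight": 7, "region_id": "r1", "hardware_id": "h2"}],
    query_inventory=lambda region, hw: 1 if hw == "h1" else 0,
    start_vm=started.append,
    db=db,
)
```

Task 1 is scheduled and recorded; task 2 is skipped because its hardware configuration has no inventory, mirroring the "if not, no scheduling" branch.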
Wherein, step 41 comprises: Step 411: obtaining inventory information according to the query parameters, wherein the query parameters comprise region identification information, hardware identification information, and competition type identification information, and the inventory information comprises the remaining inventory and the floating inventory; Step 412: determining whether the inventory requirement is met according to the inventory information.
Further, step 4 further includes: step 44: and after the target task completes cloud resource scheduling, calling a cleaning interface to release cloud resources, and updating resource records in a database.
Specifically, querying the inventory information according to the region identification information and hardware identification information is implemented as follows. Query parameters are input, including the region identification information, the hardware identification information, and competition type identification information that identifies whether the task is a competitive task. The query result, i.e., the inventory information, is obtained and includes the remaining inventory and the floating inventory; the prepared inventory is deducted from the remaining inventory to avoid problems caused by concurrency, and the floating inventory is used when a normal virtual machine is created and the remaining inventory does not meet the requirement. Optionally, if the query parameters include indication information for returning a resource recycling list, the query result also includes the resource recycling list. The region identification information identifies a region, and each region contains a plurality of nodes; the hardware identification information identifies a hardware configuration, such as a model, different hardware configurations having different performance. The region and hardware configuration can be designated when a competitive task is created, so that the task is executed by nodes in the designated region that match the hardware configuration.
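The query just described can be sketched as follows. The dictionary fields and the shape of the store are assumptions; the inventory formulas (available = remaining - prepared; floating = available + competitive-occupied) follow the calculation logic given later in the text.

```python
def query_inventory_info(params, store):
    """Return inventory info for a (region, hardware) pair, optionally
    with the recycle list of releasable competitive virtual machines."""
    rec = store[(params["region_id"], params["hardware_id"])]
    available = rec["remaining"] - rec["prepared"]
    result = {"available": available,
              "floating": available + rec["competitive_occupied"]}
    if params.get("return_recycle_list"):
        result["recycle_list"] = list(rec["competitive_vms"])
    return result

store = {("r1", "gpu-a"): {"remaining": 4, "prepared": 1,
                           "competitive_occupied": 3,
                           "competitive_vms": ["vm-7", "vm-9"]}}
out = query_inventory_info({"region_id": "r1", "hardware_id": "gpu-a",
                            "competitive": True,
                            "return_recycle_list": True}, store)
```

With 4 remaining, 1 prepared, and 3 occupied by competitive machines, the available inventory is 3 and the floating inventory is 6, and the recycle list is returned because the query asked for it.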
Further, the area identification information is used for identifying areas, each area is provided with a plurality of nodes, and each area comprises a plurality of sub-areas; step 4, further comprising: step 45: if the target task designates a sub-region, acquiring inventory information of the sub-region according to sub-region identification information and hardware identification information corresponding to the target task, if the inventory requirement is not met, inquiring inventory information of other sub-regions belonging to the same region, and if the inventory requirement is met, performing cloud resource scheduling in the corresponding other sub-regions; step 46: and if the target task designates a plurality of sub-areas, carrying out cloud resource scheduling according to the matching degree of the target task and each sub-area.
Specifically, each region further includes a plurality of sub-regions, which may be divided by geographic location, node type, node performance, customer-group properties, and the like. Each sub-region corresponds to sub-region identification information. If the target task designates one sub-region, the inventory information of that sub-region is queried according to the corresponding sub-region identification information and hardware identification information; if that sub-region has no inventory, the inventory information of the other sub-regions in the same region is queried. Preferably, if several other sub-regions meet the inventory requirement, the one with the highest matching degree to the designated sub-region can be selected for scheduling.
Wherein step 46 comprises: step 461: adding a label to each sub-region; step 462: matching labels of a plurality of subareas corresponding to the area identification information of the target task with user information of the target task to obtain matching degree; step 463: sequencing the corresponding subareas according to the matching degree from high to low, and inquiring the inventory information of the subareas according to the sequencing; step 464: and carrying out cloud resource scheduling according to the inventory information and the inventory requirements.
Specifically, a label can be added to each sub-region; its content may include the sub-region's location, number of nodes, service object, hardware level, security level, and so on. According to the region identification information corresponding to the target task, the labels of the sub-regions in that region are matched against the user attribute information and/or user demand information corresponding to the target task, and the sub-regions are sorted by matching degree from high to low. The inventory information of the sub-regions is then queried in that order: if the available inventory of the top-ranked sub-region, i.e., the one with the highest matching degree, is sufficient, that sub-region is selected for resource scheduling; otherwise, whether the available inventory of the second-ranked sub-region is sufficient is checked, and so on.
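Steps 461 to 464 can be sketched as follows. The label-overlap scoring rule is an invented stand-in for the unspecified matching-degree computation:

```python
def pick_subregion(subregions, user_labels, demand):
    """Rank sub-regions by label overlap with the task's user info,
    then take the first one whose inventory covers the demand."""
    ranked = sorted(
        subregions,
        key=lambda s: len(set(s["labels"]) & set(user_labels)),
        reverse=True,
    )
    for sub in ranked:
        if sub["inventory"] >= demand:
            return sub["id"]
    return None  # no sub-region meets the inventory requirement

subs = [
    {"id": "a", "labels": ["gpu", "secure", "east"], "inventory": 0},
    {"id": "b", "labels": ["gpu", "east"], "inventory": 5},
    {"id": "c", "labels": ["cpu"], "inventory": 9},
]
chosen = pick_subregion(subs, ["gpu", "east", "secure"], demand=2)
```

Sub-region "a" matches best but has no inventory, so the search falls through to the second-ranked "b", exactly the fallback behavior described above.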
According to the embodiment of the specification, whether the inventory requirement is met is queried according to the region identification information and hardware identification information corresponding to the target task. After a plurality of competitive tasks meeting the processing conditions are found, their weights may be calculated in real time based on at least one influencing factor. Each region further includes a plurality of sub-regions: if a competitive task designates a sub-region, the inventory information of that sub-region is queried according to the corresponding sub-region identification information and hardware identification information, and if that sub-region has no inventory, the inventory of the other sub-regions in the same region can be queried; preferably, if several other sub-regions meet the inventory requirement, the one with the highest matching degree to the designated sub-region is selected for scheduling. A label is added to each sub-region; according to the region identification information corresponding to the target task, the labels of the sub-regions in that region are matched against the user attribute information and/or user demand information corresponding to the competitive task, the sub-regions are sorted by matching degree from high to low, and scheduling is based on the sorting result. By determining the weights dynamically and further considering region and hardware attributes, intelligent scheduling is achieved and waste of system resources is effectively avoided; by tracking the number of resources being created and counting both the resources in progress and the resources in creation, more accurate scheduling is achieved.
[ example 1 ]
The cloud resource scheduling method based on the competition mechanism provided by the embodiment of the specification can be used for design of a competition type cloud host.
The competing task logic is exemplified as follows:
step S1: newly creating a competition type task, when the competition type task exceeds the starting time and does not reach the ending time, marking the state as 'NEW', and judging that the ending time is larger than the current time is illegal.
Step S2: the timed task refreshes the task table state every 5 minutes, querying all competing tasks whose state is newly built ('NEW') or in progress ('IN_PROGRESS').
Specifically: when the current time of a newly built ('NEW') competing task exceeds its start time, its state is updated to 'IN_PROGRESS'; when a newly built ('NEW') or in-progress ('IN_PROGRESS') competing task exceeds its task end time, its state is updated to 'FINISHED' and all of its running virtual machines are cleaned up.
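Steps S1 and S2 amount to a small state machine. A sketch, with timestamps as plain numbers and 'FINISHED' assumed as the terminal state name:

```python
def refresh_state(task, now):
    """Apply the 5-minute refresh rules to one competing task."""
    if task["state"] in ("NEW", "IN_PROGRESS") and now > task["end"]:
        task["state"] = "FINISHED"  # also triggers cleanup of running VMs
    elif task["state"] == "NEW" and now > task["start"]:
        task["state"] = "IN_PROGRESS"
    return task["state"]

t = {"state": "NEW", "start": 100, "end": 200}
```

Refreshing at time 150 moves the task from NEW to IN_PROGRESS; refreshing again at time 250 moves it to the terminal state.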
Step S3: every 10 seconds, the timed task acquires the competitive tasks whose state is 'IN_PROGRESS', finds those satisfying instance_count - creating_vms - running_vms > 0, sorts them by real-time weight, and takes at most 5 records, i.e., at most 5 competitive tasks, at a time. Here instance_count denotes the number of instances, creating_vms the number of virtual machines being created, and running_vms the number of virtual machines running.
Specifically, a plurality of competitive tasks meeting the processing conditions are obtained, i.e., the tasks to be processed are the competitive tasks meeting the processing conditions. For example, if a task needs to create 10 virtual machines, 2 are being created and 2 are running, then 6 remain to be created. The weight value, i.e., the real-time weight, of each competitive task is then calculated: for each task, the quantized values of its quantifiable factors are obtained first, and its weight value is calculated by formula (1); the tasks are sorted by weight from high to low, and the top preset number of tasks is taken, e.g., the 5 tasks with the highest weights.
Further, for each selected competitive task, the inventory information is queried according to the region identification information and hardware identification information. If there is no inventory, the task is not scheduled; if there is, scheduling starts from the task with the highest weight, creating either one machine or several machines at a time. After scheduling, the number of virtual machines being created is updated, i.e., incremented by 1. The hardware identification information may be a model identification.
Further, when a virtual-machine-creation-finished message is received, the number of running virtual machines and the number of virtual machines being created are updated: the running count is incremented by 1 and the creating count is decremented by 1. If the updated task state is 'FINISHED', the cleaning interface is called to delete the virtual machine.
Finally, a virtual-machine-deletion-finished message is received, and the number of running virtual machines is updated, i.e., decremented by 1. No quota is returned after deletion, because competitive virtual machines do not deduct quota; here quota refers to the card quota of the graphics processing unit (Graphics Processing Unit, GPU).
The inventory calculation logic is exemplified as follows. The inventory information is queried according to the region identification information and hardware identification information. Creating a competitive virtual machine only considers the available inventory, where available inventory = remaining inventory - prepared inventory; creating a normal virtual machine considers the floating inventory when the available inventory does not meet the requirement, where floating inventory = available inventory + inventory occupied by competitive virtual machines. Specifically, the inventory occupied by competitive hosts is calculated as follows: a) acquire all adapted nodes according to the region identification information and hardware identification information, and count, per node group, the central processing unit (Central Processing Unit, CPU) and random access memory (Random Access Memory, RAM) occupied by competitive virtual machines; b) orchestrate virtual machine creation: check the inventory, and return a list of virtual machines to be recycled when floating inventory needs to be released; when a normal virtual machine is created and only the floating inventory meets the requirement, competitive virtual machines are released, the inventory they release is counted into the prepared inventory, and the prepared inventory is excluded when other virtual machines need to be created; c) create the virtual machine; d) release the prepared inventory according to the virtual machine list.
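Step a) above, counting the CPU and RAM occupied by competitive virtual machines per node, can be sketched as follows (field names assumed):

```python
from collections import defaultdict

def competitive_usage_by_node(vms):
    """Total CPU and RAM occupied by competitive VMs, grouped by node."""
    usage = defaultdict(lambda: {"cpu": 0, "ram": 0})
    for vm in vms:
        if vm["competitive"]:
            usage[vm["node"]]["cpu"] += vm["cpu"]
            usage[vm["node"]]["ram"] += vm["ram"]
    return dict(usage)

vms = [{"node": "n1", "competitive": True, "cpu": 4, "ram": 8},
       {"node": "n1", "competitive": False, "cpu": 2, "ram": 4},
       {"node": "n2", "competitive": True, "cpu": 8, "ram": 16}]
usage = competitive_usage_by_node(vms)
```

The non-competitive VM on node n1 is excluded from the tally, so only competitive occupancy feeds the floating-inventory formula.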
Further, an inventory query example is as follows. Input parameters: region identification information, hardware identification information, competition type identification information, whether to return the recycle virtual machine list, etc. Return parameters: the remaining inventory, the floating inventory, and the recycle virtual machine list. Implementation logic: query the inventory information according to the region identification information and hardware identification information; creating a competitive virtual machine only considers the available inventory, where available inventory = remaining inventory - prepared inventory; creating a normal virtual machine considers the floating inventory when the available inventory does not meet the requirement, where floating inventory = available inventory + inventory occupied by competitive hosts. If the floating inventory meets the requirement and the recycle list is requested, the list of virtual machines to be recycled is returned as well: the node that can release the most inventory is selected first, then the releasable competitive virtual machines on that node are queried; a virtual machine that exactly fits is deleted preferentially, and if none fits exactly, several virtual machines are deleted in sequence.
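The release strategy at the end of the paragraph above can be sketched as follows, with the node and VM structure assumed: pick the node that can free the most inventory, prefer a single exactly fitting competitive VM, otherwise delete several in descending size order until the demand is covered.

```python
def pick_vms_to_release(nodes, demand):
    """Choose competitive VMs to recycle so that `demand` inventory is freed."""
    # Node that can release the most inventory in total.
    node = max(nodes, key=lambda n: sum(vm["size"] for vm in n["vms"]))
    # Prefer a single VM that fits the demand exactly.
    exact = [vm for vm in node["vms"] if vm["size"] == demand]
    if exact:
        return [exact[0]["id"]]
    # Otherwise delete several VMs in size order until enough is freed.
    chosen, freed = [], 0
    for vm in sorted(node["vms"], key=lambda v: v["size"], reverse=True):
        if freed >= demand:
            break
        chosen.append(vm["id"])
        freed += vm["size"]
    return chosen

nodes = [{"vms": [{"id": "v1", "size": 2}, {"id": "v2", "size": 3}]},
         {"vms": [{"id": "v3", "size": 4}]}]
```

For a demand of 3, the first node (5 units releasable) is chosen and its exactly fitting VM v2 is recycled alone; for a demand of 4, v2 and v1 are recycled in sequence.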
[ example 2 ]
The cloud resource scheduling method based on the competition mechanism provided by the embodiment of the specification can be used for intelligent computing competition type container design.
First, explanation is made for terms that may be used:
the management node represents a client-oriented application service node.
A working node represents a k8s cluster node. A k8s cluster is a group of node computers running containerized applications; one k8s cluster consists of a plurality of nodes, and each node can create a corresponding number of POD containers. A POD is the smallest scheduling unit in Kubernetes; one POD encapsulates one container or several containers, and the containers in a POD share storage, network, and so on. That is, the whole POD can be regarded as a virtual machine, with each container corresponding to a process running in that virtual machine. All containers in the same POD are orchestrated and scheduled together. Kubernetes (k8s) refers to an open-source Linux container automation operation and maintenance platform.
In the GPU card inventory, the total inventory represents the total amount of node inventory; the common occupied inventory represents the inventory occupied by common containers; the common reserved inventory represents the inventory reserved for common containers, including the inventory reserved while a common POD is being created or started; the competitive occupied inventory represents the inventory occupied by competitive containers; the competitive reserved inventory represents the inventory reserved for competitive containers, including the inventory reserved while a competitive POD is being created or started. Net inventory = total inventory - common occupied inventory - common reserved inventory - competitive occupied inventory - competitive reserved inventory; gross inventory = total inventory - common occupied inventory - common reserved inventory. Node inventory means that each of the above inventory indexes is counted per node.
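The net- and gross-inventory formulas above map directly to code (field names assumed):

```python
def net_inventory(s):
    """Total minus everything occupied or reserved, common and competitive."""
    return (s["total"] - s["common_occupied"] - s["common_reserved"]
            - s["competitive_occupied"] - s["competitive_reserved"])

def gross_inventory(s):
    """Total minus only the common occupied and common reserved inventory."""
    return s["total"] - s["common_occupied"] - s["common_reserved"]

s = {"total": 16, "common_occupied": 4, "common_reserved": 2,
     "competitive_occupied": 3, "competitive_reserved": 1}
```

With 16 cards total, the gross inventory is 10 and the net inventory is 6; the gap of 4 is exactly the inventory that competitive containers currently occupy or reserve and that a common container could reclaim.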
The competitive task logic is exemplified as follows. First, an administrator creates a competitive task in the management background; for example, a management node creates a competitive task based on user needs, including but not limited to the number of containers to create, region identification information, hardware identification information, and indication information of whether the task is competitive. Then, when the competitive task has passed its start time but not reached its end time, its state is marked 'IN_PROGRESS'; when it has not reached its start time, its state is marked 'NEW'; at creation it is checked that the end time is later than the current time, otherwise the task is judged illegal. Then, a timed task refreshes the task table state every 5 minutes and queries all 'NEW' and 'IN_PROGRESS' competitive tasks: a 'NEW' task whose current time exceeds the task start time is updated to 'IN_PROGRESS'; 'NEW' and 'IN_PROGRESS' tasks that have passed the task end time are updated to 'FINISHED' and all of their PODs are cleaned up. Finally, a timed task takes the 'IN_PROGRESS' tasks every 30 minutes, finds those satisfying instance_count - creating_instances - running_instances > 0, and sorts them by weight from high to low.
Further, scheduling in descending order of weight can create m (m >= 1) POD containers at a time in the same region. For example, if only 1 POD is created in the same region at a time, the net inventory of the machine model corresponding to the competitive task is obtained according to the region identification information and hardware identification information; if the net inventory is greater than or equal to 0, the requirement of the competitive task is met.
Optionally, when the region includes a plurality of sub-regions, whether the net inventory of the corresponding model meets the requirement of the competitive task is determined sub-region by sub-region, from high priority to low, based on the priority information of each sub-region. After at least one working node meeting the requirement of the competitive task is found, the management node selects one qualifying node from the working nodes as the POD generation node and sends the POD creation command, together with the identification information of that node, to the working node cluster, e.g., a k8s cluster, which creates the POD. After scheduling, the number of tasks being created for the competitive task is updated, i.e., incremented by 1.
Further, when a POD-creation-finished message is received, the number of tasks being created and the number of ongoing tasks of the competitive task are updated: the ongoing count is incremented by 1 and the creating count is decremented by 1. If the updated task state is 'FINISHED', the cleaning interface is called to delete the POD.
Finally, a POD-deletion-finished message is received, and the number of ongoing tasks of the competitive task is updated, i.e., decremented by 1.
Illustratively, a common container POD is created as follows: the management node checks the gross inventory of the machine model selected by the user and returns a list of competitive PODs to be recycled when floating inventory needs to be released; if the inventory is sufficient, the user can submit the POD creation. The management node receives the user's POD creation request; if there is a list of competitive PODs to be recycled, those competitive containers are recycled first to release inventory. The management node locks the inventory to prevent the released inventory from being occupied by other containers. The management node then selects one qualifying node from the working nodes as the POD generation node and sends the POD creation command to that working node.
The embodiment of the specification also provides a cloud resource scheduling device based on a competition mechanism, which comprises:
refresh module M1: and refreshing the task state of the created competitive task at intervals of a first preset time.
The searching module M2: acquiring, at intervals of a second preset time, the competitive tasks whose task state is in progress, and finding a plurality of competitive tasks meeting the processing conditions among them.
Processing module M3: and obtaining the target task according to the real-time weights of the multiple competing tasks.
Wherein, processing module M3 includes: processing submodule M31: aiming at each competitive task, obtaining quantitative values of a plurality of influence factors of the competitive task and weighting values corresponding to each influence factor; processing submodule M32: determining the real-time weight of each competitive task according to the quantized value and the corresponding weighted value of each influencing factor; processing submodule M33: and obtaining the real-time weights of the plurality of competing tasks according to the real-time weights of each competing task.
Optionally, the processing module M3 further includes: processing submodule M34: and obtaining a preset number of competitive tasks as target tasks according to the real-time weights of the plurality of competitive tasks.
Influencing factors in embodiments of the present description include at least one of: user attribute information corresponding to the competitive task, priority information of an area corresponding to the competitive task, priority information of hardware configuration corresponding to the competitive task, and emergency degree information of the competitive task.
The user attribute information includes the competitiveness value of a user, and the device further includes: the acquisition module M5: acquiring characteristic information of users and adding a label to each user to obtain sample data, the label marking the user's competitiveness value; the training module M6: inputting the sample data into a neural network model for training to obtain a competitiveness evaluation model; the evaluation module M7: obtaining the competitiveness value from the competitiveness evaluation model.
In the embodiment of the specification, the real-time weights are stored in a database and may be modified according to the actual situation after the competitive task is created and before it is scheduled; alternatively, the real-time weight is fixed when the competitive task is created and does not change afterwards.
Scheduling module M4: and carrying out cloud resource scheduling according to the identification information and the inventory requirement of the target task.
Wherein the identification information includes: region identification information and hardware identification information; the scheduling module M4 includes: scheduling sub-module M41: determining whether the inventory requirement is met according to the region identification information and the hardware identification information corresponding to the target task; scheduling sub-module M42: if not, cloud resource scheduling is not performed; scheduling sub-module M43: if yes, cloud resource scheduling is carried out according to the real-time weight corresponding to the target task, and resource records in the database are updated after scheduling is started and after scheduling is completed.
Optionally, the scheduling module M4 further includes: scheduling sub-module M44: and after the target task completes cloud resource scheduling, calling a cleaning interface to release cloud resources, and updating resource records in a database.
Wherein, the scheduling sub-module M41 includes: unit D411: obtaining inventory information according to the query parameters, wherein the query parameters comprise region identification information, hardware identification information, and competition type identification information, and the inventory information comprises the remaining inventory and the floating inventory; unit D412: determining whether the inventory requirement is met according to the inventory information.
Further, the region identification information is used for identifying regions, each region is provided with a plurality of nodes, and each region comprises a plurality of sub-regions; the scheduling module M4 further includes: scheduling sub-module M45: if the target task designates a sub-region, acquiring inventory information of the sub-region according to sub-region identification information and hardware identification information corresponding to the target task, if the inventory requirement is not met, inquiring inventory information of other sub-regions belonging to the same region, and if the inventory requirement is met, performing cloud resource scheduling in the corresponding other sub-regions; scheduling sub-module M46: and if the target task designates a plurality of sub-areas, carrying out cloud resource scheduling according to the matching degree of the target task and each sub-area.
Wherein, the scheduling sub-module M46 includes a unit D461: adding a label to each sub-region; a unit D462: matching the labels of the plurality of sub-regions corresponding to the region identification information of the target task with the user information of the target task to obtain a matching degree; a unit D463: sorting the corresponding sub-regions by matching degree from high to low, and querying the inventory information of the sub-regions in that order; and a unit D464: performing cloud resource scheduling according to the inventory information and the inventory requirement.
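Units D461 to D464 could be realized as in the sketch below. The overlap-count matching degree is an assumed scoring rule, since the patent leaves the matching function open.

```python
def match_degree(labels, user_info):
    # D462: score a sub-region by how many of its labels match the user info
    return len(set(labels) & set(user_info))

def pick_subregion(task, subregion_labels, inventory_of):
    # D463: sort sub-regions by matching degree, high to low
    ranked = sorted(subregion_labels,
                    key=lambda s: match_degree(subregion_labels[s], task["user_info"]),
                    reverse=True)
    for sub in ranked:
        # D464: schedule in the best-matching sub-region whose stock suffices
        if inventory_of(sub) >= task["required"]:
            return sub
    return None
```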
The embodiment of the specification also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the contention-based cloud resource scheduling method described above.
The embodiments of the present disclosure also provide a computer storage medium storing computer executable instructions that when executed by a processor perform the above-described cloud resource scheduling method based on a contention mechanism.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the product embodiments described later correspond to the methods, their description is relatively brief, and reference may be made to the corresponding parts of the system embodiments.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions readily conceivable by those skilled in the art within the technical scope disclosed by the present application shall be covered by its scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A cloud resource scheduling method based on a competition mechanism, characterized by comprising the following steps:
step 1: refreshing task states of the created competitive tasks at intervals of a first preset time;
step 2: acquiring, at intervals of a second preset time, the competitive tasks whose task state is in progress, and finding out from them a plurality of competitive tasks that meet the processing conditions;
step 3: obtaining a target task according to the real-time weights of the competitive tasks;
step 4: and carrying out cloud resource scheduling according to the identification information and the inventory requirement of the target task.
2. The cloud resource scheduling method based on the contention mechanism according to claim 1, wherein the step 3 includes:
step 31: for each competitive task, obtaining quantized values of a plurality of influencing factors of the competitive task and the weighting values corresponding to those influencing factors;
step 32: determining the real-time weight of each competitive task according to the quantized value and the corresponding weighting value of each influencing factor;
step 33: and obtaining the real-time weights of a plurality of competing tasks according to the real-time weights of each competing task.
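Steps 31 to 33 amount to a weighted sum per task; a minimal sketch follows, in which the factor pairs of quantized value and weighting value are illustrative data.

```python
def real_time_weight(factors):
    # steps 31-32: sum of quantized value x weighting value over all factors
    return sum(q * w for q, w in factors)

def rank_by_weight(tasks):
    # step 33: gather the real-time weight of every competitive task, best first
    return sorted(tasks, key=lambda t: real_time_weight(t["factors"]), reverse=True)
```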
3. The cloud resource scheduling method based on the contention mechanism according to claim 1, wherein the step 3 further includes:
step 34: and obtaining a preset number of competitive tasks as the target tasks according to the real-time weights of the competitive tasks.
4. The cloud resource scheduling method based on a contention mechanism according to claim 2, wherein the influencing factors include at least one of: user attribute information corresponding to the competitive task, priority information of the region corresponding to the competitive task, priority information of the hardware configuration corresponding to the competitive task, and urgency information of the competitive task.
5. The cloud resource scheduling method based on the contention mechanism according to claim 4, wherein the user attribute information includes a competitiveness value of a user, the method further comprising:
step 5: obtaining characteristic information of users and adding a label to each user to obtain sample data, wherein the label identifies the competitiveness value of the user;
step 6: inputting the sample data into a neural network model for training to obtain a competitiveness evaluation model;
step 7: obtaining the competitiveness value according to the competitiveness evaluation model.
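As a stand-in for the neural network of steps 5 to 7, the sketch below fits a single linear neuron by stochastic gradient descent on labelled samples; the feature encoding, learning rate, and squared-error loss are all illustrative assumptions rather than the patent's actual model.

```python
def train_competitiveness_model(samples, epochs=200, lr=0.1):
    # step 6 stand-in: fit one linear neuron to (features, competitiveness) pairs
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = b + sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y  # squared-error gradient
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    # step 7: the returned callable yields a competitiveness value for a user
    return lambda x: b + sum(wi * xi for wi, xi in zip(w, x))
```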
6. The cloud resource scheduling method based on a contention mechanism according to claim 1, wherein the identification information includes: region identification information and hardware identification information;
the step 4 includes:
step 41: determining whether the inventory requirement is met according to the region identification information and the hardware identification information corresponding to the target task;
step 42: if not, not performing the cloud resource scheduling;
step 43: if yes, carrying out cloud resource scheduling according to the real-time weight corresponding to the target task, and updating resource records in a database after scheduling is started and after scheduling is completed.
7. The cloud resource scheduling method based on the contention mechanism according to claim 6, wherein the step 4 further includes:
step 44: and after the target task completes the cloud resource scheduling, a cleaning interface is called to release cloud resources, and the resource records in the database are updated.
8. The cloud resource scheduling method based on a contention mechanism according to claim 7, wherein the real-time weight is stored in the database and may be modified according to actual conditions before the competitive task is created and before it is scheduled;
or,
the real-time weight is fixed when the competitive task is created and remains unchanged thereafter.
9. The cloud resource scheduling method based on the contention mechanism according to claim 6, wherein said step 41 includes:
step 411: obtaining inventory information according to query parameters, wherein the query parameters include the region identification information, the hardware identification information and competition type identification information, and the inventory information includes the remaining inventory and the floating inventory;
step 412: and determining whether the inventory requirement is met according to the inventory information.
10. The contention-based cloud resource scheduling method according to claim 9, wherein the region identification information is used to identify regions, each of which has a plurality of nodes disposed therein, each of which includes a plurality of sub-regions;
the step 4 further includes:
step 45: if the target task designates one sub-region, acquiring the inventory information of that sub-region according to the sub-region identification information and the hardware identification information corresponding to the target task; if the inventory requirement is not met, querying the inventory information of the other sub-regions in the same region, and if the inventory requirement is met there, performing cloud resource scheduling in the corresponding other sub-region;
step 46: if the target task designates a plurality of sub-regions, performing cloud resource scheduling according to the matching degree between the target task and each sub-region.
11. The cloud resource scheduling method based on the contention mechanism according to claim 10, wherein said step 46 includes:
step 461: adding a label to each sub-region;
step 462: matching the labels of the plurality of sub-regions corresponding to the region identification information of the target task with the user information of the target task to obtain a matching degree;
step 463: sorting the corresponding sub-regions by matching degree from high to low, and querying the inventory information of the sub-regions in that order;
step 464: performing cloud resource scheduling according to the inventory information and the inventory requirement.
12. A cloud resource scheduling device based on a contention mechanism, comprising:
a refresh module M1: refreshing the task states of the created competitive tasks at intervals of a first preset time;
a search module M2: acquiring, at intervals of a second preset time, the competitive tasks whose task state is in progress, and finding out from them a plurality of competitive tasks that meet the processing conditions;
a processing module M3: obtaining a target task according to the real-time weights of the plurality of competitive tasks;
a scheduling module M4: performing cloud resource scheduling according to the identification information and the inventory requirement of the target task.
13. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the contention-based cloud resource scheduling method according to any of claims 1-11.
14. A computer storage medium storing computer executable instructions which when executed by a processor perform the contention mechanism based cloud resource scheduling method according to any one of claims 1 to 11.
CN202211265572.3A 2022-10-17 2022-10-17 Cloud resource scheduling method, device, equipment and medium based on competition mechanism Pending CN116107689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211265572.3A CN116107689A (en) 2022-10-17 2022-10-17 Cloud resource scheduling method, device, equipment and medium based on competition mechanism

Publications (1)

Publication Number Publication Date
CN116107689A (en) 2023-05-12

Family

ID=86260443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211265572.3A Pending CN116107689A (en) 2022-10-17 2022-10-17 Cloud resource scheduling method, device, equipment and medium based on competition mechanism

Country Status (1)

Country Link
CN (1) CN116107689A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination