CN112039709B - Resource scheduling method, device, equipment and computer readable storage medium
- Publication number
- CN112039709B, CN202010910247.2A
- Authority
- CN
- China
- Prior art keywords
- node
- list
- condition
- information
- resource
- Prior art date
- Legal status: Active
Classifications
- H04L41/0663—Performing the actions predefined by failover planning, e.g. switching to standby network elements
- H04L41/0631—Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
- H04L41/0803—Configuration setting
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The application provides a resource scheduling method, apparatus, device, and computer-readable storage medium. The method includes: storing state information of a plurality of node resources in a node resource pool into a database; determining a resource scheduling policy according to a target task, wherein the resource scheduling policy includes a first condition set based on target index parameter information and a second condition set based on target root node list information; listing child node information of node resources meeting the first condition in a first list, and listing child node information of node resources not meeting the first condition in a second list; listing child node information of node resources in the second list that meet the second condition in a third list; and comparing the third list with a preset node resource list and listing child node information present in both the third list and the preset node resource list in the first list.
Description
Technical Field
The application relates to the technical field of cloud computing, in particular to a resource scheduling method, device, equipment and a computer readable storage medium.
Background
A cloud computing platform generally refers to a system that provides computing, networking, and storage capabilities to users as services based on hardware and software resources. At present, many task-processing projects and resource-management scenarios rely on the scheduling function of a cloud computing platform, but the scheduling functions of commonly used cloud computing platforms are fragmented, and users are required to perform complex matching operations during resource scheduling, so more complex resource scheduling service requirements cannot be supported.
Disclosure of Invention
In view of this, embodiments of the present application provide a resource scheduling method, apparatus, device, and computer-readable storage medium to solve the problems in the related art, and the technical solutions are as follows:
in a first aspect, an embodiment of the present application provides a resource scheduling method, including:
storing state information of a plurality of node resources in a node resource pool into a database, wherein the state information of the node resources comprises root node information, child node information and index parameter information corresponding to the child nodes of the node resources;
determining a resource scheduling strategy according to a target task, wherein the target task comprises target index parameter information and target root node list information, and the resource scheduling strategy comprises a first condition set based on the target index parameter information and a second condition set based on the target root node list information;
listing child node information of node resources meeting the first condition in a first list, and listing child node information of node resources not meeting the first condition in a second list; listing child node information of node resources in the second list that meet the second condition in a third list; and comparing the third list with a preset node resource list, and listing child node information present in both the third list and the preset node resource list in the first list.
In an embodiment, the first condition comprises: and index parameter information corresponding to the child nodes in the node resources corresponds to the target index parameters.
In another embodiment, the second condition comprises: child node information in the node resources corresponds to root nodes in the target root node list.
In another embodiment, said listing child node information of node resources that do not meet said first condition in a second list comprises:
adding a plurality of child nodes under a designated root node to the second list if a first child node does not meet the first condition and the first child node corresponds to the designated root node,
and if the second child node does not meet the first condition and does not correspond to the designated root node, adding a plurality of child nodes under a plurality of root nodes in the database into the second list.
In another embodiment, the resource scheduling method further includes: receiving a node strategy adding or modifying instruction, selecting a child node needing to set a strategy, setting a first screening condition whether to start a white list for the child node needing to set the strategy, setting a second screening condition whether to correspond to the target index parameter for index parameter information corresponding to the child node needing to set the strategy, setting a sorting condition according to priority sorting for the child node needing to set the strategy, which accords with the first screening condition and the second screening condition, and generating a node strategy tree according to the first screening condition, the second screening condition and the sorting condition.
In another embodiment, the resource scheduling method further includes:
and returning child node information of the node resources which do not meet the second condition and simultaneously meet the node policy tree in the second list to the second list.
In another embodiment, before comparing the third list with a preset node resource list, the method further comprises:
and sequencing the child nodes in the third list according to the descending order of the priority.
In another embodiment, the resource scheduling method further includes:
if no effective child node exists in the third list, sending out alarm information; and/or,
if the child nodes in the third list and the preset node resource list are invalid child nodes, sending out alarm information.
In another embodiment, the resource scheduling method further includes:
and periodically monitoring the state of the node resources in the node resource pool, and updating the state information of the node resources in the database.
In a second aspect, an embodiment of the present application further provides a resource scheduling apparatus, including:
the resource storage module is used for storing the state information of a plurality of node resources in a node resource pool into a database, wherein the state information of the node resources comprises root node information and child node information of the node resources and index parameter information corresponding to the child nodes;
the resource scheduling system comprises a strategy determining module, a resource scheduling module and a resource scheduling module, wherein the strategy determining module is used for determining a resource scheduling strategy according to a target task, the target task comprises target index parameter information and target root node list information, and the resource scheduling strategy comprises a first condition set based on the target index parameter information and a second condition set based on the target root node list information;
the node processing module is used for listing the child node information of the node resource meeting the first condition into a first list and listing the child node information of the node resource not meeting the first condition into a second list; the node processing module is further configured to list child node information of the node resource meeting the second condition in the second list into a third list; the node processing module is further configured to compare the third list with a preset node resource list, and list child node information in the third list and the preset node resource list at the same time into the first list.
In a third aspect, an embodiment of the present application further provides an apparatus, including: a processor and a memory, the memory having stored therein instructions that are loaded and executed by the processor to implement the method as described above.
In a fourth aspect, the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the above-mentioned method.
The advantages or beneficial effects of the above technical solution at least include: according to the resource scheduling scheme of the embodiments of the present application, child node information of a plurality of node resources can be scheduled from the node resource pool, the acquired child node information of the node resources is screened through the resource scheduling policy and the preset resource list, and a list of the required node resource information is finally obtained.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is a flowchart of a resource scheduling method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a node resource structure model of a scheduling project according to an embodiment of the present application.
Fig. 3 is a flowchart of node resource scheduling according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a resource scheduling apparatus according to an embodiment of the present application.
Fig. 5 is a block diagram of an apparatus for implementing a resource scheduling method according to an embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Fig. 1 is a flowchart of a resource scheduling method according to an embodiment of the present application. As shown in fig. 1, the resource scheduling method may include the following processes:
step S1, storing the state information of a plurality of node resources in a node resource pool into a database, wherein the state information of the node resources comprises root node information and child node information of the node resources and index parameter information corresponding to the child nodes;
in the embodiment of the application, the node resources stored in the database in the node resource pool are changed in real time, the state of the node resources in the node resource pool is periodically monitored, and the state information of the node resources in the database is updated. Monitoring equipment (Monitor Agent) deployed by a near resource child node of an application program Interface (REST API) based on representational state transfer periodically monitors a plurality of node resources, selects resources related to a project scheduling policy, stores state information of the node resource pool including the node resources related to the project scheduling policy into a database, and sends and regularly updates monitoring information in the database, so that the real-time performance of data can be ensured. Specifically, the process of monitoring the node resources in real time specifically includes: the timing platform periodically calls a monitoring API (application programming interface) of the monitoring equipment, the monitoring equipment executes a back-end resource, such as an execution state query sent by Virtual Machine software (VMware), and sends the state information of the queried node resource to the scheduling system, and the scheduling system performs standard check on the state information of the node resource and updates the state information into a database. The state information of the node resources is sent according to the convention project resource hierarchical structure specification, and the project resource hierarchical structure is set in the registration project of the scheduling project in the database. The database provides an external REST API interface, the state information of the node resources in the database is called according to the definition of the relevant node resources in the interface parameters, and the state information of a plurality of node resources in the node resource pool of the scheduling item corresponding to the database is dynamically loaded.
In the embodiment of the present application, before step S1, the method further includes setting resource information related to the scheduling item in the database.
In an embodiment of the present application, the resource information related to the scheduling item includes the scheduling item, which determines the scheduling type, and a resource node list of the plurality of node resources in the scheduling item. The scheduling item specifies a deployment version and a scheduling type, and the scheduling type may be a Kernel-based Virtual Machine (KVM) or virtual machine software (VMware). The scheduling item includes a project name, a project resource hierarchical structure, a project scheduling policy, and a monitoring API (application programming interface); the resource node list includes node names, node real-time monitoring information, a node policy tree, the key of the project to which each node belongs, node monitoring addresses, a label cache, and the like. For the node real-time monitoring information, an API needs to be configured for periodic monitoring and updating based on the hierarchical tree structure of the actual resource pool, to ensure the real-time validity of the data. The node policy tree is used to set different screening policies for all child nodes and for the state information contained in the node real-time monitoring information.
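The registration data described above can be pictured with a pair of simple data classes. This is only a sketch under assumed field names; the patent does not prescribe a concrete schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ResourceNode:
    name: str                                   # node name
    monitor_address: str                        # node monitoring address
    project_key: str                            # key of the project the node belongs to
    realtime_info: Dict[str, float] = field(default_factory=dict)  # real-time monitoring info
    policy_tree: Optional[dict] = None          # node policy tree
    label_cache: List[str] = field(default_factory=list)           # label cache

@dataclass
class SchedulingProject:
    name: str                                   # project name
    scheduling_type: str                        # e.g. "KVM" or "VMware"
    resource_hierarchy: dict                    # project resource hierarchical structure
    scheduling_policy: dict                     # project scheduling policy
    monitor_api: str                            # monitoring API endpoint
    nodes: List[ResourceNode] = field(default_factory=list)        # resource node list
```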
Fig. 2 is a schematic diagram of a node resource structure model of a scheduling project according to an embodiment of the present application. The hierarchy is the same across the multiple node resources of the same scheduling project, and each level of child nodes refines the resources of the previous level, where the resources are the state information of the child nodes at each level. The state information of a child node is verified through a policy, which judges whether the child node is allowed to be placed in the target list.
The following describes a specific configuration flow of the node policy tree in detail:
In the embodiment of the application, a node policy adding or modifying instruction is received, and the monitoring API returns the child nodes monitored in real time together with their corresponding state information. A child node for which a policy needs to be set is selected; a first screening condition of whether a white list is enabled is set for that child node; a second screening condition of whether the index parameter information corresponding to that child node corresponds to the target index parameter is set; a sorting condition of sorting by priority is set for the child nodes that meet the first screening condition and the second screening condition; and a node policy tree is generated according to the first screening condition, the second screening condition, and the sorting condition. The user configuration needs to pass a normalization check before the node policy tree is generated. The node policy tree specifically includes: judgment criteria set for the state information, fusing (circuit-breaking) and screening criteria set for the child nodes, priority judgment criteria set for the target child nodes finally screened out under the node, whether white-list logic is enabled or disabled, and the like.
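A minimal sketch of how a single policy entry could be assembled from the two screening conditions and the sorting condition, including a simple normalization check of the user configuration, follows. The function name and dictionary layout are assumptions chosen for illustration, not structures defined by the patent.

```python
def build_node_policy_entry(child_node: str,
                            whitelist_enabled: bool,
                            target_metrics: dict,
                            priority: int) -> dict:
    """Assemble a policy entry for one child node from the two screening
    conditions and the priority-based sorting condition, after a simple
    normalization check of the user configuration."""
    # normalization check of the user configuration
    if not child_node:
        raise ValueError("a child node must be selected")
    if not isinstance(target_metrics, dict):
        raise ValueError("the target index parameters must be a mapping")
    if priority < 0:
        raise ValueError("priority must be non-negative")

    return {
        "node": child_node,
        "screening": {
            "whitelist_enabled": whitelist_enabled,  # first screening condition
            "target_metrics": dict(target_metrics),  # second screening condition
        },
        "ordering": {"priority": priority},          # sorting condition
    }

# Entries for several child nodes can be merged into one node policy tree,
# for example: tree = {e["node"]: e for e in entries}
```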
After the state information of the node resource is acquired, the resource scheduling policy needs to be confirmed, and the state information of the node resource is further screened through the resource scheduling policy, which is specifically described as follows:
step S2, receiving a target task issued by a user, and determining a resource scheduling policy according to the target task, wherein the target task includes target index parameter information and target root node list information, and the resource scheduling policy includes a first condition set based on the target index parameter information and a second condition set based on the target root node list information.
In the embodiment of the application, the target index parameter information includes a resource scheduling policy, the node policy tree is a default policy, the resource scheduling policy in the target task is merged with the node policy tree, and at this time, the node policy tree is updated to form an optimal policy of the target task.
Further, the first condition includes: the index parameter information corresponding to the child nodes in the node resources corresponds to the target index parameters. Specifically, in the embodiment of the present application, if the target index parameter includes a tag in a set-format character string, then when the index parameter information corresponding to a child node in the node resource includes the same tag as in the target index parameter, the child node information of the node resource meets the first condition; otherwise, the child node information of the node resource does not meet the first condition. Preferably, the set-format string is a string spliced from the fields scheduling type, node ID, user ID and/or buffering time. The user is allowed to set a label, and the same label is scheduled to the same scheduling object.
The second condition includes: child node information in the node resources corresponds to root nodes in the target root node list. Specifically, in the embodiment of the present application, a root node of the target root node list includes a target child node under a corresponding root node, and when child node information in the node resource is the target child node, the child node information of the node resource meets the second condition; otherwise, the child node information of the node resource does not meet the second condition.
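The two conditions can be expressed as small predicates. The sketch below assumes the tag is carried in a field named "tag" and that the target root node list maps each root node to its target child nodes; both assumptions are illustrative only.

```python
def meets_first_condition(child_metrics: dict, target_metrics: dict) -> bool:
    """First condition: the child's index parameter information carries the
    same tag as the target index parameter."""
    tag = target_metrics.get("tag")
    return tag is not None and child_metrics.get("tag") == tag

def meets_second_condition(child_node: str, target_root_list: dict) -> bool:
    """Second condition: the child node is one of the target child nodes
    under some root node of the target root node list."""
    return any(child_node in targets for targets in target_root_list.values())

# A tag spliced from scheduling type, node ID, user ID and buffering time:
example_tag = "|".join(["KVM", "node-17", "user-42", "300"])
```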
Step S3, listing child node information of the node resource meeting the first condition into a first list, and listing child node information of the node resource not meeting the first condition into a second list; listing the child node information of the node resource meeting the second condition in the second list into a third list; and comparing the third list with a preset node resource list, and listing the child node information in the third list and the preset node resource list into the first list.
In an embodiment of the present application, the child node information of the node resource that does not meet the first condition is listed in a second list, and it is further required to determine whether the child node information of the node resource corresponds to a designated root node:
if the first child node does not meet the first condition and the first child node corresponds to the designated root node, adding a plurality of child nodes under the designated root node into a second list; and if the second child node does not meet the first condition and does not correspond to the designated root node, adding a plurality of child nodes under a plurality of root nodes in the database into a second list.
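A hedged sketch of this branching rule follows; the helper name build_second_list and the children_by_root mapping are hypothetical, chosen only to illustrate the two cases.

```python
def build_second_list(children_by_root: dict, first_ok, designated_root=None) -> list:
    """Collect child nodes that fail the first condition into the second list.

    children_by_root maps each root node to its child nodes; first_ok(child)
    applies the first condition; designated_root is the designated root node,
    if any."""
    second_list = []
    for root, kids in children_by_root.items():
        for child in kids:
            if first_ok(child):
                continue  # goes to the first list elsewhere
            if root == designated_root:
                # the failing child corresponds to the designated root node:
                # add the child nodes under that designated root
                second_list.extend(children_by_root[designated_root])
            else:
                # otherwise add the child nodes under all root nodes in the database
                for other_kids in children_by_root.values():
                    second_list.extend(other_kids)
    # drop duplicates while preserving order
    return list(dict.fromkeys(second_list))
```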
In the embodiment of the application, the child node information of node resources in the second list that meet the second condition is listed in a third list, and, in combination with the configured resource scheduling policy, the third list that can be scheduled by the target task and is ordered by descending priority can be determined.
In the embodiment of the present application, child node information of node resources in the second list that does not meet the second condition but meets the node policy tree may also be returned to the second list. Meanwhile, in the process of returning the child node information of the node resources, the child nodes in the user white list are updated to the target child nodes under the corresponding root nodes.
In the embodiment of the application, after the screening of the child node information of all the node resources in the second list is completed, the third list is further screened according to the preset node resource list. The preset node resource list is a scheduling target range designated by a user, the user is allowed to designate the preset node resource list, child nodes of the preset node resource list are verified in the third list, the third list is adjusted, and the first list meeting the requirements of the user is returned.
In an embodiment of the application, before comparing the third list with a preset node resource list, the method further includes: and sequencing the child nodes in the third list according to the descending order of the priority.
The process of screening the third list according to the preset node resource list comprises the following steps:
Child node information present in both the third list and the preset node resource list is listed in the first list and sorted according to a priority sorting policy; the child nodes present in both the third list and the preset node resource list are effective scheduling targets. The priority sorting policy may be to sort in descending order of priority.
In the embodiment of the application, if no effective child node exists in the third list and no effective scheduling target exists, notifying according to an early warning strategy and sending out warning information; and/or if the child nodes in the third list and the preset node resource list are invalid child nodes at the same time, namely the third list and the preset node resource list have no effective scheduling target, notifying according to an early warning strategy and sending out warning information. After the alarm information is sent out, the back-end personnel can be reminded to maintain the node resource pool.
In the embodiment of the present application, the third list may also be sorted directly according to the priority sorting policy without screening the third list through the preset node resource list, and the child node information of the node resources in the third list is returned to the first list.
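The comparison with the preset node resource list, the descending priority sort, and the alarm cases can be combined in one small helper. The sketch below is an illustration under assumed names (finalize_first_list, priorities) rather than the patent's implementation.

```python
def finalize_first_list(third_list: list, priorities: dict, preset_list=None) -> list:
    """Sort the third list by descending priority, optionally intersect it with
    the preset node resource list, and raise an alarm when no effective
    scheduling target remains."""
    ordered = sorted(third_list, key=lambda n: priorities.get(n, 0), reverse=True)
    if not ordered:
        raise RuntimeError("alarm: no effective child node in the third list")

    if preset_list is None:
        return ordered  # no preset list: the sorted third list becomes the first list

    allowed = set(preset_list)
    first_list = [n for n in ordered if n in allowed]
    if not first_list:
        raise RuntimeError("alarm: no effective scheduling target in both lists")
    return first_list
```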
Various implementations of the embodiments of the present application are described above; a specific processing procedure is described below by way of a concrete example. Fig. 3 is a flowchart of node resource scheduling according to an embodiment of the present application. Referring to fig. 3, the specific process is as follows (a code sketch interpreting this flow is given after step l)):
a) judging whether target index parameter information in the target task contains a label, if so, checking whether a child node in the node resource contains the label, and if so, entering the step l);
b) judging whether the child node in the node resources corresponds to a designated root node: if so, pushing (push) the designated root node into a subnode queue (subnode queue); if not, pushing all the root nodes in the node resource pool into the subnode queue;
c) if the subnode queue is not empty, executing step d); if the subnode queue is empty, executing step g);
d) traversing child nodes in the subnode queue according to the First-in First-out (FIFO) sequence;
e) if the child node is a target child node, pushing (push) it into a candidate result queue (result queue); otherwise, judging whether fusing (circuit breaking) is enabled for the child node: if fusing is enabled, judging whether the child node is in the white list, and discarding the child node if it is not; if the child node is in the white list, further judging whether a policy exists; if a policy exists and the node policy tree contains the child node, traversing the plurality of screening policies in the node policy tree, and if all the screening policies pass, adding the child node into the subnode queue; otherwise, discarding the child node;
f) d-e is circulated until the subnode queue is empty;
g) checking whether a valid child node exists in the result queue to be selected, if not, giving an alarm, and ending the process; if the valid child nodes exist in the to-be-selected result queue, entering the step h);
h) judging whether a preset node resource list is specified: if so, entering step i); if not, entering step j);
i) intersecting the preset node resource list with the candidate result queue to form a new result queue, and judging whether the new result queue is valid; if valid, entering step j); if invalid, raising an alarm and ending the process;
j) sorting the valid result queue according to the priority sorting policy, where the sorting policy is derived from the node policy tree;
k) updating the valid result queue into the target cache result;
l) returning the cache result to the target range, and ending the process.
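The steps a) to l) can be read as a single FIFO traversal. The sketch below is one possible interpretation of Fig. 3: in particular, the fusing/white-list branch and the re-queueing in step e) are paraphrased, and every field name (label_cache, fusing_enabled, target_children, and so on) is an assumption, not an identifier from the patent.

```python
from collections import deque

def schedule(pool: dict, task: dict, preset_list=None) -> list:
    """One reading of the flow of Fig. 3. `pool` describes the node resource
    pool; `task` carries the target label, designated root node, target child
    nodes, node policy tree, white list and priorities."""
    # a) label short-circuit: the same label is scheduled to the same target
    label = task.get("label")
    if label and label in pool.get("label_cache", {}):
        return pool["label_cache"][label]                      # step l)

    # b) seed the subnode queue
    queue = deque([task["designated_root"]] if task.get("designated_root")
                  else pool["roots"])

    result = []                                                # candidate result queue
    policy_tree = task.get("policy_tree", {})

    # c)-f) traverse the subnode queue in FIFO order
    while queue:
        node = queue.popleft()                                 # d)
        if node in task["target_children"]:                    # e) target child node
            result.append(node)
            continue
        if pool["fusing_enabled"].get(node, False) and node not in task.get("whitelist", ()):
            continue                                           # fused and not whitelisted: discard
        checks = policy_tree.get(node)
        if checks and all(check(node) for check in checks):    # all screening policies pass
            queue.extend(pool["children"].get(node, ()))       # keep walking down the hierarchy
        # otherwise the node is discarded

    # g) alarm if there is no valid candidate
    if not result:
        raise RuntimeError("alarm: no valid child node to select")

    # h)-i) optional screening against the preset node resource list
    if preset_list is not None:
        result = [n for n in result if n in set(preset_list)]
        if not result:
            raise RuntimeError("alarm: the new result queue is invalid")

    # j) descending priority sort, with the ordering taken from the policy tree
    result.sort(key=lambda n: task["priorities"].get(n, 0), reverse=True)

    # k)-l) update the target cache and return it as the scheduling range
    if label:
        pool.setdefault("label_cache", {})[label] = result
    return result
```

In this reading, a node that passes all of its screening policies contributes its sub-nodes back into the subnode queue, so the traversal walks down the resource hierarchy until only target child nodes remain in the candidate result queue.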
After the scheduling process is completed, the child node information of the node resources that pass verification by the resource scheduling policy and the node policy tree, are located in the preset node resource list, and are effective scheduling targets is obtained; this child node information forms the first list, and the user can schedule the resources in the first list for service processing.
Based on the foregoing resource scheduling method, the present application further provides a resource scheduling apparatus. Fig. 4 is a schematic structural diagram of the resource scheduling apparatus according to an embodiment of the present application. Referring to fig. 4, the resource scheduling apparatus 1 includes:
a resource storing module 100, configured to store state information of multiple node resources in a node resource pool into a database, where the state information of the node resources includes root node information and child node information of the node resources, and index parameter information corresponding to the child nodes;
a policy determining module 200, configured to determine a resource scheduling policy according to a target task, where the target task includes target index parameter information and target root node list information, and the resource scheduling policy includes a first condition set based on the target index parameter information and a second condition set based on the target root node list information;
a node processing module 300, configured to list child node information of the node resource meeting the first condition into a first list, and list child node information of the node resource not meeting the first condition into a second list; the node processing module 300 is further configured to list child node information of the node resource meeting the second condition in the second list into a third list; the node processing module 300 is further configured to compare the third list with a preset node resource list, and list child node information in the third list and the preset node resource list at the same time into the first list.
The first condition includes: and index parameter information corresponding to the child nodes in the node resources corresponds to the target index parameters.
The second condition includes: child node information in the node resources corresponds to root nodes in the target root node list.
The node processing module 300 includes:
a first adding unit 310, configured to add a plurality of child nodes under a designated root node to the second list if a first child node does not meet the first condition and the first child node corresponds to the designated root node,
a second adding unit 320, configured to add, to the second list, a plurality of child nodes under the plurality of root nodes in the database if a second child node does not meet the first condition and the second child node does not correspond to the specified root node.
The resource scheduling apparatus 1 further includes:
the policy tree configuration module 400 is configured to receive a node policy adding or modifying instruction, select a child node to which a policy needs to be set, set a first screening condition on whether a white list is enabled for the child node to which the policy needs to be set, set a second screening condition on whether index parameter information corresponding to the child node to which the policy needs to be set corresponds to the target index parameter, set a sorting condition according to priority sorting for the child node to which the policy needs to be set that meets the first screening condition and the second screening condition, and generate a node policy tree according to the first screening condition, the second screening condition, and the sorting condition.
The node processing module 300 is further configured to return child node information of the node resource that does not meet the second condition and meets the node policy tree in the second list to the second list.
The node processing module 300 is further configured to sort the child nodes in the third list according to a descending order of priority.
The resource scheduling apparatus 1 further includes:
an alarm module 500, configured to send an alarm message when no valid child node exists in the third list; and/or sending out alarm information under the condition that the child nodes in the third list and the preset node resource list are invalid child nodes at the same time.
The resource scheduling apparatus 1 further includes:
a monitoring module 600, configured to periodically monitor a state of a node resource in the node resource pool;
a resource updating module 700, configured to update the state information of the node resource in the database.
The functions of the modules in the embodiment of the present application may refer to the corresponding descriptions in the above method, and are not described herein again.
Fig. 5 is a block diagram of an apparatus for implementing a resource scheduling method according to an embodiment of the present application. As shown in fig. 5, the apparatus includes: a memory 510 and a processor 520, the memory 510 having stored therein computer programs that are executable on the processor 520. The processor 520, when executing the computer program, implements the resource scheduling method in the above embodiments. The number of the memory 510 and the processor 520 may be one or more.
The apparatus further comprises:
the communication interface 530 is used for communicating with an external device to perform data interactive transmission.
If the memory 510, the processor 520, and the communication interface 530 are implemented independently, the memory 510, the processor 520, and the communication interface 530 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
Optionally, in an implementation, if the memory 510, the processor 520, and the communication interface 530 are integrated on a chip, the memory 510, the processor 520, and the communication interface 530 may complete communication with each other through an internal interface.
Embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the program is executed by a processor, the computer program implements the method provided in the embodiments of the present application.
The embodiment of the present application further provides a chip, where the chip includes a processor, and is configured to call and execute the instruction stored in the memory from the memory, so that the communication device in which the chip is installed executes the method provided in the embodiment of the present application.
An embodiment of the present application further provides a chip, including: the system comprises an input interface, an output interface, a processor and a memory, wherein the input interface, the output interface, the processor and the memory are connected through an internal connection path, the processor is used for executing codes in the memory, and when the codes are executed, the processor is used for executing the method provided by the embodiment of the application.
It should be understood that the processor may be a Central Processing Unit (CPU), other general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or any conventional processor or the like. It is noted that the processor may be an advanced reduced instruction set machine (ARM) architecture supported processor.
Further, optionally, the memory may include a read-only memory and a random access memory, and may further include a nonvolatile random access memory. The memory may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may include a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available. For example, Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), Enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the present application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. And the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the above method embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (16)
1. A method for scheduling resources, comprising:
storing state information of a plurality of node resources in a node resource pool into a database, wherein the state information of the node resources comprises root node information, child node information and index parameter information corresponding to the child nodes of the node resources;
determining a resource scheduling strategy according to a target task, wherein the target task comprises target index parameter information and target root node list information, and the resource scheduling strategy comprises a first condition set based on the target index parameter information and a second condition set based on the target root node list information;
listing the child node information of the node resource meeting the first condition into a first list, and listing the child node information of the node resource not meeting the first condition into a second list; listing the child node information of the node resource meeting the second condition in the second list into a third list; comparing the third list with a preset node resource list, and listing child node information in the third list and the preset node resource list at the same time into the first list;
wherein the second condition comprises: the root nodes of the target root node list comprise target child nodes under the corresponding root nodes, and when the child node information in the node resources is the target child nodes, the child node information of the node resources meets a second condition;
the listing of child node information of node resources that do not meet the first condition into a second list includes:
if the first child node does not meet the first condition and the first child node corresponds to the designated root node, adding a plurality of child nodes under the designated root node to the second list;
if the second child node does not meet the first condition and does not correspond to the designated root node, adding a plurality of child nodes under a plurality of root nodes in the database into a second list;
after the information of the child nodes of the node resources meeting the second condition in the second list is listed in a third list, the method further includes:
and determining a third list which can be dispatched by the target task and is decreased according to the priority by combining the configured resource dispatching strategy.
2. The method according to claim 1,
the first condition includes: and index parameter information corresponding to the child nodes in the node resources corresponds to the target index parameters.
3. The method for scheduling resources according to claim 1, further comprising: receiving a node strategy adding or modifying instruction, selecting a child node needing to set a strategy, setting a first screening condition whether to start a white list for the child node needing to set the strategy, setting a second screening condition whether to correspond to the target index parameter for index parameter information corresponding to the child node needing to set the strategy, setting a sorting condition according to priority sorting for the child node needing to set the strategy, which accords with the first screening condition and the second screening condition, and generating a node strategy tree according to the first screening condition, the second screening condition and the sorting condition.
4. The method for scheduling resources according to claim 3, further comprising:
and returning child node information of the node resources which do not meet the second condition and simultaneously meet the node policy tree in the second list to the second list.
5. The method of claim 1, wherein before comparing the third list with a preset list of node resources, the method further comprises:
and sequencing the child nodes in the third list according to the descending order of the priority.
6. The method for scheduling resources according to claim 1, further comprising:
if no effective child node exists in the third list, sending out alarm information; and/or,
if the child nodes in the third list and the preset node resource list are invalid child nodes, sending out alarm information.
7. The method for scheduling resources according to claim 1, further comprising:
and periodically monitoring the state of the node resources in the node resource pool, and updating the state information of the node resources in the database.
8. A resource scheduling apparatus, comprising:
the resource storage module is used for storing the state information of a plurality of node resources in a node resource pool into a database, wherein the state information of the node resources comprises root node information, child nodes and index parameter information corresponding to the child nodes of the node resources;
the resource scheduling system comprises a strategy determining module, a resource scheduling module and a resource scheduling module, wherein the strategy determining module is used for determining a resource scheduling strategy according to a target task, the target task comprises target index parameter information and target root node list information, and the resource scheduling strategy comprises a first condition set based on the target index parameter information and a second condition set based on the target root node list information;
the node processing module is used for listing the child node information of the node resource meeting the first condition into a first list and listing the child node information of the node resource not meeting the first condition into a second list; the node processing module is further configured to list child node information of the node resource meeting the second condition in the second list into a third list; the node processing module is further configured to compare the third list with a preset node resource list, and list child node information in the third list and the preset node resource list at the same time into the first list;
wherein the second condition comprises: the root nodes of the target root node list comprise target child nodes under the corresponding root nodes, and when the child node information in the node resources is the target child nodes, the child node information of the node resources meets a second condition;
the node processing module is used for adding a plurality of sub-nodes under the appointed root node into a second list if the first sub-node does not accord with the first condition and the first sub-node corresponds to the appointed root node when the sub-node information of the node resource which does not accord with the first condition is listed into the second list;
if the second child node does not meet the first condition and does not correspond to the designated root node, adding a plurality of child nodes under a plurality of root nodes in the database into a second list;
the resource scheduling device is further configured to: and determining a third list which can be dispatched by the target task and is decreased according to the priority by combining the configured resource dispatching strategy.
9. The resource scheduling apparatus of claim 8,
the first condition includes: and index parameter information corresponding to the child nodes in the node resources corresponds to the target index parameters.
10. The apparatus for scheduling resources according to claim 8, further comprising:
the strategy tree configuration module is used for receiving a node strategy adding or modifying instruction, selecting a child node needing to be set with a strategy, setting a first screening condition whether to start a white list for the child node needing to be set with the strategy, setting a second screening condition whether to correspond to the target index parameter for the index parameter information corresponding to the child node needing to be set with the strategy, setting a sorting condition according to priority sorting for the child node needing to be set with the strategy, which accords with the first screening condition and the second screening condition, and generating a node strategy tree according to the first screening condition, the second screening condition and the sorting condition.
11. The apparatus according to claim 10, wherein the node processing module is further configured to return child node information of the node resource that does not meet the second condition and meets the node policy tree in the second list to the second list.
12. The apparatus of claim 8, wherein the node processing module is further configured to sort the child nodes in the third list in descending order of priority.
13. The apparatus for scheduling resources according to claim 8, further comprising:
the warning module is used for sending warning information under the condition that no effective child node exists in the third list; and/or sending out alarm information under the condition that the child nodes in the third list and the preset node resource list are invalid child nodes at the same time.
14. The apparatus for scheduling resources according to claim 8, further comprising:
the monitoring module is used for periodically monitoring the state of the node resources in the node resource pool;
and the resource updating module is used for updating the state information of the node resources in the database.
15. An apparatus, comprising: a processor and a memory, the memory having stored therein instructions that are loaded and executed by the processor to implement the method of any of claims 1 to 7.
16. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010910247.2A CN112039709B (en) | 2020-09-02 | 2020-09-02 | Resource scheduling method, device, equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112039709A (en) | 2020-12-04 |
CN112039709B (en) | 2022-01-25 |
Family
ID=73591206
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010910247.2A Active CN112039709B (en) | 2020-09-02 | 2020-09-02 | Resource scheduling method, device, equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112039709B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112489746B (en) * | 2020-12-08 | 2024-07-05 | 深圳平安智慧医健科技有限公司 | Task pushing method and device for data management, electronic equipment and storage medium |
CN112925812A (en) * | 2021-03-01 | 2021-06-08 | 武汉蔚来能源有限公司 | Data access processing method and device and computer storage medium |
CN113988751A (en) * | 2021-10-26 | 2022-01-28 | 北京沃东天骏信息技术有限公司 | Method for requesting resource, electronic device and storage medium |
CN114138484A (en) * | 2021-11-30 | 2022-03-04 | 中国电信股份有限公司 | Resource allocation method, device and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7810098B2 (en) * | 2004-03-31 | 2010-10-05 | International Business Machines Corporation | Allocating resources across multiple nodes in a hierarchical data processing system according to a decentralized policy |
CN102984012A (en) * | 2012-12-10 | 2013-03-20 | 青岛海信传媒网络技术有限公司 | Management method and system for service resources |
CN107038069A (en) * | 2017-03-24 | 2017-08-11 | 北京工业大学 | Dynamic labels match DLMS dispatching methods under Hadoop platform |
CA2981842A1 (en) * | 2017-03-01 | 2018-09-01 | The Toronto-Dominion Bank | Resource allocation based on resource distribution data from child node |
CN110362391A (en) * | 2019-06-12 | 2019-10-22 | 北京达佳互联信息技术有限公司 | Resource regulating method, device, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103701894A (en) * | 2013-12-25 | 2014-04-02 | 浙江省公众信息产业有限公司 | Method and system for dispatching dynamic resource |
CN107291545B (en) * | 2017-08-07 | 2019-12-10 | 星环信息科技(上海)有限公司 | Task scheduling method and device for multiple users in computing cluster |
Also Published As
Publication number | Publication date |
---|---|
CN112039709A (en) | 2020-12-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |