CN113391891B - Load balancing resource scheduling method based on Rete and character string pattern matching algorithm - Google Patents

Load balancing resource scheduling method based on Rete and character string pattern matching algorithm

Info

Publication number
CN113391891B
CN113391891B (application CN202110551965.XA)
Authority
CN
China
Prior art keywords
resource
node
nodes
available
called
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110551965.XA
Other languages
Chinese (zh)
Other versions
CN113391891A (en)
Inventor
夏飞
袁国泉
赵然
冒佳明
商林江
赵新建
范磊
张颂
王翀
张利
许良杰
陈璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Jiangsu Electric Power Co Ltd
Global Energy Interconnection Research Institute
Anhui Jiyuan Software Co Ltd
Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
State Grid Jiangsu Electric Power Co Ltd
Global Energy Interconnection Research Institute
Anhui Jiyuan Software Co Ltd
Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Jiangsu Electric Power Co Ltd, Global Energy Interconnection Research Institute, Anhui Jiyuan Software Co Ltd, Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd filed Critical State Grid Jiangsu Electric Power Co Ltd
Priority to CN202110551965.XA priority Critical patent/CN113391891B/en
Publication of CN113391891A publication Critical patent/CN113391891A/en
Application granted granted Critical
Publication of CN113391891B publication Critical patent/CN113391891B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/90335 Query processing
    • G06F 16/90344 Query processing by using string matching techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a load balancing resource scheduling method based on the Rete algorithm and a character string pattern matching algorithm. When a Pod (for example one running Docker containers) is to be scheduled onto a suitable node of a Kubernetes cluster, a group of available nodes is first screened out with the character string pattern matching algorithm, and the Rete algorithm then matches the resource to the optimal node. By finding all the objects and policies that match each pattern through network screening, the method fundamentally improves the efficiency of policy matching and thereby shortens the resource calling time.

Description

Load balancing resource scheduling method based on Rete and character string pattern matching algorithm
Technical Field
The invention relates to the technical field of resource scheduling, and in particular to a load balancing resource scheduling method based on the Rete algorithm and a character string pattern matching algorithm.
Background
Kubernetes is an open-source platform for the automatic deployment, scaling, operation and maintenance of container clusters. It responds quickly and effectively to user demands, deploys applications rapidly and predictably, scales them on demand, integrates new application functions seamlessly, saves resources by optimizing the use of hardware, and provides a complete open-source solution for container orchestration and management.
A Pod is made up of one or more containers (e.g., Docker containers) that share storage, network, UTS and PID namespaces, together with the specification for running those containers. In Kubernetes, the Pod is the smallest atomic unit that can be scheduled. Briefly, a Pod is a collection of containers that share network and storage (Kubernetes gives the containers a shared set of namespaces in place of each container's own namespaces), so the containers inside a Pod can communicate with one another through localhost.
The Rete algorithm is an efficient method for comparing a large set of patterns with a large set of objects: it finds all the objects and rules that match each pattern by screening them through a network. A Rete network is built to perform the pattern matching, and the efficiency of system pattern matching is improved by exploiting the temporal redundancy and structural similarity of the rules. The Rete algorithm is mature and widely used.
Chinese patent publication No. CN111143059A, "Improved Kubernetes resource scheduling method", proposes an improved resource scheduling scheme with two main parts: (1) a quality-of-service-optimized scheduling strategy, BalanceQoSPriority, that improves the platform's handling of insufficient allocation prediction for QoS classes; and (2) weighting the pre-screening of the pre-selection algorithm and the Priority preference stage so that both optimization targets are achieved together. That patent, however, focuses on scoring and ranking the nodes. Chinese patent publication No. CN108108223A, "Kubernetes-based container management platform", provides a container management platform based on Kubernetes that manages the underlying Alibaba Cloud resources and container resources through a unified platform and manages user information by connecting the platform to a unified authority management system. Existing resource scheduling methods concentrate on the accuracy of service matching and neglect its speed; as manufacturing resources grow rapidly, the matching efficiency of traditional service matching methods falls sharply and can hardly meet the demand.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a load balancing resource scheduling method based on the Rete algorithm and a character string pattern matching algorithm, which fundamentally improves the efficiency of resource calling and shortens the resource calling time.
The technical scheme adopted by the invention is as follows:
the invention provides a load balancing resource scheduling method based on the Rete algorithm and a character string pattern matching algorithm, suitable for scheduling a Pod to the nodes of a Kubernetes cluster and comprising the following steps:
forming a node main string according to the node resource table, and forming a pattern string to be matched according to the Pod resource scheduling request;
matching the pattern string to be matched against the node main string to obtain available nodes;
selecting the optimal node from the acquired available nodes using the Rete algorithm;
deploying the Pod to the optimal node.
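For illustration only, the following Python sketch shows one possible wiring of these four steps, including the case, described further below, where no node matches and the request waits for the next scheduling period. It is not part of the patent or of any Kubernetes API; all function names and the toy stand-ins in the usage example are assumptions made for the sketch.

```python
# Illustrative sketch only: one possible wiring of the four steps above.
# All names (schedule_loop, screen_available, ...) are assumptions for this
# example and are not part of Kubernetes or of the patented implementation.
import collections

def schedule_loop(pending_requests, screen_available, select_best, deploy):
    """Run one scheduling period over the pending Pod resource scheduling requests."""
    still_waiting = collections.deque()
    while pending_requests:
        pod = pending_requests.popleft()
        available = screen_available(pod)      # step 2: string pattern matching screens nodes
        if not available:
            still_waiting.append(pod)          # no match: wait for the next scheduling period
            continue
        best = select_best(available, pod)     # step 3: Rete-based selection of the optimal node
        deploy(pod, best)                      # step 4: deploy the Pod to the optimal node
    return still_waiting

# Usage with toy stand-ins for the three stages (step 1, building the strings,
# is folded into screen_available here).
pods = collections.deque(["pod-a", "pod-b"])
waiting = schedule_loop(
    pods,
    screen_available=lambda pod: ["node1", "node2"] if pod == "pod-a" else [],
    select_best=lambda nodes, pod: nodes[0],
    deploy=lambda pod, node: print(pod, "->", node),
)
print(list(waiting))                           # ['pod-b'] waits for the next period
```

The string screening and Rete selection stages are sketched in more detail later in the description.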
Further,
a resource table is maintained for each node by the scheduler, and each resource table item records one item of the node's resource attribute state data.
Further, forming the node main string according to the node resource table includes:
converting each item of the node's resource attribute state data into a predefined character, and arranging the characters in the fixed order of the resource table items to form the main string of each node.
Further, forming the pattern string to be matched according to the Pod resource scheduling request includes:
converting each resource attribute request in the Pod resource scheduling request into a corresponding character, where a resource attribute request is denoted x_i, x being the resource attribute request and the subscript i being the position of the resource attribute in the resource table;
converting all the resource attribute requests into characters and arranging them in the order of the resource table items to form the pattern string to be matched.
Further, matching the pattern string to be matched against the node main string to obtain the available nodes includes:
locating the position of a character in the node main string by the subscript of that character in the pattern string to be matched;
comparing the character in the pattern string to be matched with the character at the same position in the node main string;
if every character in the pattern string to be matched agrees with the node main string, the node is an available node, and it is output and stored in the list of available nodes.
Further,
if no available node is matched, the Pod resource scheduling request waits, and available nodes are searched for again in the next scheduling period.
Further, selecting the optimal node from the acquired available nodes using the Rete algorithm includes:
importing the available nodes into a fact set and constructing a discrimination network;
matching the facts, i.e., the available nodes in the fact set;
executing the join operation until the matching of one rule is completed.
Further, matching a fact means judging the fact's attributes: if the constraint condition is met, the fact continues to be propagated to the subsequent node; otherwise further propagation is abandoned and the matching process stops;
the attributes of the fact are denoted by the codes A, B and C:
A=0 indicates that after the resource is called into available node k, the node would be fully loaded, so node k is not usable;
A=1 indicates that after the resource is called into available node k, the node is not fully loaded, so node k is usable;
B=0 indicates that, given the nodes remain available after the resource is called in, all the available nodes need to queue and there is no idle node;
B=1 indicates that, given the nodes remain available after the resource is called in, none of the available nodes needs to queue;
B=2 indicates that, given the nodes remain available after the resource is called in, some of the available nodes need to queue and the remaining available nodes do not;
C=0 indicates that the weight coefficient of the resource to be called is low;
C=1 indicates that the weight coefficient of the resource to be called is high and the resource is called preferentially;
the constraint condition is: the node is not fully loaded.
Further, the rules are as follows:
rule 1: A=0; after the resource is called into node k, the node would be fully loaded, so node k is not usable;
rule 2: A=1, B=0, C=0; the weight coefficient of the resource to be called is low and there is no idle node, so all resources queue in order of weight priority from high to low, and the resource with the low weight coefficient is called after the resources with higher weight coefficients have been called;
rule 3: A=1, B=1, C=0; the weight coefficient of the resource to be called is low and all nodes are idle, so the resources are called to the idle nodes in order of weight priority from high to low;
rule 4: A=1, B=2, C=0; the weight coefficient of the resource to be called is low, some nodes need to queue and some nodes are idle, so the resource with the low weight coefficient waits for a queuing node and is called once that node finishes calling its current resource and becomes idle;
rule 5: A=1, B=0, C=1; the weight coefficient of the resource to be called is high and there is no idle node, so all resources queue in order of weight priority from high to low, and the resource with the high weight coefficient is called preferentially;
rule 6: A=1, B=1, C=1; the weight coefficient of the resource to be called is high and all nodes are idle, so the idle nodes call the resources in order of weight priority from high to low, and the resource with the high weight coefficient is called preferentially;
rule 7: A=1, B=2, C=1; the weight coefficient of the resource to be called is high, some nodes need to queue and some nodes are idle, so the resource with the high weight coefficient is preferentially called to an idle node.
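As a hedged illustration of how the codes A, B, C and rules 1 to 7 above fit together, the Python sketch below encodes them as a simple lookup. The action strings paraphrase the rules, and the names RULES and match_rule are assumptions made for the example, not the patented implementation.

```python
# A sketch that encodes the codes A, B, C and the seven calling rules above as
# a lookup. The textual actions paraphrase the rules; all names are assumptions.

RULES = {
    # (A, B, C) -> (rule number, action)
    (0, None, None): (1, "node would be fully loaded after the call: node unavailable"),
    (1, 0, 0): (2, "no idle node, low weight: queue behind higher-weight resources"),
    (1, 1, 0): (3, "all nodes idle, low weight: call in weight-priority order, high to low"),
    (1, 2, 0): (4, "some nodes queued, low weight: wait for a queued node to become idle"),
    (1, 0, 1): (5, "no idle node, high weight: queue, but call this resource first"),
    (1, 1, 1): (6, "all nodes idle, high weight: call in weight-priority order, high to low"),
    (1, 2, 1): (7, "some nodes idle, high weight: call this resource to an idle node first"),
}

def match_rule(a, b, c):
    """Return the rule activated by the codes A, B, C for one candidate node."""
    if a == 0:                      # rule 1 fires regardless of B and C
        return RULES[(0, None, None)]
    return RULES[(a, b, c)]

print(match_rule(1, 2, 1))   # -> (7, 'some nodes idle, high weight: ...')
```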
The beneficial effects of the invention are as follows:
the method of the invention uses a character string pattern matching algorithm to reduce the matching times of the pattern string and the main string as much as possible so as to achieve the purpose of quick matching, a batch of available nodes are quickly matched, and the optimal nodes are matched by using a Rete algorithm for the screened available nodes. By finding out all the objects and rules matching each mode through the network screening method, the policy matching efficiency can be fundamentally improved, and the resource calling time is shortened.
Drawings
FIG. 1 is a flow chart of a load balancing resource scheduling method based on a Rete algorithm and a character string pattern matching algorithm in the invention;
FIG. 2 is a schematic diagram of a node invocation rule in the present invention.
Detailed Description
The invention is further described below. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
The invention provides a load balancing resource scheduling method based on a Rete algorithm and a character string pattern matching algorithm.
Referring to fig. 1, the specific implementation process of the present invention is as follows:
forming a node main string according to the node resource table, and forming a pattern string to be matched according to the Pod resource scheduling request;
matching the pattern string against the node main string with the character string pattern matching algorithm to obtain the available nodes;
selecting the optimal node from the acquired available nodes using the Rete algorithm;
deploying the Pod to the optimal node.
In the embodiment of the invention, the scheduler maintains a resource table for each node and receives the resource scheduling requests. The resource table records the node's resource attribute state data, specifically: whether the GPU, the CPU and the memory are fully loaded, and whether the ports are usable.
In the embodiment of the invention, each item of a node's resource attribute state data is converted into a corresponding character, and the characters are arranged in the fixed order of the resource table items to form the main string s of each node.
In the embodiment of the invention, the Pod resource scheduling request comprises several resource attribute requests. Each resource attribute request is converted into a corresponding character and the position of each resource attribute in the table is recorded, e.g. x_i, where x is the resource attribute request and the subscript i is the position of the attribute in the resource table items. All the resource attribute requests are converted into corresponding characters and arranged in the order of the resource table items to form the pattern string t to be matched.
In the embodiment of the invention, the pattern string t is matched against the node main string s; if the main string contains all the resources in the pattern string, the node is output and stored in the list of available nodes. If no node is available, the Pod resource scheduling request waits and searches for available nodes again in the next scheduling period.
In the embodiment of the invention, during the string matching that screens the available nodes, the position of a character in the main string is found from the subscript of that character in the pattern string, the character in the pattern string is then compared with the character at that position in the main string, and if every character in the pattern string agrees with the main string, the node is an available node.
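The construction of the main string s and the pattern string t, and the position-by-position comparison just described, can be sketched as follows in Python. The '1'/'0' character encoding, the field names and the field order are assumptions made for the example; the embodiment only requires predefined characters arranged in the fixed order of the resource table items.

```python
# A minimal sketch of the string construction and matching described above.
# The encoding and field names are assumptions for the example.

FIELDS = ["gpu_free", "cpu_free", "mem_free", "port_free"]   # fixed resource-table order

def node_main_string(state):
    """Convert one node's resource attribute states into its main string s."""
    return "".join("1" if state[f] else "0" for f in FIELDS)

def pod_pattern_string(requested):
    """Convert a Pod's resource attribute requests into (index i, character x_i) pairs."""
    return [(FIELDS.index(f), "1") for f in requested]

def is_available(pattern, main):
    """Compare each pattern character with the character at the same position in s."""
    return all(main[i] == ch for i, ch in pattern)

nodes = {
    "node1": {"gpu_free": False, "cpu_free": True, "mem_free": True,  "port_free": True},
    "node2": {"gpu_free": True,  "cpu_free": True, "mem_free": False, "port_free": True},
}
pattern = pod_pattern_string(["cpu_free", "mem_free"])      # the Pod needs CPU and memory
available = [n for n, st in nodes.items() if is_available(pattern, node_main_string(st))]
print(available)    # -> ['node1']; node2 fails because its memory attribute is '0'
```

Only the positions requested in the pattern string are compared, which is what keeps the number of comparisons between the pattern string and the main string small.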
In the embodiment of the invention, the available nodes screened out by the fast matching over all nodes are output and stored in a list, and the information about the available nodes in the list is imported into the fact set to construct the discrimination network.
All the facts E are traversed and enter the discrimination network from the root node in the form of a triple, for example (node k, whether the resource needs queuing, queuing value Q). Codes and rules are set using, as judgment criteria, whether the node would be overloaded after the resource is added, whether the node needs to queue, and the weight priority of the resource; the optimal node is then matched with the Rete algorithm.
In the embodiment of the invention, three codes A, B and C are set to describe the situation after a node takes in the resource, and all the candidate nodes are represented by these three codes and their corresponding values, forming the alpha network.
As shown in Table 1:
A=0 indicates that after the resource is called into node k, the node would be fully loaded and node k is not usable;
A=1 indicates that after the resource is called into node k, the node is not fully loaded and node k is usable;
B=0 indicates that, given the nodes remain available after the resource is called in, this batch of available nodes all need to queue and there is no idle node;
B=1 indicates that, given the nodes remain available after the resource is called in, this batch of available nodes does not need to queue;
B=2 indicates that, given the nodes remain available after the resource is called in, some of the nodes need to queue and some do not;
C=0 indicates that the weight coefficient of the resource to be called is low;
C=1 indicates that the weight coefficient of the resource to be called is high and the resource is to be called preferentially.
TABLE 1 Matching code table
In the embodiment of the invention, the calling rules are set as shown in FIG. 2, specifically as follows:
rule 1: A=0; after the resource is called into node k, the node would be fully loaded, so node k is not usable;
rule 2: A=1, B=0, C=0; the weight coefficient of the resource to be called is low and there is no idle node, so all resources queue in order of weight priority from high to low, and the resource with the lower weight coefficient is called after the resources with higher weight coefficients have been called;
rule 3: A=1, B=1, C=0; the weight coefficient of the resource to be called is low and all nodes are idle, so the idle nodes call the resources in order of weight priority from high to low, and the resource with the lower weight coefficient waits until the resources with higher weight coefficients have been called;
rule 4: A=1, B=2, C=0; the weight coefficient of the resource to be called is low, some nodes need to queue and some nodes are idle and need no queuing, so the resource with the lower weight coefficient waits for a queuing node and is called once that node finishes calling its current resource and becomes idle;
rule 5: A=1, B=0, C=1; the weight coefficient of the resource to be called is high and there is no idle node, so all resources queue in order of weight priority from high to low, and the resource with the higher weight coefficient is called preferentially;
rule 6: A=1, B=1, C=1; the weight coefficient of the resource to be called is high and all nodes are idle, so the idle nodes call the resources in order of weight priority from high to low, and the resource with the higher weight coefficient is called preferentially;
rule 7: A=1, B=2, C=1; the weight coefficient of the resource to be called is high, some nodes need to queue and some nodes are idle and need no queuing, so the resource with the higher weight coefficient is preferentially called to an idle node.
In the embodiment of the invention, the Rete algorithm selects the optimal node according to the rules set above, as follows:
Step a: import the nodes to be matched into the fact set, construct the discrimination network, and traverse all the facts in the fact set, i.e., the batch of available nodes obtained after the character string pattern matching is completed.
Here, facts represent the many relationships between objects and between object attributes. They are usually represented as triples and enter the discrimination network from the root node in that form, for example (node k, whether the resource needs queuing, queuing value Q). The root node is a virtual node that serves as the entry point for building the entire Rete network; it lets every fact pass through and hands it on to the type node, the root node's successor.
Step b: if the fact set is not empty, select one fact for processing. Traverse the type nodes; if the fact matches a type node, pass the fact to that node's successor. The selection node then judges the fact's attributes (the node state data defined in Table 1, specifically the node's full-load state, whether the node needs to queue, and the level of the node's weight coefficient). If the constraint condition is met (e.g., the node is not fully loaded), the fact continues to be passed to the subsequent node; otherwise further propagation is discarded and the matching process stops.
The type node selects the type of a fact and passes facts of matching type to the subsequent alpha nodes. In the embodiment of the invention, the types correspond to the node's state data, specifically: the node's full-load condition, whether the node needs to queue, and the level of the node's weight coefficient. The selection node filters each fact so that it reaches the appropriate alpha node along the Rete network.
Step c: take the first node of the alpha network (the type node) and select the type of the fact. If the fact is that the node would be fully loaded after the resource is called into node k, set code A=0 in alpha node 1; if the fact is that the node is not fully loaded after the resource is called into node k, set code A=1 in alpha node 2; if the fact is that, with the nodes remaining available after the resource is called in, the available nodes need to queue and there is no idle node, set code B=0 in alpha node 3; if the fact is that the available nodes do not need to queue, set code B=1 in alpha node 4; if the fact is that some nodes need to queue and some do not, set code B=2 in alpha node 5; if the fact is that the weight coefficient of the resource to be called is low, set code C=0 in alpha node 6; if the fact is that the weight coefficient of the resource to be called is high and the resource is to be called preferentially, set code C=1 in alpha node 7. The fact passes through these nodes to the next node of the alpha network until it enters the alpha storage area; otherwise, jump to the next judgment path.
The alpha network is also called the pattern network. It is built from the rules in the rule base and records the test condition of each field of each pattern; each test condition corresponds to a field node of the network, and all the fields of a pattern are connected in turn to form that pattern's matching chain.
Step d: add the result of the alpha storage area to the beta storage area. If the current node is not a terminal node, check whether a fact satisfying the condition exists in the other input set; if so, execute the join operation and enter the next beta storage area, repeating step c. When A=0, the resource would fully load node k, node k is unavailable, the fact does not satisfy the condition, and rule 1 is formed directly. When A=1, node k is not fully loaded and is available, the fact satisfies the condition, and the join operation continues: B=0, C=0 forms rule 2; B=1, C=0 forms rule 3; B=2, C=0 forms rule 4; B=0, C=1 forms rule 5; B=1, C=1 forms rule 6; B=2, C=1 forms rule 7. If the other input set does not satisfy the condition, return to step b. If the node is a terminal node, the corresponding action is executed and the result is added to the fact set.
Here, a terminal node marks the end of a rule's matching chain: when a fact or tuple reaches a terminal node, the rule corresponding to that node is activated. The beta storage area belongs to the beta network, which has two kinds of nodes: beta storage areas and join nodes. The former mainly stores the sets produced once a join is completed; the latter has two input ports that receive the two sets to be matched and passes the combined result on to the next node. A join node performs the join operation, analogous to a table join in a database.
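To make steps a to d concrete, here is a deliberately simplified Python sketch of the discrimination network for the codes A, B and C. The class names, the dict representation of a fact, and the collapsing of the beta network into a single join step are simplifications and assumptions made for the example, not the patent's implementation.

```python
# Simplified Rete-style sketch for steps a-d. Each fact is a dict carrying the
# codes A, B, C for one available node; alpha nodes test a single code and keep
# an alpha memory, a beta-style join then combines the passing tests, and
# terminal nodes record which of rules 1-7 was activated.

class AlphaNode:
    """Tests one attribute of a fact; facts that pass are kept in the alpha memory."""
    def __init__(self, attr, value):
        self.attr, self.value, self.memory = attr, value, []

    def activate(self, fact):
        if fact[self.attr] == self.value:
            self.memory.append(fact)
            return True
        return False                        # constraint not met: propagation stops here

class TerminalNode:
    """End node of one rule's matching chain; stores the facts that activated it."""
    def __init__(self, rule_no):
        self.rule_no, self.activations = rule_no, []

# alpha network: one shared test node per code value
alpha_a = {v: AlphaNode("A", v) for v in (0, 1)}
alpha_b = {v: AlphaNode("B", v) for v in (0, 1, 2)}
alpha_c = {v: AlphaNode("C", v) for v in (0, 1)}

# rule layout from FIG. 2: rule 1 is A=0 alone; rules 2-7 join A=1 with one B and one C value
rule_table = {(0, 0): 2, (1, 0): 3, (2, 0): 4, (0, 1): 5, (1, 1): 6, (2, 1): 7}
terminals = {n: TerminalNode(n) for n in range(1, 8)}

def assert_fact(fact):
    """Pass one fact (one available node) through the discrimination network."""
    a_hit = next((v for v, n in alpha_a.items() if n.activate(fact)), None)
    if a_hit == 0:                          # rule 1: node would be fully loaded
        terminals[1].activations.append(fact)
        return 1
    b_hit = next((v for v, n in alpha_b.items() if n.activate(fact)), None)
    c_hit = next((v for v, n in alpha_c.items() if n.activate(fact)), None)
    if None in (a_hit, b_hit, c_hit):
        return None                         # some test failed: abandon propagation
    rule_no = rule_table[(b_hit, c_hit)]    # beta-style join of the A, B and C results
    terminals[rule_no].activations.append(fact)
    return rule_no

fact = {"node": "node2", "A": 1, "B": 2, "C": 1}   # not fully loaded, partly queued, high weight
print(assert_fact(fact))                            # -> 7: call the resource to an idle node first
```

Because the alpha tests are shared across rules and their memories persist, a new fact only has to pass each test once, which is the source of the efficiency gain attributed to the Rete network above.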
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (8)

1. A load balancing resource scheduling method based on the Rete and character string pattern matching algorithm, suitable for scheduling a Pod to the nodes of a Kubernetes cluster, characterized by comprising the following steps:
forming a node main string according to the node resource table, and forming a pattern string to be matched according to the Pod resource scheduling request;
matching the pattern string to be matched against the node main string to obtain available nodes, including: locating the position of a character in the node main string by the subscript of that character in the pattern string to be matched; comparing the character in the pattern string to be matched with the character at the same position in the node main string; and, if every character in the pattern string to be matched agrees with the node main string, the node is an available node and is output and stored in the list of available nodes;
selecting the optimal node from the acquired available nodes using the Rete algorithm;
deploying the Pod to the optimal node.
2. The load balancing resource scheduling method based on the Rete and string pattern matching algorithm according to claim 1, wherein,
a resource table is maintained for each node by a scheduler, and each resource table item records one item of the node's resource attribute state data.
3. The load balancing resource scheduling method based on the Rete and string pattern matching algorithm according to claim 2, wherein forming the node main string according to the node resource table comprises:
converting each item of the node's resource attribute state data into a predefined character, and arranging the characters in the fixed order of the resource table items to form the main string of each node.
4. The load balancing resource scheduling method based on the Rete and string pattern matching algorithm according to claim 3, wherein forming the pattern string to be matched according to the Pod resource scheduling request includes:
converting each resource attribute request in the Pod resource scheduling request into a corresponding character, where a resource attribute request is denoted x_i, x being the resource attribute request and the subscript i being the position of the resource attribute in the resource table;
converting all the resource attribute requests into characters and arranging them in the order of the resource table items to form the pattern string to be matched.
5. The load balancing resource scheduling method based on the Rete and string pattern matching algorithm according to claim 1, wherein,
if no available node is matched, the Pod resource scheduling request waits, and available nodes are searched for again in the next scheduling period.
6. The load balancing resource scheduling method based on the Rete and string pattern matching algorithm according to claim 1, wherein selecting the optimal node from the acquired available nodes using the Rete algorithm comprises:
importing the available nodes into a fact set and constructing a discrimination network;
matching the facts, i.e., the available nodes in the fact set;
executing the join operation until the matching of one rule is completed.
7. The load balancing resource scheduling method based on the Rete and string pattern matching algorithm according to claim 6, wherein matching a fact means judging the fact's attributes: if the constraint condition is met, the fact continues to be propagated to the subsequent node; otherwise further propagation is abandoned and the matching process stops;
the attributes of the fact are denoted by the codes A, B and C:
A=0 indicates that after the resource is called into available node k, the node would be fully loaded, so node k is not usable;
A=1 indicates that after the resource is called into available node k, the node is not fully loaded, so node k is usable;
B=0 indicates that, given the nodes remain available after the resource is called in, all the available nodes need to queue and there is no idle node;
B=1 indicates that, given the nodes remain available after the resource is called in, none of the available nodes needs to queue;
B=2 indicates that, given the nodes remain available after the resource is called in, some of the available nodes need to queue and the remaining available nodes do not;
C=0 indicates that the weight coefficient of the resource to be called is low;
C=1 indicates that the weight coefficient of the resource to be called is high and the resource is called preferentially;
the constraint condition is: the node is not fully loaded.
8. The load balancing resource scheduling method based on the Rete and string pattern matching algorithm according to claim 6, wherein the rules are as follows:
rule 1: A=0; after the resource is called into node k, the node would be fully loaded, so node k is not usable;
rule 2: A=1, B=0, C=0; the weight coefficient of the resource to be called is low and there is no idle node, so all resources queue in order of weight priority from high to low, and the resource with the low weight coefficient is called after the resources with higher weight coefficients have been called;
rule 3: A=1, B=1, C=0; the weight coefficient of the resource to be called is low and all nodes are idle, so the resources are called to the idle nodes in order of weight priority from high to low;
rule 4: A=1, B=2, C=0; the weight coefficient of the resource to be called is low, some nodes need to queue and some nodes are idle, so the resource with the low weight coefficient waits for a queuing node and is called once that node finishes calling its current resource and becomes idle;
rule 5: A=1, B=0, C=1; the weight coefficient of the resource to be called is high and there is no idle node, so all resources queue in order of weight priority from high to low, and the resource with the high weight coefficient is called preferentially;
rule 6: A=1, B=1, C=1; the weight coefficient of the resource to be called is high and all nodes are idle, so the idle nodes call the resources in order of weight priority from high to low, and the resource with the high weight coefficient is called preferentially;
rule 7: A=1, B=2, C=1; the weight coefficient of the resource to be called is high, some nodes need to queue and some nodes are idle, so the resource with the high weight coefficient is preferentially called to an idle node.
CN202110551965.XA 2021-05-20 2021-05-20 Load balancing resource scheduling method based on Rete and character string pattern matching algorithm Active CN113391891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110551965.XA CN113391891B (en) 2021-05-20 2021-05-20 Load balancing resource scheduling method based on Rete and character string pattern matching algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110551965.XA CN113391891B (en) 2021-05-20 2021-05-20 Load balancing resource scheduling method based on Rete and character string pattern matching algorithm

Publications (2)

Publication Number Publication Date
CN113391891A CN113391891A (en) 2021-09-14
CN113391891B true CN113391891B (en) 2024-03-12

Family

ID=77618135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110551965.XA Active CN113391891B (en) 2021-05-20 2021-05-20 Load balancing resource scheduling method based on Rete and character string pattern matching algorithm

Country Status (1)

Country Link
CN (1) CN113391891B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377259A (en) * 2012-04-28 2013-10-30 北京新媒传信科技有限公司 Multiple-mode-string matching method and device
CN109960585A (en) * 2019-02-02 2019-07-02 浙江工业大学 A kind of resource regulating method based on kubernetes
CN110780998A (en) * 2019-09-29 2020-02-11 武汉大学 Kubernetes-based dynamic load balancing resource scheduling method
CN111694633A (en) * 2020-04-14 2020-09-22 新华三大数据技术有限公司 Cluster node load balancing method and device and computer storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Alarm correlation analysis based on Rete rule reasoning; 杨杨; 石晓丹; 宋双; 霍永华; 陈连栋; Journal of Beijing University of Posts and Telecommunications (02); full text *
Research on optimized allocation of inference nodes based on fog computing in an intelligent environment; 汪成亮; 黄心田; Acta Electronica Sinica (01); full text *

Also Published As

Publication number Publication date
CN113391891A (en) 2021-09-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant