CN117768481A - Edge gateway cluster computing power scheduling method and system based on network and load - Google Patents
- Publication number: CN117768481A
- Application number: CN202311527012.5A
- Authority
- CN
- China
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Abstract
The invention provides an edge gateway cluster computing power scheduling method based on network and load, comprising the following steps: acquiring industrial data through industrial gateways deployed in a work area; forming a gateway cluster from a plurality of industrial gateways in the same work area; synchronizing the industrial data among the industrial gateways in the gateway cluster using the Raft algorithm; calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or the cost overhead required for processing the industrial data on a cloud node; and allocating computing tasks to industrial gateway nodes and/or cloud nodes in different gateway clusters through a Q-learning reinforcement algorithm based on the cost overhead. The stability of master gateway node election and the efficiency of data transmission are thereby effectively ensured.
Description
Technical Field
The invention relates to the technical field of industrial communication information, in particular to an edge gateway cluster computing power scheduling method and system based on a network and a load.
Background
In industrial production, large amounts of data generated by industrial equipment and sensors must be processed through industrial gateways, which places demands on gateway hardware capacity, since many data-processing procedures run on edge gateway nodes. The hardware reliability of a single edge gateway is low, which affects data acquisition efficiency and data security; therefore, fault redundancy in the data acquisition process is improved by forming an edge gateway cluster from multiple gateways, ensuring the security of the industrial production process. The consistency of information across the different industrial gateway nodes in the cluster must also be ensured, which is generally achieved with the Raft algorithm. However, the conventional Raft distributed consensus algorithm does not consider the network condition and load condition of each gateway, which affects the stability of master node election.
In addition, processing industrial data requires a certain amount of computing power, and to improve the overall efficiency of the system, a reasonable task-scheduling and offloading method is needed to offload part of the computing tasks to other gateway nodes for processing. Existing task offloading methods usually compute directly from fixed parameters such as network bandwidth and node distance, without considering the influence of the real-time network state on data transmission efficiency. Meanwhile, offloading methods that optimize only for time can overload some nodes, affecting the actual data processing time and the overall efficiency of the system.
The above problems are currently in need of solution.
Disclosure of Invention
The present invention is directed to overcoming at least one of the above drawbacks of the prior art. In one aspect, the present invention provides an edge gateway cluster computing power scheduling method based on network and load, the method comprising: acquiring industrial data through industrial gateways deployed in a work area; forming a gateway cluster from a plurality of industrial gateways in the same work area; synchronizing the industrial data among the industrial gateways in the gateway cluster using the Raft algorithm; calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or the cost overhead required for processing the industrial data on a cloud node; and allocating computing tasks to industrial gateway nodes and/or cloud nodes in different gateway clusters through a Q-learning reinforcement algorithm based on the cost overhead.
Further, synchronizing the industrial data among the industrial gateways in the gateway cluster using the Raft algorithm includes: electing a master gateway node in the gateway cluster; the master gateway node is configured to receive the industrial data and synchronize it to the other industrial gateway nodes in the gateway cluster.
Further, electing a master gateway node in the gateway cluster includes: calculating an election parameter for each industrial gateway based on the network conditions of all industrial gateways in the gateway cluster, and determining the master gateway node in the gateway cluster based on the election parameters. The election parameter combines the average CPU utilization within a preset time, the average memory utilization within a preset time, the average disk occupancy within a preset time, and the average communication delay in the network within a preset time (the calculation formula itself is rendered as an image in the original publication and is not reproduced in this text).
Further, the method further comprises: when the master gateway node becomes abnormal, the other industrial gateway nodes in the same gateway cluster become candidate nodes and wait to elect a new master gateway node; when a candidate node receives a message from a node with a larger term or index, the candidate node exits the election for master gateway node; and when a candidate node receives an election message from another candidate node, the election parameters of the two candidate nodes are compared, and the node with the larger election parameter becomes the master gateway node.
Further, calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or the cost overhead required for processing the industrial data on a cloud node includes: calculating a bandwidth predicted value B_k, where t_k is the arrival time of the k-th acknowledgement character (ACK) of the current TCP connection, t_{k-1} is the arrival time of the previous acknowledgement character, d_k is the amount of data acknowledged as received by the k-th acknowledgement character, d_{k-1} is the amount acknowledged by the (k-1)-th acknowledgement character, RTT_k is the round-trip delay at time t_k, and α_k is an adjustment factor at time t_k (the prediction formula itself is rendered as an image in the original publication).
Further, calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or on a cloud node includes: when a computing task is executed on the local gateway cluster, determining, based on the bandwidth predicted value, the cost overhead required for processing the industrial data on the local gateway cluster as: T_i^L = T_i^Lexe = w_i / f_i; where T_i^L is the time required for processing the industrial data generated by device i in the local gateway cluster, w_i is the data transmission amount of the industrial data to be computed, and f_i is the data computing capability of the local gateway cluster hardware. The time required for processing the industrial data generated by device i in the local gateway cluster is the cost overhead required for processing the industrial data on the local gateway cluster.
Further, calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or on a cloud node includes: when the computing task is executed on another gateway cluster, determining, based on the bandwidth predicted value, the cost overhead required for processing the industrial data on the other gateway cluster as: T_i^LE = T_i^LtoE + T_i^Eexe, where T_i^LtoE = w_i / (λ·B_k) and T_i^Eexe = w_i / f_e; where w_i is the data transmission amount of the industrial data to be computed, B_k is the bandwidth predicted value, λ is the bandwidth correction coefficient, T_i^LtoE is the data transmission delay, T_i^Eexe is the processing time of the industrial data at the edge node, f_e is the computing capability of the other gateway cluster's hardware, and T_i^LE is the time required to schedule the industrial data generated by device i to the other gateway cluster for processing. That time is the cost overhead required for processing the industrial data on the other gateway cluster.
Further, calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or on a cloud node includes: when a computing task is computed on a cloud node, determining, based on the bandwidth predicted value, the cost overhead required for processing the industrial data on the cloud node as: T_i^LC = T_i^LtoE + T_i^EtoC + T_i^Sexe, where T_i^Sexe = w_i / f_s; where w_i is the data transmission amount of the industrial data to be computed, λ is the bandwidth correction coefficient, B_k is the bandwidth predicted value, T_i^EtoC is the transmission delay from the relay node to the cloud server, T_i^Sexe is the processing time of the industrial data at the cloud node, f_s is the computing capability of the cloud server, T_i^LtoE is the data transmission delay, and T_i^LC is the time required to schedule the industrial data generated by device i to the cloud server for processing. That time is the cost overhead required for processing the industrial data at the cloud node.
Further, the method further comprises: setting a task queue threshold on an industrial gateway node; when the industrial gateway node receives a task for computing industrial data, if the length of the current industrial gateway node's task queue is equal to the threshold, the task at the head of the task queue is uploaded to the cloud node for computation.
In another aspect, the present invention provides an edge gateway cluster computing power scheduling system based on network and load, the system comprising: a data acquisition module for acquiring industrial data through industrial gateways deployed in the work area; a gateway cluster construction module for forming a gateway cluster from a plurality of industrial gateways in the same work area; a data synchronization module for synchronizing industrial data among the industrial gateways in the gateway cluster using the Raft algorithm; a data calculation module for calculating the cost overhead required for processing the industrial data on industrial gateway nodes in different gateway clusters and/or on cloud nodes; a data scheduling module for allocating computing tasks to different industrial gateway nodes and/or cloud nodes through a Q-learning reinforcement algorithm based on the cost overhead; a data transmission module for exchanging industrial data between each gateway cluster and the cloud node; and a data storage module for storing the industrial data.
In yet another aspect, the present invention provides a computer-readable storage medium having one or more instructions stored therein, the instructions being configured to cause a computer to perform the above-described edge gateway cluster computing power scheduling method based on network and load.
In yet another aspect, the present invention provides an electronic device, including: a memory and a processor; at least one program instruction is stored in the memory; the processor loads and executes the at least one program instruction to implement the edge gateway cluster computing power scheduling method based on the network and the load.
The beneficial effects of the invention are as follows. The invention provides an edge gateway cluster computing power scheduling method based on network and load, comprising: acquiring industrial data through industrial gateways deployed in a work area; forming a gateway cluster from a plurality of industrial gateways in the same work area; synchronizing the industrial data among the industrial gateways in the gateway cluster using the Raft algorithm; calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or on a cloud node; and allocating computing tasks to industrial gateway nodes and/or cloud nodes in different gateway clusters through a Q-learning reinforcement algorithm based on the cost overhead. Basing master gateway node election on the network condition and load condition of each gateway effectively ensures the stability of the election in the Raft algorithm. Reasonably allocating computing tasks with the Q-learning reinforcement algorithm, based on the cost overhead required for processing industrial data on the industrial gateway nodes in different gateway clusters, effectively ensures the efficiency of data transmission.
Drawings
The invention is further described below with reference to the drawings and examples.
Fig. 1 is a flowchart of an edge gateway cluster computing power scheduling method based on a network and a load according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an edge gateway cluster computing power dispatching system based on network and load according to an embodiment of the present invention.
Fig. 3 is a partial block diagram of an electronic device provided by an embodiment of the invention.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The present invention will now be described in detail with reference to the accompanying drawings. The figure is a simplified schematic diagram illustrating the basic structure of the invention only by way of illustration, and therefore it shows only the constitution related to the invention.
To facilitate subsequent understanding, the following terms that may occur are explained:
ACK (Acknowledgement): an acknowledgement character, a transmission-control character sent in data communication by the receiving station to the sending station, indicating that the transmitted data has been received without error. In the TCP/IP protocol, if the receiver successfully receives data, it replies with an ACK. Typically, the ACK signal has its own fixed format and length and is returned by the receiver to the sender.
The Raft algorithm: a distributed consensus algorithm. Raft reaches agreement via an elected leader; a server in a Raft cluster is either the leader or a follower, and becomes a candidate during an election. The leader is responsible for replicating the log to the followers and periodically informs them of its presence by sending heartbeat messages. Each follower has a timeout within which it expects the leader's heartbeat; the timeout is reset when a heartbeat is received, and if no heartbeat arrives, the follower changes its status to candidate and starts a leader election.
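The follower-to-candidate transition described above can be illustrated by the following minimal, non-authoritative sketch (the class name and timeout range are illustrative; real Raft also handles terms, vote counting, and log replication, all omitted here):

```python
import random
import time

class RaftNode:
    """Minimal sketch of Raft's follower -> candidate transition."""

    def __init__(self):
        self.state = "follower"
        # Randomized election timeout reduces simultaneous candidacies.
        self.timeout = random.uniform(0.15, 0.30)
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # A heartbeat from the leader resets the election timer.
        self.last_heartbeat = time.monotonic()

    def tick(self):
        # If no heartbeat arrived within the timeout, start an election.
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.state = "candidate"
        return self.state
```

A follower that keeps receiving heartbeats stays a follower; once the timer expires, `tick()` flips it to candidate.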
Q-learning reinforcement algorithm: Q-learning is a value-based reinforcement learning algorithm. Q, i.e. Q(s, a), is the expected benefit of taking action a (a ∈ A) in state s (s ∈ S) at a given moment; the environment feeds back a corresponding reward r according to the agent's action. The main idea of the algorithm is to build a Q-table indexed by state and action to store the Q values, and then select the action that yields the maximum benefit according to the Q values.
Example 1
Referring to fig. 1, a flow chart of a method for computing power scheduling for an edge gateway cluster based on network and load is shown. The method comprises the following steps:
s110: industrial data is acquired through an industrial gateway deployed within a work area.
As an example, in industrial production, industrial equipment and sensors produce large amounts of industrial data that need to be processed through industrial gateways deployed within a work area.
S120: and forming a gateway cluster by a plurality of industrial gateways in the same working area.
As an example, because the hardware reliability of a single edge gateway is low, affecting data collection efficiency and data security, fault redundancy in the data collection process is improved by forming an edge gateway cluster from multiple gateways, ensuring the security of the industrial production process. The same area may be understood as the same house, the same building, the same factory, and so on. For example, taking a workshop as the area: a plurality of industrial gateways is typically deployed in the same workshop to receive the industrial data sent by the devices in that workshop, so those industrial gateways form one gateway cluster; similarly, the industrial gateways in another workshop form another gateway cluster.
S130: the industrial data between the various industrial gateways within the gateway cluster is synchronized using a Raft algorithm.
By way of example, synchronizing industrial data among the industrial gateways in the gateway cluster using the Raft algorithm includes: electing a master gateway node in the gateway cluster, the master gateway node being configured to receive the industrial data and synchronize it to the other industrial gateway nodes in the gateway cluster. Electing a master gateway node includes: calculating an election parameter for each industrial gateway based on the network conditions of all industrial gateways in the gateway cluster, and determining the master gateway node based on the election parameters. The election parameter combines the average CPU utilization within a preset time, the average memory utilization within a preset time, the average disk occupancy within a preset time, and the average communication delay in the network within a preset time (the calculation formula itself is rendered as an image in the original publication and is not reproduced in this text). The higher the election parameter value, the more suitable the gateway node is to serve as the master gateway node. The preset time can be set to 10 minutes, so that both the network condition and the actual load condition of each gateway are considered during master-node election, effectively ensuring the stability of the election. In short, because different industrial gateways are combined into one gateway cluster, one industrial gateway in the cluster must be elected as the Leader node to manage the whole gateway cluster; the Leader node receives industrial data and synchronizes it to the other gateway nodes in the cluster to achieve data consistency.
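Since the election-parameter formula is published only as an image, the following sketch assumes one plausible form: a score that rises as the four described load and delay averages fall. The inverse form and the weights are illustrative assumptions, not the patent's formula:

```python
def election_parameter(cpu_avg, mem_avg, disk_avg, delay_avg,
                       weights=(0.3, 0.3, 0.2, 0.2)):
    """Assumed score: lower load and delay give a higher value.
    The weights and the 1/(1 + load) shape are illustrative only."""
    w1, w2, w3, w4 = weights
    load = w1 * cpu_avg + w2 * mem_avg + w3 * disk_avg + w4 * delay_avg
    return 1.0 / (1.0 + load)

def elect_master(nodes):
    # nodes: {name: (cpu_avg, mem_avg, disk_avg, delay_avg)};
    # the node with the highest election parameter becomes master.
    return max(nodes, key=lambda n: election_parameter(*nodes[n]))
```

Under this assumed form, the least-loaded, lowest-latency gateway wins the election, matching the behaviour the text describes.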
Optionally, after the master gateway node becomes abnormal, the other industrial gateway nodes in the same gateway cluster become candidate nodes and wait to elect a new master gateway node; when a candidate node receives a message from a node with a larger term or index, the candidate node exits the election for master gateway node; and when a candidate node receives an election message from another candidate node, the election parameters of the two candidate nodes are compared, and the node with the larger election parameter becomes the master gateway node.
S140: the cost overhead required to process the industrial data on the industrial gateway node within the different gateway cluster and/or the cost overhead required to process the industrial data on the cloud node is calculated.
As an example, the calculating of the cost overhead required to process the industrial data on the industrial gateway node within the different gateway cluster and/or the cost overhead required to process the industrial data on the cloud node includes:
calculating bandwidth predictors
Wherein B is k Is bandwidth predictive value, t k Is the time of arrival of the acknowledgement character of the current network TCP protocol, t k-1 Is the time of arrival of the last validation character, d k Is the kth acknowledgement character acknowledging the received data quantity, d k-1 K-1 acknowledgement characters confirm the received data volume, RTT k Is t k Round trip delay at time. Alpha k Is t k A time of day adjustment factor. When alpha is k Taking a larger value, the estimated value for bandwidth is more conservative, when α k Taking smaller values, the bandwidth is estimated to be higher, the mediation factor alpha k It may be given different values based on actual conditions.
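The prediction formula is not reproduced in the extracted text. The sketch below assumes a TCP-Westwood-style exponentially weighted filter over the instantaneous ACK-rate sample, which matches the described behaviour (a larger α_k weights history more, giving a more conservative estimate); the filter form is an assumption, not the patent's formula:

```python
def update_bandwidth(b_prev, d_k, d_prev, t_k, t_prev, alpha_k):
    """Assumed EWMA bandwidth predictor.

    b_prev: previous predicted value B_{k-1}
    d_k, d_prev: cumulative data acknowledged at the k-th and
                 (k-1)-th ACKs
    t_k, t_prev: arrival times of those ACKs
    alpha_k: adjustment factor in [0, 1]; larger -> more conservative
    """
    # Instantaneous sample: newly acknowledged bytes per second.
    sample = (d_k - d_prev) / (t_k - t_prev)
    return alpha_k * b_prev + (1.0 - alpha_k) * sample
```

For example, with a previous estimate of 100 B/s and 200 bytes acknowledged over one second, α_k = 0.5 blends the two into 150 B/s.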
As an example, calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or on a cloud node includes: when a computing task is executed on the local gateway cluster, determining, based on the bandwidth predicted value, the cost overhead required for processing the industrial data on the local gateway cluster as:

T_i^L = T_i^Lexe = w_i / f_i;

where T_i^L is the time required for processing the industrial data generated by device i in the local gateway cluster, w_i is the data transmission amount of the industrial data to be computed, and f_i is the data computing capability of the local gateway cluster hardware. The time required for processing the industrial data generated by device i in the local gateway cluster is the cost overhead required for processing the industrial data on the local gateway cluster.
As an example, calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or on a cloud node includes: when the computing task is executed on another gateway cluster, determining, based on the bandwidth predicted value, the cost overhead required for processing the industrial data on the other gateway cluster as:

T_i^LE = T_i^LtoE + T_i^Eexe, where T_i^LtoE = w_i / (λ·B_k) and T_i^Eexe = w_i / f_e;

where w_i is the data transmission amount of the industrial data to be computed, B_k is the bandwidth predicted value, λ is the bandwidth correction coefficient (typically ranging from 0 to 1), T_i^LtoE is the data transmission delay, T_i^Eexe is the processing time of the industrial data at the edge node, f_e is the computing capability of the other gateway cluster's hardware, and T_i^LE is the time required to schedule the industrial data generated by device i to the other gateway cluster for processing. That time is the cost overhead required for processing the industrial data on the other gateway cluster.
As an example, calculating the cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or on a cloud node includes: when a computing task is computed on a cloud node, determining, based on the bandwidth predicted value, the cost overhead required for processing the industrial data on the cloud node as:

T_i^LC = T_i^LtoE + T_i^EtoC + T_i^Sexe, where T_i^Sexe = w_i / f_s;

where w_i is the data transmission amount of the industrial data to be computed, λ is the bandwidth correction coefficient, B_k is the bandwidth predicted value, T_i^EtoC is the transmission delay from the relay node to the cloud server, T_i^Sexe is the processing time of the industrial data at the cloud node, f_s is the computing capability of the cloud server, T_i^LtoE is the data transmission delay, and T_i^LC is the time required to schedule the industrial data generated by device i to the cloud server for processing. That time is the cost overhead required for processing the industrial data at the cloud node.
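Putting the three cases together, the following minimal sketch assumes, as the variable descriptions suggest, that each execution term is data volume over computing capability and the transfer term is w_i / (λ·B_k); the function names and the cloud relay delay parameter `t_etoc` are illustrative:

```python
def cost_local(w_i, f_i):
    # Local execution: computation only.
    return w_i / f_i

def cost_edge(w_i, b_k, lam, f_e):
    # Offload to another gateway cluster: transfer + edge execution.
    return w_i / (lam * b_k) + w_i / f_e

def cost_cloud(w_i, b_k, lam, f_s, t_etoc):
    # Offload to cloud: transfer to relay, relay-to-cloud delay,
    # then cloud execution.
    return w_i / (lam * b_k) + t_etoc + w_i / f_s

def cheapest(w_i, b_k, lam, f_i, f_e, f_s, t_etoc):
    costs = {"local": cost_local(w_i, f_i),
             "edge": cost_edge(w_i, b_k, lam, f_e),
             "cloud": cost_cloud(w_i, b_k, lam, f_s, t_etoc)}
    return min(costs, key=costs.get)
```

With a fast link and a powerful cloud server, `cheapest` picks the cloud; on a congested link the local option wins, which is the trade-off the scheduler exploits.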
S150: and distributing computing tasks on industrial gateway nodes and/or cloud nodes in different gateway clusters through a Qlearning strengthening algorithm based on the cost overhead.
As an example, task offloading is solved with the Q-learning reinforcement algorithm, iterating the update

Q(s, a) = Q(s, a) + α[ R(s, a) + γ·max_{a'} Q(s', a') − Q(s, a) ];

R(s, a) is the reward function, expressed in terms of the overhead when all computing tasks are executed locally (i.e. the computing time required when every task runs locally) and the total cost overhead of the system in the current state (the exact expression is rendered as an image in the original publication). The optimal task offloading mode can then be selected automatically: different offloading choices are iterated through the Q-learning update to obtain an optimal solution, and the computing tasks of the industrial data are allocated in the manner corresponding to that solution. Since the Q-learning reinforcement algorithm is well established in the prior art, it is not described in detail here.
As an example, a task queue threshold is set on an industrial gateway node; when the industrial gateway node receives a task for calculating industrial data, if the length of a task queue of the current industrial gateway node is equal to a threshold value, uploading the task at the head of the task queue to the cloud node for calculation. Thus, the congestion of a certain node can be avoided from affecting the overall calculation efficiency of the system.
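The queue-threshold rule can be sketched as follows; `offload_to_cloud` is a hypothetical callback standing in for the actual upload to the cloud node:

```python
from collections import deque

class GatewayQueue:
    """Sketch of the threshold rule: when the queue is full on
    arrival, the head task is offloaded to the cloud before the
    new task is enqueued."""

    def __init__(self, threshold, offload_to_cloud):
        self.threshold = threshold
        self.queue = deque()
        self.offload_to_cloud = offload_to_cloud

    def submit(self, task):
        if len(self.queue) >= self.threshold:
            # Queue at threshold: push the oldest task to the cloud.
            self.offload_to_cloud(self.queue.popleft())
        self.queue.append(task)
```

This keeps any single gateway's backlog bounded, so one congested node cannot drag down the overall computing efficiency of the system.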
In this embodiment, basing master gateway node election on the network condition and load condition of each gateway effectively ensures the stability of master gateway node election in the Raft algorithm. Reasonably allocating computing tasks with the Q-learning reinforcement algorithm, based on the cost overhead required for processing industrial data on the industrial gateway nodes in different gateway clusters, effectively ensures the efficiency of data transmission.
Example 2
Referring to fig. 2, the present embodiment provides a schematic structural diagram of an edge gateway cluster computing power dispatching system based on a network and a load, where the system includes:
the data acquisition module 210 is configured to acquire industrial data via an industrial gateway deployed within a work area.
The gateway cluster construction module 220 is configured to form a gateway cluster from a plurality of industrial gateways in the same working area.
The data synchronization module 230 is configured to synchronize industrial data between each industrial gateway in the gateway cluster by using a Raft algorithm.
The data calculation module 240 is configured to calculate a cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or a cost overhead required for processing the industrial data on a cloud node.
The data scheduling module 250 is configured to allocate computing tasks to different industrial gateway nodes and/or cloud nodes through a Q-learning reinforcement algorithm based on the cost overhead.
The data transmission module 260 is configured to exchange industrial data between each gateway cluster and the cloud node.
The data storage module 270 is used for storing industrial data.
Example 3
The embodiment of the invention also provides a storage medium storing a program for the edge gateway cluster computing power scheduling method based on network and load; when executed by a processor, the program implements the steps of the method. Because the storage medium adopts all the technical schemes of all the embodiments above, it has at least all the beneficial effects brought by those technical schemes, which are not repeated here.
Example 4
Referring to fig. 3, an embodiment of the present invention further provides an electronic device, including: a memory and a processor; at least one program instruction is stored in the memory; the processor loads and executes the at least one program instruction to implement the network and load based edge gateway cluster computing power scheduling method provided in embodiment 1.
The memory 302 and the processor 301 are connected by a bus, which may include any number of interconnected buses and bridges linking together the various circuits of the one or more processors 301 and the memory 302. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatuses over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium via an antenna, which also receives incoming data and forwards it to the processor 301.
The processor 301 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 302 may be used to store data used by processor 301 in performing operations.
The foregoing is merely one embodiment of the present invention. Specific structures and characteristics that are common knowledge in the art are not described herein; a person of ordinary skill in the art is presumed to know the prior art as of the application date or priority date, to be capable of applying conventional experimental means, and therefore to be able to implement this embodiment in light of the present application in combination with his or her own ability. Typical known structures or known methods should not be an obstacle to implementing the present application. It should be noted that those skilled in the art can make modifications and improvements without departing from the structure of the present invention; these shall also fall within the protection scope of the present invention and do not affect the effect of implementing the invention or the utility of the patent. The protection scope of the present application shall be subject to the content of the claims, and the description of the specific embodiments in the specification may be used to interpret the content of the claims.
Claims (10)
1. An edge gateway cluster computing power scheduling method based on a network and a load is characterized by comprising the following steps:
acquiring industrial data through an industrial gateway deployed in a work area;
forming a gateway cluster by a plurality of industrial gateways in the same working area;
synchronizing industrial data among all industrial gateways in the gateway cluster by adopting a Raft algorithm;
calculating cost overhead required for processing the industrial data on an industrial gateway node in a different gateway cluster and/or cost overhead required for processing the industrial data on a cloud node;
and allocating computing tasks on industrial gateway nodes in different gateway clusters and/or cloud nodes through a Q-learning reinforcement learning algorithm based on the cost overhead.
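A minimal sketch of how such a Q-learning task allocator could look. The class name, the discretized states, the reward shaping (negative cost overhead), and the hyperparameters are all illustrative assumptions, not the patent's implementation:

```python
import random
from collections import defaultdict

# Actions are the candidate execution sites for an industrial-data task.
ACTIONS = ["local", "edge", "cloud"]

class QScheduler:
    """Hypothetical Q-learning scheduler: lower cost overhead => higher reward."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:                      # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, cost, next_state):
        # Reward is the negative measured cost overhead of the chosen site.
        reward = -cost
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

sched = QScheduler()
action = sched.choose("low_load")
sched.update("low_load", action, cost=2.5, next_state="low_load")
```

In this sketch, execution sites that repeatedly incur high cost overhead accumulate lower Q-values and are avoided during exploitation.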
2. The network and load based edge gateway cluster computational power scheduling method of claim 1 wherein synchronizing industrial data between individual industrial gateways within a gateway cluster using a Raft algorithm comprises:
selecting a main gateway node in the gateway cluster;
the master gateway node is configured to receive and synchronize the industrial data to other industrial gateway nodes in the gateway cluster.
3. The network and load based edge gateway cluster computational power scheduling method of claim 2, wherein the electing a master gateway node in the gateway cluster comprises:
calculating election parameters of the main gateway node based on network conditions of all industrial gateways in the gateway cluster;
determining a master gateway node in the gateway cluster based on the election parameters;
the calculation formula is as follows:
wherein the election parameter is calculated from: the average value of the CPU utilization within a preset time, the average utilization of the memory within a preset time, the average occupancy of the disk within a preset time, and the average communication delay in the network within a preset time.
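The election-parameter formula itself appears only in the patent drawings; one plausible weighted-sum form consistent with the quantities listed above is sketched below. The equal weights and the `1 - load` shape are assumptions:

```python
def election_parameter(cpu_avg, mem_avg, disk_avg, delay_avg,
                       weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical election score: lower average CPU/memory/disk usage and
    lower network delay yield a larger parameter. The weighted-sum form and
    the weights are assumptions; the patent's exact formula is not shown."""
    w1, w2, w3, w4 = weights
    load = w1 * cpu_avg + w2 * mem_avg + w3 * disk_avg + w4 * delay_avg
    return 1.0 - load   # larger value => better master candidate

# A lightly loaded gateway scores higher than a heavily loaded one.
light = election_parameter(0.2, 0.3, 0.1, 0.05)
heavy = election_parameter(0.9, 0.8, 0.7, 0.40)
```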
4. The network and load based edge gateway cluster computing power scheduling method of claim 3, further comprising:
when the master gateway node becomes abnormal, the other industrial gateway nodes in the same gateway cluster become candidate nodes and wait to elect a new master gateway node;
when a candidate node receives a message from a node with a larger term or log index, the candidate node withdraws from the election for master gateway node;
and when a candidate node receives an election message from another candidate node, the election parameters of the two candidate nodes are compared, and the candidate with the larger election parameter becomes the master gateway node.
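The candidate-side rules of claim 4 can be sketched as follows; the function names and the tuple representation of a candidate are illustrative:

```python
def should_withdraw(my_term, my_index, msg_term, msg_index):
    """A candidate steps down if a peer's Raft term, or its log index at the
    same term, is larger than the candidate's own."""
    return msg_term > my_term or (msg_term == my_term and msg_index > my_index)

def pick_master(candidate_a, candidate_b):
    """Each candidate is (node_id, election_parameter); the larger election
    parameter wins the tie between two competing candidates."""
    return candidate_a if candidate_a[1] >= candidate_b[1] else candidate_b
```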
5. The network and load based edge gateway cluster computational power scheduling method of claim 1, wherein the calculating of the cost overhead required to process the industrial data on an industrial gateway node within a different gateway cluster and/or the cost overhead required to process the industrial data on a cloud node comprises:
calculating a bandwidth prediction value;
wherein B_k is the bandwidth prediction value, t_k is the arrival time of the acknowledgment character of the current network TCP protocol, t_{k-1} is the arrival time of the previous acknowledgment character, d_k is the amount of data acknowledged as received by the k-th acknowledgment character, d_{k-1} is the amount of data acknowledged as received by the (k-1)-th acknowledgment character, RTT_k is the round-trip delay at time t_k, and α_k is the adjustment factor at time t_k.
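The prediction formula itself is not reproduced in the text; the following is an assumed reconstruction using the symbols defined above: an exponentially weighted average of the acknowledged bytes per ACK interval, with α_k as the smoothing/adjustment factor. Treat the exact form as a guess:

```python
def predict_bandwidth(b_prev, t_k, t_prev, d_k, d_prev, alpha_k):
    """Hypothetical smoothed TCP-ACK bandwidth estimate: blend the previous
    prediction b_prev with the instantaneous rate observed between the last
    two acknowledgment characters. Assumed reconstruction, not the patent's
    exact formula."""
    inst = (d_k - d_prev) / (t_k - t_prev)   # bytes acknowledged per second
    return alpha_k * b_prev + (1.0 - alpha_k) * inst
```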
6. The network and load based edge gateway cluster computational power scheduling method of claim 5, wherein the calculating a cost overhead required to process the industrial data at an industrial gateway node within a different gateway cluster and/or a cost overhead required to process the industrial data at a cloud node comprises:
when a computing task is executed on the local gateway cluster, determining, based on the bandwidth prediction value, the cost overhead required for processing and calculating the industrial data on the local gateway cluster as:
T_i^L = T_i^Lexe = w_i / f_i;
wherein T_i^L is the time required for processing and calculating the industrial data generated by device i in the local gateway cluster, w_i is the data transmission amount of the industrial data to be calculated, and f_i is the data computing capability of the local gateway cluster hardware;
the time required for processing and calculating the industrial data generated by device i in the local gateway cluster is the cost overhead required for processing and calculating the industrial data on the local gateway cluster.
7. The network and load based edge gateway cluster computational power scheduling method of claim 5, wherein the calculating a cost overhead required to process the industrial data at an industrial gateway node within a different gateway cluster and/or a cost overhead required to process the industrial data at a cloud node comprises:
when the computing task is executed on another gateway cluster, determining, based on the bandwidth prediction value, the cost overhead required for processing and calculating the industrial data on the other gateway cluster as:
T_i^LE = T_i^LtoE + T_i^Eexe, where T_i^LtoE = w_i / (λ·B_k) and T_i^Eexe = w_i / f_e;
wherein w_i is the data transmission amount of the industrial data to be calculated, B_k is the bandwidth prediction value, λ is the bandwidth correction coefficient, T_i^LtoE is the data transmission delay, T_i^Eexe is the processing time of the industrial data at the edge node, f_e is the computing capability of the other gateway cluster's hardware, and T_i^LE is the time required for scheduling the industrial data generated by device i to the other gateway cluster for processing and calculation;
the time required for processing and calculating the industrial data generated by device i in the other gateway cluster is the cost overhead required for processing and calculating the industrial data on the other gateway cluster.
8. The network and load based edge gateway cluster computational power scheduling method of claim 5, wherein the calculating a cost overhead required to process the industrial data at an industrial gateway node within a different gateway cluster and/or a cost overhead required to process the industrial data at a cloud node comprises:
when the computing task is executed at a cloud node, determining, based on the bandwidth prediction value, the cost overhead required for processing and calculating the industrial data at the cloud node as:
T_i^LC = T_i^LtoE + T_i^EtoC + T_i^Sexe, where T_i^Sexe = w_i / f_s;
wherein w_i is the data transmission amount of the industrial data to be calculated, λ is the bandwidth correction coefficient, B_k is the bandwidth prediction value, T_i^EtoC is the transmission delay from the relay node to the cloud server, T_i^Sexe is the processing time of the industrial data at the cloud node, f_s is the computing capability of the cloud server, T_i^LtoE is the data transmission delay, and T_i^LC is the time required for scheduling the industrial data generated by device i to the cloud server for processing and calculation;
the time required for scheduling the industrial data generated by device i to the cloud server for processing and calculation is the cost overhead required for processing and calculating the industrial data at the cloud node.
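The three cost overheads of claims 6 through 8 can be compared side by side. The transmission term w_i / (λ·B_k) is reconstructed from the symbol definitions and should be read as an assumption:

```python
def cost_local(w_i, f_i):
    """T_i^L = T_i^Lexe = w_i / f_i : pure local execution time."""
    return w_i / f_i

def cost_edge(w_i, f_e, lam, b_k):
    """T_i^LE: transmission to the other gateway cluster plus execution there."""
    t_lto_e = w_i / (lam * b_k)    # assumed data transmission delay
    return t_lto_e + w_i / f_e

def cost_cloud(w_i, f_s, lam, b_k, t_eto_c):
    """T_i^LC: transmission, relay-to-cloud delay, and cloud execution."""
    t_lto_e = w_i / (lam * b_k)
    return t_lto_e + t_eto_c + w_i / f_s

def cheapest(w_i, f_i, f_e, f_s, lam, b_k, t_eto_c):
    """Pick the execution site with the lowest total cost overhead."""
    costs = {"local": cost_local(w_i, f_i),
             "edge":  cost_edge(w_i, f_e, lam, b_k),
             "cloud": cost_cloud(w_i, f_s, lam, b_k, t_eto_c)}
    return min(costs, key=costs.get)
```

With a fast cloud server and adequate bandwidth, the cloud wins; with a slow or distant link, the local cluster does.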
9. The network and load based edge gateway cluster computing power scheduling method of claim 1, further comprising:
setting a task queue threshold on an industrial gateway node;
when an industrial gateway node receives a task for calculating industrial data, if the length of the task queue of the current industrial gateway node equals the threshold, the task at the head of the task queue is uploaded to the cloud node for calculation.
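A minimal sketch of the queue-threshold offload rule in claim 9; the class and attribute names are illustrative:

```python
from collections import deque

class GatewayNode:
    """When the local task queue is full, the head-of-queue task is offloaded
    to the cloud node before the newly received task is enqueued."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.queue = deque()
        self.offloaded = []          # tasks handed to the cloud node

    def receive(self, task):
        if len(self.queue) == self.threshold:
            self.offloaded.append(self.queue.popleft())  # head goes to cloud
        self.queue.append(task)

node = GatewayNode(threshold=2)
for t in ["t1", "t2", "t3"]:
    node.receive(t)
```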
10. An edge gateway cluster computing power scheduling system based on a network and a load, the system comprising:
the data acquisition module is used for acquiring industrial data through an industrial gateway deployed in the working area;
the gateway cluster construction module is used for forming a gateway cluster by a plurality of industrial gateways in the same working area;
the data synchronization module is used for synchronizing industrial data among all industrial gateways in the gateway cluster by adopting a Raft algorithm;
the data calculation module is used for calculating cost expenditure required by processing the industrial data on the industrial gateway nodes in different gateway clusters and/or cost expenditure required by processing the industrial data on the cloud node;
the data scheduling module is used for allocating computing tasks on different industrial gateway nodes and/or cloud nodes through a Q-learning reinforcement learning algorithm based on the cost overhead;
the data transmission module is used for enabling industrial data to interact between each gateway cluster and the cloud node;
and the data storage module is used for storing the industrial data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311527012.5A CN117768481A (en) | 2023-11-15 | 2023-11-15 | Edge gateway cluster computing power scheduling method and system based on network and load |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117768481A true CN117768481A (en) | 2024-03-26 |
Family
ID=90313333
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||