CN117176726A - Cloud edge end cooperative high concurrency access method and system - Google Patents

Cloud edge end cooperative high concurrency access method and system Download PDF

Info

Publication number
CN117176726A
CN117176726A CN202311167610.6A CN202311167610A CN117176726A CN 117176726 A CN117176726 A CN 117176726A CN 202311167610 A CN202311167610 A CN 202311167610A CN 117176726 A CN117176726 A CN 117176726A
Authority
CN
China
Prior art keywords
data
service
edge gateway
access
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311167610.6A
Other languages
Chinese (zh)
Inventor
李立生
李帅
文艳
文祥宇
张世栋
刘洋
房牧
李建修
张鹏平
由新红
孙勇
张林利
王峰
苏国强
刘合金
黄敏
于海东
刘文彬
李景华
刘明林
黄锐
刘兆元
公伟勇
梁子龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
QINGDAO POWER SUPPLY Co OF STATE GRID SHANDONG ELECTRIC POWER Co
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
State Grid Shandong Electric Power Co Ltd
Economic and Technological Research Institute of State Grid Shandong Electric Power Co Ltd
Original Assignee
QINGDAO POWER SUPPLY Co OF STATE GRID SHANDONG ELECTRIC POWER Co
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
State Grid Shandong Electric Power Co Ltd
Economic and Technological Research Institute of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by QINGDAO POWER SUPPLY Co OF STATE GRID SHANDONG ELECTRIC POWER Co, State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd, State Grid Shandong Electric Power Co Ltd, Economic and Technological Research Institute of State Grid Shandong Electric Power Co Ltd filed Critical QINGDAO POWER SUPPLY Co OF STATE GRID SHANDONG ELECTRIC POWER Co
Priority to CN202311167610.6A priority Critical patent/CN117176726A/en
Publication of CN117176726A publication Critical patent/CN117176726A/en
Pending legal-status Critical Current

Links

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the technical field of power systems, and discloses a cloud edge end cooperative high concurrency access method and a system, wherein the method comprises the following steps: collecting power service data of a service terminal, analyzing the power service data, determining an edge gateway access type of the power service data, and determining an access channel priority of the service terminal to an edge gateway according to the edge gateway access type; the priority of the access channel is ordered in a descending order to obtain a first ordering queue, and the service terminal which accords with the pre-allocation judging threshold value of the access channel is accessed to the corresponding edge gateway according to the first ordering queue; and determining the profit of the edge gateway accessing the cloud server according to the high concurrency access scene of the multi-time scale terminal of the power system, and sorting the edge gateways which are not accessed to the cloud server in a descending order according to the profit to obtain a second sorting queue, and accessing the edge gateway to the cloud server corresponding to the highest profit according to the second sorting queue. The invention meets the requirement of differentiated access of mass service terminals.

Description

Cloud edge end cooperative high concurrency access method and system
Technical Field
The invention relates to the technical field of power systems, in particular to a cloud edge end cooperative high concurrency access method and system.
Background
The cloud edge end cooperatively combines the advantages of cloud computing and edge computing, so that the demand of edge cloud data transmission bandwidth is greatly reduced, and the data processing efficiency and the service response capability are improved. However, with the large-scale access of high-proportion new energy, the concurrent access requirements of new states such as source network load storage collaborative scheduling, distributed power management and control, power grid panoramic situation awareness and the like are high, and the acquired data volume is large. In order to meet the high concurrent access requirements of different services, communication resources are required to be reasonably configured, and channel pre-allocation and load balancing are performed. However, the traditional cloud side cooperation lacks an effective end-side channel pre-allocation and side cloud data access mechanism, and cannot meet the requirements of high concurrent access and data processing under the constraint of limited communication, calculation and storage resources, so that the problem that cloud side load is unbalanced and service requirements cannot be guaranteed exists.
The invention patent number 202010582955.8 provides a channel resource allocation system and a channel resource allocation method, which allocate resources in edge equipment based on a terminal access request, if the terminal access request cannot be satisfied, different opportunities are selected to enter a cloud service center resource allocation stage according to the requested service grade, resources are allocated in the cloud service center, service types and grades are distinguished from the perspective of resource service matching integration, different cloud edge cooperative resources are allocated according to different service types and grades, and the optimal allocation of the edge channel resources is ensured, so that the reliability of large-scale terminal access is improved. The patent of the invention 202210635189.6 proposes a cloud platform load balancing method, wherein an active acquisition method is adopted to acquire corresponding network delay, access load information is acquired based on a network response delay value, and a load balancing server is connected for processing, so that cloud platform load balancing is realized. The above-described technique is optimized from only a single aspect of terminal access management or load balancing, and has the following drawbacks.
1. In the prior art, a large amount of frequent signaling interaction is required between the terminal and the edge gateway, the constraint of the side layer queue state and the end layer differentiated access success rate is ignored, and the method cannot adapt to a high-concurrency access scene of a large number of terminals.
2. The prior art ignores the service priority of the terminal, and is difficult to meet the differentiated requirements of different services; meanwhile, queue backlog evolution and space-time coupling of cloud edge nodes and the influence of queue backlog difference among cloud edge nodes on high concurrency access decisions are not considered, so that partial network node data backlog is serious, and the service timely response requirement of massive terminals under high concurrency access cannot be met.
Disclosure of Invention
The embodiment of the invention provides a cloud edge end cooperative high concurrency access method and a system, which are used for solving the technical problems in the prior art.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
According to a first aspect of the embodiment of the invention, a cloud edge end cooperative high concurrency access method is provided, and the Yun Bianduan cooperative high concurrency access method is applied to a high concurrency access scene of a multi-time-scale terminal of an electric power system.
In one embodiment, the Yun Bianduan co-operating with the high concurrency access method comprises:
collecting power service data of a service terminal, analyzing the power service data, determining an edge gateway access type of the power service data, and determining an access channel priority of the service terminal to access an edge gateway according to the edge gateway access type;
the priority of the access channel is ordered in a descending order to obtain a first ordering queue, and the service terminal which accords with the pre-allocation judgment threshold of the access channel is accessed to the corresponding edge gateway according to the first ordering queue;
determining profit of the edge gateway accessing the cloud server according to the high concurrency access scene of the multi-time scale terminal of the power system, and sorting the edge gateways which are not accessed to the cloud server in a descending order according to the profit to obtain a second sorting queue, and accessing the edge gateway to the cloud server corresponding to the highest profit according to the second sorting queue.
In one embodiment, the power system multi-time scale terminal high concurrency access scenario includes a terminal layer, an edge layer and a cloud layer,
wherein the terminal layer includes N service terminals, and the set of the service terminals is d= { D 1 ,...,d n ,...,d N -a }; the edge layer comprises J edge gateways, and the set of the edge gateways is E= { E 1 ,...,e j ,...,e J Service terminals in the communication range of the edge gateway are gathered intoThe cloud layer comprises a high concurrency access management and control platform and K cloud servers, and the set of the cloud servers is S= { S 1 ,...,s k ,...,s K };
The service terminal transmits the acquired data to an edge gateway through a power line carrier communication network; edge gateway vs. communication range D j The individual terminals pre-allocate access channels and have D j The data queue is used for storing the uncomputed business terminal data; the high concurrency access management and control platform dynamically adjusts the data access decision of the edge gateway according to the queue backlog of the edge gateway and the cloud server;
the terminal layer, the edge layer and the cloud layer adopt a multi-time scale model to divide data transmission time into I time periods, and each time period comprises T 0 Each time slot, the time slot length is tau.
In one embodiment, a first data queue backlog evolution model is provided between the service terminal and the edge gateway, and a formula of the first data queue backlog evolution model is:
The edge gateway and the cloud server are provided with a second data queue backlog evolution model, and the formula of the second data queue backlog evolution model is as follows:
in the method, in the process of the invention,buffering a data queue of a service terminal for an edge gateway, < >>Data volume uploaded to edge gateway for the t-th slot service terminal,/the number of data volume uploaded to edge gateway for the t-th slot service terminal>Data volume for service terminal uploaded to cloud server for edge gateway, +.>Pre-assigning an indication variable, +.>Indicating pre-allocation of transmission channels for service terminals, otherwiseFor the upload rate of the service terminal to the edge gateway, < >>Constraint for minimum collected data quantity of service terminal; />The uploading rate from the edge gateway to the cloud server is set; ,/>Data volume, x, uploaded to cloud server for queues of service terminals in edge gateway j,k (t) selecting an indicator variable, x, for a cloud server of a tth time slot j,k (t) =1 means that the edge gateway uploads data to the cloud server computing, otherwise x j,k (t)=0,Z j,k (t) caching data queues of edge gateways for cloud servers,>data volume uploaded to cloud server for t-th time slot edge gateway, Y j,k (t) processing data volume from edge gateway for tth slot cloud server, +.>Computing resource for processing business terminal data in edge gateway for t-th time slot cloud server,/for business terminal data in edge gateway >Computing resources required for processing per bit of service terminal data.
In one embodiment, the edge gateway is provided with a first load balancing degree model for balancing the load of the data queue of the service terminal, and the formula of the first load balancing degree model is as follows:
the cloud server is provided with a second load balancing degree model for carrying out load balancing on the data queues of the edge gateway, and the formula of the second load balancing degree model is as follows:
in the method, in the process of the invention,for average queue backlog of edge gateway, +.>Buffering a data queue of a service terminal for an edge gateway, < >>Average queue backlog for cloud server, Z j,k (t) caching data queues of edge gateways for cloud servers,>load balancing for data queues of service terminals in edge gateway,/-, for>And (5) balancing the load of the data queue of the edge gateway in the cloud server.
In one embodiment, analyzing the power traffic data, determining an edge gateway access type of the power traffic data includes:
analyzing the power service data of each time terminal, and judging whether the service terminal corresponding to the power service data is a new access service terminal or not;
if the judging result is yes, comparing the service data characteristic vector of the power service data with the data flow characteristic vector of the service terminal, calculating service matching degree, and determining the access type of the edge gateway according to the service matching degree; under the condition that the judgment result is negative, adopting a historical service classification result to determine the access type of the edge gateway;
The calculation formula of the service matching degree is as follows:
in the method, in the process of the invention,is the data flow characteristic vector, X of the service terminal m Is a data feature vector for class m traffic,matching the service.
In one embodiment, determining the access channel priority of the service terminal to the edge gateway according to the edge gateway access type includes:
calculating the average value deviation of a queue of the service terminal and the access success rate deviation of the service terminal and the edge gateway based on the historical power service data according to the access type; the queue average deviation comprises queue backlog, queue input, queue output and average deviation of the same service terminals;
and calculating the access channel priority of the service terminal to the edge gateway according to the queue average value deviation and the access success rate deviation.
In one embodiment, the calculation formula of the queue average deviation is:
in the method, in the process of the invention,for queue backlog bias, +.>Inputting deviation for queue, ++>Output bias for queue, ++>Buffering a data queue of a service terminal for an edge gateway, < >>Data volume uploaded to edge gateway for the t-th slot service terminal,/the number of data volume uploaded to edge gateway for the t-th slot service terminal>Data volume of service terminal uploaded to cloud server for edge gateway, |S m (i) I is a set of service types S m (i) The number of elements in the matrix.
In one embodiment, the calculation formula of the access success rate deviation is:
in the method, in the process of the invention,for accessing success rate deviation c n Minimum desired constraint for service terminals and edge gateways, < ->The channel of the ith period is pre-assigned an indicator variable, l is the sum index.
In one embodiment, the calculation formula of the access channel priority is:
in the method, in the process of the invention,for access channel priority, and +.>The larger indicates that the next period is more prone to pre-allocating transmission channels for the traffic terminal, +.>For queue backlog bias, +.>Inputting deviation for queue, ++>The deviations are output for the queues and,and the access success rate deviation is obtained.
In one embodiment, according to a high concurrency access scenario of a multi-time scale terminal of a power system, a calculation formula when determining profit of an edge gateway accessing a cloud server is:
in χ j,k (t) profit for the edge gateway to access the cloud server, p j,k (t) bid costs to be paid for the edge gateway to access the cloud server,for the service priority weight of the service terminal, < +.>Load balancing for data queues of service terminals in edge gateway,/-, for>And (5) balancing the load of the data queue of the edge gateway in the cloud server.
According to a second aspect of the embodiment of the invention, a cloud edge end cooperative high concurrency access system is provided, and the Yun Bianduan cooperative high concurrency access system is applied to a multi-time-scale terminal high concurrency access scene of a power system.
In one embodiment, the Yun Bianduan cooperative high concurrency access party system includes:
the priority determining module is used for collecting the electric power business data of the business terminal, analyzing the electric power business data, determining the access type of the edge gateway of the electric power business data, and determining the access channel priority of the business terminal to the edge gateway according to the access type of the edge gateway;
the side access module is used for carrying out descending order sequencing on the priority of the access channel to obtain a first sequencing queue, and accessing the service terminal which accords with the access channel pre-allocation judgment threshold to the corresponding edge gateway according to the first sequencing queue;
bian Yun access module, configured to determine profit of the edge gateway accessing to the cloud server according to the high concurrency access scenario of the multi-time scale terminal of the power system, and sort the edge gateways that are not accessing to the cloud server in descending order according to the profit, to obtain a second sorting queue, and access the edge gateway to the cloud server corresponding to the highest profit according to the second sorting queue.
In one embodiment, the power system multi-time scale terminal high concurrency access scenario includes a terminal layer, an edge layer and a cloud layer,
wherein the terminal layer includes N service terminals, and the set of the service terminals is d= { D 1 ,...,d n ,...,d N -a }; the edge layer comprises J edge gateways, and the set of the edge gateways is E= { E 1 ,...,e j ,...,e J Service terminals in the communication range of the edge gateway are gathered intoThe cloud layer comprises a high concurrency access management and control platform and K cloud servers, and the set of the cloud servers is S= { S 1 ,...,s k ,...,s K };
The service terminal transmits the acquired data to an edge gateway through a power line carrier communication network; edge gateway vs. communication range D j The individual terminals pre-allocate access channels and have D j The data queue is used for storing the uncomputed business terminal data; the high concurrency access management and control platform dynamically adjusts the data access decision of the edge gateway according to the queue backlog of the edge gateway and the cloud server;
the terminal layer, the edge layer and the cloud layer adopt a multi-time scale model to divide data transmission time into I time periods, and each time period comprises T 0 Each time slot, the time slot length is tau.
In one embodiment, a first data queue backlog evolution model is provided between the service terminal and the edge gateway, and a formula of the first data queue backlog evolution model is:
The edge gateway and the cloud server are provided with a second data queue backlog evolution model, and the formula of the second data queue backlog evolution model is as follows:
in the method, in the process of the invention,buffering a data queue of a service terminal for an edge gateway, < >>Data volume uploaded to edge gateway for the t-th slot service terminal,/the number of data volume uploaded to edge gateway for the t-th slot service terminal>Data volume for service terminal uploaded to cloud server for edge gateway, +.>Pre-assigning an indication variable, +.>Indicating pre-allocation of transmission channels for service terminals, otherwiseFor the upload rate of the service terminal to the edge gateway, < >>Constraint for minimum collected data quantity of service terminal; />The uploading rate from the edge gateway to the cloud server is set; ,/>Data volume, x, uploaded to cloud server for queues of service terminals in edge gateway j,k (t) selecting an indicator variable, x, for a cloud server of a tth time slot j,k (t) =1 means that the edge gateway uploads data to the cloud server computing, otherwise x j,k (t)=0,Z j,k (t) caching data queues of edge gateways for cloud servers,>data volume uploaded to cloud server for t-th time slot edge gateway, Y j,k (t) processing data volume from edge gateway for tth slot cloud server, +.>Computing resource for processing business terminal data in edge gateway for t-th time slot cloud server,/for business terminal data in edge gateway >Computing resources required for processing per bit of service terminal data.
In one embodiment, the edge gateway is provided with a first load balancing degree model for balancing the load of the data queue of the service terminal, and the formula of the first load balancing degree model is as follows:
the cloud server is provided with a second load balancing degree model for carrying out load balancing on the data queues of the edge gateway, and the formula of the second load balancing degree model is as follows:
in the method, in the process of the invention,for average queue backlog of edge gateway, +.>Buffering a data queue of a service terminal for an edge gateway, < >>Average queue backlog for cloud server, Z j,k (t) cloud server cacheData queue of edge gateway->Load balancing for data queues of service terminals in edge gateway,/-, for>And (5) balancing the load of the data queue of the edge gateway in the cloud server.
In one embodiment, the priority determining module includes a terminal judging module, a matching degree calculating module and a type determining module, wherein,
the terminal judging module is used for analyzing the power service data of each time terminal and judging whether the service terminal corresponding to the power service data is a new access service terminal or not;
The matching degree calculating module is used for comparing the service data characteristic vector of the power service data with the data flow characteristic vector of the service terminal under the condition that the judging result is yes, and calculating the service matching degree;
the type determining module is used for determining the access type of the edge gateway according to the service matching degree under the condition that the judging result is yes; under the condition that the judgment result is negative, adopting a historical service classification result to determine the access type of the edge gateway;
the calculation formula of the service matching degree is as follows:
in the method, in the process of the invention,is the data flow characteristic vector, X of the service terminal m Is a data feature vector for class m traffic,matching the service.
In one embodiment, the priority determining module includes a queue average calculating module, an access success rate deviation calculating module, and a priority calculating module, wherein,
the queue average value calculation module is used for calculating the queue average value deviation of the service terminal based on the historical power service data according to the access type, wherein the queue average value deviation comprises queue backlog, queue input, queue output and average value deviation of the same type of service terminal;
the access success rate deviation calculation module is used for calculating the access success rate deviation of the service terminal and the edge gateway based on the historical power service data according to the access type;
And the priority calculating module is used for calculating the access channel priority of the service terminal to the edge gateway according to the queue average value deviation and the access success rate deviation.
In one embodiment, the calculation formula of the queue average deviation is:
in the method, in the process of the invention,for queue backlog bias, +.>Inputting deviation for queue, ++>Output bias for queue, ++>Buffering a data queue of a service terminal for an edge gateway, < >>Data volume uploaded to edge gateway for the t-th slot service terminal,/the number of data volume uploaded to edge gateway for the t-th slot service terminal>For the data volume of the service terminal uploaded to the cloud server by the edge gateway, S m (i) For a set of traffic types S m (i) The number of elements in the matrix.
In one embodiment, the calculation formula of the access success rate deviation is:
in the method, in the process of the invention,for accessing success rate deviation c n Minimum desired constraint for service terminals and edge gateways, < ->The channel of the ith period is pre-assigned an indicator variable, l is the sum index.
In one embodiment, the calculation formula of the access channel priority is:
in the method, in the process of the invention,for access channel priority, and +.>The larger indicates that the next period is more prone to pre-allocating transmission channels for the traffic terminal, +.>For queue backlog bias, +.>Inputting deviation for queue, ++ >The deviations are output for the queues and,and the access success rate deviation is obtained.
In one embodiment, according to a high concurrency access scenario of a multi-time scale terminal of a power system, a calculation formula when determining profit of an edge gateway accessing a cloud server is:
in χ j,k (t) profit for the edge gateway to access the cloud server, p j,k (t) bid costs to be paid for the edge gateway to access the cloud server,for the service priority weight of the service terminal, < +.>Load balancing for data queues of service terminals in edge gateway,/-, for>And (5) balancing the load of the data queue of the edge gateway in the cloud server.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
the invention carries out service classification by calculating the matching degree of the terminal data flow characteristic vector and the service data characteristic vector. Based on the classification result, an access priority grade is built according to the side queue state to decide the pre-allocation of the access channel, the side queue state deviation and the end access success rate deviation are comprehensively considered during the channel pre-allocation, the accuracy of the pre-allocation is improved, the terminal access time delay is reduced, and the differentiated requirement of high-concurrency access of the terminal is met.
Based on the back pressure design idea, the cloud server access strategy of the edge gateway is optimized through the load under-balance perception price increasing mechanism, the access quantity and queue backlog are reduced, high concurrency constraint is met, dynamic adaptation of cloud server computing resources and the edge gateway queue backlog is realized through a dynamic bidding mode, cloud edge load balance is realized, and business data processing requirements are met.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a cloud-edge end co-ordinated high concurrency access method, according to an example embodiment;
FIG. 2 is a block diagram illustrating a cloud-edge co-high concurrency access system in accordance with an illustrative embodiment;
FIG. 3 is a schematic diagram illustrating a cloud-edge cooperative high concurrency access procedure in practical application according to an exemplary embodiment;
FIG. 4 is an architecture diagram of a cloud-edge co-high concurrency access system in actual use, as illustrated in accordance with an example embodiment;
fig. 5 is a schematic structural diagram of a cloud edge end cooperative high concurrency access device in practical application according to an exemplary embodiment.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments herein to enable those skilled in the art to practice them. Portions and features of some embodiments may be included in, or substituted for, those of others. The scope of the embodiments herein includes the full scope of the claims, as well as all available equivalents of the claims. The terms "first," "second," and the like herein are used merely to distinguish one element from another element and do not require or imply any actual relationship or order between the elements. Indeed the first element could also be termed a second element and vice versa. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a structure, apparatus, or device that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such structure, apparatus, or device. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a structure, apparatus or device comprising the element. Various embodiments are described herein in a progressive manner, each embodiment focusing on differences from other embodiments, and identical and similar parts between the various embodiments are sufficient to be seen with each other.
The terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like herein refer to an orientation or positional relationship based on that shown in the drawings, merely for ease of description herein and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operate in a particular orientation, and thus are not to be construed as limiting the invention. In the description herein, unless otherwise specified and limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, and may be, for example, mechanically or electrically coupled, may be in communication with each other within two elements, may be directly coupled, or may be indirectly coupled through an intermediary, as would be apparent to one of ordinary skill in the art.
Herein, unless otherwise indicated, the term "plurality" means two or more.
Herein, the character "/" indicates that the front and rear objects are an or relationship. For example, A/B represents: a or B.
Herein, the term "and/or" is an association relation describing an object, meaning that three relations may exist. For example, a and/or B, represent: a or B, or, A and B.
It should be understood that, although the steps in the flowchart are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, nor does the order in which the sub-steps or stages are performed necessarily performed in sequence, but may be performed alternately or alternately with at least a portion of other steps or other steps.
The various modules in the apparatus or system of the present application may be implemented in whole or in part in software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
Embodiments of the invention and features of the embodiments may be combined with each other without conflict.
Fig. 1 illustrates an embodiment of a cloud-edge co-ordinated high concurrency access method of the present invention.
In this alternative embodiment, the Yun Bianduan cooperative high concurrency access method is applied to a high concurrency access scenario of a multi-time-scale terminal of a power system, and includes:
step S101, collecting power business data of a business terminal, analyzing the power business data, determining an edge gateway access type of the power business data, and determining an access channel priority of the business terminal to an edge gateway according to the edge gateway access type;
step S103, descending order is carried out on the priority of the access channel to obtain a first ordering queue, and a service terminal which accords with the pre-allocation judgment threshold value of the access channel is accessed to a corresponding edge gateway according to the first ordering queue;
step S105, according to a high concurrency access scene of the multi-time scale terminal of the power system, determining profit of the edge gateway accessing the cloud server, and according to the profit, ordering the edge gateways which are not accessed to the cloud server in a descending order to obtain a second ordering queue, and according to the second ordering queue, accessing the edge gateway to the cloud server corresponding to the highest profit.
In this optional embodiment, when the power service data is analyzed and the edge gateway access type of the power service data is determined, the power service data of each time terminal may be analyzed, and whether the service terminal corresponding to the power service data is a new access service terminal may be determined; if the judging result is yes, comparing the service data characteristic vector of the power service data with the data flow characteristic vector of the service terminal, calculating service matching degree, and determining the access type of the edge gateway according to the service matching degree; and under the condition that the judgment result is negative, determining the access type of the edge gateway by adopting the historical service classification result.
In this optional embodiment, when determining the priority of the access channel of the service terminal to the edge gateway according to the access type of the edge gateway, the average value deviation of the queue of the service terminal and the access success rate deviation of the service terminal and the edge gateway may be calculated based on historical power service data according to the access type; the queue average deviation comprises queue backlog, queue input, queue output and average deviation of the same service terminals; and calculating the access channel priority of the service terminal to the edge gateway according to the queue average value deviation and the access success rate deviation.
Fig. 2 illustrates one embodiment of a cloud-edge co-high concurrency access system of the present invention.
In this alternative embodiment, the Yun Bianduan cooperative high concurrency access system is applied to a high concurrency access scenario of a multi-time-scale terminal of a power system, and includes:
the priority determining module 201 is configured to collect power service data of a service terminal, analyze the power service data, determine an edge gateway access type of the power service data, and determine an access channel priority of the service terminal for accessing to an edge gateway according to the edge gateway access type;
the side access module 203 is configured to sort the priority of the access channel in a descending order, obtain a first sorting queue, and access a service terminal that meets an access channel pre-allocation decision threshold to a corresponding edge gateway according to the first sorting queue;
bian Yun access module 205 is configured to determine profit of an edge gateway accessing to a cloud server according to a high concurrency access scenario of a multi-time scale terminal of the power system, and sort edge gateways not accessing to the cloud server in descending order according to the profit, to obtain a second sorting queue, and access the edge gateway to a cloud server corresponding to the highest profit according to the second sorting queue.
In this alternative embodiment, the priority determining module 201 may include a terminal determining module (not shown in the figure), a matching degree calculating module (not shown in the figure), and a type determining module (not shown in the figure), where the terminal determining module is configured to analyze the power service data of each time terminal, and determine whether the service terminal corresponding to the power service data is a new access service terminal; the matching degree calculating module is used for comparing the service data characteristic vector of the power service data with the data flow characteristic vector of the service terminal under the condition that the judging result is yes, and calculating the service matching degree; the type determining module is used for determining the access type of the edge gateway according to the service matching degree under the condition that the judging result is yes; and under the condition that the judgment result is negative, determining the access type of the edge gateway by adopting the historical service classification result.
In this alternative embodiment, the priority determining module 201 may further include a queue average calculating module (not shown in the figure), an access success rate deviation calculating module (not shown in the figure), and a priority calculating module (not shown in the figure), where the queue average calculating module is configured to calculate, according to the access type, a queue average deviation of the service terminal based on historical power service data, where the queue average deviation includes a queue backlog, a queue input, and a queue output, and an average deviation of the service terminal of the same type; the access success rate deviation calculation module is used for calculating the access success rate deviation of the service terminal and the edge gateway based on the historical power service data according to the access type; and the priority calculating module is used for calculating the access channel priority of the service terminal to the edge gateway according to the queue average value deviation and the access success rate deviation.
In practical application, the Yun Bianduan cooperative high concurrency access method can be divided into four steps, specifically: 1. constructing a high concurrency access scene of a novel power system multi-time scale terminal; 2. constructing a high concurrent access optimization problem of the multi-time scale terminal; 3. pre-allocation of large time scale side access channels based on access priority assessment; 4. small time scale cloud edge load balancing based on load undersalance perceived bidding.
The following describes the technical scheme of the present invention in detail based on the above four directional steps, respectively. The method comprises the following steps:
1. novel construction of high concurrency access scene of multi-time-scale terminal of power system
The novel high concurrency access scene of the multi-time scale terminal of the power system comprises a cloud layer, an edge layer and a terminal layer. Defining N service terminals contained in a terminal layer, wherein the set is D= { D 1 ,...,d n ,...,d N }. Defining an edge layer to include J edge gateways, the set being E= { E 1 ,...,e j ,...,e J }. Defined at edge gateway e j The terminal set in the communication range isCloud layer comprises a high concurrency access management and control platform and K cloud servers, wherein the cloud servers are assembled into S= { S 1 ,...,s k ,...,s K }. The service terminal transmits the collected data to the edge gateway through the power line carrier communication network, and the edge gateway is D in the communication range j Pre-distributing access channel for each terminal and constructing D j The data queues are used for storing terminal data which are not calculated, and the management and control platform dynamically adjusts the edge gateway data access decision according to the queue backlog of the edge gateway and the cloud server, so that load balancing optimization is realized. And the cloud server is used for supporting power services such as distributed power management and control, panoramic sensing and the like by processing the access data of the edge gateway.
The novel high concurrency access scene of the multi-time-scale terminal of the power system adopts a multi-time-scale model to divide I large time scales (time periods), and each time period comprises T 0 A small time scale (time slot) with a slot length τ.
Defining edge gateway e j Cached service terminalThe data queue is +.>The backlog evolution model is as follows:
wherein,representing the t-th time slot service terminal->Uploading to edge gateway e j Data volume of->Representing edge gateway e j Uploading to cloud server about service terminal +.>Is a data amount of (a) in the data stream. />Pre-assigning an indicator variable for the large scale channel of the ith period,/for the large scale channel of the ith period>Denoted as service terminal->Preassigned transmission channel, otherwise->
Depending on the minimum of the end-side data throughput and the amount of data collected by the terminal, this can be expressed as:
wherein,representing service termination->To edge gateway e j Upload rate of->Representing service termination->Is related to the traffic demand. Similarly, let go of>Depending on the minimum of edge cloud data throughput and terminal data queue backlog, this can be expressed as:
wherein,representing edge gateway e j To cloud server s k Upload rate of->For edge gateway e j Middle business terminal->Uploading the queue of the cloud server s k A small scale cloud server defining the t-th time slot selects an indicator variable of x j,k (t),x j,k (t) =1 represents edge gateway e j Uploading data to cloud server s k Calculation of otherwise x j,k (t)=0。/>
Cloud server s k Upper edge gateway e j The data queue backlog evolution modeling of (1) is:
wherein,for the tth slot edge gateway e j Uploading to cloud server s k Data amount Y of (2) j,k (t) is a tth slot cloud server s k Handling data from edge gateway e j Depending on the cloud server s k Available computing resources and edge gateway e j Can be expressed as:
wherein,representing a tth slot cloud server s k For handling edge gateway e j Middle business terminal->Computing resources of data->Representing handling of per bit service terminals->Computing resources required for the data.
2. And constructing the high concurrency access optimization problem of the multi-time-scale terminal.
And (3) constructing a data queue load balancing model of the edge gateway and the cloud server based on the data queue backlog evolution model constructed in the step (1).
Edge gateway e j Medium service terminalThe data queue load balancing of (1) is modeled as:
wherein,representing edge gateway e j Is a flat part of (2)Equal queue backlog, i.e. service termination +.>Queue backlog +.>The closer the average queue backlog, the more balanced its load. Similarly, define cloud server s k Middle edge gateway e j The data queue load balancing of (1) is modeled as:
the cloud edge load balance can be calculated as:
wherein,representing service termination->Is the traffic priority weight, beta Z Representing cloud load balancing weights.
Based on the cloud edge load balance model, the high concurrent access optimization problem of the multi-time-scale terminal can be constructed as follows:
/>
wherein q j For edge gateway e j A maximum number of allocable channels; c n Is a service terminalThe minimum expected constraint of the pre-allocation channel, namely the constraint of the success rate of terminal access; q k Is cloud server s k The maximum number of queues that can be processed simultaneously; />Indicating variable for queue backlog,/>Representation->Otherwise-> Is cloud server s k Upper queue pressure limit of (c).
The constraint conditions are as follows: c (C) 1 Pre-distributing an indicating variable, a service terminal access indicating variable and queue backlog indicating variable constraint for an access channel; c (C) 2 Pre-distributing constraint for access channel of service terminal; c (C) 3 The success rate constraint is accessed to the service terminal; c (C) 4 Accessing cloud server constraint for the edge gateway; c (C) 5 And C 6 High concurrency access constraint for service terminals, namely that each cloud server can process q at most k A service terminal queue, wherein the queue backlog of the cloud server does not exceed the upper limit
The problem of high concurrent access optimization of the multi-time-scale terminal can be further decomposed into a problem of pre-allocation optimization of a large-time-scale side access channel and a problem of cloud edge load balancing optimization of a small time scale.
3. And pre-allocating the large-time-scale side access channels based on access priority evaluation.
Further, step 3 comprises the steps of:
step 3.1: terminal business classification based on flow characteristic matching degree;
judging the access type of the terminal in each time period, if judging the service terminalFor a new access terminal, by comparing the traffic data feature vector with +.>Calculating service matching degree according to the data flow characteristic vector of the network node, and realizing service classification; otherwise, adopting the historical service classification result. The service matching degree is calculated as follows:
wherein,for business terminal->Data flow characteristic vector, X m Is the data feature vector of the m-th service. The larger indicates the business terminal +.>The higher the matching degree of the data flow characteristics with the class m service. If business terminal->Identified as class m service, service terminal +.>Joining a set of class m servicesS in S m (i) In (i.e.)>
Step 3.2: calculating the deviation of the side layer queue state and the end layer access success rate;
based on step 3.1, obtaining a terminal service classification result, and aiming at the mth service, an edge gateway e j Based on the historical data of the last period, calculating the service terminalQueue backlog, queue input, queue output, and deviation from the average of the terminals of the same class of service. Definitions->The calculation formulas are as follows:
wherein, |S m (i) I is set S m (i) The number of elements in the matrix.
Definition of the definitionConstraint c is minimum desired n And business terminal->The deviation of the success rate of the access is calculated by the following formula:
in the method, in the process of the invention,for accessing success rate deviation c n Minimum desired constraint for service terminals and edge gateways, < ->The channel of the ith period is pre-assigned an indicator variable, l is the sum index.
Step 3.3: side access channel pre-allocation policy optimization based on access priority evaluation;
based on step 3.2And +.>Access priority scoring of (c) The calculation is as follows:
wherein,the larger indicates that the next period is more prone to be business terminal +.>Preallocation transmission channel, define V min Pre-allocating a decision threshold for an access channel, the set of terminals meeting the threshold being +.>Will D j (i) The terminals in the network are arranged according to the descending order of scores, and the edge gateway e j For the first q of the sequence j Pre-allocation of access channels for service terminals, i.e
4. Small time scale cloud edge load balancing based on load undersalance perceived bidding.
Further, step 4 comprises the steps of:
step 4.1: access profit calculation based on load balance awareness.
Based on data queue load balance modeling, perceiving Bian Yun load balance, and calculating edge gateway e j Accessing cloud server s k Is to be added to the profit:
wherein p is j,k (t) is edge gateway e j Accessing cloud server s k The bid costs to be paid.
Step 4.2: cloud edge load balancing iterative optimization based on load undersalanced perceived bidding.
Based on the access profit obtained in step 4.1, the edge gateway e of the cloud server is not accessed j The access profits are arranged in descending order and are directed to the cloud server with the highest access profits, for example: cloud server s k Initiating an access request, when the same cloud server s is accessed k Edge gateway e initiating access request j When the number or queue backlog meets the high concurrency constraint, the management and control platform allows the edge gateways to access the cloud server s k The method comprises the steps of carrying out a first treatment on the surface of the Otherwise, the contention problem needs to be solved by load undersalanced perceived bidding to meet high concurrency constraints. The bidding process is introduced as follows:
management and control platform improves cloud server s based on load under-balance sensing bidding k Is not limited by the cost of (a). Cloud server s k Edge gateway e j The cost update formula of (2) is:
wherein Δp is the cost unit increase step size. The cost updating thought is as follows: the higher the terminal service priority, the larger the Yun Bian queue product pressure difference, and the smaller the cost increase. After the cost is improved based on the updating of the cost updating formula, the edge gateway participating in bidding recalculates the access cloud server s k The edge gateway with low profit gives up the cloud server s k And initiates access requests to other cloud servers capable of obtaining higher profits. Along with the continuous increase of the cost, the cloud server s is accessed k The number of edge gateways or queue backlog is gradually reduced until a high concurrency constraint is met.
And (4) repeating the steps 4.1 and 4.2 until all edge gateways are accessed to the cloud server, and ending the optimization of the current time slot. And the edge gateway uploads the data to the cloud server according to the cloud edge access strategy.
In practical application, the Yun Bianduan cooperative high concurrency access system can be divided into three layers of cloud side, side and end side as shown in fig. 4. The terminal side comprises a novel power system service terminal; the side comprises a side communication module, a service matching degree calculation module, a service classification module, a queue state deviation calculation module, an access success rate deviation calculation module, an access priority evaluation module and an access channel pre-allocation module; the cloud side comprises a cloud side communication module, a load under-balance perception bidding module, a cloud side access decision module, a data processing module and a data storage module. The concrete introduction is as follows:
1. end layer: novel power system service terminal: the method is used for collecting various power business data and uploading the data to the side layer.
2. Edge layer: and the side layer communication module is as follows: and transmitting the access channel pre-allocation decision to the service terminal by receiving various power service data uploaded by the terminal layer and cloud edge access strategies of the cloud layer, and uploading the queue data to the cloud layer. And a service matching degree calculating module: and comparing the service data characteristic vector with the terminal data flow characteristic vector, calculating the service matching degree carried by the terminal, and sending the calculation result to the service classification module. And a service classification module: and classifying the service carried by the terminal according to the service matching degree result, and transmitting the classification result to a queue state deviation calculation module and an access success rate deviation calculation module. A queue state deviation calculation module: and calculating the deviation of the queue backlog, the queue input and the queue output of the terminal and the average value of the same service terminal, and sending the calculation result to the access priority evaluation module. And the access success rate deviation calculating module is used for: and calculating the deviation between the minimum expected constraint of the pre-allocation channel and the access success rate of the terminal, and transmitting the calculation result to the access priority evaluation module. An access priority evaluation module: and calculating an access priority grade according to the queue state deviation and the access success rate deviation, and sending the access priority grade to an access channel pre-allocation module. An access channel pre-allocation module: and arranging the terminals meeting the access channel pre-allocation judgment threshold according to the score descending order according to the access priority scoring result, carrying out access channel pre-allocation decision according to the data queue according to the sorting, and transmitting the decision to the side communication module.
3. Cloud layer: cloud layer communication module: and transmitting the cloud edge access strategy to the side layer, and receiving the queue data uploaded by the side layer. Cloud edge access decision module: and calculating access profit, and optimizing cloud edge access strategies according to the access profit. If the access strategy does not meet the high concurrency constraint, a bid request is sent to a load under-balance perception bidding module; otherwise, the obtained access strategy is sent to the cloud side communication module. A load under-balance perception bidding module: and according to the bid request of the cloud side access decision module, sensing the load under-balance condition of the cloud side, updating the bid cost, and sending the updated cost to the cloud side access decision module. And a data processing module: processing the queue data uploaded by the side layer and sending the processing result to the data storage module. And a data storage module: and storing the data processing result.
In practical application, the Yun Bianduan cooperative high concurrency access device may be shown in fig. 5, and includes an edge layer communication module, a service matching degree calculation module, a service classification module, a queue state deviation calculation module, an access success rate deviation calculation module, an access priority evaluation module, an access channel pre-allocation module and a power supply module, where the power supply module is responsible for supplying power to each module in the device.
The present invention is not limited to the structure that has been described above and shown in the drawings, and various modifications and changes can be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (16)

1. The cloud side end cooperative high concurrency access method is characterized in that the Yun Bianduan cooperative high concurrency access method is applied to a high concurrency access scene of a multi-time scale terminal of an electric power system, and comprises the following steps:
collecting power service data of a service terminal, analyzing the power service data, determining an edge gateway access type of the power service data, and determining an access channel priority of the service terminal to access an edge gateway according to the edge gateway access type;
the priority of the access channel is ordered in a descending order to obtain a first ordering queue, and the service terminal which accords with the pre-allocation judgment threshold of the access channel is accessed to the corresponding edge gateway according to the first ordering queue;
determining profit of the edge gateway accessing the cloud server according to the high concurrency access scene of the multi-time scale terminal of the power system, and sorting the edge gateways which are not accessed to the cloud server in a descending order according to the profit to obtain a second sorting queue, and accessing the edge gateway to the cloud server corresponding to the highest profit according to the second sorting queue.
2. The cloud edge end cooperative high concurrency access method of claim 1, wherein the power system multi-time scale terminal high concurrency access scene comprises a terminal layer, an edge layer and a cloud layer,
wherein the terminal layer includes N service terminals, and the set of the service terminals is d= { D 1 ,...,d n ,...,d N -a }; the edge layer comprises J edge gateways, and the set of the edge gateways is E= { E 1 ,...,e j ,...,e J Edge gateway communication rangeIs set as (a) business terminalThe cloud layer comprises a high concurrency access management and control platform and K cloud servers, and the set of the cloud servers is S= { S 1 ,...,s k ,...,s K };
The service terminal transmits the acquired data to an edge gateway through a power line carrier communication network; edge gateway vs. communication range D j The individual terminals pre-allocate access channels and have D j The data queue is used for storing the uncomputed business terminal data; the high concurrency access management and control platform dynamically adjusts the data access decision of the edge gateway according to the queue backlog of the edge gateway and the cloud server;
the terminal layer, the edge layer and the cloud layer adopt a multi-time scale model to divide data transmission time into I time periods, and each time period comprises T 0 Each time slot, the time slot length is tau.
3. The cloud edge end cooperative high concurrency access method according to claim 2, wherein a first data queue backlog evolution model is provided between the service terminal and the edge gateway, and a formula of the first data queue backlog evolution model is as follows:
the edge gateway and the cloud server are provided with a second data queue backlog evolution model, and the formula of the second data queue backlog evolution model is as follows:
in the method, in the process of the invention,buffering a data queue of a service terminal for an edge gateway, < >>Data volume uploaded to edge gateway for the t-th slot service terminal,/the number of data volume uploaded to edge gateway for the t-th slot service terminal>Data volume for service terminal uploaded to cloud server for edge gateway, +.>Pre-assigning an indication variable, +.>The pre-allocation of transmission channels for service terminals is indicated, otherwise +.> For the upload rate of the service terminal to the edge gateway, < >>Is a service terminalIs limited by the minimum acquired data volume;the uploading rate from the edge gateway to the cloud server is set; ,/>Data volume, x, uploaded to cloud server for queues of service terminals in edge gateway j,k (t) selecting an indicator variable, x, for a cloud server of a tth time slot j,k (t) =1 means that the edge gateway uploads data to the cloud server computing, otherwise x j,k (t)=0,z j,k (t) caching data queues of edge gateways for cloud servers,>data volume uploaded to cloud server for t-th time slot edge gateway, Y j,k (t) processing data volume from edge gateway for tth slot cloud server, +.>Computing resource for processing business terminal data in edge gateway for t-th time slot cloud server,/for business terminal data in edge gateway>Computing resources required for processing per bit of service terminal data.
4. The cloud edge end cooperative high concurrency access method according to claim 3, wherein the edge gateway is provided with a first load balancing degree model for carrying out load balancing on a data queue of a service terminal, and a formula of the first load balancing degree model is as follows:
the cloud server is provided with a second load balancing degree model for carrying out load balancing on the data queues of the edge gateway, and the formula of the second load balancing degree model is as follows:
in the method, in the process of the invention,for average queue backlog of edge gateway, +.>Buffering a data queue of a service terminal for an edge gateway, < >>Average queue backlog for cloud server, Z j,k (t) caching data queues of edge gateways for cloud servers,>load balancing for data queues of service terminals in edge gateway,/-, for>And (5) balancing the load of the data queue of the edge gateway in the cloud server.
5. The cloud edge cooperative high concurrency access method of claim 1, wherein analyzing the power service data and determining an edge gateway access type of the power service data comprises:
analyzing the power service data of each time terminal, and judging whether the service terminal corresponding to the power service data is a new access service terminal or not;
if the judging result is yes, comparing the service data characteristic vector of the power service data with the data flow characteristic vector of the service terminal, calculating service matching degree, and determining the access type of the edge gateway according to the service matching degree; under the condition that the judgment result is negative, adopting a historical service classification result to determine the access type of the edge gateway;
the calculation formula of the service matching degree is as follows:
in the method, in the process of the invention,is the data flow characteristic vector, X of the service terminal m Data feature vector for class m service, < >>Matching the service.
6. The cloud edge cooperative high concurrency access method of claim 5, wherein determining the access channel priority of the service terminal to access the edge gateway according to the edge gateway access type comprises:
calculating the average value deviation of a queue of the service terminal and the access success rate deviation of the service terminal and the edge gateway based on the historical power service data according to the access type; the queue average deviation comprises queue backlog, queue input, queue output and average deviation of the same service terminals;
And calculating the access channel priority of the service terminal to the edge gateway according to the queue average value deviation and the access success rate deviation.
7. The cloud edge end cooperative high concurrency access method of claim 6, wherein the calculation formula of the queue average value deviation is:
in the method, in the process of the invention,for queue backlog bias, +.>Inputting deviation for queue, ++>Output bias for queue, ++>Buffering a data queue of a service terminal for an edge gateway, < >>The amount of data uploaded to the edge gateway for the tth slot service terminal,data volume of service terminal uploaded to cloud server for edge gateway, |S m (i) I is a set of service types S m (i) The number of elements in the matrix.
8. The cloud edge end cooperative high concurrency access method of claim 7, wherein the calculation formula of the access success rate deviation is as follows:
in the method, in the process of the invention,for accessing success rate deviation c n Minimum desired constraint for service terminals and edge gateways, < ->The channel of the ith period is pre-assigned an indicator variable, l is the sum index.
9. The cloud edge end cooperative high concurrency access method of claim 8, wherein the calculation formula of the access channel priority is:
in the method, in the process of the invention, For access channel priority, and +.>The larger indicates that the next period is more prone to pre-allocating transmission channels for the traffic terminal, +.>For queue backlog bias, +.>Inputting deviation for queue, ++>Output bias for queueThe difference in the number of the two,and the access success rate deviation is obtained.
10. The cloud edge end cooperative high concurrency access method according to claim 4, wherein the calculation formula when determining the profit of the edge gateway to access the cloud server according to the high concurrency access scene of the multi-time scale terminal of the power system is:
in χ j,k (t) profit for the edge gateway to access the cloud server, p j,k (t) bid costs to be paid for the edge gateway to access the cloud server,for the service priority weight of the service terminal, < +.>Load balancing for data queues of service terminals in edge gateway,/-, for>And (5) balancing the load of the data queue of the edge gateway in the cloud server.
11. The cloud side end cooperative high concurrency access system is characterized in that the Yun Bianduan cooperative high concurrency access system is applied to a multi-time scale terminal high concurrency access scene of an electric power system, and comprises the following components:
the priority determining module is used for collecting the electric power business data of the business terminal, analyzing the electric power business data, determining the access type of the edge gateway of the electric power business data, and determining the access channel priority of the business terminal to the edge gateway according to the access type of the edge gateway;
The side access module is used for carrying out descending order sequencing on the priority of the access channel to obtain a first sequencing queue, and accessing the service terminal which accords with the access channel pre-allocation judgment threshold to the corresponding edge gateway according to the first sequencing queue;
bian Yun access module, configured to determine profit of the edge gateway accessing to the cloud server according to the high concurrency access scenario of the multi-time scale terminal of the power system, and sort the edge gateways that are not accessing to the cloud server in descending order according to the profit, to obtain a second sorting queue, and access the edge gateway to the cloud server corresponding to the highest profit according to the second sorting queue.
12. The cloud-edge co-ordinated high concurrency access system of claim 11, wherein said power system multi-time scale terminal high concurrency access scenario comprises a terminal layer, an edge layer and a cloud layer,
wherein the terminal layer includes N service terminals, and the set of the service terminals is d= { D 1 ,...,d n ,...,d N -a }; the edge layer comprises J edge gateways, and the set of the edge gateways is E= { E 1 ,...,e j ,...,e J Service terminals in the communication range of the edge gateway are gathered intoThe cloud layer comprises a high concurrency access management and control platform and K cloud servers, and the set of the cloud servers is S= { S 1 ,...,s k ,...,s K };
The service terminal transmits the acquired data to an edge gateway through a power line carrier communication network; edge gateway vs. communication range D j The individual terminals pre-allocate access channels and have D j The data queue is used for storing the uncomputed business terminal data; the high concurrency access management and control platform dynamically adjusts the data access decision of the edge gateway according to the queue backlog of the edge gateway and the cloud server;
the terminal layer, theBetween the edge layer and the cloud layer, a multi-time scale model is adopted to divide the data transmission time into I time periods, and each time period comprises T 0 Each time slot, the time slot length is tau.
13. The cloud edge end cooperative high concurrency access system of claim 12, wherein a first data queue backlog evolution model is provided between the service terminal and the edge gateway, and a formula of the first data queue backlog evolution model is as follows:
the edge gateway and the cloud server are provided with a second data queue backlog evolution model, and the formula of the second data queue backlog evolution model is as follows:
in the method, in the process of the invention,buffering a data queue of a service terminal for an edge gateway, < >>Data volume uploaded to edge gateway for the t-th slot service terminal,/the number of data volume uploaded to edge gateway for the t-th slot service terminal >Data volume for service terminal uploaded to cloud server for edge gateway, +.>Pre-assigning an indication variable, +.>The pre-allocation of transmission channels for service terminals is indicated, otherwise +.> For the upload rate of the service terminal to the edge gateway, < >>Constraint for minimum collected data quantity of service terminal;the uploading rate from the edge gateway to the cloud server is set; ,/>Data volume, x, uploaded to cloud server for queues of service terminals in edge gateway j,k (t) selecting an indicator variable, x, for a cloud server of a tth time slot j,k (t) =1 means that the edge gateway uploads data to the cloud server computing, otherwise x j,k (t)=0,Z j,k (t) caching data queues of edge gateways for cloud servers,>data volume uploaded to cloud server for t-th time slot edge gateway, Y j,k (t) processing data volume from edge gateway for tth slot cloud server, +.>Computing resource for processing business terminal data in edge gateway for t-th time slot cloud server,/for business terminal data in edge gateway>Computing resources required for processing per bit of service terminal data.
14. The cloud edge end cooperative high concurrency access system of claim 13, wherein the edge gateway is provided with a first load balancing degree model for balancing load of a data queue of a service terminal, and a formula of the first load balancing degree model is as follows:
The cloud server is provided with a second load balancing degree model for carrying out load balancing on the data queues of the edge gateway, and the formula of the second load balancing degree model is as follows:
in the method, in the process of the invention,for average queue backlog of edge gateway, +.>Buffering a data queue of a service terminal for an edge gateway, < >>Average queue backlog for cloud server, Z j,k (t) caching data queues of edge gateways for cloud servers,>load balancing for data queues of service terminals in edge gateway,/-, for>And (5) balancing the load of the data queue of the edge gateway in the cloud server.
15. The cloud-edge co-high concurrency access system of claim 11, wherein the priority determination module comprises a terminal determination module, a matching degree calculation module and a type determination module, wherein,
the terminal judging module is used for analyzing the power service data of each time terminal and judging whether the service terminal corresponding to the power service data is a new access service terminal or not;
the matching degree calculating module is used for comparing the service data characteristic vector of the power service data with the data flow characteristic vector of the service terminal under the condition that the judging result is yes, and calculating the service matching degree;
The type determining module is used for determining the access type of the edge gateway according to the service matching degree under the condition that the judging result is yes; under the condition that the judgment result is negative, adopting a historical service classification result to determine the access type of the edge gateway;
the calculation formula of the service matching degree is as follows:
in the method, in the process of the invention,is the data flow characteristic vector, X of the service terminal m Data feature vector for class m service, < >>Matching the service.
16. The cloud-edge cooperative high concurrency access system of claim 11, wherein the priority determination module comprises a queue average calculation module, an access success rate deviation calculation module and a priority calculation module, wherein,
the queue average value calculation module is used for calculating the queue average value deviation of the service terminal based on the historical power service data according to the access type, wherein the queue average value deviation comprises queue backlog, queue input, queue output and average value deviation of the same type of service terminal;
the access success rate deviation calculation module is used for calculating the access success rate deviation of the service terminal and the edge gateway based on the historical power service data according to the access type;
And the priority calculating module is used for calculating the access channel priority of the service terminal to the edge gateway according to the queue average value deviation and the access success rate deviation.
CN202311167610.6A 2023-09-11 2023-09-11 Cloud edge end cooperative high concurrency access method and system Pending CN117176726A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311167610.6A CN117176726A (en) 2023-09-11 2023-09-11 Cloud edge end cooperative high concurrency access method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311167610.6A CN117176726A (en) 2023-09-11 2023-09-11 Cloud edge end cooperative high concurrency access method and system

Publications (1)

Publication Number Publication Date
CN117176726A true CN117176726A (en) 2023-12-05

Family

ID=88939250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311167610.6A Pending CN117176726A (en) 2023-09-11 2023-09-11 Cloud edge end cooperative high concurrency access method and system

Country Status (1)

Country Link
CN (1) CN117176726A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117729258A (en) * 2024-02-18 2024-03-19 绿城科技产业服务集团有限公司 Edge service gateway system and method based on cloud edge cooperation

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117729258A (en) * 2024-02-18 2024-03-19 绿城科技产业服务集团有限公司 Edge service gateway system and method based on cloud edge cooperation
CN117729258B (en) * 2024-02-18 2024-04-26 绿城科技产业服务集团有限公司 Edge service gateway system and method based on cloud edge cooperation

Similar Documents

Publication Publication Date Title
CN111614754B (en) Fog-calculation-oriented cost-efficiency optimized dynamic self-adaptive task scheduling method
CN113037877B (en) Optimization method for time-space data and resource scheduling under cloud edge architecture
CN117176726A (en) Cloud edge end cooperative high concurrency access method and system
CN112118312B (en) Network burst load evacuation method facing edge server
CN111641973A (en) Load balancing method based on fog node cooperation in fog computing network
CA3073377A1 (en) Distributed multicloud service placement engine and method therefor
CN111901145B (en) Power Internet of things heterogeneous shared resource allocation system and method
CN113452566A (en) Cloud edge side cooperative resource management method and system
CN116501711A (en) Computing power network task scheduling method based on &#39;memory computing separation&#39; architecture
CN114866462A (en) Internet of things communication routing method and system for smart campus
CN114205317B (en) SDN and NFV-based service function chain SFC resource allocation method and electronic equipment
CN113271221B (en) Network capacity opening method and system and electronic equipment
CN114205374B (en) Transmission and calculation joint scheduling method, device and system based on information timeliness
CN111371879B (en) Network path management method, device, system, service architecture and electronic equipment
CN116760722A (en) Storage auxiliary MEC task unloading system and resource scheduling method
CN114693141B (en) Transformer substation inspection method based on end edge cooperation
CN114448838A (en) System reliability evaluation method
CN116012067A (en) Resource allocation method, apparatus, computer, readable storage medium, and program product
Mirzaee et al. CHFL: A collaborative hierarchical federated intrusion detection system for vehicular networks
CN113840007B (en) Load balancing method and device
CN115208819A (en) Long-acting high-performance service scheduling and resource allocation method for edge service system
CN116936048B (en) Federal learning hospital selection method, device and storage medium for heterogeneous medical information
CN117237004B (en) Energy storage device transaction processing method and device and storage medium
CN116599966B (en) Edge cloud service parallel resource allocation method based on block chain sharing
CN113485718B (en) Context-aware AIoT application program deployment method in edge cloud cooperative system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination