CN108418718B - Data processing delay optimization method and system based on edge calculation - Google Patents

Data processing delay optimization method and system based on edge calculation

Info

Publication number
CN108418718B
Authority
CN
China
Prior art keywords
delay
edge
layer
computing
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810182882.6A
Other languages
Chinese (zh)
Other versions
CN108418718A (en)
Inventor
李光顺
禹继国
吴俊华
成秀珍
王茂励
王纪萍
宋见荣
张勇
刘云翠
张颖
闫佳和
任新荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qufu Normal University
Original Assignee
Qufu Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qufu Normal University filed Critical Qufu Normal University
Priority to CN201810182882.6A priority Critical patent/CN108418718B/en
Publication of CN108418718A publication Critical patent/CN108418718A/en
Application granted granted Critical
Publication of CN108418718B publication Critical patent/CN108418718B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/90335 Query processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/142 Network analysis or design using statistical or mathematical methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852 Delays

Abstract

The invention discloses a data processing delay optimization method and system based on edge calculation. The method comprises the following steps: constructing a network architecture model; determining the computation delay of the edge computing layer by adopting the Lagrange multiplier method; determining the communication delay of the edge computing layer by adopting the Kruskal method; determining the computation delay of the cloud computing layer; determining the communication delay of the cloud computing layer by adopting a balanced transmission method; determining a data processing delay optimization model according to the computation delay and communication delay of the edge computing layer and the computation delay and communication delay of the cloud computing layer; and determining an optimal data processing delay value according to the data processing delay optimization model. The invention first determines the data processing delay of the edge computing layer, then determines the data processing delay of the cloud computing layer, and finally determines an optimal data processing delay value from the two, thereby reducing the data processing delay of the system and improving the data processing efficiency.

Description

Data processing delay optimization method and system based on edge calculation
Technical Field
The present invention relates to the field of data processing delay technologies, and in particular, to a data processing delay optimization method and system based on edge calculation.
Background
With the rapid development of science and technology, industries such as astronomy, finance, healthcare and the Internet generate massive amounts of data. The collection, organization, analysis and application of such big data all require corresponding technical support; if handled improperly, they produce high data processing delay and reduce service efficiency.
At present, traditional Internet of Things data are processed in batches on a cloud computing platform through Hadoop, which reduces data processing delay and compensates for the limited memory capacity and low operation speed of mobile terminal devices. However, this approach has the following disadvantages: (1) the mobile terminal devices are far away from the cloud data center, and data transmission over such long distances produces high delay and energy consumption, degrading the service quality and service efficiency of the system; (2) with the development of the Internet of Things, the volume of transmitted data is growing explosively, and the huge number of devices and the massive real-time data transmission place great pressure on the network bandwidth and the cloud data center, increasing the burden on the cloud data center; (3) the long data transmission time increases the risk of data attacks. Overcoming these problems is therefore an urgent need in the art.
Disclosure of Invention
The invention aims to provide a data processing delay optimization method and system based on edge calculation so as to reduce data processing delay and improve data processing efficiency.
In order to achieve the above object, the present invention provides a data processing delay optimization method based on edge calculation, the method comprising:
constructing a network architecture model; the network architecture comprises a mobile terminal layer, an edge computing layer and a cloud computing layer;
determining the calculation delay of the edge calculation layer by adopting a Lagrange multiplier method;
determining the communication delay of the edge computing layer by adopting a Kruskal method;
determining a computation delay of the cloud computing layer;
determining communication delay of a cloud computing layer by adopting a balanced transmission method;
determining a data processing delay optimization model according to the computing delay and the communication delay of the edge computing layer and the computing delay and the communication delay of the cloud computing layer;
and determining an optimal data processing delay value according to the data processing delay optimization model.
Optionally, the determining the computation delay of the edge computation layer by using a lagrangian multiplier method specifically includes:
acquiring the computing capacity and the task amount of each edge device in the edge computing layer;
determining the computing delay of each edge device according to the computing capacity and the task amount of each edge device;
and determining the calculation delay of the edge calculation layer according to the calculation delay of each edge device.
Optionally, the determining the communication delay of the edge computation layer by using the Kruskal method specifically includes:
establishing an edge equipment weighted undirected graph according to each edge equipment in the edge calculation layer;
determining communication delay between edge devices on the edge device weighted undirected graph;
establishing a minimum weight tree according to communication delay among edge devices by adopting a Kruskal method;
and determining the communication delay of the edge calculation layer according to the minimum weight tree.
Optionally, the determining the computation delay of the cloud computing layer specifically includes:
acquiring data processing capacity and computing capacity of each cloud server in the cloud computing layer;
determining the computing delay of each cloud server according to the data processing capacity and the computing capacity of each cloud server;
and determining the computing delay of the cloud computing layer according to the computing delay of each cloud server.
Optionally, the determining the communication delay of the cloud computing layer by using the balanced transmission method specifically includes:
acquiring delay and communication rate of WAN transmission paths from each edge device to each cloud server;
determining the communication delay of each cloud server according to the delay and the communication rate of the WAN transmission path from each edge device to each cloud server;
and determining the communication delay of the cloud computing layer according to the communication delay of each cloud server.
The invention also provides a data processing delay optimization system based on edge calculation, which comprises:
the building module is used for building a network architecture model; the network architecture comprises a mobile terminal layer, an edge computing layer and a cloud computing layer;
the first calculation delay determining module is used for determining the calculation delay of the edge calculation layer by adopting a Lagrange multiplier method;
a first communication delay determining module for determining a communication delay of the edge computing layer by using a Kruskal method;
a second computation delay determination module for determining a computation delay of the cloud computing layer;
the second communication delay determining module is used for determining the communication delay of the cloud computing layer by adopting a balanced transmission method;
the data processing delay optimization model determining module is used for determining a data processing delay optimization model according to the computing delay and the communication delay of the edge computing layer and the computing delay and the communication delay of the cloud computing layer;
and the optimal data processing delay value determining module is used for determining an optimal data processing delay value according to the data processing delay optimization model.
Optionally, the first computation delay determining module specifically includes:
the first acquisition unit is used for acquiring the computing capacity and the task amount of each edge device in the edge computing layer;
an edge device calculation delay determining unit configured to determine a calculation delay of each of the edge devices according to a calculation capability and a task amount of each of the edge devices;
and the edge calculation layer calculation delay determining unit is used for determining the calculation delay of the edge calculation layer according to the calculation delay of each edge device.
Optionally, the first communication delay determining module specifically includes:
the edge device weighted undirected graph building unit is used for building an edge device weighted undirected graph according to each edge device in the edge calculation layer;
an inter-edge device communication delay determining unit, configured to determine a communication delay between edge devices on the edge device weighted undirected graph;
a minimum weight tree construction unit for constructing a minimum weight tree according to a communication delay between each edge device by using a Kruskal method;
and the edge calculation layer communication delay determining unit is used for determining the communication delay of the edge calculation layer according to the minimum weight tree.
Optionally, the second computation delay determining module specifically includes:
a second obtaining unit, configured to obtain a data processing amount and a computing capacity of each cloud server in the cloud computing layer;
the cloud server computing delay determining unit is used for determining the computing delay of each cloud server according to the data processing capacity and the computing capacity of each cloud server;
and the cloud computing layer computing delay determining unit is used for determining the computing delay of the cloud computing layer according to the computing delay of each cloud server.
Optionally, the second communication delay determining module specifically includes:
a third acquisition unit configured to acquire delay and communication rate of a WAN transmission path from each edge device to each cloud server;
the cloud server communication delay determining unit is used for determining the communication delay of each cloud server according to the delay and the communication rate of the WAN transmission path from each edge device to each cloud server;
and the cloud computing layer communication delay determining unit is used for determining the communication delay of the cloud computing layer according to the communication delay of each cloud server.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention first determines the data processing delay of the edge calculation layer. And then determining the data processing delay of the cloud computing layer. And finally, determining an optimal data processing delay value according to the data processing delay of the edge computing layer and the data processing delay of the cloud computing layer, reducing the data processing delay of the system and improving the data processing efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a data processing delay optimization method based on edge calculation according to an embodiment of the present invention;
FIG. 2 is a diagram of a network architecture model according to an embodiment of the present invention;
FIG. 3 is a network topology diagram of an edge device according to an embodiment of the present invention;
FIG. 4 is an undirected graph with edge device weights according to an embodiment of the present invention;
fig. 5 is a block diagram of a data processing delay optimization system based on edge calculation according to an embodiment of the present invention.
FIG. 6 is a simulation diagram of computation delay and communication delay of a cloud computing layer according to an embodiment of the present invention;
fig. 7 is a simulation diagram comparing data processing delays of an edge computing layer and a cloud computing layer according to an embodiment of the present invention;
FIG. 8 is a simulation diagram illustrating the effect of data processing delay on the percentage of the data amount to the total data amount processed in the edge calculation according to an embodiment of the present invention;
FIG. 9 is a simulation diagram illustrating the effect of the number of edge devices on the data processing delay according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a data processing delay optimization method and system based on edge calculation so as to reduce data processing delay and improve data processing efficiency.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
FIG. 1 is a flowchart of a data processing delay optimization method based on edge calculation according to an embodiment of the present invention; as shown in fig. 1, the present invention provides a data processing delay optimization method based on edge calculation, the method includes:
step 101: constructing a network architecture model; the network architecture comprises a mobile terminal layer, an edge computing layer and a cloud computing layer.
Step 102: and determining the calculation delay of the edge calculation layer by adopting a Lagrange multiplier method.
Step 103: and determining the communication delay of the edge computing layer by adopting a Kruskal method.
Step 104: determining a computation delay of the cloud computing layer.
Step 105: and determining the communication delay of the cloud computing layer by adopting a balanced transmission method.
Step 106: and determining a data processing delay optimization model according to the computing delay and the communication delay of the edge computing layer and the computing delay and the communication delay of the cloud computing layer.
Step 107: and determining an optimal data processing delay value according to the data processing delay optimization model.
Each step is discussed in detail below.
Step 101: constructing a network architecture model; the network architecture comprises a mobile terminal layer, an edge computing layer and a cloud computing layer. The network architecture model is shown in fig. 2.
In practical Internet of Things applications, users want to obtain the information they need, or have their requests responded to, as quickly as possible, which requires information providers to work with high efficiency. To address the problems of high transmission delay and excessive pressure on the cloud data center in cloud computing, a network architecture model is constructed, as shown in FIG. 2.
The network architecture model is divided into three layers: a cloud computing layer, an edge computing layer and a mobile terminal layer. The bottom layer is the mobile terminal layer, which comprises all terminal devices, such as smart phones, notebook computers and automobiles. A mobile terminal device acquires a service request from a user and judges whether the data corresponding to the service request is stored on the terminal; if not, the service request is sent to an edge device.
The middle layer is the edge computing layer, which comprises a plurality of edge devices. An edge device is any of a router, gateway, switch or access node, and serves as a bridge between the cloud data center and the mobile terminal users. Edge devices are mainly deployed at local mobile user sites, such as parks, shopping centers and buses. An edge device may pre-process all source data and store time-sensitive data (e.g., control information) locally, while non-time-sensitive data (e.g., monitoring information) is forwarded to the cloud server to support future data retrieval and mining. In addition, when an edge device receives a service request from a mobile user, it first determines whether the requested data is stored inside the edge device; if so, it responds to the user immediately, and otherwise it forwards the service request to the cloud server.
The top layer is a cloud computing layer, which consists of a high-end server and a data center and has strong computing and storage capacity. On one hand, the cloud server is responsible for storing a large amount of non-time-sensitive data forwarded by the edge device and responding to a user request forwarded by the edge device to cache the data to the edge device. On the other hand, due to the weak computing, storing and analyzing capabilities of the edge device, when the data processing amount exceeds the processing capability of the edge device, the rest data is forwarded to the cloud server for processing.
As shown in FIG. 2, each edge device is located near the end users and receives their service requests through the local area network (LAN), so the LAN communication delay is negligible compared with the WAN. Data left unprocessed at the edge computing layer is forwarded to the cloud servers through the wide area network (WAN) for processing, so the communication delay from the edge devices to the cloud servers must be considered.
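As an illustration of the tiered request handling described above, the following Python sketch shows how a service request could cascade from the mobile terminal layer to the edge computing layer and finally to the cloud computing layer. All class and method names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of the three-layer request routing described above.
# Class and method names are illustrative assumptions, not part of the patent.

class MobileTerminal:
    def __init__(self, local_data):
        self.local_data = local_data          # data cached on the terminal

    def request(self, key, edge_device):
        if key in self.local_data:            # respond locally if possible
            return self.local_data[key]
        return edge_device.handle(key)        # otherwise forward to the edge layer


class EdgeDevice:
    def __init__(self, cache, cloud):
        self.cache = cache                    # time-sensitive data kept locally
        self.cloud = cloud

    def handle(self, key):
        if key in self.cache:                 # respond immediately from the edge
            return self.cache[key]
        value = self.cloud.handle(key)        # otherwise forward to the cloud layer
        self.cache[key] = value               # cache the response at the edge
        return value


class CloudServer:
    def __init__(self, storage):
        self.storage = storage                # non-time-sensitive data stored in the cloud

    def handle(self, key):
        return self.storage.get(key)


cloud = CloudServer({"monitoring": "archived records"})
edge = EdgeDevice({"control": "actuator command"}, cloud)
terminal = MobileTerminal({})
print(terminal.request("control", edge))      # served by the edge device
print(terminal.request("monitoring", edge))   # served by the cloud, then cached at the edge
```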
The specific concept of determining the data processing delay of the edge calculation layer in the invention is as follows:
Each edge device is mainly deployed in areas such as supermarkets, parks and buses, and the edge devices must cooperate with one another. Therefore, the network topology formed by the m edge devices (shown in FIG. 3) is abstracted into a weighted undirected graph G = (V, E) (shown in FIG. 4), where V = {z_1, z_2, …, z_i, …, z_m} is the vertex set, z_i denotes edge device i, and m is the number of edge devices; E = {e_{z_1,z_2}, …, e_{z_i,z_j}, …, e_{z_{m-1},z_m}} is the edge set, where e_{z_i,z_j} is the communication link between edge devices z_i and z_j, and the weight τ_{z_i,z_j} on an edge denotes the communication delay between edge devices z_i and z_j.
Suppose each edge device z_i in FIG. 3 has computing power v_{z_i} (i = 1, 2, …, m), and the computation task of the edge computing layer is X. In order to reduce the amount of data forwarded from the edge computing layer to the cloud computing layer and thereby reduce communication delay, it is necessary to enhance the computing capability of the edge devices so that as many tasks as possible are processed at the edge computing layer. The invention therefore provides a scheme of mutual cooperation between the edge devices: during data processing, an edge device z_i receives the computation task X from the end user, partitions it into subtasks x_i, and distributes the subtasks to the edge devices (including itself) for computation. The specific scheme is as follows:
The delay of the edge computing layer includes the communication delay between the edge devices and the computation delay of the edge devices. For the communication delay, in the undirected graph formed by the edge devices, the communication delay between edge devices is used as the edge weight, a tree with minimum weight is generated by the Kruskal algorithm, and the minimum weight W(T) is the minimum communication delay. For an edge device i, the computation delay can be expressed as a function of the task amount x_i assigned to it; this function should satisfy two conditions: (i) the computation delay of the edge device increases as the task amount increases, and (ii) the larger the task amount, the faster the computation delay increases.
The specific steps are given below:
step 102: and determining the calculation delay of the edge calculation layer by adopting a Lagrange multiplier method. The method comprises the following specific steps:
step 1021: and acquiring the computing capacity and the task amount of each edge device in the edge computing layer.
Step 1022: determining the computing delay of each edge device according to the computing capacity and the task amount of each edge device; the concrete formula is as follows:
[Formula: computation delay of edge device i, rendered as an image in the original]
where v_{z_i} is the computing power of edge device i, x_i is the task amount of edge device i, a_i is a preset real number between 0 and 1 corresponding to edge device i, and the left-hand side is the computation delay of edge device i.
Step 1023: and determining the calculation delay of the edge calculation layer according to the calculation delay of each edge device. The concrete formula is as follows:
[Formula: computation delay of the edge computing layer, rendered as an image in the original]
where m is the total number of edge devices; the maximum amount of data that edge device i can process is denoted by a symbol rendered as an image in the original; X is the total amount of data to be processed by the edge computing layer, so the amounts of data processed on the edge devices form an m-dimensional vector x = {x_1, x_2, …, x_m}; and the left-hand side is the computation delay of the edge computing layer.
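The computation-delay formulas themselves are rendered as images in the original. As a minimal sketch, the snippet below assumes one convex form consistent with the two stated properties (delay grows with the task amount, and grows faster as the task amount grows), namely a_i * x_i^2 / v_{z_i}. The functional form, the values and all names are assumptions made for illustration only.

```python
import numpy as np

def edge_device_delay(x, v, a):
    """Assumed convex computation delay of one edge device: a * x^2 / v."""
    return a * x**2 / v

def edge_layer_computation_delay(x_vec, v_vec, a_vec):
    """Sum of the per-device delays for a given task allocation x_vec."""
    return sum(edge_device_delay(x, v, a) for x, v, a in zip(x_vec, v_vec, a_vec))

# toy allocation of X = 6 units of data over three edge devices
x_vec = np.array([1.0, 2.0, 3.0])   # task amounts x_i
v_vec = np.array([2.0, 4.0, 8.0])   # computing power v_{z_i}
a_vec = np.array([0.5, 0.5, 0.5])   # preset coefficients a_i
print(edge_layer_computation_delay(x_vec, v_vec, a_vec))
```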
The specific steps for solving the computation delay of the edge computing layer are:
I. Initialize the parameters: a given initial point x^(0), an initial multiplier vector v = (v_1, v_2, …, v_m), a penalty factor M > 0, an amplification factor β > 0, a precision ε > 0, a parameter γ ∈ (0, 1), and the iteration index k = 1.
II. From the computation delay of the edge computing layer (rendered as an image in the original), construct the communication delay objective function F(x) of the edge computing layer:
[Formula: F(x), rendered as an image in the original]
where v_i is the i-th multiplier in the initial multiplier vector, the image symbol denotes the computation delay of the edge computing layer, and h_i(x) are the constraint conditions.
III. With x^(k-1) as the initial point and x^(k) as the optimal solution, use Newton's algorithm to obtain the computation delay value of the edge computing layer from the communication delay objective function F(x).
IV. Judge whether ||h(x^(k))|| < ε.
V. If ||h(x^(k))|| ≥ ε, judge whether the quantity rendered as an image in the original is greater than the parameter γ; if it is not, set M = βM, otherwise keep M unchanged, and execute step VI.
VI. Set v_i^(k+1) = v_i^(k) - M·h_i(x^(k)), i = 1, 2, …, m, and k = k + 1, and reconstruct the communication delay objective function of the edge computing layer; here v_i^(k) is the Lagrange multiplier used in the k-th iteration, and h_i(x^(k)) is the value of the constraint function at the k-th iterate x^(k).
VII. If ||h(x^(k))|| < ε, stop the iteration and output x^(k) as the optimal solution; the communication delay objective function value corresponding to x^(k) is the minimum computation delay of the edge computing layer.
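Steps I to VII follow the classical multiplier (augmented Lagrangian) method with a penalty update. The sketch below is a generic implementation of that scheme for an equality-constrained problem; the objective, the constraint h(x) = sum(x_i) - X, the inner solver (scipy's BFGS standing in for the Newton algorithm named in the text), the parameter values and the penalty-update test (written here in its textbook form: increase M when the constraint violation does not shrink fast enough) are all illustrative assumptions, not the patent's actual formulas.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, v0, M=10.0, beta=2.0, eps=1e-6, gamma=0.25, max_iter=50):
    """Generic multiplier-method loop mirroring steps I-VII (hedged sketch)."""
    x, v = np.asarray(x0, float), np.asarray(v0, float)
    h_prev = np.linalg.norm(h(x))
    for _ in range(max_iter):
        # Step II: build F(x) = f(x) - v^T h(x) + (M/2) * ||h(x)||^2
        def F(z):
            hz = h(z)
            return f(z) - v @ hz + 0.5 * M * hz @ hz
        # Step III: minimize F from the previous iterate (BFGS stands in for Newton)
        x = minimize(F, x, method="BFGS").x
        h_norm = np.linalg.norm(h(x))
        # Steps IV / VII: stop when the constraint violation is below the precision
        if h_norm < eps:
            break
        # Step V: raise the penalty factor when the violation does not shrink fast enough
        if h_norm / max(h_prev, 1e-12) > gamma:
            M *= beta
        # Step VI: multiplier update v <- v - M * h(x)
        v = v - M * h(x)
        h_prev = h_norm
    return x, f(x)

# Toy instance: three edge devices, total task X = 6, assumed convex per-device delays
v_cap, a = np.array([2.0, 4.0, 8.0]), np.array([0.5, 0.5, 0.5])
f = lambda x: np.sum(a * x**2 / v_cap)        # assumed edge-layer computation delay
h = lambda x: np.array([np.sum(x) - 6.0])     # equality constraint sum(x_i) = X
x_opt, delay = augmented_lagrangian(f, h, x0=np.ones(3), v0=np.zeros(1))
print(x_opt, delay)
```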
Step 103: and determining the communication delay W (T) of the edge computing layer by adopting a Kruskal method.
The Kruskal algorithm is used to find the minimum communication delay between the edge devices. For the weighted undirected graph G = (V, E) of edge devices, let the minimum spanning tree of G be T = (U, TE), with initial state U = V and TE = {}, so that each edge device in T forms its own connected component. Each edge in the edge set E is then examined in increasing order of its weight (i.e., the communication delay between the edge devices). If the two examined vertices lie in two different connected components of T, the examined edge is added to TE and the two connected components are merged into one; if the two edge devices lie in the same connected component, the edge is discarded to prevent a cycle. In this way, when T contains exactly one connected component, it is a minimum spanning tree of G. The specific steps are as follows:
step 1031: and establishing an undirected graph G with the weight of the edge equipment according to each edge equipment in the edge computing layer.
Step 1032: determining a communication delay between edge devices on the edge device weighted undirected graph.
Step 1033: the Kruskal method is used to build a minimum weight tree based on the communication delay between edge devices. The method comprises the following specific steps:
I. searching two vertexes u and v corresponding to the shortest edge in the edge set E;
II. Judging whether the two vertexes u and v are positioned in two different connected components in the minimum spanning tree T of the weighted undirected graph G;
III, if the two vertexes u and v are positioned in two different connected components in the T, merging an edge formed by the two vertexes u and v into a set TE, and simultaneously connecting the two connected components into one connected component;
IV, if two vertexes u, v are contained in a connected component, then the edge is discarded;
V, marking the edge formed by the two vertices u and v in the edge set E so that it no longer participates in the selection of the subsequent shortest edge.
Step 1034: and determining the communication delay W (T) of the edge calculation layer according to the minimum weight tree.
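A compact union-find implementation of the Kruskal procedure in steps I to V might look as follows; the edge list and the delay values are illustrative, and W(T), the minimum total communication delay, is returned as the sum of the selected edge weights.

```python
def kruskal_min_delay(num_devices, edges):
    """edges: list of (delay, u, v) links between edge devices; returns (W(T), chosen edges)."""
    parent = list(range(num_devices))

    def find(i):                              # find the connected component of vertex i
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    tree, total_delay = [], 0.0
    for delay, u, v in sorted(edges):         # examine edges from smallest delay upward
        ru, rv = find(u), find(v)
        if ru != rv:                          # different components: accept the edge
            parent[ru] = rv
            tree.append((u, v, delay))
            total_delay += delay
        # same component: discard the edge to avoid a cycle
        if len(tree) == num_devices - 1:      # T now forms a single connected component
            break
    return total_delay, tree

# toy weighted undirected graph of four edge devices (delays are illustrative)
links = [(3.0, 0, 1), (1.0, 0, 2), (4.0, 1, 2), (2.0, 1, 3), (5.0, 2, 3)]
W_T, mst = kruskal_min_delay(4, links)
print(W_T, mst)
```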
The delay of the cloud computing layer mainly comprises the computation delay of the cloud servers and the communication delay from the edge devices to the cloud servers. Following the load-balancing principle, the invention treats the communication delay problem as a balanced transmission problem. Assume the amount of unprocessed data at each edge device (i.e., the amount of data that must be transferred to the cloud computing layer for processing) is l_i, and the amount of data processed by each cloud server is y_j. To forward the unprocessed data of the edge computing layer to the cloud computing layer, an optimal transmission scheme must be selected in order to reduce the transmission delay. Let d_ij be the transmission delay of the WAN transmission path from edge device i to cloud server j; the resulting m×n delay matrix with entries d_ij is rendered as an image in the original. Let λ_ij be the communication rate from edge device i to cloud server j; the resulting m×n transmission matrix with entries λ_ij is rendered as an image in the original. The communication delay problem is solved by balanced transmission to obtain the optimal allocation matrix with entries z_ij (rendered as an image in the original), where z_ij is the amount of data transferred from edge device i to cloud server j.
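The balanced-transmission step can be viewed as a transportation problem over the path delays d_ij and rates λ_ij. As a hedged sketch, the snippet below solves a simplified linear surrogate with scipy.optimize.linprog: it minimizes the total per-unit transfer time sum(z_ij / λ_ij) subject to each edge device i shipping exactly its unprocessed amount l_i and each cloud server j receiving at most an assumed capacity. The patent's exact objective is rendered as an image in the original, so this formulation, the capacity bound and all numbers are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# illustrative data: m = 2 edge devices, n = 3 cloud servers
l = np.array([4.0, 6.0])                      # unprocessed data l_i at each edge device
cap = np.array([5.0, 5.0, 5.0])               # assumed per-server capacity (not in the patent)
lam = np.array([[2.0, 1.0, 4.0],              # communication rates lambda_ij
                [1.0, 3.0, 2.0]])
m, n = lam.shape

c = (1.0 / lam).ravel()                       # per-unit transfer time for each path

# equality constraints: each edge device forwards all of its unprocessed data
A_eq = np.zeros((m, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
b_eq = l

# inequality constraints: data received by server j must not exceed its capacity
A_ub = np.zeros((n, m * n))
for j in range(n):
    A_ub[j, j::n] = 1.0
b_ub = cap

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
Z = res.x.reshape(m, n)                       # allocation matrix z_ij
print(Z)
```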
For the computation delay of the cloud servers, the data loss rate is not considered here. According to the conservation principle, the amount of data to be processed by each cloud server j is y_j, the total amount of data transferred to it (the corresponding formula is rendered as an image in the original). The specific steps are given below.
Step 104: determining the computation delay of the cloud computing layer.
Step 1041: and acquiring the data processing capacity and the computing capacity of each cloud server in the cloud computing layer.
Step 1042: determining the computing delay of each cloud server according to the data processing capacity and the computing capacity of each cloud server; the concrete formula is as follows:
[Formula: computation delay of cloud server j, rendered as an image in the original]
where y_j is the data processing amount of cloud server j, v_j is the computing power of cloud server j, n is the total number of cloud servers, and the left-hand side is the computation delay of cloud server j.
Step 1043: determining the computing delay of a cloud computing layer according to the computing delay of each cloud server; the concrete formula is as follows:
[Formula: computation delay of the cloud computing layer, rendered as an image in the original]
where the left-hand side is the computation delay of the cloud computing layer, and Y is the total data processing amount of the cloud computing layer.
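Given an allocation with entries z_ij, the conservation principle above makes the per-server workload y_j the column sum of that allocation. The sketch below computes per-server computation delays under the simple assumption that a server's computation delay is y_j / v_j; the patent's actual formula is rendered as an image in the original, so this ratio form, the aggregation and the numbers are assumptions.

```python
import numpy as np

Z = np.array([[0.0, 0.0, 4.0],          # z_ij: data shipped from edge device i to cloud server j
              [1.0, 5.0, 0.0]])
v_cloud = np.array([10.0, 10.0, 10.0])  # computing power v_j of each cloud server (illustrative)

y = Z.sum(axis=0)                       # conservation: y_j = sum over i of z_ij
t_comp = y / v_cloud                    # assumed per-server computation delay y_j / v_j
print(y, t_comp, t_comp.max())          # the layer delay could be taken as, e.g., the maximum
```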
Step 105: determining communication delay of a cloud computing layer by adopting a balanced transmission method;
step 1051: and acquiring the delay and the communication rate of the WAN transmission path from each edge device to each cloud server.
Step 1052: determining the communication delay of each cloud server according to the delay and the communication rate of the WAN transmission path from each edge device to each cloud server; the concrete formula is as follows:
[Formula: communication delay of cloud server j, rendered as an image in the original]
where d_ij is the delay of the WAN transmission path from edge device i to cloud server j (data loss rate not considered), λ_ij is the communication rate from edge device i to cloud server j (data loss rate not considered), m is the total number of edge devices, n is the total number of cloud servers, and the left-hand side is the communication delay of cloud server j.
Step 1053: determining communication delay of a cloud computing layer according to the communication delay of each cloud server; the concrete formula is as follows:
[Formula: communication delay of the cloud computing layer, rendered as an image in the original]
where the left-hand side is the communication delay of the cloud computing layer; d_ij is the delay of the WAN transmission path from edge device i to cloud server j (data loss rate not considered); λ_ij is the communication rate from edge device i to cloud server j (data loss rate not considered); m is the total number of edge devices; n is the total number of cloud servers; λ_ij^max is the maximum communication rate of each path under the bandwidth constraint; and the communication delay of cloud server j is denoted by a symbol rendered as an image in the original.
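For a given allocation, a per-path transfer time of d_ij + z_ij / λ_ij is one natural way to evaluate the WAN communication delay described above. The patent's aggregate formula (including the bandwidth-constrained rate λ_ij^max) is rendered as an image in the original, so the aggregation below, which takes the largest per-server transfer time, is only an assumption, as are all the numbers.

```python
import numpy as np

D = np.array([[0.8, 1.2, 0.5],          # WAN path delays d_ij (illustrative)
              [1.0, 0.6, 0.9]])
lam = np.array([[2.0, 1.0, 4.0],        # communication rates lambda_ij (illustrative)
                [1.0, 3.0, 2.0]])
Z = np.array([[0.0, 0.0, 4.0],          # allocation z_ij from the balanced-transmission step
              [1.0, 5.0, 0.0]])

path_time = np.where(Z > 0, D + Z / lam, 0.0)   # transfer time on each used path
t_comm_server = path_time.max(axis=0)           # assumed per-server communication delay
t_comm_cloud = t_comm_server.max()              # assumed cloud-layer communication delay
print(t_comm_server, t_comm_cloud)
```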
Step 106: computing a computational delay of a layer from the edge
Figure BDA0001589401220000126
And communication delay W (T), computing delay of the cloud computing layer
Figure BDA0001589401220000127
And communication delay
Figure BDA0001589401220000128
Determining a data processing delay optimization model; the concrete formula is as follows:
Figure BDA0001589401220000124
wherein x isiThe task amount of the edge device i; x is the total amount of data needing to be processed by the edge calculation layer; m is the total number of edge devices;
Figure BDA0001589401220000129
computing a computation delay for the layer for the edge; y isjIs the data throughput of cloud server j; n is the total number of cloud servers; y is the total data processing amount of the cloud computing layer;
Figure BDA00015894012200001210
a computing latency for cloud server j; w (T) is the communication delay of the edge calculation layer;
Figure BDA0001589401220000131
communication delay of the cloud computing layer L is the total amount of data to be processed.
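Putting the pieces together, the total data processing delay can be evaluated for a candidate split of the total data L between the edge computing layer (amounts x, summing to X) and the cloud computing layer (allocation Z, summing to Y = L - X). The sketch below reuses the hedged forms assumed in the earlier snippets; it illustrates the structure of the optimization model only, not the patent's exact formula, and every name and number in it is an assumption.

```python
import numpy as np

def total_delay(x, Z, v_edge, a, v_cloud, D, lam, W_T):
    """Assumed total delay: edge computation + edge communication W(T)
    + cloud computation + cloud communication (hedged forms from the earlier sketches)."""
    t_edge_comp = np.sum(a * x**2 / v_edge)         # assumed edge-layer computation delay
    y = Z.sum(axis=0)                               # conservation: workload of each cloud server
    t_cloud_comp = (y / v_cloud).max()              # assumed cloud-layer computation delay
    path_time = np.where(Z > 0, D + Z / lam, 0.0)   # transfer time on each used WAN path
    t_cloud_comm = path_time.max()                  # assumed cloud-layer communication delay
    return t_edge_comp + W_T + t_cloud_comp + t_cloud_comm

x = np.array([1.0, 2.0, 3.0])                       # data processed at the edge layer (X = 6)
Z = np.array([[0.0, 0.0, 4.0],                      # data forwarded to the cloud layer (Y = 10)
              [1.0, 5.0, 0.0],
              [0.0, 0.0, 0.0]])
v_edge, a = np.array([2.0, 4.0, 8.0]), np.array([0.5, 0.5, 0.5])
v_cloud = np.array([10.0, 10.0, 10.0])
D = np.array([[0.8, 1.2, 0.5], [1.0, 0.6, 0.9], [0.7, 0.9, 1.1]])
lam = np.array([[2.0, 1.0, 4.0], [1.0, 3.0, 2.0], [2.5, 2.0, 1.5]])
print(total_delay(x, Z, v_edge, a, v_cloud, D, lam, W_T=6.0))  # L = X + Y = 16
```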
In the invention, a network architecture model is first constructed; at the edge computing layer, a scheme of mutual cooperation between the edge devices is provided to enhance the computing capability of the edge computing layer and reduce the amount of data forwarded to the cloud computing layer, and the communication delay and computation delay are solved by the Kruskal algorithm and the Lagrange multiplier method, respectively, so that the data processing delay of the edge computing layer is optimized. Then, at the cloud computing layer, the communication delay from each edge device to the cloud servers is solved by the balanced transmission method, and the computation delay of the cloud computing layer is solved according to the characteristics of the cloud servers, which greatly reduces the data processing delay of the cloud computing layer. Finally, an optimal data processing delay value is determined according to the data processing delay of the edge computing layer and that of the cloud computing layer, further reducing the data processing delay and improving the data processing efficiency.
The invention also provides a data processing delay optimization system based on edge calculation, which comprises:
a building module 501, configured to build a network architecture model; the network architecture comprises a mobile terminal layer, an edge computing layer and a cloud computing layer.
A first computation delay determining module 502, configured to determine the computation delay of the edge computation layer by using a lagrangian multiplier method.
A first communication delay determining module 503, configured to determine the communication delay of the edge computing layer by using a Kruskal method.
A second computation delay determination module 504, configured to determine a computation delay of the cloud computing layer.
And a second communication delay determining module 505, configured to determine the communication delay of the cloud computing layer by using a balanced transmission method.
A data processing delay optimization model determining module 506, configured to determine a data processing delay optimization model according to the computation delay and the communication delay of the edge computing layer, and the computation delay and the communication delay of the cloud computing layer.
And an optimal data processing delay value determining module 507, configured to determine an optimal data processing delay value according to the data processing delay optimization model.
The modules are described as follows:
the first computation delay determining module 502 specifically includes:
the first acquisition unit is used for acquiring the computing capacity and the task amount of each edge device in the edge computing layer;
an edge device calculation delay determining unit configured to determine a calculation delay of each of the edge devices according to a calculation capability and a task amount of each of the edge devices;
and the edge calculation layer calculation delay determining unit is used for determining the calculation delay of the edge calculation layer according to the calculation delay of each edge device.
The first communication delay determining module 503 specifically includes:
the edge device weighted undirected graph building unit is used for building an edge device weighted undirected graph according to each edge device in the edge calculation layer;
an inter-edge device communication delay determining unit, configured to determine a communication delay between edge devices on the edge device weighted undirected graph;
a minimum weight tree construction unit for constructing a minimum weight tree according to a communication delay between each edge device by using a Kruskal method;
and the edge calculation layer communication delay determining unit is used for determining the communication delay of the edge calculation layer according to the minimum weight tree.
The second calculation delay determining module 504 specifically includes:
a second obtaining unit, configured to obtain a data processing amount and a computing capacity of each cloud server in the cloud computing layer;
the cloud server computing delay determining unit is used for determining the computing delay of each cloud server according to the data processing capacity and the computing capacity of each cloud server;
and the cloud computing layer computing delay determining unit is used for determining the computing delay of the cloud computing layer according to the computing delay of each cloud server.
The second communication delay determining module 505 specifically includes:
a third acquisition unit configured to acquire delay and communication rate of a WAN transmission path from each edge device to each cloud server;
the cloud server communication delay determining unit is used for determining the communication delay of each cloud server according to the delay and the communication rate of the WAN transmission path from each edge device to each cloud server;
and the cloud computing layer communication delay determining unit is used for determining the communication delay of the cloud computing layer according to the communication delay of each cloud server.
Experimental simulation verification
In the experiment, the number n of cloud servers is assumed to be 5 and the computing capacity of each cloud server is 10 GHz. The experimental platform is MATLAB, the data task amounts are set in simulation, and the communication delay and computation delay are obtained through MATLAB simulation experiments, as shown in FIG. 6.
As shown in fig. 6, at the cloud computing layer, since the cloud server has strong computing power, the computing delay is relatively small and does not change much. The latency of the cloud computing layer depends primarily on the communication latency from the edge devices to the cloud servers. On the one hand, this is due to the large distance from the end user to the cloud data center, and the high delay of data transmission over very long distances. On the other hand, the limitation of network bandwidth greatly increases the transmission delay from the edge device to the cloud server. As the amount of data continues to increase, the communication delay also increases faster.
In order to verify the effectiveness of the edge computing layer in reducing data processing delay, the delay performance of a single edge device and of the cloud computing layer is compared first; then the influence of the data amount on the data processing delay is analyzed when the edge computing layer processes different proportions of the data; finally, the influence of increasing the number of edge devices on the data processing delay is analyzed.
The experimental platform is MATLAB, and the computing power and communication delay of the edge devices and the cloud servers in the experiment are set according to the literature 'medical big data oriented cloud network and distributed computing scheme thereof'.
TABLE 1 edge device computing capabilities
[Table rendered as an image in the original]
Data processing delay performance analysis of edge calculation layer
At the edge computing layer, the invention provides a scheme for computing the computation delay and the communication delay. To verify its effectiveness in data processing, the invention compares its delay with that of a single edge device (single edge node) and of the cloud computing layer; the experimental results are shown in FIG. 7.
The experimental results show that when the data amount X < 2 Gb, a single edge device has lower delay than the cloud computing layer and the edge computing layer, because a single edge device generates no communication delay and the data amount is within its computing capacity. However, as the data amount increases, the delay of the single edge device rises rapidly, limited by its computing power. Although the cloud server has strong computing capability, the end users are far from it and constrained by bandwidth, which causes large communication delay, so its data processing delay is higher than that of the edge computing scheme. When the data amount X > 19 Gb, the data processing delay of the edge computing scheme is limited by the computing capacity of the individual edge devices, and the cloud computing layer, by virtue of its strong computing capability, achieves a lower data processing delay than the edge computing scheme. Therefore, placing an appropriate amount of data in the edge computing layer for processing can effectively reduce the delay.
Influence of percentage a of data amount processed at edge calculation layer on data processing delay
a is the percentage of the amount of data processed at the edge calculation layer. In order to verify the performance of the network architecture model, the invention researches the influence of the percentage a of the data processing amount of the edge computing layer on the data processing delay, and the simulation result is shown in fig. 8.
The experimental results show that when a ≤ 0.5, i.e., less than half of the data is processed at the edge computing layer, a larger a yields a smaller delay; when a > 0.5 and the data amount is small, a larger a also yields a smaller delay. However, as the data amount increases, the delay increases accordingly, and the larger a is, the faster the corresponding delay grows, eventually even exceeding the delay of the traditional cloud computing layer. This is because, once the data amount reaches a threshold (e.g., 16 Gb for a = 0.8 and 13 Gb for a = 1), the data processing delay increases rapidly due to the limited computing power of the individual edge devices. This indicates that the data processing delay of the edge computing layer is limited by the computing power of the individual edge devices and increases once the data amount grows beyond a certain point. It also shows that when the total data amount is small, processing all of the data at the edge computing layer yields a smaller delay, while when the data amount is large, the interaction between the edge devices and the cloud servers effectively reduces the delay and the system shows better performance.
Impact of number of edge devices on data processing delay
In order to study the influence of the number of edge devices of an edge computing layer on data processing delay, the invention solves the data processing delay values when the total data amount X is respectively 2Gb, 6Gb, 10Gb, 16Gb and 20Gb, and the result is shown in FIG. 9.
The experimental results show that the data processing delay exhibits a general downward trend as the number of edge devices increases. When the data amount is small (below 6 Gb in FIG. 9), adding edge devices has little effect on the data processing delay, which remains basically steady; when the data amount is larger (10 Gb to 20 Gb in the figure), the data processing delay decreases significantly as edge devices are added. This is because, when the amount of processed data is small, the computation delay generated by the edge devices is small while the communication delay grows as the number of edge devices increases, so the delay depends mainly on the communication delay between the edge devices and changes little. When the amount of processed data is large, the computation delay of the edge devices increases correspondingly; when the bandwidth allows, the communication delay between the edge devices is relatively small, and the data processing delay then depends mainly on the computation delay of the edge devices. Increasing the number of edge devices reduces the amount of data each device must process and thus its computation delay, so the overall data processing delay decreases noticeably. This shows that choosing an appropriate number of edge devices according to their computing power plays an important role in reducing the data processing delay.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principle and the implementation mode of the invention are explained by applying a specific example, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (6)

1. A method for optimizing data processing delay based on edge calculation, the method comprising:
constructing a network architecture model; the network architecture comprises a mobile terminal layer, an edge computing layer and a cloud computing layer;
determining the calculation delay of the edge calculation layer by adopting a Lagrange multiplier method;
determining the communication delay of the edge computing layer by adopting a Kruskal method;
determining a computation delay of the cloud computing layer;
determining communication delay of a cloud computing layer by adopting a balanced transmission method;
determining a data processing delay optimization model according to the computing delay and the communication delay of the edge computing layer and the computing delay and the communication delay of the cloud computing layer, specifically comprising: summing the computing delay and the communication delay of the edge computing layer and the computing delay and the communication delay of the cloud computing layer, and then calculating the minimum value to obtain a data processing delay optimization model;
determining an optimal data processing delay value according to the data processing delay optimization model;
the determining the computation delay of the edge computation layer by adopting a Lagrange multiplier method specifically comprises the following steps:
acquiring the computing capacity and the task amount of each edge device in the edge computing layer;
determining the computing delay of each edge device according to the computing capacity and the task amount of each edge device;
determining a computation delay of an edge computation layer according to the computation delay of each edge device; the concrete formula is as follows:
[Formula: computation delay of the edge computing layer, rendered as an image in the original]
wherein m is the total number of edge devices; the maximum amount of data that can be processed by edge device i is denoted by a symbol rendered as an image in the original; X is the total amount of data to be processed by the edge computing layer; the computation delay of the edge computing layer and the computation delay of edge device i are denoted by symbols rendered as images in the original; v_{z_i} is the computing power of edge device i; x_i is the task amount of edge device i; a_i is a preset real number between 0 and 1 corresponding to edge device i;
the specific steps of solving the computation delay of the edge computing layer include:
I. initializing the parameters, the parameters comprising: a given initial point x^(0), an initial multiplier vector v = (v_1, v_2, …, v_m), a penalty factor M > 0, an amplification factor β > 0, a precision ε > 0, a parameter γ ∈ (0, 1), and the iteration index k = 1;
II. constructing, from the computation delay of the edge computing layer (rendered as an image in the original), the communication delay objective function F(x) of the edge computing layer:
[Formula: F(x), rendered as an image in the original]
wherein v_i is the i-th multiplier in the initial multiplier vector, the image symbol denotes the computation delay of the edge computing layer, and h_i(x) are the constraint conditions;
III. with x^(k-1) as the initial point and x^(k) as the optimal solution, adopting Newton's algorithm to obtain the computation delay value of the edge computing layer from the communication delay objective function F(x);
IV. judging whether ||h(x^(k))|| < ε;
V. if ||h(x^(k))|| ≥ ε, judging whether the quantity rendered as an image in the original is greater than the parameter γ; if it is not, setting M = βM, otherwise keeping M unchanged, and executing step VI;
VI. setting v_i^(k+1) = v_i^(k) - M·h_i(x^(k)), i = 1, 2, …, m, and k = k + 1, and reconstructing the communication delay objective function of the edge computing layer, wherein v_i^(k) is the Lagrange multiplier used in the k-th iteration and h_i(x^(k)) is the value of the constraint function at the k-th iterate x^(k);
VII. if ||h(x^(k))|| < ε, stopping the iteration and outputting x^(k) as the optimal solution, the communication delay objective function value corresponding to x^(k) being the minimum computation delay of the edge computing layer;
the determining the communication delay of the edge calculation layer by using the Kruskal method specifically includes:
establishing an edge equipment weighted undirected graph according to each edge equipment in the edge calculation layer;
determining communication delay between edge devices on the edge device weighted undirected graph;
establishing a minimum weight tree according to communication delay among edge devices by adopting a Kruskal method; the method comprises the following specific steps:
I. searching two vertexes u and v corresponding to the shortest edge in the edge set E;
II. Judging whether the two vertexes u and v are positioned in two different connected components in the minimum spanning tree T of the weighted undirected graph G;
III, if the two vertexes u and v are positioned in two different connected components in the T, merging an edge formed by the two vertexes u and v into a set TE, and simultaneously connecting the two connected components into one connected component;
IV, if two vertexes u, v are contained in a connected component, then the edge is discarded;
v, marking an edge consisting of two vertexes u and V in the edge set E, and simultaneously not participating in the selection of the subsequent shortest edge;
and determining the communication delay of the edge calculation layer according to the minimum weight tree.
2. The data processing delay optimization method based on edge computing according to claim 1, wherein the determining the computing delay of the cloud computing layer specifically includes:
acquiring data processing capacity and computing capacity of each cloud server in the cloud computing layer;
determining the computing delay of each cloud server according to the data processing capacity and the computing capacity of each cloud server;
and determining the computing delay of the cloud computing layer according to the computing delay of each cloud server.
3. The data processing delay optimization method based on edge computing according to claim 1, wherein the determining the communication delay of the cloud computing layer by using a balanced transmission method specifically includes:
acquiring delay and communication rate of WAN transmission paths from each edge device to each cloud server;
determining the communication delay of each cloud server according to the delay and the communication rate of the WAN transmission path from each edge device to each cloud server;
and determining the communication delay of the cloud computing layer according to the communication delay of each cloud server.
4. A data processing delay optimization system based on edge computation, the system comprising:
the building module is used for building a network architecture model; the network architecture comprises a mobile terminal layer, an edge computing layer and a cloud computing layer;
the first calculation delay determining module is used for determining the calculation delay of the edge calculation layer by adopting a Lagrange multiplier method;
a first communication delay determining module for determining a communication delay of the edge computing layer by using a Kruskal method;
a second computation delay determination module for determining a computation delay of the cloud computing layer;
the second communication delay determining module is used for determining the communication delay of the cloud computing layer by adopting a balanced transmission method;
the data processing delay optimization model determining module is configured to determine a data processing delay optimization model according to the computation delay and the communication delay of the edge computing layer and the computation delay and the communication delay of the cloud computing layer, and specifically includes: summing the computing delay and the communication delay of the edge computing layer and the computing delay and the communication delay of the cloud computing layer, and then calculating the minimum value to obtain a data processing delay optimization model;
the optimal data processing delay value determining module is used for determining an optimal data processing delay value according to the data processing delay optimization model;
the first calculation delay determining module specifically includes:
the first acquisition unit is used for acquiring the computing capacity and the task amount of each edge device in the edge computing layer;
an edge device calculation delay determining unit configured to determine a calculation delay of each of the edge devices according to a calculation capability and a task amount of each of the edge devices;
an edge calculation layer calculation delay determination unit configured to determine a calculation delay of an edge calculation layer from the calculation delay of each of the edge devices; the concrete formula is as follows:
[formula image FDA0002478361430000041]
wherein m is the total number of edge devices; [formula image FDA0002478361430000042] is the maximum data size that can be processed by edge device i; x is the total amount of data to be processed by the edge computing layer; [formula image FDA0002478361430000043] is the computation delay of the edge computing layer; [formula image FDA0002478361430000044] is the computation delay of edge device i; v_zi is the computing capability of edge device i; x_i is the task amount of edge device i; and a_i is a preset real number between 0 and 1 corresponding to edge device i;
the specific steps of solving the computation delay of the edge computation layer include:
I. initializing parameters; the parameters include: an initial point x^(0), an initial multiplier vector v = (v_1, v_2, …, v_m), a penalty factor M > 0, an amplification factor β > 0, a precision ε > 0, a parameter γ ∈ (0,1), and an iteration counter k = 1;
II. constructing the computation delay objective function F(x) of the edge computing layer according to the computation delay [formula image FDA0002478361430000045] of the edge computing layer:
[formula image FDA0002478361430000046]
wherein v_i is the i-th multiplier in the initial multiplier vector; [formula image FDA0002478361430000047] is the computation delay of the edge computing layer; and h_i(x) is a constraint condition;
III. taking x^(k-1) as the initial point, solving the computation delay objective function F(x) by the Newton method to obtain x^(k) as the optimal solution and the corresponding computation delay value of the edge computing layer;
IV. judging whether ||h(x^(k))|| is less than ε;
V. if ||h(x^(k))|| is greater than or equal to ε, judging whether [formula image FDA0002478361430000051] is greater than the parameter γ; if the condition [formula image FDA0002478361430000052] does not hold, setting M = βM, otherwise keeping M unchanged, and executing step VI;
VI. setting v_i^(k+1) = v_i^(k) - M·h_i(x^(k)), i = 1, 2, …, m, and k = k + 1, and reconstructing the computation delay objective function of the edge computing layer; wherein v_i^(k) is the Lagrange multiplier adopted in the k-th iteration, and h_i(x^(k)) is the value of the constraint function at the k-th iterate x^(k);
VII. if ||h(x^(k))|| is less than ε, stopping the iteration and outputting x^(k) as the optimal solution; the computation delay objective function value corresponding to x^(k) is the minimum computation delay of the edge computing layer;
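For readers unfamiliar with the multiplier (augmented Lagrangian) iteration described in steps I to VII, the Python sketch below reproduces only the loop structure: an inner unconstrained solve, a penalty update governed by β and γ, and the multiplier update of step VI. The per-device cost (a quadratic stand-in) and the single constraint sum(x) = X are assumptions chosen so the toy problem is well posed; the patent's actual formulas are given as images and are not reproduced here. SciPy's BFGS replaces the Newton step of step III, and the penalty-update rule is the standard one (amplify M when the constraint violation decreases slowly), since the patent states its rule via formula images.

    import numpy as np
    from scipy.optimize import minimize

    def multiplier_method(a, v, X_total, M=10.0, beta=2.0, gamma=0.5,
                          eps=1e-6, max_iter=50):
        # illustrative objective: a strictly convex stand-in for the edge-layer delay
        f = lambda x: float(np.sum(a * x ** 2 / v))
        # single equality constraint h(x) = sum(x) - X = 0 (all data must be assigned)
        h = lambda x: float(np.sum(x) - X_total)

        x = np.full(len(a), X_total / len(a))     # step I: initial point x^(0)
        lam = 0.0                                 # initial multiplier
        h_prev = abs(h(x)) + eps
        for _ in range(max_iter):
            # step II: augmented objective F(x) = f(x) - lam*h(x) + (M/2)*h(x)^2
            F = lambda y: f(y) - lam * h(y) + 0.5 * M * h(y) ** 2
            x = minimize(F, x, method="BFGS").x   # step III (BFGS instead of Newton)
            hk = h(x)
            if abs(hk) < eps:                     # steps IV / VII: converged
                break
            if abs(hk) / h_prev > gamma:          # step V: slow progress -> amplify penalty
                M *= beta
            lam = lam - M * hk                    # step VI: multiplier update
            h_prev = abs(hk)
        return x, f(x)

    # hypothetical edge layer: 3 devices sharing 300 units of data
    x_opt, delay = multiplier_method(a=np.array([0.5, 0.7, 0.9]),
                                     v=np.array([10.0, 20.0, 30.0]),
                                     X_total=300.0)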
the first communication delay determining module specifically includes:
the edge device weighted undirected graph building unit is used for building an edge device weighted undirected graph according to each edge device in the edge calculation layer;
an inter-edge device communication delay determining unit, configured to determine a communication delay between edge devices on the edge device weighted undirected graph;
a minimum weight tree construction unit for constructing a minimum weight tree according to the communication delays among the edge devices by using the Kruskal method; the method comprises the following specific steps:
I. searching the edge set E for the shortest edge and its two vertexes u and v;
II. judging whether the two vertexes u and v are located in two different connected components of the minimum spanning tree T of the weighted undirected graph G;
III. if the two vertexes u and v are located in two different connected components of T, adding the edge formed by u and v to the set TE and merging the two connected components into one connected component;
IV. if the two vertexes u and v are contained in the same connected component, discarding the edge;
V. marking the edge consisting of u and v in the edge set E so that it no longer participates in the subsequent selection of the shortest edge;
and the edge calculation layer communication delay determining unit is used for determining the communication delay of the edge calculation layer according to the minimum weight tree.
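As forward-referenced in the optimization model determining module above, the following sketch shows how the four delay components could be combined into the data processing delay optimization model: sum them for each candidate allocation and take the minimum. The candidate values are invented for illustration; in the patent the components come from the modules described in this claim.

    def total_delay(edge_comp, edge_comm, cloud_comp, cloud_comm):
        # data processing delay of one candidate allocation:
        # edge-layer computation + communication plus cloud-layer computation + communication
        return edge_comp + edge_comm + cloud_comp + cloud_comm

    # hypothetical candidate allocations and their four delay components (seconds)
    candidates = [
        (1.2, 0.4, 0.8, 0.9),
        (0.9, 0.5, 1.1, 0.7),
        (1.5, 0.3, 0.6, 1.0),
    ]
    optimal_delay = min(total_delay(*c) for c in candidates)   # optimal data processing delay value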
5. The system according to claim 4, wherein the second computation delay determining module specifically includes:
a second obtaining unit, configured to obtain a data processing amount and a computing capacity of each cloud server in the cloud computing layer;
the cloud server computing delay determining unit is used for determining the computing delay of each cloud server according to the data processing amount and the computing capacity of each cloud server;
and the cloud computing layer computing delay determining unit is used for determining the computing delay of the cloud computing layer according to the computing delay of each cloud server.
6. The system according to claim 4, wherein the second communication delay determining module specifically includes:
a third acquisition unit configured to acquire delay and communication rate of a WAN transmission path from each edge device to each cloud server;
the cloud server communication delay determining unit is used for determining the communication delay of each cloud server according to the delay and the communication rate of the WAN transmission path from each edge device to each cloud server;
and the cloud computing layer communication delay determining unit is used for determining the communication delay of the cloud computing layer according to the communication delay of each cloud server.
CN201810182882.6A 2018-03-06 2018-03-06 Data processing delay optimization method and system based on edge calculation Active CN108418718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810182882.6A CN108418718B (en) 2018-03-06 2018-03-06 Data processing delay optimization method and system based on edge calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810182882.6A CN108418718B (en) 2018-03-06 2018-03-06 Data processing delay optimization method and system based on edge calculation

Publications (2)

Publication Number Publication Date
CN108418718A CN108418718A (en) 2018-08-17
CN108418718B true CN108418718B (en) 2020-07-10

Family

ID=63129805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810182882.6A Active CN108418718B (en) 2018-03-06 2018-03-06 Data processing delay optimization method and system based on edge calculation

Country Status (1)

Country Link
CN (1) CN108418718B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109120542A (en) * 2018-08-20 2019-01-01 国云科技股份有限公司 A kind of flow management system and its implementation based on edge calculations
CN109151824B (en) * 2018-10-12 2021-04-13 大唐高鸿信息通信(义乌)有限公司 Library data service expansion system and method based on 5G architecture
CN109600419B (en) * 2018-11-12 2021-05-11 南京信息工程大学 Calculation migration method supporting Internet of vehicles application in mobile edge computing environment
CN109982104B (en) * 2019-01-25 2020-12-01 武汉理工大学 Motion-aware video prefetching and cache replacement decision method in motion edge calculation
CN110351257B (en) * 2019-06-27 2021-03-23 绿城科技产业服务集团有限公司 Distributed Internet of things security access system
CN111324618A (en) * 2020-02-18 2020-06-23 青岛农业大学 System and method for synchronizing medicinal biological resource data in different places in real time
CN111954318B (en) * 2020-07-20 2022-06-10 广东工贸职业技术学院 Equipment interconnection method, device and system
CN112257807B (en) * 2020-11-02 2022-05-27 曲阜师范大学 Dimension reduction method and system based on self-adaptive optimization linear neighborhood set selection
CN113055482A (en) * 2021-03-17 2021-06-29 山东通维信息工程有限公司 Intelligent cloud box equipment based on edge computing
CN113114774A (en) * 2021-04-16 2021-07-13 广州金融科技股份有限公司 Rental article monitoring and alarming method and device based on edge calculation
CN115348210A (en) * 2022-06-21 2022-11-15 深圳市高德信通信股份有限公司 Delay optimization method based on edge calculation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104159311A (en) * 2014-08-21 2014-11-19 哈尔滨工业大学 Method of united resource allocation of cognitive heterogeneous network based on convex optimization method
CN105652243A (en) * 2016-03-14 2016-06-08 西南科技大学 Multi-channel group sparsity linear prediction and time delay estimation method
CN107343025A (en) * 2017-06-07 2017-11-10 西安电子科技大学 Time delay optimization method under the distributed satellites cloud and mist network architecture and power consumption constraint
CN107493334A (en) * 2017-08-18 2017-12-19 西安电子科技大学 A kind of cloud and mist calculating network framework and the method for strengthening cloud and mist network architecture reliability
CN107682443A (en) * 2017-10-19 2018-02-09 北京工业大学 Joint considers the efficient discharging method of the mobile edge calculations system-computed task of delay and energy expenditure

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9164800B2 (en) * 2012-10-25 2015-10-20 Alcatel Lucent Optimizing latencies in cloud systems by intelligent compute node placement

Also Published As

Publication number Publication date
CN108418718A (en) 2018-08-17

Similar Documents

Publication Publication Date Title
CN108418718B (en) Data processing delay optimization method and system based on edge calculation
CN110365753B (en) Low-delay load distribution method and device for Internet of things service based on edge calculation
Sun et al. Multi-objective optimization of resource scheduling in fog computing using an improved NSGA-II
CN108924198B (en) Data scheduling method, device and system based on edge calculation
Abd Elaziz et al. IoT workflow scheduling using intelligent arithmetic optimization algorithm in fog computing
CN105704255B (en) A kind of server load balancing method based on genetic algorithm
CN109818786B (en) Method for optimally selecting distributed multi-resource combined path capable of sensing application of cloud data center
Jayasena et al. Optimized task scheduling on fog computing environment using meta heuristic algorithms
CN112003660B (en) Dimension measurement method of resources in network, calculation force scheduling method and storage medium
CN111176820A (en) Deep neural network-based edge computing task allocation method and device
Li et al. Computation offloading and service allocation in mobile edge computing
Kumar et al. Novel Dynamic Scaling Algorithm for Energy Efficient Cloud Computing.
Xu et al. A meta reinforcement learning-based virtual machine placement algorithm in mobile edge computing
Kim et al. Network virtualization for real-time processing of object detection using deep learning
Zhou et al. Load balancing prediction method of cloud storage based on analytic hierarchy process and hybrid hierarchical genetic algorithm
Li et al. Optimal service selection and placement based on popularity and server load in multi-access edge computing
Ashouri et al. Analyzing distributed deep neural network deployment on edge and cloud nodes in IoT systems
Subrahmanyam et al. Optimizing horizontal scalability in cloud computing using simulated annealing for Internet of Things
CN114785692A (en) Virtual power plant aggregation regulation and control communication network flow balancing method and device
CN116339932A (en) Resource scheduling method, device and server
Xu et al. Joint optimization of energy conservation and migration cost for complex systems in edge computing
Zheng et al. Simulation study on latency-aware network in edge computing
Li et al. ESMO: Joint frame scheduling and model caching for edge video analytics
Zhang et al. Deploying GIS services into the edge: A study from performance evaluation and optimization viewpoint
Song et al. Latency minimization for mobile edge computing enhanced proximity detection in road networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant