CN111970323A - Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network - Google Patents

Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network

Info

Publication number: CN111970323A
Authority: CN (China)
Prior art keywords: edge, computing, original data, upper limit, task
Legal status: Pending (assumed status; not a legal conclusion)
Application number: CN202010665235.8A
Other languages: Chinese (zh)
Inventors: 宋令阳, 王鹏飞, 邸博雅, 边凯归, 庹虎
Current Assignee: Peking University
Original Assignee: Peking University
Application filed by Peking University


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/70: Admission control; resource allocation
    • H04L 47/78: Architectures of resource allocation
    • H04L 47/783: Distributed allocation of resources, e.g. bandwidth brokers
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/104: Peer-to-peer [P2P] networks
    • H04L 67/1074: Peer-to-peer [P2P] networks for supporting data block transmission mechanisms


Abstract

The invention provides a delay optimization method and device based on cloud-edge multi-layer cooperation in an edge computing network, together with an electronic device and a readable storage medium. In the method, a cloud computing center receives registration information sent by a plurality of edge servers, establishes an edge computing network according to that registration information and the resource information of the cloud computing center, obtains a task unloading proportion and a resource allocation strategy for each device in the edge computing network, and sends them to the corresponding devices according to the edge computing network, so that each device processes its original data according to its task unloading proportion and resource allocation strategy. By jointly considering the computing and transmission resources of the edge servers, the edge devices, and the cloud computing center, and by adopting a cloud-edge cooperative mode, a better task unloading proportion and resource allocation strategy can be obtained, reducing the data processing and data transmission delay of the whole system.

Description

Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network
Technical Field
The invention relates to the technical field of communication, in particular to a time delay optimization method and a time delay optimization device for cloud-edge multi-layer cooperation in an edge computing network.
Background
Mobile edge computing refers to providing computing and network services at the edge of the network, such as the radio access network, close to the user.
In the related art, only task unloading from edge devices to a single layer of edge servers is considered; computing resource allocation, transmission resource allocation, and a cloud-edge multi-layer cooperation mode are not considered. As a result, the single-layer edge server cannot allocate computing and transmission resources according to the actual situation, and the delay of data processing and transmission is high.
Disclosure of Invention
The embodiment of the invention provides a time delay optimization method and device for cloud-edge multi-layer cooperation in an edge computing network, aiming to reduce the data processing and data transmission delay of the whole system.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a cloud-edge multilayer cooperation-based delay optimization method in a large-scale edge computing network, which is applied to a cloud computing center, where the cloud computing center is in communication connection with a plurality of edge servers, and each edge server is in communication connection with one or more edge devices, and the method includes:
receiving registration information sent by the plurality of edge servers, wherein the registration information comprises registration information of each edge device corresponding to the computing task and registration information of each edge server;
according to the registration information sent by the edge servers and the resource information of the cloud computing center, an edge computing network is established, a task unloading proportion and a resource allocation strategy for each device in the edge computing network are obtained, and the task unloading proportion and the resource allocation strategy are sent to corresponding devices according to the edge computing network, so that the corresponding devices process original data according to the task unloading proportion and the resource allocation strategy;
receiving a first original data processing result sent by the plurality of edge servers, together with the original data left over after their processing, wherein the first original data processing result comprises the processing results obtained by the edge devices processing their original data according to the task unloading proportions and resource allocation strategies corresponding to the edge devices, and the processing results obtained by the edge servers processing the received original data according to the task unloading proportions and resource allocation strategies corresponding to the edge servers;
processing the remaining original data after respective processing of each edge server according to a task unloading proportion and a resource allocation strategy corresponding to the cloud computing center to obtain a second original data processing result, and summarizing the second original data processing result and the first original data processing results respectively sent by the plurality of edge servers to obtain a third original data processing result.
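The device-edge-cloud split described above can be illustrated with a deliberately simplified, single-chain delay model. All names, rates, and the sequential-sum structure below are illustrative assumptions for one device-edge-cloud chain, not the patent's actual multi-node formula:

```python
def chain_delay(lam, s_dev, s_edge, f_dev, f_edge, f_cloud, bw_up, bw_back):
    """Toy device -> edge -> cloud split: each tier keeps a share of the
    data it receives and forwards the rest upward (illustrative only)."""
    t_dev_comp = (1 - s_dev) * lam / f_dev   # device computes its own share
    t_up1 = s_dev * lam / bw_up              # uploads the rest to the edge
    recv_edge = s_dev * lam
    t_edge_comp = (1 - s_edge) * recv_edge / f_edge
    t_up2 = s_edge * recv_edge / bw_back     # forwards remainder to the cloud
    t_cloud = s_edge * recv_edge / f_cloud
    return t_dev_comp + t_up1 + t_edge_comp + t_up2 + t_cloud

# With these made-up capacities, fully offloading to the faster edge server
# beats purely local processing.
full_offload = chain_delay(10, 1.0, 0.0, f_dev=1, f_edge=5, f_cloud=50,
                           bw_up=4, bw_back=8)
all_local = chain_delay(10, 0.0, 0.0, f_dev=1, f_edge=5, f_cloud=50,
                        bw_up=4, bw_back=8)
```

In the patent, the analogous per-layer compute and transmit terms are summed over every node in the network and optimized jointly, rather than along a single chain.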
Optionally, the cloud computing center is connected to the user terminal in a communication manner, and before receiving the registration information sent by the edge server, the method further includes:
receiving the computing task sent by the user terminal, and sending the computing task to a corresponding edge server, or sending the computing task to corresponding edge equipment through the edge server, so that the edge server and the edge equipment which receive the computing task upload registration information to the cloud computing center;
after obtaining a third raw data processing result, the method further includes:
and sending the third original data processing result to the user terminal.
Optionally, the registration information of the edge device includes:
the original data generation rate, the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number of the edge device;
the registration information of the edge server includes: the upper limit of computing resources, the upper limit of transmission resources, the number of layers, the IP address and the port number of the edge server;
the resource information of the cloud computing center comprises: the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number of the cloud computing center.
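As a sketch, the registration record of an edge device might be modeled as follows. The field names and values are hypothetical; the patent only specifies which quantities are reported:

```python
from dataclasses import dataclass

@dataclass
class EdgeDeviceRegistration:
    # Fields listed in the patent text; the identifiers are illustrative.
    data_rate: float       # original data generation rate
    compute_limit: float   # upper limit of computing resources
    transmit_limit: float  # upper limit of transmission resources
    layer: int             # layer number in the cloud-edge hierarchy
    ip: str                # IP address used to build the network topology
    port: int

dev = EdgeDeviceRegistration(data_rate=2.0, compute_limit=5.0,
                             transmit_limit=3.0, layer=3,
                             ip="10.0.0.7", port=5000)
```

Edge-server and cloud-center records would carry the same fields except the data generation rate.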
Optionally, establishing an edge computing network according to the registration information and the resource information of the cloud computing center, and obtaining a task offloading ratio and a resource allocation policy of each device in the edge computing network, including:
establishing an edge computing network according to the number of layers where each edge device, each edge server and the cloud computing center are located, the IP address and the port number;
and calculating to obtain the task unloading proportion and the resource allocation strategy of each device in the edge computing network according to the original data generation rate, the upper limit of computing resources and the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, and the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center in combination with the edge computing network.
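A minimal sketch of the network-building step above, assuming nodes are simply grouped by their reported layer number and attached round-robin to the layer above. The attachment rule and node records are assumptions; the patent does not specify how a parent is chosen:

```python
# Hypothetical registered nodes; in practice each entry would also carry
# the IP address and port number used for the actual connections.
nodes = [
    {"id": "cloud", "layer": 0},
    {"id": "edge-a", "layer": 1}, {"id": "edge-b", "layer": 1},
    {"id": "dev-1", "layer": 2}, {"id": "dev-2", "layer": 2},
]

by_layer = {}
for n in nodes:
    by_layer.setdefault(n["layer"], []).append(n["id"])

topology = {}  # child id -> parent id
for layer in sorted(by_layer)[1:]:
    parents = by_layer[layer - 1]
    for i, nid in enumerate(by_layer[layer]):
        topology[nid] = parents[i % len(parents)]  # round-robin attachment
```

For the records above this yields both edge servers attached to the cloud and one device under each edge server.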
Optionally, the method for calculating the task offload proportion and the resource allocation policy of each device in the edge computing network according to the original data generation rate, the upper limit of computing resources, the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, and the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center, in combination with the edge computing network, includes:
obtaining the original data of each edge device according to the original data generation rate and the processing time interval of each edge device;
according to the original data, the upper limit of computing resources and the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center, the edge computing network is combined, and the Cauchy inequality is utilized to obtain an equation of system delay L with a task unloading strategy as a target function;
calculating to obtain a plurality of groups of limit condition intersection points according to all linear limit conditions, wherein each group of limit condition intersection points correspond to a task unloading strategy;
respectively substituting each group of limiting condition intersection points into an equation of system delay L with the task unloading strategy as a target function to obtain a plurality of total delays, and determining the task unloading strategy corresponding to the minimum total delay as a final task unloading strategy;
and obtaining a calculation resource allocation strategy and a transmission resource allocation strategy by utilizing the final task unloading strategy according to the equal sign establishment condition of the Cauchy inequality, thereby obtaining the task unloading proportion and the resource allocation strategy.
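The enumerate-and-evaluate search in the steps above can be sketched in two dimensions. This toy uses two unloading ratios, made-up constraints, and a linear surrogate objective, for which the optimum is guaranteed to lie at a constraint intersection; the patent applies the same vertex-enumeration idea to its own Cauchy-reduced objective and constraints:

```python
from itertools import combinations

# Constraints written as a·s <= b over two unloading ratios (s1, s2);
# the last one models a shared uplink budget. Numbers are made up.
cons = [
    (( 1.0,  0.0), 1.0),   # s1 <= 1
    ((-1.0,  0.0), 0.0),   # s1 >= 0
    (( 0.0,  1.0), 1.0),   # s2 <= 1
    (( 0.0, -1.0), 0.0),   # s2 >= 0
    (( 1.0,  1.0), 1.2),   # s1 + s2 <= 1.2 (shared uplink)
]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraint boundaries hold with equality."""
    (a1, b1), (a2, b2) = c1, c2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None  # parallel boundaries, no intersection point
    return ((b1 * a2[1] - b2 * a1[1]) / det,
            (a1[0] * b2 - a2[0] * b1) / det)

def feasible(p):
    return all(a[0] * p[0] + a[1] * p[1] <= b + 1e-9 for a, b in cons)

def delay(p):
    # Linear surrogate: unloading more reduces total delay in this toy.
    return 4 * (1 - p[0]) + 3 * (1 - p[1]) + p[0] + p[1]

vertices = [q for c1, c2 in combinations(cons, 2)
            if (q := intersect(c1, c2)) and feasible(q)]
best = min(vertices, key=delay)
```

For these numbers the search selects the vertex where device 1 fully unloads and device 2 takes the remaining uplink budget.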
Optionally, obtaining an equation of the system delay L using the task offloading policy as a target function by combining the edge computing network and using the cauchy inequality according to the original data, the upper limit of the computing resource, the upper limit of the transmission resource of each edge device, the upper limit of the computing resource, the upper limit of the transmission resource of each edge server, and the upper limit of the computing resource and the upper limit of the transmission resource of the cloud computing center, includes:
defining a task offload ratio at node i of the nth layer as
Figure BDA0002580092580000041
After the original data λ is processed, the data volume is reduced to ρλ, where ρ is the compression ratio. The sum L_n of the computation time and transmission time of all nodes at the nth layer in the edge computing network is given by:
Figure BDA0002580092580000042
wherein the task unloading proportion is s, the computing resource allocation is θ, and the transmission resource allocation is φ,
Figure BDA0002580092580000043
represents the computation time of the nth level node i,
Figure BDA0002580092580000044
denotes the data volume of the computation results uploaded from the lower layer and received by the nth-layer node i,
Figure BDA0002580092580000045
represents the data amount of the calculation result of the nth layer node i,
Figure BDA0002580092580000046
representing the amount of raw data to be uploaded at the node,
Figure BDA0002580092580000047
indicating the transmission time of the nth layer node i,
Figure BDA0002580092580000048
represents the set of lower-layer devices connected to node j of layer n-1, and M_{n-1} represents the number of devices of the (n-1)th layer;
With L_0 denoting the computation time of the cloud computing center, the equation of the system delay L of an edge computing network containing N layers of edge servers is:
Figure BDA0002580092580000049
Using the Cauchy inequality together with the equation of L, the inequality for L_n is obtained as:
Figure BDA00025800925800000410
wherein
Figure BDA00025800925800000411
denotes the maximum computing capacity of the nth-layer node i, and
Figure BDA00025800925800000412
denotes the total transmission resources of node j in the (n-1)th layer;
Substituting the inequality for L_n into the equation of the system delay L of the edge computing network yields the equation of the system delay L with the task unloading strategy as the objective function:
Figure BDA0002580092580000051
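The role of the Cauchy inequality here can be checked on a small standalone case: minimizing a total compute time of the form Σᵢ aᵢ/θᵢ under a budget Σᵢ θᵢ = Θ. By the Cauchy-Schwarz inequality, (Σ aᵢ/θᵢ)(Σ θᵢ) ≥ (Σ √aᵢ)², with equality when θᵢ is proportional to √aᵢ. The numbers below are illustrative, not from the patent:

```python
import math

def cauchy_allocation(work, total):
    """Allocate `total` resources over workloads `work` to minimize
    sum(work[i] / theta[i]); the Cauchy-Schwarz equality condition gives
    theta[i] proportional to sqrt(work[i])."""
    roots = [math.sqrt(w) for w in work]
    theta = [total * r / sum(roots) for r in roots]
    best = sum(roots) ** 2 / total  # minimum value guaranteed by the inequality
    return theta, best

work = [4.0, 9.0, 16.0]
theta, best = cauchy_allocation(work, total=18.0)
equal_split = sum(w / (18.0 / 3) for w in work)  # naive uniform allocation
```

Here `best` = 81/18 = 4.5, while the uniform split gives 29/6 ≈ 4.83, which is why the equality condition of the inequality directly yields the resource allocation strategy once the unloading strategy is fixed.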
optionally, the linear constraint condition is:
Figure BDA0002580092580000052
the unloading ratio of the (N + 1) th layer equipment is between 0 and 1;
Figure BDA0002580092580000053
the wireless transmission data volume of the N +1 layer equipment is less than the transmission resource allocated to the equipment;
Figure BDA0002580092580000054
the total transmission resource distributed to all the connected equipment by each equipment of the Nth layer does not exceed the maximum transmission resource amount of the edge server;
Figure BDA0002580092580000055
the unloading proportion of the tasks from the n + 1-layer device i at the n-th layer device j is between 0 and 1;
Figure BDA0002580092580000056
the transmission data volume of the nth layer server j is less than the transmission resource distributed to the nth layer server j;
Figure BDA0002580092580000057
the total transmission resource distributed to all the devices connected by each device in the nth layer does not exceed the maximum transmission resource of the device;
Figure BDA0002580092580000058
the computing resource allocation of each node is not less than the amount of data to be computed and does not exceed the maximum computing resources of the node;
Figure BDA0002580092580000061
the computing task amount carried by the cloud computing center is smaller than the upper limit of the computing resource of the cloud computing center;
the equal sign of the Cauchi inequality is that:
Figure BDA0002580092580000062
Figure BDA0002580092580000063
Figure BDA0002580092580000064
in a second aspect, an embodiment of the present invention provides a cloud-edge multilayer cooperation-based delay optimization device in a large-scale edge computing network, including:
a first receiving module, configured to receive registration information sent by the plurality of edge servers, where the registration information includes registration information of each edge device corresponding to the computing task and registration information of each edge server;
the distribution module is used for establishing an edge computing network according to the registration information sent by the edge servers and the resource information of the cloud computing center, obtaining a task unloading proportion and a resource distribution strategy for each device in the edge computing network, and sending the task unloading proportion and the resource distribution strategy to corresponding devices according to the edge computing network so that the corresponding devices process original data according to the task unloading proportion and the resource distribution strategy;
the second receiving module is used for receiving a first original data processing result sent by the plurality of edge servers, together with the original data left over after their processing, wherein the first original data processing result comprises the processing results obtained by the edge devices processing their original data according to the task unloading proportions and resource allocation strategies corresponding to the edge devices, and the processing results obtained by the edge servers processing the received original data according to the task unloading proportions and resource allocation strategies corresponding to the edge servers;
and the processing module is used for processing the residual original data after the respective processing of each edge server according to the task unloading proportion and the resource allocation strategy corresponding to the cloud computing center to obtain a second original data processing result, and summarizing the second original data processing result and the first original data processing results respectively sent by the plurality of edge servers to obtain a third original data processing result.
Optionally, the cloud computing center is connected to the user terminal in a communication manner, and before the first receiving module, the apparatus further includes:
the third receiving module is used for receiving the computing task sent by the user terminal and sending the computing task to a corresponding edge server, or sending the computing task to corresponding edge equipment through the edge server, so that the edge server and the edge equipment which receive the computing task upload registration information to the cloud computing center;
after the processing module, the apparatus further comprises:
and the sending module is used for sending the third original data processing result to the user terminal.
Optionally, the registration information of the edge device includes:
the original data generation rate, the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number of the edge device;
the registration information of the edge server includes: the upper limit of computing resources, the upper limit of transmission resources, the number of layers, the IP address and the port number of the edge server;
the resource information of the cloud computing center comprises: the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number of the cloud computing center.
Optionally, the distribution module includes:
the establishing submodule is used for establishing an edge computing network according to the number of layers where each edge device, each edge server and the cloud computing center are located, the IP address and the port number;
and the obtaining submodule is used for calculating and obtaining the task unloading proportion and the resource allocation strategy of each device in the edge computing network by combining the edge computing network according to the original data generation rate, the computing resource upper limit and the transmission resource upper limit of each edge device, the computing resource upper limit and the transmission resource upper limit of each edge server, and the computing resource upper limit and the transmission resource upper limit of the cloud computing center.
Optionally, the obtaining sub-module includes:
a first obtaining unit, configured to obtain raw data of each edge device according to a raw data generation rate and a processing time interval of each edge device;
a second obtaining unit, configured to obtain, according to original data, an upper limit of computing resources, an upper limit of transmission resources of each edge device, an upper limit of computing resources and an upper limit of transmission resources of each edge server, and an upper limit of computing resources and an upper limit of transmission resources of the cloud computing center, in combination with the edge computing network, and by using a cauchy inequality, an equation of a system delay L with a task offloading policy as a target function;
a third obtaining unit, configured to calculate multiple sets of intersection points of the constraint conditions according to all the linear constraint conditions, where each set of intersection points of the constraint conditions corresponds to one task offloading policy;
the determining unit is used for respectively substituting each group of limiting condition intersection points into the equation of the system delay L taking the task unloading strategy as the target function to obtain a plurality of total delays, and determining the task unloading strategy corresponding to the minimum total delay as a final task unloading strategy;
and the fourth obtaining unit is used for obtaining a calculation resource allocation strategy and a transmission resource allocation strategy by utilizing the final task unloading strategy according to the equal sign establishment condition of the Cauchy inequality, so as to obtain the task unloading proportion and the resource allocation strategy.
Optionally, the second obtaining unit includes:
a first establishing subunit for defining a task offload proportion at node i of the nth layer as
Figure BDA0002580092580000081
After the original data λ is processed, the data volume is reduced to ρλ, where ρ is the compression ratio. The sum L_n of the computation time and transmission time of all nodes at the nth layer in the edge computing network is given by:
Figure BDA0002580092580000082
wherein the task unloading proportion is s, the computing resource allocation is θ, and the transmission resource allocation is φ,
Figure BDA0002580092580000083
represents the computation time of the nth level node i,
Figure BDA0002580092580000084
denotes the data volume of the computation results uploaded from the lower layer and received by the nth-layer node i,
Figure BDA0002580092580000085
represents the data amount of the calculation result of the nth layer node i,
Figure BDA0002580092580000086
representing the amount of raw data to be uploaded at the node,
Figure BDA0002580092580000087
indicating the transmission time of the nth layer node i,
Figure BDA0002580092580000088
represents the set of lower-layer devices connected to node j of layer n-1, and M_{n-1} represents the number of devices of the (n-1)th layer;
a second building subunit for obtaining, with L_0 denoting the computation time of the cloud computing center, the equation of the system delay L of an edge computing network containing N layers of edge servers:
Figure BDA0002580092580000091
a first obtaining subunit, configured to obtain, by using the Cauchy inequality in combination with the equation of L, the inequality for L_n:
Figure BDA0002580092580000092
wherein
Figure BDA0002580092580000093
denotes the maximum computing capacity of the nth-layer node i, and
Figure BDA0002580092580000094
denotes the total transmission resources of node j in the (n-1)th layer;
a second obtaining subunit, configured to substitute the inequality for L_n into the equation of the system delay L of the edge computing network, to obtain the equation of the system delay L with the task unloading strategy as the objective function:
Figure BDA0002580092580000095
optionally, the linear constraint conditions are:
Figure BDA0002580092580000096
the unloading ratio of the (N + 1) th layer equipment is between 0 and 1;
Figure BDA0002580092580000097
the wireless transmission data volume of the N +1 layer equipment is less than the transmission resource allocated to the equipment;
Figure BDA0002580092580000098
the total transmission resource distributed by each device of the Nth layer to all the devices connected with the Nth layer does not exceed the maximum transmission resource amount of the server;
Figure BDA0002580092580000101
the unloading proportion of the tasks from the n + 1-layer device i at the n-th layer device j is between 0 and 1;
Figure BDA0002580092580000102
the transmission data volume of the nth layer server j is less than the transmission resource distributed to the nth layer server j;
Figure BDA0002580092580000103
the total transmission resource distributed to all the devices connected by each device in the nth layer does not exceed the maximum transmission resource of the device;
Figure BDA0002580092580000104
the computing resource configuration of each node is not less than the data volume to be computed and does not exceed the maximum computing resource of the node;
Figure BDA0002580092580000105
the computing task amount carried by the cloud computing center is smaller than the upper limit of the computing resource of the cloud computing center;
the equal sign of the Cauchi inequality is that:
Figure BDA0002580092580000106
Figure BDA0002580092580000107
Figure BDA0002580092580000108
in a third aspect, an embodiment of the present invention additionally provides an electronic device, including: the method includes the steps of implementing the cloud-edge multi-layer cooperation-based delay optimization method in the large-scale edge computing network according to the first aspect, when the computer program is executed by the processor.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the cloud-edge multi-layer cooperation-based delay optimization method in a large-scale edge computing network according to the first aspect are implemented.
In the invention, a cloud computing center receives registration information sent by a plurality of edge servers, establishes an edge computing network according to that registration information and the resource information of the cloud computing center, obtains a task unloading proportion and a resource allocation strategy for each device in the edge computing network, and sends them to the corresponding devices according to the edge computing network, so that each device processes its original data accordingly. The cloud computing center then receives the first original data processing results sent by the edge servers, together with the original data left over after their processing, where the first results comprise the results obtained by the edge devices and the edge servers processing original data according to their respective task unloading proportions and resource allocation strategies. Finally, the cloud computing center processes the remaining original data according to its own task unloading proportion and resource allocation strategy to obtain a second original data processing result, and summarizes the second result with the first results sent by the edge servers to obtain a third original data processing result.
The method is applied to a large-scale edge computing network, where data is processed and transmitted in a cloud-edge cooperative mode. Specifically, the cloud computing center generates an edge computing network from the registration information of each edge device and each edge server participating in the computing task, and computes a task unloading proportion and a resource allocation strategy for each device in that network, so that each device can process and transmit data accordingly. By jointly considering the computing and transmission resources of the edge servers, the edge devices, and the cloud computing center, a better task unloading proportion and resource allocation strategy can be obtained, reducing the data processing and data transmission delay of the whole system.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive labor.
Fig. 1 is a schematic view of an application scenario of a cloud-edge-based multi-layer cooperation delay optimization method in a large-scale edge computing network according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a delay optimization method based on cloud-edge multi-layer collaboration in a large-scale edge computing network according to an embodiment of the present invention;
fig. 3 is a schematic transmission flow diagram of a delay optimization method based on cloud-edge multi-layer cooperation in a large-scale edge computing network according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a delay optimization apparatus based on cloud-edge multi-layer cooperation in a large-scale edge computing network according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the development of the Internet of Things, the core network faces enormous traffic pressure and computing pressure; mobile edge computing can serve many purposes while relieving the core network of this traffic pressure and computing pressure.
In the related art, only task offloading from edge devices to a single layer of edge servers is considered; neither the allocation of computing resources and transmission resources nor a cloud-edge multi-layer cooperation mode is taken into account. Transmission and computing resources therefore cannot be reasonably allocated to the many devices in the system according to the actual network connections, so the resource allocation of each device is unreasonable and the delay of data processing and transmission is high.
In order to overcome the problems, the application provides a cloud-edge multi-layer cooperation-based delay optimization method in a large-scale edge computing network, so as to solve the problem that the delay of data processing and transmission in the existing edge computing network is high.
Before introducing the technical scheme of the application, an application scenario targeted by the application is introduced, and the scheme of the application is targeted at a large-scale edge computing network.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a delay optimization method based on cloud-edge multi-layer cooperation in a large-scale edge computing network according to an embodiment of the present invention. As shown in fig. 1, the entire large-scale computing network includes a cloud computing center, a user terminal (user), multi-layer edge servers, and edge devices. The user terminal is connected to the cloud computing center, and the cloud computing center is connected to a plurality of edge servers, which may form a single layer or multiple layers; the edge servers can be divided into servers at the base-station or wireless-access-point layer, servers at the switch or gateway layer, mini or local computing centers, and the like. Each edge server of the bottommost layer is connected to one or more edge devices, which may include a smart watch, a smart speaker, a smart television, and the like. The edge devices collect data, and the data are processed and transmitted in a cloud-edge cooperation mode, so that a better task offloading proportion and a better resource allocation strategy can be obtained and the delay of data processing and data transmission of the whole system is reduced.
Referring to fig. 2, fig. 2 is a flowchart of steps of a delay optimization method based on cloud-edge multi-layer collaboration in a large-scale edge computing network according to an embodiment of the present invention, where as shown in fig. 2, the method is applied to a cloud computing center, the cloud computing center is communicatively connected to a plurality of edge servers, and each edge server is communicatively connected to one or more edge devices, and the method includes:
step S201: and receiving registration information sent by the plurality of edge servers, wherein the registration information comprises registration information of each edge device corresponding to the computing task and registration information of each edge server.
In a possible implementation manner, the cloud computing center is connected to the user terminal in a communication manner, and before the cloud computing center receives the registration information sent by the edge server, the method further includes:
and receiving the computing task sent by the user terminal, and sending the computing task to a corresponding edge server, or sending the computing task to corresponding edge equipment through the edge server, so that the edge server and the edge equipment which receive the computing task upload registration information to the cloud computing center.
In the embodiment, the user terminal sends the computing task to the cloud computing center, after receiving the computing task, the cloud computing center plans the edge servers and the edge devices which need to execute the task, sends the computing task to the corresponding edge servers which are directly connected, and the edge servers which receive the computing task issue the computing task layer by layer until the computing task is issued to the edge devices, so that the edge devices can collect data according to the computing task.
After receiving the computing task, the edge servers and edge devices executing it upload registration information to the cloud computing center. Specifically, each edge device uploads its registration information to the edge server directly connected to it; each layer of edge server, after receiving the registration information sent by the edge devices or lower-layer edge servers connected to it, forwards that information upward together with its own registration information, until the uppermost edge servers upload all the received registration information, together with their own, to the cloud computing center.
In a specific embodiment, the registration information of an edge device includes: the original data generation rate, the upper limit of computing resources, the upper limit of transmission resources, the number of the layer, and the IP address and port number of the edge device. The registration information of an edge server includes: the upper limit of computing resources, the upper limit of transmission resources, the number of the layer, and the IP address and port number of the edge server.
In this embodiment, the original data generation rate of an edge device is the rate at which that device generates the data it collects for the computing task; the computing-resource upper limit of an edge device is all of that device's computing resources, and its transmission-resource upper limit is all of its transmission resources. Likewise, the computing-resource upper limit of an edge server is all of that server's computing resources, and its transmission-resource upper limit is all of its transmission resources. The layer numbers, IP addresses, and port numbers of the edge devices and edge servers are used to establish the edge computing network.
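For concreteness, the registration records described above can be sketched as two plain data structures. This is only an illustration: the field names are invented here, not identifiers from the patent.

```python
from dataclasses import dataclass

# Hypothetical record layouts mirroring the registration fields listed
# above; a device record carries one extra field (the data generation
# rate) relative to a server record, which is exactly the distinction
# the embodiment draws.

@dataclass
class EdgeDeviceRegistration:
    data_rate: float        # original-data generation rate
    compute_limit: float    # upper limit of computing resources
    transmit_limit: float   # upper limit of transmission resources
    layer: int              # number of the layer the device sits in
    ip: str                 # IP address, used to build the network
    port: int               # port number

@dataclass
class EdgeServerRegistration:
    compute_limit: float
    transmit_limit: float
    layer: int
    ip: str
    port: int

# An edge device at layer 3 and a top-layer server directly under the cloud.
dev = EdgeDeviceRegistration(2.0, 10.0, 5.0, 3, "10.0.0.7", 9000)
srv = EdgeServerRegistration(50.0, 40.0, 1, "10.0.0.1", 9000)
```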
Step S202: and establishing an edge computing network according to the registration information sent by the edge servers and the resource information of the cloud computing center, obtaining a task unloading proportion and a resource allocation strategy for each device in the edge computing network, and sending the task unloading proportion and the resource allocation strategy to the corresponding device according to the edge computing network so that the corresponding device processes original data according to the task unloading proportion and the resource allocation strategy.
In a specific embodiment, the step of establishing the edge computing network is:
step S202-1: and establishing an edge computing network according to the number of layers of each edge device, each edge server and the cloud computing center, the IP address and the port number.
In this embodiment, the cloud computing center receives, via the plurality of edge servers, each edge device's original data generation rate, computing-resource upper limit, transmission-resource upper limit, layer number, IP address, and port number, as well as each edge server's computing-resource upper limit, transmission-resource upper limit, layer number, IP address, and port number. The edge computing network is then established from the layer numbers, IP addresses, and port numbers of the edge devices and edge servers, combined with those of the cloud computing center. The edge computing network is the network formed by all the edge devices, edge servers, and the cloud computing center corresponding to the computing task. Specifically, the layer number of the cloud computing center may be 0, the edge servers directly connected to it are at layer 1, and the layer number of each edge device or edge server is determined when it is installed.
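As a rough illustration of the network-construction step, the sketch below assumes each registration record reaches the cloud tagged with the address of the node that relayed it upward. The patent only specifies layer numbers, IP addresses, and port numbers, so the explicit parent tag is an added assumption (it is consistent with registrations being forwarded up layer by layer).

```python
def build_edge_network(records):
    """records: iterable of (addr, layer, parent_addr) tuples, where
    parent_addr is the node that relayed this registration upward.
    Returns a dict mapping each address to its layer, parent, and
    children, rooted at the cloud computing center (layer 0)."""
    nodes = {"cloud": {"layer": 0, "parent": None, "children": []}}
    # Sorting by layer guarantees every parent is inserted before its
    # children (layer-1 servers hang directly off the cloud).
    for addr, layer, parent in sorted(records, key=lambda r: r[1]):
        nodes[addr] = {"layer": layer, "parent": parent, "children": []}
        nodes[parent]["children"].append(addr)
    return nodes

# Invented example topology: one layer-1 server, one base-station-layer
# server, two edge devices.
net = build_edge_network([
    ("srv-a", 1, "cloud"),
    ("bs-1", 2, "srv-a"),
    ("dev-1", 3, "bs-1"),
    ("dev-2", 3, "bs-1"),
])
```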
In a specific embodiment, the specific step of obtaining the task offload proportion and the resource allocation policy of each device in the edge computing network by calculation includes:
step S202-2: and calculating to obtain the task unloading proportion and the resource allocation strategy of each device in the edge computing network according to the original data generation rate, the upper limit of computing resources and the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, and the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center in combination with the edge computing network.
In this embodiment, after the edge computing network is established, a task offload proportion and a resource allocation strategy for each device in the edge computing network are obtained according to the upper limit of computing resources and the upper limit of transmission resources of each edge device, each edge server, and the cloud computing center.
In one possible embodiment, the more specific steps of calculating the task offloading proportion and the resource allocation policy include:
step S202-2-1: and obtaining the original data of each edge device according to the original data generation rate and the processing time interval of each edge device.
In this embodiment, the edge device continuously acquires data while executing the calculation task, the data acquired by the edge device is processed at a processing time interval, and the data acquired at each processing time interval is the original data, that is, the original data is a product of the generation rate of the original data and the processing time interval.
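The relation in this paragraph is simple arithmetic: the original data handled per interval is the generation rate times the processing interval. The small sketch below also anticipates the compression model described in the next subsection (a fraction s processed locally shrinks by the compression ratio ρ); all numbers are illustrative.

```python
def raw_data_volume(rate_mb_s, interval_s):
    # Original data per processing interval = generation rate x interval.
    return rate_mb_s * interval_s

def uplink_volume(lam, s, rho):
    # Data a device must send upward: (1 - s)*lam unprocessed raw data
    # plus rho*s*lam compressed results of the locally processed share.
    return (1 - s) * lam + rho * s * lam

lam = raw_data_volume(2.0, 10.0)        # 20.0 MB per interval
up = uplink_volume(lam, s=0.5, rho=0.1)  # 10.0 raw + 1.0 compressed
```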
Step S202-2-2: and obtaining an equation of the system delay L with the task unloading strategy as a target function by combining the edge computing network and utilizing the Cauchy inequality according to the original data, the upper limit of the computing resources and the upper limit of the transmission resources of each edge device, the upper limit of the computing resources and the upper limit of the transmission resources of each edge server, and the upper limit of the computing resources and the upper limit of the transmission resources of the cloud computing center.
In a possible embodiment, step S202-2-2 specifically includes:
Defining the task offloading proportion at node $i$ of the $n$-th layer as $s_i^{(n)}$: after original data $\lambda$ is processed, its volume is reduced to $\rho\lambda$, where $\rho$ is the compression ratio. The sum $L_n$ of the computation time and the transmission time of all nodes at the $n$-th layer of the edge computing network is

$$L_n=\sum_{i=1}^{M_n}\left(t_{\mathrm{comp},i}^{(n)}+t_{\mathrm{trans},i}^{(n)}\right),\qquad t_{\mathrm{comp},i}^{(n)}=\frac{s_i^{(n)}\lambda_i^{(n)}}{\theta_i^{(n)}},\qquad t_{\mathrm{trans},i}^{(n)}=\frac{\big(1-s_i^{(n)}\big)\lambda_i^{(n)}+\rho\,s_i^{(n)}\lambda_i^{(n)}+R_i^{(n)}}{\phi_i^{(n)}},$$

wherein the task offloading proportion is $s$, the computing resource allocation is $\theta$, the transmission resource allocation is $\phi$, $t_{\mathrm{comp},i}^{(n)}$ represents the computation time of node $i$ at the $n$-th layer, $R_i^{(n)}$ is the data volume of the computation results uploaded by the lower layer and received by node $i$, $\rho\,s_i^{(n)}\lambda_i^{(n)}$ represents the data volume of the computation result of node $i$, $\big(1-s_i^{(n)}\big)\lambda_i^{(n)}$ represents the raw data still to be uploaded by the node, $t_{\mathrm{trans},i}^{(n)}$ indicates the transmission time of node $i$, $\mathcal{M}_j^{(n-1)}$ represents the set of lower-layer devices connected to node $j$ of the $(n-1)$-th layer, $M$ represents the number of devices, and $M_{n-1}$ represents the number of devices of the $(n-1)$-th layer.
Denoting by $L_0$ the computing time of the cloud computing center, the system delay $L$ of the edge computing network including the $N$ layers of edge servers is

$$L=L_0+\sum_{n=1}^{N+1}L_n,$$

wherein $N$ is the number of layers of edge servers (the edge devices form layer $N+1$).
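A numerical sketch of evaluating the layered delay follows. The per-node decomposition used here (compute time s·λ/θ, transmit time covering leftover raw data, the compressed result, and relayed lower-layer results, plus a cloud term b/u) is an assumption consistent with the quantities the text names; the patent's own formula images are not reproduced in this extraction.

```python
def node_delay(lam, s, rho, theta, phi, relayed_results=0.0):
    """Compute-plus-transmit time of one node:
    compute time  = s*lam / theta
    transmit time = ((1-s)*lam + rho*s*lam + relayed_results) / phi
    """
    t_comp = s * lam / theta
    t_trans = ((1 - s) * lam + rho * s * lam + relayed_results) / phi
    return t_comp + t_trans

def system_delay(layers, cloud_demand, cloud_capacity):
    """layers: one list of per-node parameter dicts per edge layer.
    The cloud term L_0 is modeled as b/u (demand over capacity)."""
    total = cloud_demand / cloud_capacity
    for layer in layers:
        total += sum(node_delay(**node) for node in layer)
    return total

# One edge device: 20 MB raw data, half processed locally at 5 MB/s,
# compression 0.1, uplink 11 MB/s; cloud fully loaded for 1 s.
L = system_delay(
    layers=[[dict(lam=20.0, s=0.5, rho=0.1, theta=5.0, phi=11.0)]],
    cloud_demand=10.0, cloud_capacity=10.0,
)
# compute 2.0 s + transmit (10+1)/11 = 1.0 s + cloud 1.0 s = 4.0 s
```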
Using the Cauchy inequality together with the equation of $L$, the inequality for $L_n$ is

$$L_n\;\ge\;\sum_{i=1}^{M_n}\frac{s_i^{(n)}\lambda_i^{(n)}}{F_i^{(n)}}\;+\;\sum_{j=1}^{M_{n-1}}\frac{\Big(\sum_{i\in\mathcal{M}_j^{(n-1)}}\sqrt{\big(1-s_i^{(n)}\big)\lambda_i^{(n)}+\rho\,s_i^{(n)}\lambda_i^{(n)}+R_i^{(n)}}\Big)^{2}}{B_j^{(n-1)}},$$

wherein $F_i^{(n)}$ is the maximum computing power of node $i$ at the $n$-th layer and $B_j^{(n-1)}$ is the total transmission resource of node $j$ of the $(n-1)$-th layer. These two quantities are system parameters. When the equality condition is met, the original delay-minimization problem can be completely equivalently converted into a task offloading problem $L_{\min}(s)$, and the equality condition itself gives the network computing-resource and transmission-resource allocation scheme corresponding to the task offloading strategy; that is, as long as the network resources are configured according to the allocation scheme given by the equality condition, the optimal solution of the task offloading problem is also the optimal solution of the original joint delay-optimization problem.
Substituting the inequality for $L_n$ into the equation of the system delay $L$ of the edge computing network yields the system delay with the task offloading strategy as the objective function:

$$L(s)=\frac{b}{u}+\sum_{n=1}^{N+1}\left[\sum_{i=1}^{M_n}\frac{s_i^{(n)}\lambda_i^{(n)}}{F_i^{(n)}}+\sum_{j=1}^{M_{n-1}}\frac{\Big(\sum_{i\in\mathcal{M}_j^{(n-1)}}\sqrt{\big(1-s_i^{(n)}\big)\lambda_i^{(n)}+\rho\,s_i^{(n)}\lambda_i^{(n)}+R_i^{(n)}}\Big)^{2}}{B_j^{(n-1)}}\right],$$

wherein $B_j^{(n-1)}$ is the total transmission resource of node $j$ of the $(n-1)$-th layer, the cloud computing center is the 0-th layer, $u$ is the maximum computing resource (i.e., the computing-resource upper limit) of the cloud computing center, and $b$ is the required amount of computing resources at the cloud, so that $L_0=b/u$.
The objective function $L_{\min}(s)$ of the task offloading problem is a concave function and has a maximum value; therefore, under the current linear constraints, its minimum value is always attained at some intersection point of the constraint conditions, and the optimal solution can be obtained by traversing the intersection points of all the constraint conditions.
In this embodiment, by the above method, according to the relationship among the edge computing network, the computing resource allocation, and the transmission resource allocation, an equation of the system delay L using the task offloading policy as an objective function is obtained, so as to solve the task offloading policy, and obtain the task offloading policy of each device in the edge computing network for the computing task.
Step S202-2-3: and calculating to obtain a plurality of groups of limit condition intersection points according to all the linear limit conditions, wherein each group of limit condition intersection points correspond to one task unloading strategy.
In this embodiment, the task offloading policy needs to satisfy a plurality of linear constraint conditions, and the plurality of linear constraint conditions form an equation set, so that a plurality of sets of constraint condition intersections can be calculated, where each set of constraint condition intersection corresponds to one task offloading policy.
The equation of the system delay L taking the task unloading strategy as the objective function is a concave function and has a maximum value, so that the minimum value is always at a certain intersection point of the limiting conditions under the current linear limiting conditions, and the optimal solution can be obtained by traversing all the intersection points of the limiting conditions.
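The traversal argument can be illustrated on a toy two-variable problem: a concave objective over linear constraints attains its minimum at a vertex, i.e., at an intersection of constraint boundaries, so enumerating pairwise intersections and keeping the feasible ones suffices. The constraints and the objective below are invented purely for illustration.

```python
from itertools import combinations

def vertices(constraints):
    """constraints: list of (a1, a2, b) meaning a1*x + a2*y <= b.
    Intersect every pair of boundaries a1*x + a2*y = b by Cramer's
    rule and keep only the points feasible for ALL constraints."""
    pts = []
    for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue  # parallel boundaries, no intersection
        x = (b * c2 - a2 * d) / det
        y = (a1 * d - b * c1) / det
        if all(p * x + q * y <= r + 1e-9 for p, q, r in constraints):
            pts.append((x, y))
    return pts

# Feasible region: the unit square 0 <= x, y <= 1.
cons = [(-1, 0, 0), (0, -1, 0), (1, 0, 1), (0, 1, 1)]
# A concave objective; its minimum over the square lies at a corner.
f = lambda x, y: -((x - 0.2) ** 2 + (y - 0.9) ** 2)
best = min(vertices(cons), key=lambda p: f(*p))
```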
In one possible embodiment, the specific linear constraints include:
$0\le s_i^{(N+1)}\le 1$: the offloading proportion of each layer-$(N+1)$ device (edge device) lies between 0 and 1;

$\big(1-s_i^{(N+1)}\big)\lambda_i^{(N+1)}+\rho\,s_i^{(N+1)}\lambda_i^{(N+1)}\le\phi_i^{(N+1)}$: the wireless transmission data volume (data to be transmitted) of each layer-$(N+1)$ device is no more than the transmission resource allocated to that device;

$\sum_{i\in\mathcal{M}_j^{(N)}}\phi_i^{(N+1)}\le B_j^{(N)}$: the total transmission resource that each layer-$N$ device assigns to all devices connected to it does not exceed that edge server's maximum transmission resource;

$0\le s_j^{(n)}\le 1$: the proportion of the tasks arriving from layer-$(n+1)$ devices that is offloaded at layer-$n$ device $j$ lies between 0 and 1;

$\big(1-s_j^{(n)}\big)\lambda_j^{(n)}+\rho\,s_j^{(n)}\lambda_j^{(n)}+R_j^{(n)}\le\phi_j^{(n)}$: the transmission data volume of layer-$n$ server $j$ is no more than the transmission resource allocated to it;

$\sum_{i\in\mathcal{M}_j^{(n)}}\phi_i^{(n+1)}\le B_j^{(n)}$: the total transmission resource that each layer-$n$ device allocates to all devices connected to it does not exceed that device's maximum transmission resource;

$s_i^{(n)}\lambda_i^{(n)}\le\theta_i^{(n)}\le F_i^{(n)}$: the computing-resource configuration of each node is no less than the data volume it must compute and does not exceed the node's maximum computing resource (computing-resource upper limit);

$b\le u$: the computation load carried by the cloud computing center is no more than its computing-resource upper limit.
According to the original data, the computing-resource upper limit, and the transmission-resource upper limit of each edge device, of each edge server, and of the cloud computing center, combined with the edge computing network, the equation set formed by the above linear constraints is solved, and the multiple sets of constraint intersection points are obtained by calculation.
Step S202-2-4: and respectively substituting each group of limiting condition intersection points into the equation of the system delay L taking the task unloading strategy as the objective function to obtain a plurality of total delays, and determining the task unloading strategy corresponding to the minimum total delay as the final task unloading strategy.
In this embodiment, the equation of the system delay L takes the task offloading strategy as its objective function. Among the solved sets of constraint intersection points, each set corresponds to one task offloading strategy; substituting each set into the equation of the system delay L yields the total delay for that strategy, and the strategy corresponding to the minimum total delay is determined as the final task offloading strategy, thereby obtaining the optimal task offloading strategy and minimizing the total delay of the entire system.
Step S202-2-5: and obtaining a calculation resource allocation strategy and a transmission resource allocation strategy by utilizing the final task unloading strategy according to the equal sign establishment condition of the Cauchy inequality, thereby obtaining the task unloading proportion and the resource allocation strategy.
In a possible embodiment, the equality sign of the cauchy inequality is satisfied:
Figure BDA0002580092580000191
Figure BDA0002580092580000192
Figure BDA0002580092580000193
in this embodiment, the obtained final task offloading policy is substituted into an equation set formed by the conditions of equal sign establishment of the cauchy inequality, so as to obtain a calculation resource allocation policy and a transmission resource allocation policy, and then the final task offloading policy is combined to obtain a final task offloading proportion and a resource allocation policy.
And after the final task unloading proportion and the resource allocation strategy are obtained, the task unloading proportion and the resource allocation strategy are sent to corresponding equipment according to the edge computing network, so that the corresponding equipment processes the original data according to the task unloading proportion and the resource allocation strategy.
Specifically, the cloud computing center sends the task offloading proportions and resource allocation strategies to the corresponding devices over the edge computing network. Upon receiving its proportion and strategy, each edge device offloads the corresponding proportion of its original data for processing and processes that data with the allocated computing resources, obtaining the corresponding processing result. In the same way, each layer of edge servers processes original data according to its own task offloading proportion and resource allocation strategy, and uploads the processed original data toward the cloud computing center.
Step S203: receiving first original data processing results sent by the plurality of edge servers, together with the original data left over after their processing, wherein the first original data processing results comprise the results obtained by the plurality of edge devices processing original data according to the task offloading proportions and resource allocation strategies corresponding to the edge devices, and the results obtained by the plurality of edge servers processing the original data they received according to the task offloading proportions and resource allocation strategies corresponding to the edge servers.
In this embodiment, the plurality of edge devices process the raw data according to their corresponding task offloading proportions and resource allocation policies, and the plurality of edge servers process the raw data they receive according to theirs, producing the raw data processing results. These results are uploaded to the cloud computing center by the one or more uppermost edge servers; the raw data remaining after the processing by the edge devices and edge servers is likewise uploaded by the one or more uppermost edge servers, so that the cloud computing center receives both the raw data processing results and the remaining raw data sent by the plurality of edge servers.
Step S204: processing the remaining original data after respective processing of each edge server according to a task unloading proportion and a resource allocation strategy corresponding to the cloud computing center to obtain a second original data processing result, and summarizing the second original data processing result and the first original data processing results respectively sent by the plurality of edge servers to obtain a third original data processing result.
In this embodiment, the cloud computing center allocates corresponding computing resources to process the remaining original data after respective processing by each edge server according to a task offload proportion and a resource allocation policy corresponding to the cloud computing center, so as to obtain a second original data processing result, and summarizes the second original data processing result and the first original data processing results sent by the plurality of edge servers, respectively, so as to obtain a third original data processing result.
In a possible implementation manner, after obtaining the third original data processing result, the method further includes:
and sending the third original data processing result to the user terminal.
And the cloud computing center sends the third original data processing result to the user terminal by allocating corresponding transmission resources.
In this implementation of the invention, the cloud computing center receives the registration information sent by the plurality of edge servers, establishes an edge computing network according to that registration information and its own resource information, obtains a task offloading proportion and a resource allocation strategy for each device in the network, and sends them to the corresponding devices over the edge computing network, so that each device processes original data accordingly. The cloud computing center then receives the first original data processing results sent by the plurality of edge servers together with the original data left unprocessed by them, where the first results comprise those obtained by the edge devices processing original data under their corresponding offloading proportions and allocation strategies, and those obtained by the edge servers processing the original data they received under theirs. Finally, the cloud computing center processes the remaining original data according to its own task offloading proportion and resource allocation strategy to obtain a second original data processing result, and aggregates the second result with the first results sent by the plurality of edge servers to obtain a third original data processing result.
The application applies to large-scale edge computing networks in which data are processed and transmitted in a cloud-edge cooperation mode that considers the cloud computing center, every edge server, and every edge device simultaneously. The cloud computing center generates the edge computing network from the registration information of each edge device participating in a computing task and of each edge server, and computes over that network a task offloading proportion and a resource allocation strategy for each device, so that each device processes and transmits data accordingly. By jointly considering the computing and transmission resources of the edge servers, the edge devices, and the cloud computing center under cloud-edge cooperation, a better task offloading proportion and resource allocation strategy is obtained, thereby reducing the delay of data processing and data transmission of the whole system.
Referring to fig. 3, fig. 3 is a schematic diagram of the transmission flow of a delay optimization method based on cloud-edge multi-layer cooperation in a large-scale edge computing network in an embodiment of the present invention. As shown in fig. 3, the whole system includes a user, a cloud computing center, upper-layer edge servers, base stations/wireless access points, and edge users, where the upper-layer edge servers and the base stations/wireless access points are collectively called edge servers, the edge users are the edge devices, and the user is the user terminal. The user terminal issues a computing task to the cloud computing center, which then issues the computing task layer by layer to the upper-layer edge servers, the base stations/wireless access points, and the edge users. The edge users, base stations/wireless access points, and upper-layer edge servers then send registration information layer by layer up to the cloud computing center, which performs network construction to obtain the edge computing network and computes the task offloading proportions and the resource allocation scheme. The cloud computing center next issues the task offloading and resource allocation parameters layer by layer down to the upper-layer edge servers, the base stations/wireless access points, and the edge users; these process the original data according to their corresponding task offloading and resource allocation strategies and upload the processed results and the unprocessed original data layer by layer to the cloud computing center, which processes the received data according to its own task offloading and resource allocation strategy and sends the final computation result to the user terminal.
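The layer-by-layer exchange of fig. 3 can be caricatured as a depth-first walk over the hierarchy, in which every node reports upward only after all of its children have reported. The tree and node names below are invented for illustration.

```python
def run_round(tree, root="cloud"):
    """tree: dict mapping each node to its list of children.
    Returns the nodes in the order they would report upward: a node
    reports only after all of its children have reported, and the
    cloud computing center reports (i.e., finishes) last."""
    order = []
    def visit(node):
        for child in tree.get(node, []):
            visit(child)
        order.append(node)
    visit(root)
    return order

tree = {"cloud": ["srv-a"], "srv-a": ["bs-1"], "bs-1": ["dev-1", "dev-2"]}
report_order = run_round(tree)
```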
Based on the same inventive concept, an embodiment of the present invention provides a cloud-edge multilayer cooperation-based delay optimization device in a large-scale edge computing network, and fig. 4 is a schematic diagram of a cloud-edge multilayer cooperation-based delay optimization device in a large-scale edge computing network according to an embodiment of the present invention, and as shown in fig. 4, the device includes:
a first receiving module 401, configured to receive registration information sent by the multiple edge servers, where the registration information includes registration information of each edge device corresponding to the computing task and registration information of each edge server;
the allocation module 402 is configured to establish an edge computing network according to the registration information sent by the edge servers and the resource information of the cloud computing center, obtain a task offload proportion and a resource allocation policy for each device in the edge computing network, and send the task offload proportion and the resource allocation policy to a corresponding device according to the edge computing network, so that the corresponding device processes original data according to the task offload proportion and the resource allocation policy;
a second receiving module 403, configured to receive the first original data processing results sent by the multiple edge servers and the original data remaining after their processing, where the first original data processing results include the results obtained by the multiple edge devices processing original data according to the task offloading proportions and resource allocation policies corresponding to the edge devices, and the results obtained by the multiple edge servers processing the received original data according to the task offloading proportions and resource allocation policies corresponding to the edge servers;
the processing module 404 is configured to process remaining original data after respective processing by each edge server according to a task offload proportion and a resource allocation policy corresponding to the cloud computing center to obtain a second original data processing result, and summarize the second original data processing result and the first original data processing results sent by the plurality of edge servers, to obtain a third original data processing result.
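As an illustrative aid (not part of the patented embodiment), the four-module flow above can be sketched as a toy pipeline that tracks only data volumes; the stand-in "processing" (compressing each batch to a ratio RHO of its size) and all names are assumptions:

```python
RHO = 0.5  # assumed compression ratio: a processed batch shrinks to RHO times its size

def edge_process(raw, offload_fraction):
    """Process a (1 - offload_fraction) share locally; forward the rest upward."""
    local = raw * (1.0 - offload_fraction)
    return local * RHO, raw * offload_fraction  # (result volume, forwarded raw volume)

def cloud_round(device_raw, device_offload, server_offload):
    """Modules 401-404 in one round: devices -> servers -> cloud, then aggregate."""
    # Devices split their raw data per the offload proportions (module 402's policy).
    dev_results, to_servers = zip(*map(edge_process, device_raw, device_offload))
    # Edge servers do the same with what they received (first processing results).
    srv_results, to_cloud = zip(*map(edge_process, to_servers, server_offload))
    # Module 404: the cloud processes all remaining raw data (second result) ...
    cloud_result = sum(to_cloud) * RHO
    # ... and aggregates everything into the third processing result.
    return sum(dev_results) + sum(srv_results) + cloud_result
```

Because the stand-in processing is linear, the aggregated result volume equals RHO times the total raw volume regardless of how the offload proportions split the work.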
Optionally, the cloud computing center is connected to the user terminal in a communication manner, and before the first receiving module, the apparatus further includes:
the third receiving module is used for receiving the computing task sent by the user terminal and sending the computing task to a corresponding edge server, or sending the computing task to corresponding edge equipment through the edge server, so that the edge server and the edge equipment which receive the computing task upload registration information to the cloud computing center;
after the processing module, the apparatus further comprises:
and the sending module is used for sending the third original data processing result to the user terminal.
Optionally, the registration information of the edge device includes:
the raw data generation rate, the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number of the edge device;
the registration information of the edge server includes: the upper limit of computing resources, the upper limit of transmission resources, the number of layers, the IP address and the port number of the edge server;
the resource information of the cloud computing center comprises: the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number.
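The registration and resource records enumerated above can be sketched as plain data holders; the field names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class EdgeDeviceRegistration:
    raw_data_rate: float   # raw data generation rate (e.g. bits/s)
    compute_limit: float   # upper limit of computing resources
    transmit_limit: float  # upper limit of transmission resources
    layer: int             # layer number in the hierarchy
    ip: str
    port: int

@dataclass
class EdgeServerRegistration:
    compute_limit: float
    transmit_limit: float
    layer: int
    ip: str
    port: int

# The cloud center's own resource record carries the same fields as a server's.
CloudResourceInfo = EdgeServerRegistration
```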
Optionally, the allocation module includes:
the establishing submodule is used for establishing an edge computing network according to the number of layers where each edge device, each edge server and the cloud computing center are located, the IP address and the port number;
and the obtaining submodule is used for calculating and obtaining the task unloading proportion and the resource allocation strategy of each device in the edge computing network by combining the edge computing network according to the original data generation rate, the computing resource upper limit and the transmission resource upper limit of each edge device, the computing resource upper limit and the transmission resource upper limit of each edge server, and the computing resource upper limit and the transmission resource upper limit of the cloud computing center.
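A minimal sketch of the establishing submodule, assuming one simple attachment rule (round-robin) that the patent does not specify: nodes are grouped by their reported layer number and each node is linked to a node one layer up.

```python
from collections import defaultdict

def build_edge_network(nodes):
    """nodes: iterable of (node_id, layer). Returns a child -> parent mapping.

    Layer 0 is taken to be the cloud computing center; larger layer numbers
    are closer to the edge devices (an assumption for this sketch).
    """
    by_layer = defaultdict(list)
    for node_id, layer in nodes:
        by_layer[layer].append(node_id)
    parent = {}
    layers = sorted(by_layer)
    for lower, upper in zip(layers[1:], layers):
        uppers = by_layer[upper]
        for k, child in enumerate(by_layer[lower]):
            parent[child] = uppers[k % len(uppers)]  # round-robin attachment
    return parent
```

In the real system each node would be addressed by the IP address and port number it registered, rather than by a symbolic ID.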
Optionally, the obtaining sub-module includes:
a first obtaining unit, configured to obtain raw data of each edge device according to a raw data generation rate and a processing time interval of each edge device;
a second obtaining unit, configured to obtain, according to the original data, the upper limit of computing resources and the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, and the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center, in combination with the edge computing network and by using the Cauchy inequality, an equation of the system delay L with the task offloading policy as the target function;
a third obtaining unit, configured to calculate multiple sets of intersection points of the constraint conditions according to all the linear constraint conditions, where each set of intersection points of the constraint conditions corresponds to one task offloading policy;
the determining unit is used for respectively substituting each group of limiting condition intersection points into the equation of the system delay L taking the task unloading strategy as the target function to obtain a plurality of total delays, and determining the task unloading strategy corresponding to the minimum total delay as a final task unloading strategy;
and the fourth obtaining unit is used for obtaining a calculation resource allocation strategy and a transmission resource allocation strategy by utilizing the final task unloading strategy according to the equal sign establishment condition of the Cauchy inequality, so as to obtain the task unloading proportion and the resource allocation strategy.
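The third obtaining unit and the determining unit amount to enumerating intersection points of the linear constraints and keeping the feasible candidate with the smallest total delay. A hedged two-dimensional sketch (the real problem has one variable per node) might look like:

```python
from itertools import combinations
import numpy as np

def best_vertex(A, b, objective):
    """Minimize `objective` over the vertices of {x : A @ x <= b} (2-D sketch)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    best_x, best_val = None, float("inf")
    for i, j in combinations(range(len(A)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                            # parallel constraints: no vertex
        x = np.linalg.solve(M, b[[i, j]])       # intersection of constraints i and j
        if np.all(A @ x <= b + 1e-9):           # keep only feasible intersections
            val = objective(x)
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val
```

The objective passed in would be the delay expression obtained after the Cauchy substitution; here any callable works.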
Optionally, the second obtaining unit includes:
a first establishing subunit for defining a task offload proportion at node i of the nth layer as
Figure BDA0002580092580000241
After the original data lambda is processed, the data volume is reduced to rho·lambda, where rho is the compression ratio, and the sum Ln of the computation time and the transmission time of all nodes at the nth layer in the edge computing network satisfies:
Figure BDA0002580092580000251
wherein the task unloading proportion is s, the computing resource allocation is theta, the transmission resource allocation is phi,
Figure BDA0002580092580000252
represents the computation time of the nth level node i,
Figure BDA0002580092580000253
represents the data amount of the lower-layer calculation results received by the nth-layer node i,
Figure BDA0002580092580000254
represents the data amount of the calculation result of the nth layer node i,
Figure BDA0002580092580000255
representing the amount of raw data to be uploaded at the node,
Figure BDA0002580092580000256
represents the transmission time of the nth-layer node i,
Figure BDA0002580092580000257
represents the set of lower-layer devices connected to the (n-1)th-layer node j, and Mn-1 represents the number of devices in the (n-1)th layer;
a second establishing subunit, configured to use L0 to represent the computing time of the cloud computing center, so that the equation of the system delay L of an edge computing network including N layers of edge servers is:
Figure BDA0002580092580000258
a first obtaining subunit, configured to obtain, by using the Cauchy inequality in combination with the equation of Ln, the inequality of Ln:
Figure BDA0002580092580000259
wherein
Figure BDA00025800925800002510
represents the maximum computing capability of the nth-layer node i,
Figure BDA00025800925800002511
represents all transmission resources of the (n-1)th-layer node j;
a second obtaining subunit, configured to substitute the inequality of Ln into the equation of the system delay L of the edge computing network, to obtain the equation of the system delay L with the task unloading strategy as the target function:
Figure BDA0002580092580000261
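The delay terms walked through above can be illustrated numerically: a node's computation time is the data it must process divided by its computing allocation theta, and its transmission time is what it uploads (compressed results plus raw data offloaded upward) divided by its transmission allocation phi. Since the exact equations appear only as images in the filing, this is an assumed reading of the symbol list, not a verbatim reconstruction:

```python
RHO = 0.5  # compression ratio: results shrink to RHO times the processed volume

def node_delay(local_raw, lower_results, offload_raw, theta, phi):
    """Computation time plus transmission time for one node."""
    compute = (local_raw + lower_results) / theta            # computation time
    upload = RHO * (local_raw + lower_results) + offload_raw  # compressed results + raw offload
    transmit = upload / phi                                   # transmission time
    return compute + transmit

def layer_delay(nodes):
    """Ln: total computation + transmission time over all nth-layer nodes."""
    return sum(node_delay(*n) for n in nodes)
```

For example, a node processing 10 units with theta = phi = 5 and nothing offloaded upward spends 2 units computing and 1 unit transmitting its compressed result.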
optionally, the linear constraint condition is:
Figure BDA0002580092580000262
the unloading ratio of the (N+1)th-layer devices is between 0 and 1;
Figure BDA0002580092580000263
the wireless transmission data volume of each (N+1)th-layer device is less than the transmission resource allocated to it;
Figure BDA0002580092580000264
the total transmission resources distributed by each Nth-layer device to all devices connected to it do not exceed the maximum transmission resource amount of the edge server;
Figure BDA0002580092580000265
the unloading proportion of the tasks from the n + 1-layer device i at the n-th layer device j is between 0 and 1;
Figure BDA0002580092580000266
the transmission data volume of the nth layer server j is less than the transmission resource distributed to the nth layer server j;
Figure BDA0002580092580000267
the total transmission resource distributed to all the devices connected by each device in the nth layer does not exceed the maximum transmission resource of the device;
Figure BDA0002580092580000268
the computing resource configuration of each node is not less than the data volume to be computed and does not exceed the maximum computing resource of the node;
Figure BDA0002580092580000271
the computing task amount carried by the cloud computing center is smaller than the upper limit of the computing resource of the cloud computing center;
the equality condition of the Cauchy inequality is:
Figure BDA0002580092580000272
Figure BDA0002580092580000273
Figure BDA0002580092580000274
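The role of the Cauchy equality condition can be checked numerically: minimizing sum_i a_i/theta_i subject to sum_i theta_i = T is lower-bounded by (sum_i sqrt(a_i))^2 / T, with equality exactly when theta_i is proportional to sqrt(a_i). The workloads below are arbitrary example numbers, not values from the filing:

```python
import math

def optimal_allocation(workloads, total):
    """theta_i proportional to sqrt(a_i): the Cauchy equality condition."""
    roots = [math.sqrt(a) for a in workloads]
    scale = total / sum(roots)
    return [scale * r for r in roots]

def total_delay(workloads, thetas):
    """sum_i a_i / theta_i, the quantity the Cauchy inequality lower-bounds."""
    return sum(a / t for a, t in zip(workloads, thetas))

def cauchy_bound(workloads, total):
    """(sum_i sqrt(a_i))**2 / total: the Cauchy lower bound on the delay."""
    return sum(math.sqrt(a) for a in workloads) ** 2 / total
```

With workloads [1, 4, 9] and a budget of 12, the allocation is [2, 4, 6] and the achieved delay meets the bound (3.0 time units); an even split [4, 4, 4] gives 3.5 and is strictly worse.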
fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present invention, and as shown in fig. 5, the present application further provides an electronic device, including:
a processor 51;
a memory 52 having stored thereon a computer program executable on the processor, wherein the computer program, when executed by the processor 51, causes the device to perform the cloud-edge multi-layer collaboration-based latency optimization method in a large-scale edge computing network.
The present application further provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by the processor 51 of an electronic device, enables the electronic device to execute the cloud-edge multi-layer cooperation-based delay optimization method in a large-scale edge computing network.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method, the device, the electronic device and the readable storage medium for time delay optimization based on cloud-edge multi-layer collaboration in the large-scale edge computing network provided by the invention are introduced in detail, specific examples are applied in the text to explain the principle and the implementation of the invention, and the description of the above embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A time delay optimization method based on cloud-edge multilayer cooperation in a large-scale edge computing network is applied to a cloud computing center, the cloud computing center is in communication connection with a plurality of edge servers, and each edge server is in communication connection with one or more edge devices, and the method comprises the following steps:
receiving registration information sent by the plurality of edge servers, wherein the registration information comprises registration information of each edge device corresponding to the computing task and registration information of each edge server;
according to the registration information sent by the edge servers and the resource information of the cloud computing center, an edge computing network is established, a task unloading proportion and a resource allocation strategy for each device in the edge computing network are obtained, and the task unloading proportion and the resource allocation strategy are sent to corresponding devices according to the edge computing network, so that the corresponding devices process original data according to the task unloading proportion and the resource allocation strategy;
receiving a first original data processing result sent by the plurality of edge servers and the original data remaining after processing by the plurality of edge servers, wherein the first original data processing result comprises an original data processing result obtained by the plurality of edge devices processing the original data according to the task unloading proportions and resource allocation strategies corresponding to the edge devices, and an original data processing result obtained by the plurality of edge servers processing the received original data according to the task unloading proportions and resource allocation strategies corresponding to the edge servers;
processing the remaining original data after respective processing of each edge server according to a task unloading proportion and a resource allocation strategy corresponding to the cloud computing center to obtain a second original data processing result, and summarizing the second original data processing result and the first original data processing results respectively sent by the plurality of edge servers to obtain a third original data processing result.
2. The method of claim 1, wherein the cloud computing center is communicatively connected to a user terminal, and before receiving the registration information sent by the edge server, the method further comprises:
receiving the computing task sent by the user terminal, and sending the computing task to a corresponding edge server, or sending the computing task to corresponding edge equipment through the edge server, so that the edge server and the edge equipment which receive the computing task upload registration information to the cloud computing center;
after obtaining a third raw data processing result, the method further includes:
and sending the third original data processing result to the user terminal.
3. The method of claim 1, wherein the registration information of the edge device comprises:
the raw data generation rate, the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number of the edge device;
the registration information of the edge server includes: the upper limit of computing resources, the upper limit of transmission resources, the number of layers, the IP address and the port number of the edge server;
the resource information of the cloud computing center comprises: the upper limit of computing resources, the upper limit of transmission resources, the layer number, and the IP address and port number.
4. The method according to claim 3, wherein the establishing an edge computing network according to the registration information and resource information of the cloud computing center, and obtaining a task offload proportion and a resource allocation policy of each device in the edge computing network comprises:
establishing an edge computing network according to the layer numbers, IP addresses and port numbers of each edge device, each edge server and the cloud computing center;
and calculating to obtain the task unloading proportion and the resource allocation strategy of each device in the edge computing network according to the original data generation rate, the upper limit of computing resources and the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, and the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center in combination with the edge computing network.
5. The method according to claim 4, wherein the step of calculating the task offload proportion and the resource allocation policy of each device in the edge computing network according to the raw data generation rate, the upper limit of computing resources, the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, and the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center, in combination with the edge computing network, comprises:
obtaining the original data of each edge device according to the original data generation rate and the processing time interval of each edge device;
according to the original data, the upper limit of computing resources and the upper limit of transmission resources of each edge device, the upper limit of computing resources and the upper limit of transmission resources of each edge server, the upper limit of computing resources and the upper limit of transmission resources of the cloud computing center, the edge computing network is combined, and the Cauchy inequality is utilized to obtain an equation of system delay L with a task unloading strategy as a target function;
calculating to obtain a plurality of groups of limit condition intersection points according to all linear limit conditions, wherein each group of limit condition intersection points correspond to a task unloading strategy;
respectively substituting each group of limiting condition intersection points into an equation of system delay L with the task unloading strategy as a target function to obtain a plurality of total delays, and determining the task unloading strategy corresponding to the minimum total delay as a final task unloading strategy;
and obtaining a calculation resource allocation strategy and a transmission resource allocation strategy by utilizing the final task unloading strategy according to the equal sign establishment condition of the Cauchy inequality, thereby obtaining the task unloading proportion and the resource allocation strategy.
6. The method of claim 5, wherein obtaining an equation of the system delay L with the task offloading policy as an objective function according to the raw data, the upper limit of the computing resources, the upper limit of the transmission resources of each edge device, the upper limit of the computing resources, the upper limit of the transmission resources of each edge server, the upper limit of the computing resources and the upper limit of the transmission resources of the cloud computing center, and by using the Cauchy inequality in combination with the edge computing network, comprises:
defining a task offload ratio at node i of the nth layer as
Figure FDA0002580092570000038
After the original data lambda is processed, the data volume is reduced to rho·lambda, where rho is the compression ratio, and the sum Ln of the computation time and the transmission time of all nodes at the nth layer in the edge computing network satisfies:
Figure FDA0002580092570000031
wherein the task unloading proportion is s, the computing resource allocation is theta, the transmission resource allocation is phi,
Figure FDA0002580092570000032
represents the computation time of the nth level node i,
Figure FDA0002580092570000033
represents the data amount of the lower-layer calculation results received by the nth-layer node i,
Figure FDA0002580092570000034
represents the data amount of the calculation result of the nth layer node i,
Figure FDA0002580092570000035
representing the amount of raw data to be uploaded at the node,
Figure FDA0002580092570000036
indicating the transmission time of the nth layer node i,
Figure FDA0002580092570000037
represents the set of lower-layer devices connected to the (n-1)th-layer node j, and Mn-1 represents the number of devices in the (n-1)th layer;
using L0 to represent the computing time of the cloud computing center, the equation of the system delay L of an edge computing network including N layers of edge servers is:
Figure FDA0002580092570000041
obtaining, by using the Cauchy inequality in combination with the equation of Ln, the inequality of Ln:
Figure FDA0002580092570000042
wherein
Figure FDA0002580092570000043
represents the maximum computing capability of the nth-layer node i,
Figure FDA0002580092570000044
represents all transmission resources of the (n-1)th-layer node j;
substituting the inequality of Ln into the equation of the system delay L of the edge computing network to obtain the equation of the system delay L with the task unloading strategy as the target function:
Figure FDA0002580092570000045
7. the method of claim 5, wherein the linear constraint is:
Figure FDA0002580092570000046
the unloading ratio of the (N + 1) th layer equipment is between 0 and 1;
Figure FDA0002580092570000047
the wireless transmission data volume of each (N+1)th-layer device is less than the transmission resource allocated to it;
Figure FDA0002580092570000048
the total transmission resource distributed to all the connected equipment by each equipment of the Nth layer does not exceed the maximum transmission resource amount of the edge server;
Figure FDA0002580092570000051
the unloading proportion of the tasks from the n + 1-layer device i at the n-th layer device j is between 0 and 1;
Figure FDA0002580092570000052
the transmission data volume of the nth layer server j is less than the transmission resource distributed to the nth layer server j;
Figure FDA0002580092570000053
the total transmission resource distributed to all the devices connected by each device in the nth layer does not exceed the maximum transmission resource of the device;
Figure FDA0002580092570000054
the computing resource configuration of each node is not less than the data volume to be computed and does not exceed the maximum computing resource of the node;
Figure FDA0002580092570000055
the computing task amount carried by the cloud computing center is smaller than the upper limit of the computing resource of the cloud computing center;
the equality condition of the Cauchy inequality is:
Figure FDA0002580092570000056
Figure FDA0002580092570000057
Figure FDA0002580092570000058
8. a delay optimization device based on cloud-edge multilayer cooperation in a large-scale edge computing network is characterized by comprising:
a first receiving module, configured to receive registration information sent by the plurality of edge servers, where the registration information includes registration information of each edge device corresponding to the computing task and registration information of each edge server;
the distribution module is used for establishing an edge computing network according to the registration information sent by the edge servers and the resource information of the cloud computing center, obtaining a task unloading proportion and a resource distribution strategy for each device in the edge computing network, and sending the task unloading proportion and the resource distribution strategy to corresponding devices according to the edge computing network so that the corresponding devices process original data according to the task unloading proportion and the resource distribution strategy;
the second receiving module is used for receiving a first original data processing result sent by the plurality of edge servers and the original data remaining after processing by the plurality of edge servers, wherein the first original data processing result comprises an original data processing result obtained by the plurality of edge devices processing the original data according to the task unloading proportions and resource allocation strategies corresponding to the edge devices, and an original data processing result obtained by the plurality of edge servers processing the received original data according to the task unloading proportions and resource allocation strategies corresponding to the edge servers;
and the processing module is used for processing the residual original data after the respective processing of each edge server according to the task unloading proportion and the resource allocation strategy corresponding to the cloud computing center to obtain a second original data processing result, and summarizing the second original data processing result and the first original data processing results respectively sent by the plurality of edge servers to obtain a third original data processing result.
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the cloud-edge multi-layer collaboration based latency optimization method in a large-scale edge computing network according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, implements the steps of the cloud-edge multi-layer collaboration-based latency optimization method in a large-scale edge computing network according to any one of claims 1 to 7.
CN202010665235.8A 2020-07-10 2020-07-10 Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network Pending CN111970323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010665235.8A CN111970323A (en) 2020-07-10 2020-07-10 Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010665235.8A CN111970323A (en) 2020-07-10 2020-07-10 Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network

Publications (1)

Publication Number Publication Date
CN111970323A true CN111970323A (en) 2020-11-20

Family

ID=73360344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010665235.8A Pending CN111970323A (en) 2020-07-10 2020-07-10 Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network

Country Status (1)

Country Link
CN (1) CN111970323A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887785A (en) * 2021-01-13 2021-06-01 浙江传媒学院 Remote video overlapping interactive computing method based on time delay optimization
CN113114758A (en) * 2021-04-09 2021-07-13 北京邮电大学 Method and device for scheduling tasks for server-free edge computing
CN113125675A (en) * 2021-04-19 2021-07-16 北京物资学院 Storage yard coal spontaneous combustion early warning device and early warning method of edge computing framework
CN113141394A (en) * 2021-03-25 2021-07-20 北京邮电大学 Resource allocation method and device, electronic equipment and storage medium
CN113179296A (en) * 2021-04-08 2021-07-27 中国科学院计算技术研究所 Task unloading method for vehicle-mounted edge computing system
CN113205241A (en) * 2021-03-25 2021-08-03 广东电网有限责任公司东莞供电局 Monitoring data real-time processing method, non-transient readable recording medium and data processing system
CN113298063A (en) * 2021-07-28 2021-08-24 江苏电力信息技术有限公司 Dynamic object detection method based on cloud-edge
CN113377125A (en) * 2021-05-26 2021-09-10 安徽大学 Unmanned aerial vehicle system for air pollution detection
CN113395679A (en) * 2021-05-25 2021-09-14 安徽大学 Resource and task allocation optimization system of unmanned aerial vehicle edge server
CN113747554A (en) * 2021-08-11 2021-12-03 中标慧安信息技术股份有限公司 Method and device for task scheduling and resource allocation of edge computing network
CN113923781A (en) * 2021-06-25 2022-01-11 国网山东省电力公司青岛供电公司 Wireless network resource allocation method and device for comprehensive energy service station
CN114138453A (en) * 2021-10-18 2022-03-04 中标慧安信息技术股份有限公司 Resource optimization allocation method and system suitable for edge computing environment
CN114241002A (en) * 2021-12-14 2022-03-25 中国电信股份有限公司 Target tracking method, system, device and medium based on cloud edge cooperation
CN114301907A (en) * 2021-11-18 2022-04-08 北京邮电大学 Service processing method, system and device in cloud computing network and electronic equipment
CN114780163A (en) * 2021-01-05 2022-07-22 中国移动通信有限公司研究院 Task processing method and device and electronic equipment
US11405456B2 (en) 2020-12-22 2022-08-02 Red Hat, Inc. Policy-based data placement in an edge environment
CN115002731A (en) * 2021-03-02 2022-09-02 阿里巴巴新加坡控股有限公司 Service providing method, system, device, equipment and storage medium
CN115098115A (en) * 2022-06-17 2022-09-23 西安邮电大学 Edge calculation task unloading method and device, electronic equipment and storage medium
CN117370035A (en) * 2023-12-08 2024-01-09 国网浙江省电力有限公司宁波供电公司 Real-time simulation computing resource dividing system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109240818A (en) * 2018-09-04 2019-01-18 中南大学 User-experience-based task offloading method in an edge computing network
CN109302709A (en) * 2018-09-14 2019-02-01 重庆邮电大学 Internet-of-Vehicles task offloading and resource allocation strategy for mobile edge computing
CN109814951A (en) * 2019-01-22 2019-05-28 南京邮电大学 Joint optimization method for task offloading and resource allocation in a mobile edge computing network
CN110035410A (en) * 2019-03-07 2019-07-19 中南大学 Method and system for joint resource allocation and computation offloading in a software-defined vehicular edge network
CN110489176A (en) * 2019-08-27 2019-11-22 湘潭大学 Bin-packing-based multi-access edge computing task offloading method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MAO Y, YOU C, ZHANG J, et al.: "A survey on mobile edge computing: the communication perspective", IEEE Communications Surveys *
CONG Shuchang, YAO Chao, WANG Pengfei, ZHENG Zijie, SONG Lingyang: "Research on the application of EdgeFlow mobile edge computing in the Internet of Things", Chinese Journal on Internet of Things *
WANG Pengfei, DI Boya, SONG Lingyang, HAN Zhu: "6G heterogeneous edge computing", Chinese Journal on Internet of Things *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11405456B2 (en) 2020-12-22 2022-08-02 Red Hat, Inc. Policy-based data placement in an edge environment
US11611619B2 (en) 2020-12-22 2023-03-21 Red Hat, Inc. Policy-based data placement in an edge environment
CN114780163A (en) * 2021-01-05 2022-07-22 中国移动通信有限公司研究院 Task processing method and device and electronic equipment
CN112887785A (en) * 2021-01-13 2021-06-01 浙江传媒学院 Remote video overlapping interactive computing method based on time delay optimization
CN115002731B (en) * 2021-03-02 2023-08-29 阿里巴巴新加坡控股有限公司 Service providing method, system, device, equipment and storage medium
CN115002731A (en) * 2021-03-02 2022-09-02 阿里巴巴新加坡控股有限公司 Service providing method, system, device, equipment and storage medium
CN113205241A (en) * 2021-03-25 2021-08-03 广东电网有限责任公司东莞供电局 Monitoring data real-time processing method, non-transient readable recording medium and data processing system
CN113141394A (en) * 2021-03-25 2021-07-20 北京邮电大学 Resource allocation method and device, electronic equipment and storage medium
CN113179296A (en) * 2021-04-08 2021-07-27 中国科学院计算技术研究所 Task offloading method for vehicle-mounted edge computing system
CN113179296B (en) * 2021-04-08 2022-10-25 中国科学院计算技术研究所 Task offloading method for vehicle-mounted edge computing system
CN113114758A (en) * 2021-04-09 2021-07-13 北京邮电大学 Method and device for scheduling tasks for serverless edge computing
CN113125675A (en) * 2021-04-19 2021-07-16 北京物资学院 Coal spontaneous combustion early-warning device and method for storage yards based on an edge computing framework
CN113395679A (en) * 2021-05-25 2021-09-14 安徽大学 Resource and task allocation optimization system of unmanned aerial vehicle edge server
CN113377125A (en) * 2021-05-26 2021-09-10 安徽大学 Unmanned aerial vehicle system for air pollution detection
CN113377125B (en) * 2021-05-26 2022-04-22 安徽大学 Unmanned aerial vehicle system for air pollution detection
CN113923781A (en) * 2021-06-25 2022-01-11 国网山东省电力公司青岛供电公司 Wireless network resource allocation method and device for comprehensive energy service station
CN113298063A (en) * 2021-07-28 2021-08-24 江苏电力信息技术有限公司 Dynamic object detection method based on cloud-edge
CN113747554A (en) * 2021-08-11 2021-12-03 中标慧安信息技术股份有限公司 Method and device for task scheduling and resource allocation of edge computing network
CN114138453A (en) * 2021-10-18 2022-03-04 中标慧安信息技术股份有限公司 Resource optimization allocation method and system suitable for edge computing environment
CN114138453B (en) * 2021-10-18 2022-10-28 中标慧安信息技术股份有限公司 Resource optimization allocation method and system suitable for edge computing environment
CN114301907A (en) * 2021-11-18 2022-04-08 北京邮电大学 Service processing method, system and device in cloud computing network and electronic equipment
CN114301907B (en) * 2021-11-18 2023-03-14 北京邮电大学 Service processing method, system and device in cloud computing network and electronic equipment
CN114241002A (en) * 2021-12-14 2022-03-25 中国电信股份有限公司 Target tracking method, system, device and medium based on cloud edge cooperation
CN114241002B (en) * 2021-12-14 2024-02-02 中国电信股份有限公司 Target tracking method, system, equipment and medium based on cloud edge cooperation
CN115098115A (en) * 2022-06-17 2022-09-23 西安邮电大学 Edge computing task offloading method and device, electronic equipment and storage medium
CN117370035A (en) * 2023-12-08 2024-01-09 国网浙江省电力有限公司宁波供电公司 Real-time simulation computing resource dividing system and method
CN117370035B (en) * 2023-12-08 2024-05-07 国网浙江省电力有限公司宁波供电公司 Real-time simulation computing resource dividing system and method

Similar Documents

Publication Publication Date Title
CN111970323A (en) Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network
US9538134B2 (en) Method and system for resource load balancing in a conferencing session
CN102655503B (en) Resource allocation in a shared resource pool
JP7174857B2 (en) COMMUNICATION METHOD, APPARATUS, ELECTRONIC DEVICE AND COMPUTER PROGRAM
CN108848530B (en) Method and device for acquiring network resources and scheduling server
CN108683613B (en) Resource scheduling method, device and computer storage medium
WO2016197628A1 (en) Method of terminal-based conference load-balancing, and device and system utilizing same
US20200233773A1 (en) Methods and systems for status determination
US10342058B2 (en) Observation assisted bandwidth management
CN115208812B (en) Service processing method and device, equipment and computer readable storage medium
US9729347B2 (en) System and method for selection of a conference bridge master server
CN113709200B (en) Method and device for establishing communication connection
CN110830604B (en) DNS scheduling method and device
US7707296B2 (en) Method and apparatus for selecting a media processor to host a conference
CN102209262B (en) Method, device and system for scheduling contents
JP2016519462A5 (en)
CN109413117B (en) Distributed data calculation method, device, server and computer storage medium
CN112787952A (en) Service flow adjusting method and device
CN109302302B (en) Method, system and computer readable storage medium for scaling service network element
US20230246921A1 (en) Enterprise port assignment
CN114266357A (en) Federated learning model construction method and device, central server and client
CN114827781B (en) Network collaboration method, device, equipment and storage medium
WO2018171423A1 (en) Method and apparatus for constructing video multicast virtual network
CN115208861A (en) Video communication network based on value function optimization
KR101870390B1 (en) Flow control method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201120