CN112506656A - Distribution method based on distribution Internet of things computing task - Google Patents
- Publication number
- CN112506656A (application CN202011424440.1A)
- Authority
- CN
- China
- Prior art keywords
- edge
- cloud
- far
- data
- computing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 40
- 238000012545 processing Methods 0.000 claims abstract description 42
- 238000004891 communication Methods 0.000 claims abstract description 26
- 238000005457 optimization Methods 0.000 claims description 6
- 230000010354 integration Effects 0.000 claims description 3
- 230000005540 biological transmission Effects 0.000 description 12
- 238000004088 simulation Methods 0.000 description 11
- 238000004364 calculation method Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 238000013499 data model Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 230000008092 positive effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/502—Proximity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
Abstract
The invention relates to a method for allocating computing tasks based on a power distribution Internet of Things, which comprises: establishing an edge-cloud cooperative processing system; setting power line channel parameters, generating data to be processed at the user side, and transmitting the data to be processed to the remote core equipment end; the edge end and the cloud end respectively acquiring their computing capacity and the channel capacity of the corresponding communication channel, and returning the acquired information to the remote core equipment end; the remote core equipment end formulating an allocation scheme according to the size of the data to be processed and the information returned by the cloud end and the edge end, and transmitting the scheme to the cloud end and the edge end for implementation; and, after processing is finished, the data being returned and aggregated at the remote core equipment end, integrated there, and finally returned to the user end. The invention avoids wasting idle edge and cloud computing power, effectively combines the computing power of the remote core equipment end, the edge end and the cloud end, reduces the data processing delay, and improves the user's quality of experience.
Description
Technical Field
The disclosure relates to the technical field of wireless communication, and in particular to a method for allocating computing tasks based on a power distribution Internet of Things.
Background
The power distribution Internet of Things is a novel form of power network operation, and most current deployments adopt an overall cloud-pipe-edge-end architecture. In recent years, with the continuous progress of related technologies, power distribution networks have developed steadily, but their construction still faces many challenges. User experience quality is degraded by a number of factors, such as the protocols and transmission methods used by the distribution network.
Meanwhile, with the continuous development of communication and computer network technologies, the volume of data to be processed has grown explosively, and new requirements have emerged for the timeliness and security of data transmission; a purely cloud-centric computing model no longer meets these needs. Edge computing and cloud computing each have advantages and disadvantages: cloud computing is better suited to global, non-real-time, long-horizon big-data analysis and processing, while edge computing is better suited to localized, real-time data processing and analysis. A cloud-edge cooperative processing mode can therefore better satisfy diverse data processing requirements. In this mode, to address the limited processing capacity and resources of terminal devices, the concept of computation offloading was introduced in Mobile Edge Computing (MEC) and Mobile Cloud Computing (MCC).
Computation offloading can largely solve the problem of processing a user's massive data: by distributing tasks to the edge end and the cloud end, which have strong computing capacity, the data processing time of a task is shortened and the user experience is improved.
Using computation offloading raises the problem of task data distribution: the data a user needs processed can be offloaded to the edge end and the cloud end for computation, or processed on the user's own equipment, so the question of how to distribute the data to minimize processing delay must be solved.
Disclosure of Invention
The present disclosure addresses one or more of the above problems by providing a method for allocating computing tasks based on the power distribution Internet of Things.
According to one aspect of the disclosure, a method for allocating computing tasks based on the power distribution Internet of Things is provided. Step one, an edge-cloud cooperative processing system is established, the processing system comprising a cloud end, an edge end and a far-end core equipment end;
The cloud end is in wireless communication with the edge end, the edge end is in wireless communication with the far-end core equipment end, the edge end comprises a plurality of edge nodes, the far-end core equipment end comprises a plurality of far-end core equipment, each edge node is connected with the plurality of far-end core equipment, and each far-end core equipment is connected with a plurality of wired user ends through a bus;
step two, the user side sets power line channel parameters, generates data to be processed, and transmits the data to be processed to the far-end core equipment end;
step three, the edge terminal obtains edge computing capability and channel capacity information of a far-end communication channel, and sends the acquired information to the far-end core equipment terminal;
step four, the cloud end acquires the cloud computing capacity and the channel capacity information of the local communication channel, and transmits the acquired information back to the far-end core equipment end;
step five, the data to be processed is transmitted to the far-end core equipment end; the far-end core equipment end formulates an allocation scheme according to the size of the data to be processed and the information returned by the cloud end and the edge end, and transmits the scheme to the cloud end and the edge end for implementation;
and step six, after the data to be processed has been processed, the data is returned and aggregated at the far-end core equipment end, integrated by the far-end core equipment end, and finally returned to the user end.
In some embodiments, the allocation scheme comprises:
setting the proportion λ1 of the data to be processed that is allocated to the far-end core equipment end;
setting the proportion λ2 of the data to be processed that is allocated to the edge end;
setting the proportion λ3 of the data to be processed that is allocated to the cloud end;
The three proportions satisfy λ1 + λ2 + λ3 = 1. By analyzing the data processing procedure, we can decompose the delay into several parts: the end-bus transmission delay t0, the far-end core equipment computation delay t1, the edge offload transmission delay t2, the edge computation delay t3, the cloud offload transmission delay t4, and the cloud computation delay t5. The total delay can therefore be expressed as T = t0 + max{t1, t2 + t3, t2 + t4 + t5}, and solving the convex optimization problem min T yields the allocation proportions with the minimum delay,
where D (bit) is the task data size, fd (bit/s) the computing power of the far-end core equipment, fe (bit/s) the computing power of an edge node, fc (bit/s) the cloud computing power, r (bit/s) the wireless channel capacity of local communication, R (bit/s) the channel capacity of remote communication, and W (bit/s) the transmission rate of the end bus.
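The total-delay expression above can be sanity-checked with a short sketch. The per-term delay formulas are not reproduced in this text (they appear only as images in the original), so the expressions below — t0 = D/W for the end bus and proportional transmission and computation terms for the edge and cloud paths — are plausible assumptions consistent with T = t0 + max{t1, t2 + t3, t2 + t4 + t5}; all numeric parameters are illustrative, not taken from the patent.

```python
def total_delay(lam, D, fd, fe, fc, r, R, W):
    """Total delay T = t0 + max{t1, t2 + t3, t2 + t4 + t5}.

    The per-term expressions are an assumed reconstruction (the
    original gives them only as images). Rates are in bit/s, D in bits.
    """
    l1, l2, l3 = lam
    t0 = D / W                # end-bus transmission of the raw data
    t1 = l1 * D / fd          # local computation at the far-end core equipment
    t2 = (l2 + l3) * D / r    # offload transmission to the edge node
    t3 = l2 * D / fe          # computation at the edge node
    t4 = l3 * D / R           # offload transmission from edge to cloud
    t5 = l3 * D / fc          # computation in the cloud
    return t0 + max(t1, t2 + t3, t2 + t4 + t5)

# illustrative parameters: a 10 Mbit task with assumed capacities
T_local = total_delay((1.0, 0.0, 0.0), D=1e7, fd=1e6, fe=5e6, fc=2e7, r=2e6, R=1e7, W=1e7)
T_edge = total_delay((0.0, 1.0, 0.0), D=1e7, fd=1e6, fe=5e6, fc=2e7, r=2e6, R=1e7, W=1e7)
```

With these illustrative numbers, processing everything locally gives T = 11 s while offloading everything to the edge gives T = 8 s, so even a naive full offload already helps; the optimal split does better still.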
In some embodiments, in step five, transmitting the allocation scheme to the cloud end and the edge end for implementation further includes:
the cloud end, the edge end and the far-end core equipment end each implementing the formulated allocation scheme: according to the allocation proportions λ1, λ2 and λ3, each obtains the size of its allocated data and allocates the corresponding cloud, edge, or far-end core equipment computing capacity, and the far-end core equipment end keeps the data of the corresponding proportion for local computation.
In some embodiments, the cloud comprises a cloud server, the edge comprises a plurality of edge nodes, each edge node comprises a mobile edge computing server, and a plurality of remote core devices exist under each edge node.
The beneficial effects of this disclosure are: against the background of the power distribution Internet of Things, the proposed allocation method minimizes the overall user delay, avoids wasting idle edge and cloud computing power, effectively combines the computing power of the far-end core equipment end, the edge end and the cloud end, reduces the data processing delay, and improves the user's quality of experience. The worst communication case is also considered: in remote areas where wireless transmission is not available, the optimization method can still reduce delay over power line channel transmission.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a structural diagram of a distribution method of computing tasks based on a distribution internet of things according to the present disclosure;
fig. 2 is a flowchart of the distribution-Internet-of-Things-based computing task allocation method of the present disclosure;
fig. 3 is a simulation diagram of an embodiment of the distribution internet of things-based computing task distribution method of the present disclosure;
fig. 4 is a simulation diagram of an embodiment of the distribution internet of things-based computing task distribution method of the present disclosure;
fig. 5 is a simulation diagram of an embodiment of a distribution internet of things-based computing task allocation method according to the present disclosure;
fig. 6 is a simulation diagram of an embodiment of a distribution internet of things-based computing task allocation method according to the present disclosure;
fig. 7 is a simulation diagram of an embodiment of a distribution internet of things-based computing task distribution method according to the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The present disclosure is described in further detail below with reference to the attached drawing figures.
Example 1:
referring to the accompanying drawings 1-2 in the specification, a distribution method of computing tasks based on a power distribution internet of things is shown, and the distribution method can comprise the following steps:
step one, establishing a side cloud cooperative processing system, wherein the processing system comprises a cloud end, an edge end and a far-end core device end, the cloud end is in wireless communication with the edge end, the edge end is in wireless communication with the far-end core device end, the edge end comprises a plurality of edge nodes, the far-end core device end comprises a plurality of far-end core devices, each edge node is connected with the plurality of far-end core devices, and each far-end core device is connected with a plurality of wired user ends through a bus;
step two, the user side sets the power line channel parameters, generates the data to be processed, and transmits it to the far-end core equipment end. The transmission channel can be chosen freely; only the transmission rate parameter needs to be changed when the optimal scheme is computed later.
The edge end obtains the edge computing capacity and the channel capacity of the remote communication channel, and sends the collected information to the far-end core equipment end. The remote communication channel may be a wireless public network, optical fiber, Ethernet, or the like; part of the collected information changes over time, so the optimized allocation scheme needs to be updated periodically.
And step three, the cloud end acquires the cloud computing capacity and the channel capacity of the local communication channel, and transmits the acquired information back to the far-end core equipment end. The data acquired by the cloud end, like that acquired by the edge end, is updated periodically and returned to the far-end core equipment end at regular intervals.
And step four, the data to be processed is transmitted to the far-end core equipment end; the far-end core equipment end formulates an allocation scheme according to the data size and the information returned by the cloud end and the edge end, and transmits the scheme to the cloud end and the edge end.
As a preferred scheme of this embodiment, a specific allocation scheme is formulated as follows:
let λ be the data size allocated for processing at the local server. Lambda [ alpha ]1Indicating the proportion of processed data, λ, allocated at the remote core device side2Indicating the proportion of processed data, λ, allocated at the edge end node3Indicating the proportion of data to be processed that is distributed in the cloud. The proportion of the three components satisfies lambda1+λ2+λ 31. By analyzing the data processing process, the delay can be expressed as several parts, including end-user bus transmission delayRemote core device computation time delayEdge offload transport latencyEdge computation time delayCloud offload transport latencyCloud computing delaySo that the total delay can be expressed as T ═ T0+max{t1,t2+t3,t2+t4+t5And solving a convex optimization problem minT to obtain a distribution proportion with the minimum time delay.
The method specifically comprises the following steps: when lambda is1,λ2,λ3The overall user delay will be minimized when the following values are taken, respectively.
WhereinD (bit) is the task processing data size, fd(bit/s) is the remote core computing power, fe(bit/s) is the edge node computation power, fc(bit/s) is cloud computing power, R (bit/s) is wireless channel capacity of local communication, and R (bit/s) is channel capacity of remote communication.
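The optimal proportions can be approximated numerically. The sketch below performs a grid search for min T over the simplex λ1 + λ2 + λ3 = 1; the per-term delay formulas are assumed (the original gives them only as images), and all parameter values are illustrative rather than taken from the patent.

```python
from itertools import product

def total_delay(l1, l2, l3, D, fd, fe, fc, r, R, W):
    # Assumed per-term delay model consistent with
    # T = t0 + max{t1, t2 + t3, t2 + t4 + t5}.
    t0 = D / W                # end-bus transmission
    t1 = l1 * D / fd          # local computation
    t2 = (l2 + l3) * D / r    # offload transmission to the edge
    t3 = l2 * D / fe          # edge computation
    t4 = l3 * D / R           # edge-to-cloud transmission
    t5 = l3 * D / fc          # cloud computation
    return t0 + max(t1, t2 + t3, t2 + t4 + t5)

def best_split(D, fd, fe, fc, r, R, W, steps=100):
    """Grid search for min T over the simplex l1 + l2 + l3 = 1."""
    best_T, best_lam = float("inf"), (1.0, 0.0, 0.0)
    for i, j in product(range(steps + 1), repeat=2):
        l1, l2 = i / steps, j / steps
        l3 = 1.0 - l1 - l2
        if l3 < -1e-12:
            continue            # outside the simplex
        l3 = max(l3, 0.0)
        T = total_delay(l1, l2, l3, D, fd, fe, fc, r, R, W)
        if T < best_T:
            best_T, best_lam = T, (l1, l2, l3)
    return best_T, best_lam

# illustrative parameters (bits and bit/s)
params = dict(D=1e7, fd=1e6, fe=5e6, fc=2e7, r=2e6, R=1e7, W=1e7)
T_opt, (l1, l2, l3) = best_split(**params)
```

Because min T is a convex problem, the closed-form optimum given in the patent and this coarse grid search agree up to the grid resolution; the search is only a way to check a candidate split numerically.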
And fifthly, the cloud end, the edge end and the far-end core equipment end implement the formulated scheme: each obtains the size of its allocated data and allocates the corresponding cloud, edge, or far-end core equipment computing capacity, and the far-end core equipment end keeps the data of the corresponding proportion for local computation.
And step six, after the data processing is finished, the data is returned and aggregated at the far-end core equipment end, integrated by the far-end core equipment, and finally returned to the end user.
The advantages and positive effects of the invention are: against the background of the power distribution Internet of Things, the proposed allocation method minimizes the overall user delay, avoids wasting idle edge and cloud computing power, effectively combines the computing power of the far-end core equipment end, the edge end and the cloud end, reduces the data processing delay, and improves the user's quality of experience. The worst communication case is also considered: in remote areas where wireless transmission is not available, the optimization method can still reduce delay over power line channel transmission.
To make the objects, technical solutions and advantages of the invention clearer, the invention is further described in detail below in combination with MATLAB simulations. It should be understood that the simulations described here merely illustrate the invention and are not intended to limit it.
As a preferred aspect of this embodiment, MATLAB is used to perform simulations to verify the effectiveness of the computing task allocation method given above. First, assume the number of edge nodes is n = 5, the number of far-end core equipment under each edge node is k = 4, and the number of end users per far-end core equipment is m = 5. Second, assume all devices have the same priority and the time slots are divided equally. Also assume that no far-end core equipment appears within the range of another edge node, so computation offloading is performed only through its own edge node.
The parameters are set as follows:
specifically, the matlab is used for simulating by taking the number of edge nodes and the number of core devices under each edge node as independent variables, and observing and comparing the relationship between the number of edge nodes and the average delay of the system, wherein the simulation results are shown in fig. 3 and 4.
For convenience of the following explanation, the meaning of each curve in the figures is given first. As shown in figs. 3-4, TIME1 denotes the proposed allocation scheme; TIME2 denotes a scheme without computation offloading, processing only at the far-end core equipment; TIME3 denotes a scheme offloading all data to the edge nodes for processing; TIME4 denotes a scheme offloading all data to the cloud for processing; TIME5 denotes fixed allocation proportions for the core equipment end, the edge end and the cloud end of 0.1, 0.3 and 0.6 respectively; TIME6 denotes another set of fixed allocation proportions for the core equipment end, the edge end and the cloud end: 0.4, 0.3. The figures show that the proposed method always maintains the minimum delay among the six schemes.
As shown in FIG. 3, the simulation shows that the total system delay of TIME2 and TIME3 does not change as the number of edge nodes increases. For TIME2, which processes only at the far-end core equipment, increasing the number of edge nodes increases the number of far-end core equipment in equal proportion, and since each device processes its own data locally, the delay is unchanged. For TIME3, which processes only at the edge, the increased number of edge nodes raises the amount of equipment to be served, but the edge computing power grows proportionally, so the total delay is likewise unchanged. TIME4 computes only in the cloud, so its total delay clearly increases as the data volume grows. The delays of the other two schemes also grow with the number of nodes, though no longer linearly.
As shown in fig. 4, the independent variable is changed to the number of far-end core equipment. In this case only TIME2, which computes locally at the far-end core equipment, keeps the total delay constant, because the added tasks are directly proportional to the added core equipment computing power. The other allocation schemes increase in different forms as the number of far-end core equipment grows, and the proposed scheme again maintains the minimum delay.
As shown in figs. 5 to 7, simulation plots are generated in MATLAB with the computing capacity of the edge equipment, the cloud equipment and the far-end core equipment respectively as the independent variable, showing its relationship to the total average system delay.
As shown in fig. 5, as the cloud computing capacity increases, only the system delay of schemes TIME2 and TIME3 does not change: neither scheme allocates any computation to the cloud, so the cloud computing capacity does not affect their delay. As shown in fig. 6, only TIME2 and TIME4 remain unchanged, since they use only the far-end core equipment and the cloud respectively, making their delay independent of the edge computing capacity. Fig. 7 follows the same pattern as fig. 6 and is not described again. In summary, as shown in figs. 5-7, apart from the schemes whose delay is constant, the delay of each scheme decreases as the corresponding computing capacity increases. As shown in figs. 3-7, the proposed allocation scheme keeps the delay at the minimum throughout, which demonstrates the effectiveness of the method.
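The qualitative comparison above can be reproduced in miniature. The sketch below evaluates the fully specified fixed schemes (TIME2-TIME5) under an assumed delay model with illustrative parameters (neither the model's per-term formulas nor the numbers are from the patent), and checks that an optimized split, found by grid search as a stand-in for TIME1, is never worse; TIME6 is omitted because its proportions are not fully specified above.

```python
def T(l1, l2, l3, D=1e7, fd=1e6, fe=5e6, fc=2e7, r=2e6, R=1e7, W=1e7):
    # Assumed delay model with illustrative parameters (bits and bit/s):
    # T = t0 + max{local path, edge path, cloud path}.
    return D / W + max(
        l1 * D / fd,                                   # local computation
        (l2 + l3) * D / r + l2 * D / fe,               # edge path
        (l2 + l3) * D / r + l3 * D / R + l3 * D / fc,  # cloud path
    )

fixed = {
    "TIME2 (core equipment only)": T(1.0, 0.0, 0.0),
    "TIME3 (edge only)":           T(0.0, 1.0, 0.0),
    "TIME4 (cloud only)":          T(0.0, 0.0, 1.0),
    "TIME5 (0.1/0.3/0.6)":         T(0.1, 0.3, 0.6),
}

# TIME1 stand-in: optimized split via a coarse grid search over the simplex
grid = [(i / 100, j / 100, 1.0 - i / 100 - j / 100)
        for i in range(101) for j in range(101 - i)]
T1 = min(T(*lam) for lam in grid)
```

Under these assumptions the ordering matches the figures' qualitative finding: the optimized split is at least as good as every fixed scheme, and full offload to the edge beats purely local processing.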
The beneficial effects of the invention are: against the background of the power distribution Internet of Things, the proposed allocation method minimizes the overall user delay, avoids wasting idle edge and cloud computing power, effectively combines the computing power of the far-end core equipment end, the edge end and the cloud end, reduces the data processing delay, and improves the user's quality of experience. The worst communication case is also considered: in remote areas where wireless transmission is not available, the optimization method can still reduce delay over power line channel transmission.
The above are only some of the embodiments of the present disclosure. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the inventive concept of the disclosure, and such changes fall within its protection scope.
Claims (4)
1. The distribution method of the computing tasks based on the power distribution Internet of things is characterized by comprising the following steps:
step one, establishing a side cloud cooperative processing system, wherein the processing system comprises a cloud end, an edge end and a far-end core device end, the cloud end is in wireless communication with the edge end, the edge end is in wireless communication with the far-end core device end, the edge end comprises a plurality of edge nodes, the far-end core device end comprises a plurality of far-end core devices, each edge node is connected with the plurality of far-end core devices, and each far-end core device is connected with a plurality of wired user ends through a bus;
secondly, setting power line channel parameters by the user side, generating data to be processed by the user side, and transmitting the data to be processed to the far-end core equipment end;
step three, the edge terminal obtains edge computing capability and channel capacity information of a far-end communication channel, and sends the acquired information to the far-end core equipment terminal;
fourthly, the cloud acquires the cloud computing capacity and the channel capacity information of a local communication channel, and transmits the acquired information back to the far-end core equipment end;
step five, the data to be processed is transmitted to the far-end core equipment end, the far-end core equipment end formulates a distribution scheme according to the size of the data to be processed and information returned by the cloud end and the edge end, and the distribution scheme is interactively transmitted to the cloud end and the edge end to be implemented:
and step six, after the data to be processed is processed, returning and gathering the data to a remote core equipment end, performing integration processing by the remote core equipment end, and finally returning the data to the user end.
2. The distribution method of computing tasks based on the distribution internet of things as claimed in claim 1, wherein the distribution scheme comprises:
setting the ratio lambda of the processing data distributed at the far-end core equipment end1;
Setting a ratio lambda of processing data allocated at an edge terminal2;
Setting proportion lambda of data needing to be processed and distributed in cloud3;
according to λ1 + λ2 + λ3 = 1, the proportion values at which the overall user delay is minimum are obtained by solving a convex optimization problem.
3. The distribution method of computing tasks based on the distribution internet of things as claimed in claim 2, wherein in step five, the distribution scheme is interactively transmitted to a cloud end and an edge end for implementation, and further comprising:
the cloud end, the edge end and the far-end core equipment end each implement the formulated allocation scheme: according to the allocation proportions λ1, λ2 and λ3, each obtains the size of its allocated data and allocates the corresponding cloud, edge, or far-end core equipment computing capacity, and the far-end core equipment end keeps the data of the corresponding proportion for local computation.
4. The method for distributing computing tasks based on the internet of things for power distribution according to claim 1, wherein the cloud comprises a cloud server, the edge comprises a plurality of edge nodes, each edge node comprises a mobile edge computing server, and a plurality of remote core devices are connected to each edge node in a wireless communication manner.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011424440.1A CN112506656A (en) | 2020-12-08 | 2020-12-08 | Distribution method based on distribution Internet of things computing task |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011424440.1A CN112506656A (en) | 2020-12-08 | 2020-12-08 | Distribution method based on distribution Internet of things computing task |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112506656A true CN112506656A (en) | 2021-03-16 |
Family
ID=74971483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011424440.1A Pending CN112506656A (en) | 2020-12-08 | 2020-12-08 | Distribution method based on distribution Internet of things computing task |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112506656A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017099548A1 (en) * | 2015-12-11 | 2017-06-15 | Lg Electronics Inc. | Method and apparatus for indicating an offloading data size and time duration in a wireless communication system |
US20180263039A1 (en) * | 2017-03-08 | 2018-09-13 | Zte Corporation | Traffic path change detection mechanism for mobile edge computing |
CN109684075A (en) * | 2018-11-28 | 2019-04-26 | 深圳供电局有限公司 | A method of calculating task unloading is carried out based on edge calculations and cloud computing collaboration |
CN110958612A (en) * | 2019-10-24 | 2020-04-03 | 浙江工业大学 | Edge calculation unloading period minimization method under multi-user scene |
CN111585916A (en) * | 2019-12-26 | 2020-08-25 | 国网辽宁省电力有限公司电力科学研究院 | LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation |
CN111884696A (en) * | 2020-07-01 | 2020-11-03 | 广州大学 | Relay cooperation mobile edge calculation method based on multiple carriers |
CN111913723A (en) * | 2020-06-15 | 2020-11-10 | 合肥工业大学 | Cloud-edge-end cooperative unloading method and system based on assembly line |
CN111954236A (en) * | 2020-07-27 | 2020-11-17 | 河海大学 | Hierarchical edge calculation unloading method based on priority |
Non-Patent Citations (3)
Title |
---|
JINKE REN: "Collaborative Cloud and Edge Computing for Latency Minimization", IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY *
ZHOU Hao; WAN Wanggen: "Task scheduling strategy for edge computing systems", Electronic Measurement Technology, no. 09 *
ZHANG Haibo; JING Kunlun; LIU Kaijian; HE Xiaofan: "An offloading strategy based on software-defined networking and mobile edge computing in the Internet of Vehicles", Journal of Electronics & Information Technology, no. 03 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113055482A (en) * | 2021-03-17 | 2021-06-29 | 山东通维信息工程有限公司 | Intelligent cloud box equipment based on edge computing |
CN114449507A (en) * | 2022-02-16 | 2022-05-06 | 中国神华能源股份有限公司神朔铁路分公司 | Rail transit emergency communication system |
CN114449507B (en) * | 2022-02-16 | 2023-10-27 | 中国神华能源股份有限公司神朔铁路分公司 | Emergency communication system for rail transit |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107566194B (en) | Method for realizing cross-domain virtual network mapping | |
CN109684075B (en) | Method for unloading computing tasks based on edge computing and cloud computing cooperation | |
Liwang et al. | A truthful reverse-auction mechanism for computation offloading in cloud-enabled vehicular network | |
Nguyen et al. | Cooperative task offloading and block mining in blockchain-based edge computing with multi-agent deep reinforcement learning | |
CN110087318A (en) | Task unloading and resource allocation joint optimization method based on the mobile edge calculations of 5G | |
Guan et al. | Effective data communication based on social community in social opportunistic networks | |
CN111556516B (en) | Distributed wireless network task cooperative distribution method facing delay and energy efficiency sensitive service | |
CN112506656A (en) | Distribution method based on distribution Internet of things computing task | |
CN111182570A (en) | User association and edge computing unloading method for improving utility of operator | |
Kakhbod et al. | An efficient game form for unicast service provisioning | |
CN112650581A (en) | Cloud-side cooperative task scheduling method for intelligent building | |
CN102281290A (en) | Emulation system and method for a PaaS (Platform-as-a-service) cloud platform | |
CN112003660B (en) | Dimension measurement method of resources in network, calculation force scheduling method and storage medium | |
CN111107566A (en) | Unloading method based on collaborative content caching in power Internet of things scene | |
CN115629865B (en) | Deep learning inference task scheduling method based on edge calculation | |
Ahmed et al. | A stackelberg game-based dynamic resource allocation in edge federated 5g network | |
Lu et al. | Truthful multi-resource transaction mechanism for P2P task offloading based on edge computing | |
CN115802389A (en) | Federal learning method for training by utilizing digital twin auxiliary model | |
Zhang et al. | Cellular traffic offloading via link prediction in opportunistic networks | |
CN109089266B (en) | Multi-channel dynamic spectrum allocation method for preventing Sybil attack and computer program | |
CN103532759B (en) | The acceptance controlling method of the aggregated flow of cloud service-oriented | |
CN103581329A (en) | Construction method for topological structure based on clustered peer-to-peer network streaming media direct broadcast system | |
Li et al. | A dynamic game model for resource allocation in fog computing for ubiquitous smart grid | |
CN110971707B (en) | Distributed service caching method in mobile edge network | |
Liu et al. | Scalable traffic management for mobile cloud services in 5G networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||