CN110928691B - Traffic data-oriented edge collaborative computing unloading method - Google Patents


Info

Publication number
CN110928691B
Authority
CN
China
Prior art keywords
task
computing
time delay
calculation
edge
Prior art date
Legal status
Active
Application number
CN201911365517.XA
Other languages
Chinese (zh)
Other versions
CN110928691A (en)
Inventor
何琦
刘建圻
尹秀文
辛苗
何威
赵静
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201911365517.XA priority Critical patent/CN110928691B/en
Publication of CN110928691A publication Critical patent/CN110928691A/en
Application granted granted Critical
Publication of CN110928691B publication Critical patent/CN110928691B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 - Partitioning or combining of resources
    • G06F9/5072 - Grid computing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 - Techniques for rebalancing the load in a distributed system
    • G06F9/5088 - Techniques for rebalancing the load in a distributed system involving task migration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 - Indexing scheme relating to G06F9/00
    • G06F2209/50 - Indexing scheme relating to G06F9/50
    • G06F2209/502 - Proximity
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 - Indexing scheme relating to G06F9/00
    • G06F2209/50 - Indexing scheme relating to G06F9/50
    • G06F2209/508 - Monitor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 - Indexing scheme relating to G06F9/00
    • G06F2209/50 - Indexing scheme relating to G06F9/50
    • G06F2209/509 - Offload

Abstract

In order to solve the problem that prior-art computation offloading algorithms are not suitable for the intelligent transportation field, the invention provides an end-edge collaborative computation offloading method for traffic data. The method first establishes system models for communication delay and computation delay; then establishes a system utility function with delay and resources as constraint conditions; and finally designs an optimization strategy to balance the two. In the invention, computing resources and delay are considered jointly during the computation offloading process, and an optimization function is designed to strike a balance between them. First, a total-delay system model is established from the communication-time system model, the edge-side computation-time system model, and the local computation-time system model; next, a system utility function is designed with the balance of resource allocation taken into account; finally, a system optimization strategy is studied to complete the design of the computation offloading algorithm. Tailored to the characteristics of intelligent transportation applications, the invention accounts for both resource allocation and delay in its design, and its performance is superior to that of conventional algorithms.

Description

Traffic data-oriented edge collaborative computing unloading method
Technical Field
The invention relates to the technical field of Internet of Things task offloading, and in particular to an end-edge collaborative computation offloading method for traffic data.
Background
With the continuous development of intelligent transportation technology, the data generated by traffic sensing devices and control systems keeps growing, and so does the demand on data processing capacity. Cloud computing is a natural answer to such demand: a centralized cloud server provides very strong computing power and abundant computing resources. In an intelligent transportation system, however, the tasks to be processed are computation-intensive and delay-sensitive, and the delay requirements are stricter; edge computing therefore has great potential, because placing computing power close to the front-end devices largely eliminates transmission delay. It is precisely this growing demand that has driven the development of mobile/multi-access edge computing (MEC). With the rise of 5G, mobile/multi-access edge computing allows applications, services, and content to be deployed locally, in proximity, and in a distributed manner, meeting the service requirements of 5G networks in scenarios such as hotspot high capacity, low power consumption with massive connections, and low latency with high reliability. In the field of traffic control, a large number of AI algorithms are applied to image or video analysis; these algorithms place high demands on GPUs and other computing resources, so a computation offloading algorithm that minimizes delay needs to be studied.
Computation offloading is one of the key technologies of edge computing: resource-constrained front-end sensing devices offload computation-intensive tasks, fully or partially, to edge computing nodes with sufficient resources, mainly compensating for the front-end devices' shortcomings in storage, computing performance, and energy efficiency. With computation offloading, data analysis (image or video analysis) can be performed directly on an edge computing node close to the camera, which both relieves the pressure on the core network and reduces the transmission delay. Computation offloading mainly involves two problems: the offloading decision and resource allocation. On the offloading-decision side, to reduce delay, Mao et al. proposed a Lyapunov-optimization-based dynamic computation offloading (LODCO) algorithm, and their experiments show that it shortens the running time by 64%. Zhang et al. proposed an optimal offloading scheme aimed at reducing delay, presenting a layered mobile edge computing deployment architecture and solving the multi-user offloading problem with a Stackelberg game-theoretic method. Resource allocation can be divided into single-node and multi-node allocation: if a computing task is indivisible, or divisible but with dependencies among its parts, the task must be offloaded to a single edge computing node; if it is divisible and its parts are independent, it can be offloaded to multiple edge computing nodes. Wang et al. proposed an interference management scheme that, by allocating communication resources and designing the computing-resource allocation under minimized interference, reduces the delay by 40%. At present there are many research results on computation offloading, but computation offloading schemes for intelligent traffic management are still few.
More importantly, for traffic control the total delay should be as small as possible; however, because the computing resources on edge computing nodes are limited, offloading multiple tasks to the same edge computing node unbalances the communication or computation load of the whole system.
In short, existing research on computation offloading focuses on how to reduce delay while neglecting the limitation of resources, which makes such algorithms unsuitable for the intelligent transportation field.
Disclosure of Invention
The invention provides an end-edge collaborative computation offloading method for traffic data, aiming to solve the problem that prior-art computation offloading algorithms are not suitable for the intelligent transportation field.
The technical solution adopted by the invention to solve this technical problem is as follows: first, system models are established for communication delay and computation delay; then, a system utility function is established with delay and resources as constraint conditions; finally, an optimization strategy is designed to balance the two.
The traffic-data-oriented end-edge collaborative computation offloading method comprises the following steps:
1) constructing the communication-delay and computation-delay models:
101) defining the relevant parameters in preparation for the calculation, specifically: the urban space is divided into m regions according to the proximity of roads, one edge computing node is deployed in each region, and the edge computing node set is defined as M = {1, …, m};
assuming that there are n camera devices requesting the computing service, each computing task is represented as
D_i = {d_i, c_i, T_i^max}
where d_i denotes the input data size of the i-th computing task, c_i denotes the number of computing resources required by the i-th task, and T_i^max denotes the maximum delay allowed for task completion;
assuming that each task can select one edge computing node for execution, the edge-node selection policy variable x_ij = 1 indicates that task D_i is executed on the j-th edge computing server, and x_ij = 0 indicates that the task is not executed on the j-th edge computing server;
102) constructing a communication time delay calculation model;
2) balancing delay and resources:
The system utility function is adopted to strike a balance between resources and delay: U_i(x_ij, f_ij) = α·log(1 + β - T_i), where α is a satisfaction parameter (the larger α, the higher the satisfaction) and β is a parameter used to normalize the satisfaction to a non-negative value; the utility function of the entire system can be written as:
U(x, f) = Σ_{i=1}^{n} U_i(x_ij, f_ij)
where T_i is the total task-processing delay;
Optimizing the policy function: define x = {x_ij} as the edge-node selection policy vector and f = {f_ij} as the computing-resource vector; the system optimization function is then:
max_{x,f} Σ_{i=1}^{n} U_i(x_ij, f_ij)
s.t. ① T_i ≤ T_i^max, ∀i
② x_ij ∈ {0, 1}, ∀i, ∀j ∈ {0, 1, …, m}
③ Σ_{j=0}^{m} x_ij = 1, ∀i
④ f_ij ≥ 0, ∀i, ∀j
⑤ Σ_{i=1}^{n} x_ij·f_ij ≤ F_j, ∀j ∈ {1, …, m}
Condition ① ensures that the task-processing time cannot exceed the allowed maximum delay; conditions ② and ③ ensure that each task runs on exactly one edge computing node, or on the camera when it is not offloaded; conditions ④ and ⑤ restrict the sum of the computing resources required by all tasks on the same edge computing node from exceeding that node's total computing resources.
The step 102) of constructing the communication delay calculation model specifically comprises the following steps:
1021) for wireless communication, assume that the i-th camera device requests a computation task from the j-th edge computing node, with h_ij denoting the channel gain and p_ij the transmission power; the achievable rate can then be expressed as:
r_ij = B·log2(1 + p_ij·h_ij / P_noise)
where P_noise denotes the noise power and B the system bandwidth; the communication time is:
T_ij^com = d_i / r_ij
1022) for wired communication: assume that the i-th camera device requests a computation task from the j-th edge computing node and that the time for the camera device to send and receive the packet is T_wired; the communication time is then: T_ij^com = T_wired / 2;
1023) computation-time system model: assume that the total computing resource of the j-th edge computing node is denoted F_j and that f_ij denotes the computing resource allocated to the i-th task; the computing-resource allocation must then satisfy the constraint
Σ_{i=1}^{n} x_ij·f_ij ≤ F_j
and the computation time of the i-th task at the j-th edge node is:
T_ij^comp = c_i / f_ij
1024) computation offloading time: the total time required to offload task D_i to the j-th edge computing node is:
T_ij^off = T_ij^com + T_ij^comp
1025) local computation-time system model: if the task is executed in the camera device without computation offloading, the total time is the time for which the computing task is processed locally:
T_i^local = c_i / f_i
where f_i denotes the computing resource of the camera device;
1026) total delay calculation: the total task-processing delay is:
T_i = x_i0·T_i^local + Σ_{j=1}^{m} x_ij·T_ij^off
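For reference, the notation above can be captured in a minimal Python sketch. The class names Task and EdgeNode and all field names are illustrative assumptions introduced here and are not part of the patent.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Task:
    """Computing task D_i = {d_i, c_i, T_i^max}."""
    d: float      # input data size d_i (bits)
    c: float      # required computing resources c_i (CPU cycles)
    t_max: float  # maximum allowed completion delay T_i^max (seconds)

@dataclass
class EdgeNode:
    """Edge computing node j with total computing resource F_j."""
    F: float      # total computing resource F_j (cycles per second)

# Decision variables of the model:
#   x[i][j] = 1 if task i runs on node j (j = 0 means "run locally on the camera"), else 0
#   f[i][j] = computing resource allocated to task i on edge node j
tasks: List[Task] = [Task(d=8e6, c=1.2e9, t_max=0.5)]   # one illustrative task
nodes: List[EdgeNode] = [EdgeNode(F=8e9)]               # one illustrative edge node (j = 1)
x: Dict[int, Dict[int, int]] = {0: {0: 0, 1: 1}}        # task 0 offloaded to node 1
f: Dict[int, Dict[int, float]] = {0: {1: 4e9}}          # 4 GHz allocated to task 0 on node 1
```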
the invention has the beneficial effects that: in the invention, the calculation resources and the time delay are comprehensively considered in the calculation unloading process, and an optimization function is designed to obtain the balance between the calculation resources and the time delay. Firstly, establishing a total time delay system model according to a communication time system model, an edge side calculation time system model and a local calculation time system model; then, comprehensively considering the balance of resource allocation, and designing a system utility function; and finally, researching a system optimization strategy to complete the design of the calculation unloading algorithm. Aiming at the characteristics of the application of the intelligent transportation industry, the invention gives consideration to resource allocation and time delay in the design process, and the performance of the invention is superior to that of the traditional algorithm.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a flow chart of the present invention.
Detailed Description
The present application is further described below with reference to the accompanying drawings.
The overall framework of the invention is shown in FIG. 1. It mainly comprises front-end sensing devices, i.e., camera devices integrated with AI algorithms, and edge computing servers. The n camera devices send computing tasks to the m edge computing nodes deployed nearby over a wired network, 5G, WiFi 6, or other communication technologies.
First, system models are established for communication delay and computation delay; then, a system utility function is established with delay and resources as constraint conditions; finally, an optimization strategy is designed to balance the two. Embodiments of the invention are further described below with reference to specific examples:
the flow chart of the present invention is shown in FIG. 2, execute and edge execute; and then, carrying out unloading decision, handing the calculation task to the control layer, and carrying out the unloading decision by an actuator of the control layer, wherein when the calculation task can be completed in the camera equipment, the calculation unloading is not required to be directly executed in the camera equipment, and when the calculation task cannot be completed in the camera equipment, the calculation unloading is required to be executed in the edge calculation node.
Specifically, the method comprises the following steps:
1) First, the relevant parameters are defined:
definition 1: the urban space is divided into M regions according to the proximity of roads, and each region is provided with one edge computing node, so that the edge computing node set can be defined as M ═ {1, …, M }.
Definition 2: assuming that there are n camera devices requesting computing services, each computing task may be represented as a triple
D_i = {d_i, c_i, T_i^max}
where d_i denotes the input data size of the i-th computing task, c_i denotes the number of computing resources required by the i-th task, and T_i^max denotes the maximum delay allowed for task completion.
Definition 3: assuming that each task can select one edge computing node for execution, the edge-node selection policy variable x_ij = 1 indicates that task D_i is executed on the j-th edge computing server, and x_ij = 0 indicates that the task is not executed on the j-th edge computing server. In particular, when the computing task is not offloaded and is executed on the camera device, let j = 0, with selection policy variable x_i0 = 1; otherwise x_i0 = 0. For task D_i, the following constraint must be satisfied:
Σ_{j=0}^{m} x_ij = 1
2) Delay calculation:
The communication delay of the intelligent transportation system comprises wireless communication delay and wired communication delay. The network communication modes mainly include wired communication, 5G, WiFi 6, and the like. The delay calculation of the invention is explained below for two communication modes, wireless and wired.
Wireless communication time calculation model: assume that the i-th camera device requests a computation task from the j-th edge computing node, with h_ij denoting the channel gain and p_ij the transmission power; the achievable rate can then be expressed as:
r_ij = B·log2(1 + p_ij·h_ij / P_noise)   (1)
where P_noise denotes the noise power and B denotes the system bandwidth.
The communication time can be expressed as:
T_ij^com = d_i / r_ij   (2)
Wired communication time calculation model: assume that the i-th camera device requests a computation task from the j-th edge computing node and that the two are connected by wire. A heartbeat packet is sent from the camera device to the edge computing node, and the edge computing node then replies with a heartbeat packet; the time for the camera device to send and receive the packets is denoted T_wired. The communication time is then:
T_ij^com = T_wired / 2   (3)
Computation-time system model: an edge computing node may execute one computing task or several computing tasks. When executing computing tasks, the computing resources must meet the delay requirements of all of them. Assume that the total computing resource of the j-th edge computing node is denoted F_j and that f_ij denotes the computing resource allocated to the i-th task; the computing-resource allocation must then satisfy the constraint
Σ_{i=1}^{n} x_ij·f_ij ≤ F_j
and the computation time of the i-th task at the j-th edge node is:
T_ij^comp = c_i / f_ij   (4)
Computation offloading time: the total time required to offload task D_i to the j-th edge computing node is:
T_ij^off = T_ij^com + T_ij^comp   (5)
Local computation-time system model: if the task is executed in the camera device without computation offloading, the total time is the time for which the computing task is processed locally:
T_i^local = c_i / f_i   (6)
where f_i denotes the computing resource of the camera device.
Total delay calculation: according to Definition 3, a task may either be offloaded to an edge computing node for computation or be computed in the camera device; from formula (5) and formula (6), the total task-processing delay is:
T_i = x_i0·T_i^local + Σ_{j=1}^{m} x_ij·T_ij^off   (7)
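Formulas (1) through (7) translate directly into code. The following Python sketch is illustrative only; all function names, parameter names, and the example values are assumptions introduced here, not part of the patent.

```python
import math

def wireless_rate(bandwidth_hz, tx_power_w, channel_gain, noise_power_w):
    """Formula (1): r_ij = B * log2(1 + p_ij * h_ij / P_noise)."""
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_power_w)

def wireless_comm_time(data_bits, rate_bps):
    """Formula (2): T_ij^com = d_i / r_ij."""
    return data_bits / rate_bps

def wired_comm_time(t_wired_s):
    """Formula (3): T_ij^com = T_wired / 2."""
    return t_wired_s / 2.0

def edge_compute_time(cycles, f_alloc_hz):
    """Formula (4): T_ij^comp = c_i / f_ij."""
    return cycles / f_alloc_hz

def offload_time(comm_time_s, compute_time_s):
    """Formula (5): T_ij^off = T_ij^com + T_ij^comp."""
    return comm_time_s + compute_time_s

def local_time(cycles, f_local_hz):
    """Formula (6): T_i^local = c_i / f_i."""
    return cycles / f_local_hz

def total_delay(x_row, t_local_s, t_off_by_node):
    """Formula (7): T_i = x_i0 * T_i^local + sum_j x_ij * T_ij^off.

    x_row maps node index j to x_ij (j = 0 is the camera itself);
    t_off_by_node maps edge node j >= 1 to T_ij^off.
    """
    return (x_row.get(0, 0) * t_local_s
            + sum(x_row.get(j, 0) * t for j, t in t_off_by_node.items()))

# Example: offload an 8-Mbit, 1.2-Gcycle task to one edge node over a wireless link.
r = wireless_rate(bandwidth_hz=10e6, tx_power_w=0.2, channel_gain=1e-3, noise_power_w=1e-7)
t_off = offload_time(wireless_comm_time(8e6, r), edge_compute_time(1.2e9, 4e9))
print(total_delay({0: 0, 1: 1}, local_time(1.2e9, 1e9), {1: t_off}))  # roughly 0.37 s
```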
3) Optimization strategy: for traffic control, the total delay should be as small as possible; however, because the computing resources on edge computing nodes are limited, offloading multiple tasks to the same edge computing node would unbalance the communication or computation load of the whole system.
The invention adopts a system utility function to strike a balance between resources and delay, as shown in formula (8):
U_i(x_ij, f_ij) = α·log(1 + β - T_i)   (8)
where α is a satisfaction parameter (the larger α, the higher the satisfaction) and β is a parameter used to normalize the satisfaction to a non-negative value. The utility function of the entire system can be written as:
U(x, f) = Σ_{i=1}^{n} U_i(x_ij, f_ij)
Optimizing the policy function: define x = {x_ij} as the edge-node selection policy vector and f = {f_ij} as the computing-resource vector; the system optimization function is shown in formula (9):
max_{x,f} Σ_{i=1}^{n} U_i(x_ij, f_ij)   (9)
s.t. ① T_i ≤ T_i^max, ∀i
② x_ij ∈ {0, 1}, ∀i, ∀j ∈ {0, 1, …, m}
③ Σ_{j=0}^{m} x_ij = 1, ∀i
④ f_ij ≥ 0, ∀i, ∀j
⑤ Σ_{i=1}^{n} x_ij·f_ij ≤ F_j, ∀j ∈ {1, …, m}
Condition ① ensures that the task-processing time cannot exceed the allowed maximum delay; conditions ② and ③ ensure that each task runs on exactly one edge computing node (when j > 0) or on the camera (when j = 0); conditions ④ and ⑤ restrict the sum of the computing resources required by all tasks on the same edge computing node from exceeding that node's total computing resources.
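For small n and m, problem (9) can be explored with the brute-force sketch below. It is only an illustration: the proportional resource split f_ij = F_j·c_i / Σ c_k used here is an assumed heuristic that happens to satisfy constraint ⑤, not the allocation strategy claimed by the patent, and all names and values are introduced for this example.

```python
from itertools import product
import math

def solve_offloading(tasks, F, f_local, t_comm, alpha=1.0, beta=10.0):
    """Brute-force search over the node-selection vector x for problem (9).

    tasks  : list of (c_i, t_i_max) tuples
    F      : list of edge-node capacities F_j, for j = 1..m
    f_local: list of camera computing resources f_i
    t_comm : t_comm[i][j-1] = communication time from camera i to edge node j
    Returns (best_utility, best_assignment), where assignment[i] in {0..m} and 0 = local.
    """
    n, m = len(tasks), len(F)
    best_u, best_assign = -math.inf, None
    for assign in product(range(m + 1), repeat=n):        # enumerate x: one node per task
        # Assumed heuristic: split each node's capacity F_j in proportion to c_i,
        # which automatically satisfies constraint (5): sum_i x_ij * f_ij <= F_j.
        load = [sum(tasks[i][0] for i in range(n) if assign[i] == j + 1) for j in range(m)]
        total_u, feasible = 0.0, True
        for i, (c_i, t_max) in enumerate(tasks):
            j = assign[i]
            if j == 0:
                T_i = c_i / f_local[i]                     # local execution, formula (6)
            else:
                f_ij = F[j - 1] * c_i / load[j - 1]        # proportional allocation
                T_i = t_comm[i][j - 1] + c_i / f_ij        # formula (5): comm + compute
            if T_i > t_max or 1 + beta - T_i <= 0:         # constraint (1) and log domain
                feasible = False
                break
            total_u += alpha * math.log(1 + beta - T_i)    # utility, formula (8)
        if feasible and total_u > best_u:
            best_u, best_assign = total_u, assign
    return best_u, best_assign

# Two tasks, one edge node, wired links (T_wired / 2 = 0.01 s), 1-GHz cameras, 8-GHz node.
tasks = [(1.2e9, 0.5), (2.0e9, 0.6)]
print(solve_offloading(tasks, F=[8e9], f_local=[1e9, 1e9], t_comm=[[0.01], [0.01]]))
```

In this toy instance both tasks miss their deadlines when run locally, so the search offloads both to the single edge node, where both deadlines are met.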
Specific example 1: when the front-end camera needs to process a task with a small computation load, for example counting how many vehicles appear in the driving images captured by the camera device, the task does not need to be offloaded to an edge computing node for execution; it simply runs in the camera device, and the running time is the local processing time:
T_i^local = c_i / f_i
where c_i denotes the number of computing resources required by the i-th task and f_i denotes the computing resource of the camera device.
When processing a task with a large computation load, such as recognizing license plates and vehicle colors, the computing task needs to be offloaded to an edge computing node for execution; only the total offloading delay for the wired-communication case is described here. Assume that the i-th camera device at an intersection requests a computation task from the j-th edge computing node; the communication time is
T_ij^com = T_wired / 2
where T_wired is the total time for the camera device to send the packet and receive the edge computing node's reply.
Computation offloading time: the computation time of the i-th task at the j-th edge node is:
T_ij^comp = c_i / f_ij
where f_ij denotes the computing resource allocated to the i-th task; the total time required to offload computing task D_i to the j-th edge computing node is then:
T_ij^off = T_ij^com + T_ij^comp
Total delay under wired communication:
T_i = T_ij^off = T_wired / 2 + c_i / f_ij
for the traffic control system, it is better to control the total delay to be smaller, but because the computing resources on the edge computing nodes are limited, if a plurality of tasks are offloaded to the same edge computing node, the communication or computing load of the whole system is unbalanced. Therefore, the utility function of the design system is balanced between resources and time delay, and the formula is Ui(xij,fij)=αlog(1+β-Ti) Wherein, alpha is a satisfaction parameter, and the larger alpha is, the higher satisfaction is; β is used to normalize satisfaction to a non-negative parameter. The utility function of the entire system can be written as:
Figure BDA0002338310800000081
furthermore, the optimization policy function: definition x ═ { xijSelecting a strategy vector for the edge computing node; f ═ fijIs a calculation resource vector, and the system optimization function is
Figure BDA0002338310800000082
Figure BDA0002338310800000083
Figure BDA0002338310800000084
Figure BDA0002338310800000085
Figure BDA0002338310800000086
Figure BDA0002338310800000087
The first condition ensures that the task processing time cannot exceed the allowed maximum time delay; conditions two and three ensure that each task can only run on one edge compute node (when j > 0) or front-end device (when j equals 0); the conditions of the fourth and fifth restrict that the sum of the computing resources required by all tasks on the same edge computing node cannot exceed the sum of the computing resources of the edge computing node.
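As a purely illustrative numeric check of the wired offloading case (all values below are assumed for this example and do not come from the patent), a 1.2-Gcycle license-plate task on a 1-GHz camera versus a 4-GHz edge allocation works out as follows:

```python
T_wired = 0.020   # s, assumed round-trip signalling time between camera and edge node
c_i     = 1.2e9   # cycles required by the license-plate task (assumed)
f_ij    = 4e9     # cycles/s allocated to the task on the edge node (assumed)
f_i     = 1e9     # cycles/s available on the camera itself (assumed)

T_com   = T_wired / 2      # 0.010 s  wired communication time, formula (3)
T_comp  = c_i / f_ij       # 0.300 s  edge computation time, formula (4)
T_off   = T_com + T_comp   # 0.310 s  total offloading delay, formula (5)
T_local = c_i / f_i        # 1.200 s  local processing delay, formula (6)
print(T_off, T_local)      # offloading is roughly four times faster in this example
```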
In short, because front-end devices are short of computing resources, the conventional approach is to offload computing tasks to edge computing nodes for execution, serving delay-constrained applications such as traffic control through end-edge cooperation. From the traffic control perspective, computation offloading should be studied with the goal of reducing delay. However, AI algorithms place high demands on GPUs and other computing resources, so balancing delay against computing resources is the key to computation offloading.
Unlike conventional algorithms, the method first establishes system models for communication delay and computation delay; then establishes a system utility function with delay and resources as constraint conditions; and finally designs an optimization strategy to balance the two, so that the computation offloading process strikes a balance between delay and computing resources.
The invention has been described only with reference to specific examples. In this end-edge collaborative computation offloading method for traffic data such as traffic flow, a certain number of edge computing nodes are deployed according to the proximity of roads; when a camera device at an intersection cannot process a task whose computation load is too large, the computing task is offloaded to the edge computing nodes through end-edge collaborative computation, thereby solving the delay problem in traffic control.

Claims (2)

1. A traffic data-oriented edge collaborative computing offloading method, characterized by comprising the following steps: first, establishing system models for communication delay and computation delay; then, establishing a system utility function with delay and resources as constraint conditions; finally, designing an optimization strategy to balance the two;
the method comprises the following steps:
1) constructing the communication-delay and computation-delay models:
101) defining the relevant parameters in preparation for the calculation, specifically: the urban space is divided into m regions according to the proximity of roads, one edge computing node is deployed in each region, and the edge computing node set is defined as M = {1, …, m};
assuming that there are n camera devices requesting the computing service, each computing task is represented as
D_i = {d_i, c_i, T_i^max}
where d_i denotes the input data size of the i-th computing task, c_i denotes the number of computing resources required by the i-th task, and T_i^max denotes the maximum delay allowed for task completion;
assuming that each task can select one edge computing node for execution, the edge-node selection policy variable x_ij = 1 indicates that task D_i is executed on the j-th edge computing server, and x_ij = 0 indicates that the task is not executed on the j-th edge computing server;
102) constructing a communication time delay calculation model;
2) balancing delay and resources:
The system utility function is adopted to strike a balance between resources and delay: U_i(x_ij, f_ij) = α·log(1 + β - T_i), where α is a satisfaction parameter (the larger α, the higher the satisfaction) and β is a parameter used to normalize the satisfaction to a non-negative value; the utility function of the entire system can be written as:
U(x, f) = Σ_{i=1}^{n} U_i(x_ij, f_ij)
where T_i is the total task-processing delay;
Optimizing the policy function: define x = {x_ij} as the edge-node selection policy vector and f = {f_ij} as the computing-resource vector; the system optimization function is:
max_{x,f} Σ_{i=1}^{n} U_i(x_ij, f_ij)
s.t. ① T_i ≤ T_i^max, ∀i
② x_ij ∈ {0, 1}, ∀i, ∀j ∈ {0, 1, …, m}
③ Σ_{j=0}^{m} x_ij = 1, ∀i
④ f_ij ≥ 0, ∀i, ∀j
⑤ Σ_{i=1}^{n} x_ij·f_ij ≤ F_j, ∀j ∈ {1, …, m}
Condition ① ensures that the task-processing time cannot exceed the allowed maximum delay; conditions ② and ③ ensure that each task runs on exactly one edge computing node, or on the camera when it is not offloaded; conditions ④ and ⑤ restrict the sum of the computing resources required by all tasks on the same edge computing node from exceeding that node's total computing resources.
2. The method for offloading traffic data oriented edge collaborative computing of claim 1, wherein:
the step 102) of constructing the communication delay calculation model specifically comprises the following steps:
1021) for wireless communication, assume that the i-th camera device requests a computation task from the j-th edge computing node, with h_ij denoting the channel gain and p_ij the transmission power; the achievable rate can then be expressed as:
r_ij = B·log2(1 + p_ij·h_ij / P_noise)
where P_noise denotes the noise power and B the system bandwidth; the communication time is:
T_ij^com = d_i / r_ij
1022) for wired communication: assume that the i-th camera device requests a computation task from the j-th edge computing node and that the time for the camera device to send and receive the packet is T_wired; the communication time is then: T_ij^com = T_wired / 2;
1023) computation-time system model: assume that the total computing resource of the j-th edge computing node is denoted F_j and that f_ij denotes the computing resource allocated to the i-th task; the computing-resource allocation must then satisfy the constraint
Σ_{i=1}^{n} x_ij·f_ij ≤ F_j
and the computation time of the i-th task at the j-th edge node is:
T_ij^comp = c_i / f_ij
1024) computation offloading time: the total time required to offload task D_i to the j-th edge computing node is:
T_ij^off = T_ij^com + T_ij^comp
1025) local computation-time system model: if the task is executed in the camera device without computation offloading, the total time is the time for which the computing task is processed locally:
T_i^local = c_i / f_i
where f_i denotes the computing resource of the camera device;
1026) total delay calculation: the total task-processing delay is:
T_i = x_i0·T_i^local + Σ_{j=1}^{m} x_ij·T_ij^off
CN201911365517.XA 2019-12-26 2019-12-26 Traffic data-oriented edge collaborative computing unloading method Active CN110928691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911365517.XA CN110928691B (en) 2019-12-26 2019-12-26 Traffic data-oriented edge collaborative computing unloading method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911365517.XA CN110928691B (en) 2019-12-26 2019-12-26 Traffic data-oriented edge collaborative computing unloading method

Publications (2)

Publication Number Publication Date
CN110928691A CN110928691A (en) 2020-03-27
CN110928691B true CN110928691B (en) 2021-07-09

Family

ID=69862181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911365517.XA Active CN110928691B (en) 2019-12-26 2019-12-26 Traffic data-oriented edge collaborative computing unloading method

Country Status (1)

Country Link
CN (1) CN110928691B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723972B (en) * 2020-05-09 2024-04-16 天津大学 Optimized remote cloud arrangement method based on simulated annealing algorithm
CN112532676B (en) * 2020-07-24 2021-09-28 北京航空航天大学 Vehicle calculation task unloading method based on block chain data sharing
CN112512018B (en) * 2020-07-24 2022-03-04 北京航空航天大学 Method for dynamically unloading tasks among cooperative vehicles based on mobile edge calculation
CN112231097A (en) * 2020-09-27 2021-01-15 沈阳中科博微科技股份有限公司 Capacitive pressure transmitter edge calculation work system and work method
US11853810B2 (en) 2021-01-07 2023-12-26 International Business Machines Corporation Edge time sharing across clusters via dynamic task migration based on task priority and subtask result sharing
CN113259472A (en) * 2021-06-08 2021-08-13 江苏电力信息技术有限公司 Edge node resource allocation method for video analysis task
CN113534829B (en) * 2021-06-11 2024-04-05 南京邮电大学 Unmanned aerial vehicle daily patrol detecting system based on edge calculation
CN113709249B (en) * 2021-08-30 2023-04-18 北京邮电大学 Safe balanced unloading method and system for driving assisting service
CN113873022A (en) * 2021-09-23 2021-12-31 中国科学院上海微系统与信息技术研究所 Mobile edge network intelligent resource allocation method capable of dividing tasks
CN115208894B (en) * 2022-07-26 2023-10-13 福州大学 Pricing and calculating unloading method based on Stackelberg game in mobile edge calculation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106900011A (en) * 2017-02-28 2017-06-27 重庆邮电大学 Task discharging method between a kind of cellular basestation based on MEC
CN110099384A (en) * 2019-04-25 2019-08-06 南京邮电大学 Resource regulating method is unloaded based on side-end collaboration more MEC tasks of multi-user

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8943205B2 (en) * 2012-04-25 2015-01-27 Cisco Technology, Inc. Generalized coordinate system and metric-based resource selection framework
EP2919438B1 (en) * 2014-03-10 2019-06-19 Deutsche Telekom AG Method and system to estimate user desired delay for resource allocation for mobile-cloud applications
CN107122249A (en) * 2017-05-10 2017-09-01 重庆邮电大学 A kind of task unloading decision-making technique based on edge cloud pricing mechanism
CN107819840B (en) * 2017-10-31 2020-05-26 北京邮电大学 Distributed mobile edge computing unloading method in ultra-dense network architecture
CN109067842B (en) * 2018-07-06 2020-06-26 电子科技大学 Calculation task unloading method facing Internet of vehicles
CN109992419A (en) * 2019-03-29 2019-07-09 长沙理工大学 A kind of collaboration edge calculations low latency task distribution discharging method of optimization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106900011A (en) * 2017-02-28 2017-06-27 重庆邮电大学 Task discharging method between a kind of cellular basestation based on MEC
CN110099384A (en) * 2019-04-25 2019-08-06 南京邮电大学 Resource regulating method is unloaded based on side-end collaboration more MEC tasks of multi-user

Also Published As

Publication number Publication date
CN110928691A (en) 2020-03-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant