CN109947574B - Fog-network-based vehicle big data computation offloading method - Google Patents


Info

Publication number
CN109947574B
CN109947574B (application CN201910246291.5A)
Authority
CN
China
Prior art keywords
task
fog
layer
computing
cloud
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN201910246291.5A
Other languages
Chinese (zh)
Other versions
CN109947574A (en)
Inventor
赵海涛
朱奇星
冯天翼
柏宇
朱洪波
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910246291.5A
Publication of CN109947574A
Application granted
Publication of CN109947574B
Legal status: Active

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a vehicle big data computation offloading method based on a fog computing network, which provides a more efficient and reliable computing environment for analyzing vehicle big data. The method first presents a fog computing network system architecture, then establishes a network delay model, a task generation model, and a fog computing resource optimization model. Finally, the proposed Computing Resource-Efficient Task Offloading Algorithm with Load Balancing (CRETOA) manages the computing resources of the fog computing network in a load-balanced manner and assigns the computing tasks requested by road vehicle terminals to the optimal fog computing resources.

Description

Fog-network-based vehicle big data computation offloading method
Technical Field
The invention relates to the technical field of the Internet of Vehicles, and in particular to a fog-network-based vehicle big data computation offloading method.
Background
Cloud computing is one of the most important technologies for modern data storage and computing and provides a powerful platform for executing complex large-scale computing tasks, but a big-data system architecture based on cloud computing cannot meet the delay-sensitive requirements of ITS applications. Fog computing, by contrast, offers a high degree of flexibility in architecture, resources, computing power, communication technology, and deployment, and supports low latency and high mobility, making it an ideal choice for Internet-of-Vehicles big data analysis and for offloading ITS application computation. There are two main reasons for the high fault tolerance of fog computing: first, it does not rely on fixed deployment and can allocate resources in a temporary, on-demand manner; second, fog computing may employ a multi-tier architecture, allowing servers of higher computing specifications to be deployed at higher tiers.
Disclosure of Invention
The invention aims to solve the problems in the prior art, and provides a fog-network-based vehicle big data computation offloading method comprising the following specific steps:
step 1: providing a fog computing network system architecture;
the fog computing network system architecture is divided into three layers: an application layer, a fog computing layer, and a cloud computing layer, where the fog computing layer comprises fog computing nodes and fog computing devices and the cloud computing layer comprises cloud computing nodes and cloud computing devices;
step 2: establishing a network delay model;
specifically, cloud-fog cooperation is used for computation offloading, and each task is offloaded either to the cloud or to the fog; in this scenario there are three data transmission phases: a wireless transmission phase, a wired transmission phase, and a computation-result return phase; the delay, i.e. the response time, is computed over these data transmission phases;
step 3: establishing a task generation model;
specifically, the data size and the task length are drawn from the distribution selected for task generation, choosing task lengths that reduce the number of task failures and the average network delay; task arrival times are modeled as a Poisson process, whose inter-arrival times are independent and identically distributed exponential random variables;
step 4: establishing a fog computing resource optimization model;
collecting data to obtain the input task set of each vehicle terminal, the set of virtual machines in the cloud computing layer, and the set of virtual machines in the fog computing layer; obtaining the delays of tasks offloaded to virtual machines of the cloud computing layer and of the fog computing layer, respectively, and further the computing-resource occupancy incurred when a vehicle terminal executes a task on a virtual machine of either layer; by formulating an objective function and constraint equations, the two factors of delay and computing-resource occupancy are balanced to obtain the optimal allocation of computing resources;
step 5: managing the computing resources of the fog computing network in a load-balanced manner, and assigning the computing tasks requested by road vehicle terminals to the optimal fog computing resources;
specifically, an algorithm is provided for assigning the computing tasks requested by road vehicle terminals to the optimal fog computing resources, namely the computing-resource-efficient task offloading algorithm with load balancing; computing-resource requirements are estimated using an expected resource demand matrix, according to which a virtual machine needs different amounts of time and energy to execute different tasks; the scheduler first offloads a task to the fog computing layer, and if the computing-resource occupancy of the fog computing layer is too high, the task is offloaded to the cloud.
Further, in step 1, when a vehicle moves to the corresponding intersection and enters the coverage area of an access-point signal, the vehicle terminal joins the corresponding wireless local area network, accesses the fog computing device, and sends its computing task to the fog computing node; if the vehicle terminal instead decides to offload the task to the cloud computing device, it accesses the cloud computing device using the WAN connection provided by the Wi-Fi access point; the fog computing layer is also connected to the cloud; in the application layer, the vehicle terminal generates task requests for further processing.
Further, in step 2, after a request task is submitted to the fog computing layer or the cloud for processing, its service delay, i.e. response time, can be represented by the sum of its transmission delay and its processing delay; d_vf is the transmission delay of a single data packet from the vehicle terminal to the nearest fog-layer device or node at the access point, and d_fc is the transmission delay of a single data packet from the fog computing layer to the cloud computing layer;
the average transmission delay d_fog of data packets of the N_i request-task application instances running in the fog computing layer is given by the formula:
[equation rendered as image in the original; not reproduced]
where P_i and p_i (P_i > p_i) are the total numbers of data packets sent by the N_i instances to the fog computing layer and forwarded onward to the cloud computing layer, respectively; b_r is the total number of data packets sent in response to the request tasks;
the average transmission delay of a request-task application instance processed by the fog computing layer is given by the formula:
[equation rendered as image in the original; not reproduced]
In the cloud computing layer, the corresponding average transmission delay can be expressed as:
[equation rendered as image in the original; not reproduced]
The delay of a request-task application instance depends on the number of request tasks the server side processes before it; the total number of request-task application instances that can be processed simultaneously is:
[equation rendered as image in the original; not reproduced]
The total bandwidth B is divided equally among the N request-task application instances so that the frequencies occupied by the users do not interfere with one another, and the data of the request-task application instances are sent to the fog computing layer and the cloud simultaneously; thus the uplink and downlink transmission rates of vehicle terminal V_i ∈ {V_1, V_2, ..., V_w} take the standard form
r_u,i = (B/N)·log2(1 + p_u,i·h_i/(n_0·B/N)),  r_d,i = (B/N)·log2(1 + p_d,i·h_i/(n_0·B/N)),
where n_0 is the noise power spectral density, h_i is the channel gain between the base station and vehicle terminal V_i, and p_d,i and p_u,i are the downlink and uplink powers of vehicle terminal V_i, respectively;
let δ(V_i, I_i) indicate that request-task application instance I_i running in vehicle terminal V_i is served by the fog computing layer; among the N_i request-task applications, assume that n_i of them (N_i > n_i) are redirected and offloaded to the cloud computing layer for execution; the total number n of request-task application instances processed by the cloud computing layer at time t is:
[equation rendered as image in the original; not reproduced]
for each of these n request-task application instances, the processing waiting time in the cloud is:
[equation rendered as image in the original; not reproduced]
The average processing delay of the request-task application instances running in V_i is:
[equation rendered as image in the original; not reproduced]
The average service delay over all vehicle terminals V_i can then be expressed as:
[equation rendered as image in the original; not reproduced]
In the cloud computing layer, by contrast, all request-task application instances running on the user side interact directly with the core computing module, and their average processing delay is given by the formula:
[equation rendered as image in the original; not reproduced]
further, in step 3, the Poisson distribution is modeled as:
P(X = x) = (λ^x / x!)·e^(−λ), x = 0, 1, 2, ...
A fundamental feature of the Poisson distribution is that the arrival count x is a discrete value whose probability is independent of all previous values;
the main property of the exponential process is memorylessness, which characterizes the waiting time until the next task arrival: given that the task inter-arrival interval has not yet elapsed, the distribution of the remaining waiting time is the same as the initial one, as shown in the following formula:
P(T > s + t | T > s) = P(T > t)
Since a vehicle terminal does not generate service requests continuously, an idle/active task-generation pattern is used to simulate a real scenario; in this pattern the user creates tasks while active and waits while idle, i.e. each terminal has a state machine and can be in either an active or an idle state.
Further, in step 4, assume there is a set of vehicle terminals in the system model, m in number, denoted M = {M_1, M_2, ..., M_m}; each vehicle terminal has a finite number of input tasks, and the input task set of the i-th vehicle terminal is T_i = {t_i1, t_i2, ..., t_in_i}, where t_ij denotes the j-th input task of the i-th vehicle terminal and vehicle terminal M_i has n_i tasks; the set D_i = {d_i1, d_i2, ..., d_in_i} represents the deadlines of the i-th vehicle terminal, where d_ij is the deadline of task t_ij; the set VC = {vc_1, vc_2, ..., vc_p} represents the p virtual machines in the cloud computing layer, while the set VFDC = {vfdc_1, vfdc_2, ..., vfdc_q} represents the q groups of virtual machines across all fog computing layers; there are k fog computing layers, and each fog computing center has q_i virtual machines (Virtual Machine, VM), where 1 ≤ i ≤ k;
two indicator variables X_ijk and Y_ijk are defined at the same time: X_ijk equals 1 if task t_ij is offloaded to the k-th virtual machine of the cloud computing layer and 0 otherwise, while Y_ijk equals 1 if task t_ij is offloaded to the k-th virtual machine of the fog computing layer and 0 otherwise;
assume that d^c_ijk and d^f_ijk denote the delays generated by offloading task t_ij to the k-th virtual machine of the cloud computing layer and to the k-th virtual machine of the fog computing layer, respectively; the execution delay of task t_ij is then:
τ_ij = Σ_k (X_ijk·d^c_ijk + Y_ijk·d^f_ijk)
The total delay of executing all tasks can then be expressed as:
D = Σ_{i=1..m} Σ_{j=1..n_i} τ_ij
When vehicle terminal M_i executes task t_ij on the k-th virtual machine of the fog computing layer, the computing-resource occupancy is c_ijk; when it executes task t_ij on the k-th virtual machine of the cloud computing layer, the computing-resource occupancy is c'_ijk.
The computing-resource occupancy c_i of vehicle terminal M_i comprises two parts: (1) the computing resources occupied by fog-computing-layer virtual machines to which computing tasks are offloaded, and (2) the computing resources occupied by cloud virtual machines to which computing tasks are offloaded; these are expressed as:
c_i = Σ_j Σ_k (Y_ijk·c_ijk + X_ijk·c'_ijk)
Therefore the total computing resources occupied by all vehicle terminals M are:
C = Σ_{i=1..m} c_i
To minimize the computing resources in the fog computing environment, the following objective function and constraint equations are formulated:
min η·C + (1 − η)·D  (21)
subject to:
Σ_k X_ijk ≤ 1, ∀i, j  (22)
Σ_k Y_ijk ≤ 1, ∀i, j  (23)
τ_ij ≤ d_ij, ∀i, j  (24)
b_ij ≤ B, ∀i, j  (25)
where b_ij denotes the transmission bandwidth required by task t_ij.
The objective function, equation (21), uses η to maintain a tradeoff between computing-resource occupancy and delay; if the computing resources of the fog computing layer are not a critical issue compared with task delay, η may be set to a small value or 0, and the problem reduces to minimizing delay; conversely, if computing-resource occupancy is the major concern compared with delay, η may be set to a larger value; equations (22) and (23) state that a task may be assigned to at most one virtual machine in the fog computing layer or the cloud; equation (24) states that the total delay of any task cannot exceed its deadline; equation (25) states that the transmission bandwidth required by any task must not exceed the total bandwidth.
Further, in step 5, the inputs of the load-balancing computing-resource-efficient task offloading algorithm are the set of vehicle terminals, the task deadlines, the set of cloud virtual machines, and the set of fog-computing-layer virtual machines;
if the resource occupancy of the fog computing layer is low, tasks are assigned directly to virtual machines of the fog computing layer; the fog computing layer is then invoked to execute the task process, the position of the fog-layer virtual machine to which each task is assigned is returned according to the computing-resource occupancy of the fog-layer virtual machines in the network, the resource occupancy of the virtual machine executing the task is computed, and the computing-resource occupancy of all fog-layer virtual machines is updated;
if the resource occupancy of the fog computing layer is high, the task is offloaded to the cloud; the cloud computing layer is then invoked to execute the task process, and the position of the cloud virtual machine to which the task is assigned, the computing-resource occupancy of the virtual machine executing the task, and the computing-resource occupancy of all cloud virtual machines are returned;
after the whole computation process ends, the service delay of the request task is returned.
Further, the computing-resource occupancy of the fog computing layer is the sum of the computing-resource occupancies of all fog-layer virtual machines; similarly, the cloud's computing-resource occupancy is the sum of the computing-resource occupancies of all cloud virtual machines.
Compared with the prior art, the invention has the following beneficial effects: compared with existing computation offloading methods, fewer tasks fail because of congestion caused by high vehicle density, fewer tasks fail because of network problems, and fewer tasks fail because of computing-resource shortages.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a diagram of a fog computing network system architecture.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the attached drawings.
A fog-network-based vehicle big data computation offloading method comprises the following specific steps:
step 1: a fog computing network system architecture is presented.
The fog computing network system architecture is divided into three layers: an application layer, a fog computing layer, and a cloud computing layer, where the fog computing layer comprises fog computing nodes and fog computing devices and the cloud computing layer comprises cloud computing nodes and cloud computing devices.
When a vehicle moves to the corresponding intersection and enters the coverage area of an access-point signal, the vehicle terminal joins the corresponding wireless local area network, accesses the fog computing device, and sends its computing task to the fog computing node; if the vehicle terminal instead decides to offload the task to the cloud computing device, it accesses the cloud computing device using the WAN connection provided by the Wi-Fi access point; the fog computing layer is also connected to the cloud; in the application layer, the vehicle terminal generates task requests for further processing.
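The access decision described above — join the WLAN and use the fog node when inside access-point coverage, or reach the cloud over the access point's WAN link — can be sketched as follows; the class names and the coverage-radius parameter are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class AccessPoint:
    """Roadside Wi-Fi access point: fronts a fog node and relays to the cloud via WAN."""
    coverage_radius_m: float

def choose_target(distance_to_ap_m: float, ap: AccessPoint, offload_to_cloud: bool) -> str:
    """Return where a vehicle terminal's computing task is sent.

    Inside coverage the terminal joins the WLAN and reaches the fog node
    directly; a task destined for the cloud is relayed over the access point's
    WAN link. Outside coverage no offloading target is reachable.
    """
    if distance_to_ap_m > ap.coverage_radius_m:
        return "unreachable"
    return "cloud-via-wan" if offload_to_cloud else "fog-node"
```

Either way, the vehicle terminal's only radio hop is to the access point; the fog/cloud choice only changes what happens behind it.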
Step 2: establish the network delay model.
Specifically, cloud-fog cooperation is used for computation offloading, and each task is offloaded either to the cloud or to the fog; in this scenario there are three data transmission phases: a wireless transmission phase, a wired transmission phase, and a computation-result return phase; the delay, i.e. the response time, is computed over these data transmission phases.
When a request task is submitted to the fog computing layer or the cloud for processing, its service delay, i.e. response time, can be represented by the sum of its transmission delay and its processing delay; d_vf is the transmission delay of a single data packet from the vehicle terminal to the nearest fog-layer device or node at the access point, and d_fc is the transmission delay of a single data packet from the fog computing layer to the cloud computing layer.
The average transmission delay d_fog of data packets of the N_i request-task application instances running in the fog computing layer is given by the formula:
[equation rendered as image in the original; not reproduced]
where P_i and p_i (P_i > p_i) are the total numbers of data packets sent by the N_i instances to the fog computing layer and forwarded onward to the cloud computing layer, respectively; b_r is the total number of data packets sent in response to the request tasks.
The average transmission delay of a request-task application instance processed by the fog computing layer is given by the formula:
[equation rendered as image in the original; not reproduced]
In the cloud computing layer, the corresponding average transmission delay can be expressed as:
[equation rendered as image in the original; not reproduced]
The delay of a request-task application instance depends on the number of request tasks the server side processes before it; the total number of request-task application instances that can be processed simultaneously is:
[equation rendered as image in the original; not reproduced]
The total bandwidth B is divided equally among the N request-task application instances so that the frequencies occupied by the users do not interfere with one another, and the data of the request-task application instances are sent to the fog computing layer and the cloud simultaneously; thus the uplink and downlink transmission rates of vehicle terminal V_i ∈ {V_1, V_2, ..., V_w} take the standard form
r_u,i = (B/N)·log2(1 + p_u,i·h_i/(n_0·B/N)),  r_d,i = (B/N)·log2(1 + p_d,i·h_i/(n_0·B/N)),
where n_0 is the noise power spectral density, h_i is the channel gain between the base station and vehicle terminal V_i, and p_d,i and p_u,i are the downlink and uplink powers of vehicle terminal V_i, respectively.
Let δ(V_i, I_i) indicate that request-task application instance I_i running in vehicle terminal V_i is served by the fog computing layer; among the N_i request-task applications, assume that n_i of them (N_i > n_i) are redirected and offloaded to the cloud computing layer for execution; the total number n of request-task application instances processed by the cloud computing layer at time t is:
[equation rendered as image in the original; not reproduced]
and for each request task application instance in the n, the processing waiting time in the cloud is
Figure BDA0002011132200000122
At V i The average processing delay of the internally running requesting task application instance is
Figure BDA0002011132200000123
All vehicle terminals V i Average service delay of (a)
Figure BDA0002011132200000124
Can be expressed as:
Figure BDA0002011132200000125
in the cloud computing layer, on the contrary, all the request task application instances running on the user side interact directly with the core computing module, where the average processing delay of the request task application instances
Figure BDA0002011132200000126
Given by the formula: />
Figure BDA0002011132200000127
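The delay model above defines service delay as transmission delay plus processing delay, with packets bound for the cloud crossing both the vehicle-to-fog hop and the fog-to-cloud hop. A minimal numeric sketch of that accounting (the per-packet composition is a simplifying assumption; the patent's exact formulas are rendered as images):

```python
def service_delay(num_packets_to_fog: int,
                  num_packets_to_cloud: int,
                  d_vf: float,              # per-packet delay, vehicle -> fog layer (s)
                  d_fc: float,              # per-packet delay, fog layer -> cloud (s)
                  processing_delay: float   # processing delay at the serving layer (s)
                  ) -> float:
    """Response time = transmission delay + processing delay.

    Packets served by the fog layer cross only the vehicle->fog hop; packets
    forwarded to the cloud additionally cross the fog->cloud hop.
    """
    transmission = num_packets_to_fog * d_vf + num_packets_to_cloud * (d_vf + d_fc)
    return transmission + processing_delay
```

For a fixed workload this makes the tradeoff visible: offloading to the cloud adds d_fc per packet but may reduce the processing-delay term when fog resources are saturated.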
Step 3: establish the task generation model.
Specifically, the data size and the task length are drawn from the distribution selected for task generation, choosing task lengths that reduce the number of task failures and the average network delay; task arrival times are modeled as a Poisson process, whose inter-arrival times are independent and identically distributed exponential random variables.
In step 3, the Poisson distribution is modeled as:
P(X = x) = (λ^x / x!)·e^(−λ), x = 0, 1, 2, ...
A fundamental feature of the Poisson distribution is that the arrival count x is a discrete value whose probability is independent of all previous values.
The main property of the exponential process is memorylessness, which characterizes the waiting time until the next task arrival: given that the task inter-arrival interval has not yet elapsed, the distribution of the remaining waiting time is the same as the initial one, as shown in the following formula:
P(T > s + t | T > s) = P(T > t)
Since a vehicle terminal does not generate service requests continuously, an idle/active task-generation pattern is used to simulate a real scenario; in this pattern the user creates tasks while active and waits while idle, i.e. each terminal has a state machine and can be in either an active or an idle state.
Step 4: establish the fog computing resource optimization model.
Data are collected to obtain the input task set of each vehicle terminal, the set of virtual machines in the cloud computing layer, and the set of virtual machines in the fog computing layer; the delays of tasks offloaded to virtual machines of the cloud computing layer and of the fog computing layer are obtained, and further the computing-resource occupancy incurred when a vehicle terminal executes a task on a virtual machine of either layer; by formulating an objective function and constraint equations, the two factors of delay and computing-resource occupancy are balanced to obtain the optimal allocation of computing resources.
In step 4, assume there is a set of vehicle terminals in the system model, m in number, denoted M = {M_1, M_2, ..., M_m}; each vehicle terminal has a finite number of input tasks, and the input task set of the i-th vehicle terminal is T_i = {t_i1, t_i2, ..., t_in_i}, where t_ij denotes the j-th input task of the i-th vehicle terminal and vehicle terminal M_i has n_i tasks; the set D_i = {d_i1, d_i2, ..., d_in_i} represents the deadlines of the i-th vehicle terminal, where d_ij is the deadline of task t_ij; the set VC = {vc_1, vc_2, ..., vc_p} represents the p virtual machines in the cloud computing layer, while the set VFDC = {vfdc_1, vfdc_2, ..., vfdc_q} represents the q groups of virtual machines across all fog computing layers; there are k fog computing layers, and each fog computing center has q_i virtual machines (Virtual Machine, VM), where 1 ≤ i ≤ k.
Two indicator variables X_ijk and Y_ijk are defined at the same time: X_ijk equals 1 if task t_ij is offloaded to the k-th virtual machine of the cloud computing layer and 0 otherwise, while Y_ijk equals 1 if task t_ij is offloaded to the k-th virtual machine of the fog computing layer and 0 otherwise.
Assume that d^c_ijk and d^f_ijk denote the delays generated by offloading task t_ij to the k-th virtual machine of the cloud computing layer and to the k-th virtual machine of the fog computing layer, respectively; the execution delay of task t_ij is then:
τ_ij = Σ_k (X_ijk·d^c_ijk + Y_ijk·d^f_ijk)
The total delay of executing all tasks can then be expressed as:
D = Σ_{i=1..m} Σ_{j=1..n_i} τ_ij
When vehicle terminal M_i executes task t_ij on the k-th virtual machine of the fog computing layer, the computing-resource occupancy is c_ijk; when it executes task t_ij on the k-th virtual machine of the cloud computing layer, the computing-resource occupancy is c'_ijk.
The computing-resource occupancy c_i of vehicle terminal M_i comprises two parts: (1) the computing resources occupied by fog-computing-layer virtual machines to which computing tasks are offloaded, and (2) the computing resources occupied by cloud virtual machines to which computing tasks are offloaded; these are expressed as:
c_i = Σ_j Σ_k (Y_ijk·c_ijk + X_ijk·c'_ijk)
Therefore the total computing resources occupied by all vehicle terminals M are:
C = Σ_{i=1..m} c_i
To minimize the computing resources in the fog computing environment, the following objective function and constraint equations are formulated:
min η·C + (1 − η)·D  (21)
subject to:
Σ_k X_ijk ≤ 1, ∀i, j  (22)
Σ_k Y_ijk ≤ 1, ∀i, j  (23)
τ_ij ≤ d_ij, ∀i, j  (24)
b_ij ≤ B, ∀i, j  (25)
where b_ij denotes the transmission bandwidth required by task t_ij.
The objective function, equation (21), uses η to maintain a tradeoff between computing-resource occupancy and delay; if the computing resources of the fog computing layer are not a critical issue compared with task delay, η may be set to a small value or 0, and the problem reduces to minimizing delay; conversely, if computing-resource occupancy is the major concern compared with delay, η may be set to a larger value; equations (22) and (23) state that a task may be assigned to at most one virtual machine in the fog computing layer or the cloud; equation (24) states that the total delay of any task cannot exceed its deadline; equation (25) states that the transmission bandwidth required by any task must not exceed the total bandwidth.
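The role of η can be illustrated with a small sketch; the weighted-sum form η·C + (1 − η)·D is one reading of the tradeoff described for equation (21), since the original equation is rendered as an image, and the function and parameter names are illustrative:

```python
def objective(eta: float, total_resource_c: float, total_delay_d: float) -> float:
    """Weighted tradeoff between total resource occupancy C and total delay D.

    eta = 0 reduces the problem to pure delay minimization; a larger eta
    shifts the emphasis toward minimizing computing-resource occupancy.
    """
    return eta * total_resource_c + (1.0 - eta) * total_delay_d

def feasible(delay: float, deadline: float, bandwidth: float, total_bw: float) -> bool:
    """Constraints (24)-(25): meet the task deadline and stay within total bandwidth."""
    return delay <= deadline and bandwidth <= total_bw
```

An offloading assignment is acceptable only if `feasible` holds for every task; among acceptable assignments, the one with the smallest `objective` value is preferred.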
Step 5: manage the computing resources of the fog computing network in a load-balanced manner, and assign the computing tasks requested by road vehicle terminals to the optimal fog computing resources.
Specifically, an algorithm is provided for assigning the computing tasks requested by road vehicle terminals to the optimal fog computing resources, namely the computing-resource-efficient task offloading algorithm with load balancing; computing-resource requirements are estimated using an expected resource demand matrix, according to which a virtual machine needs different amounts of time and energy to execute different tasks; the scheduler first offloads a task to the fog computing layer, and if the computing-resource occupancy of the fog computing layer is too high, the task is offloaded to the cloud.
In step 5, the inputs of the load-balancing computing-resource-efficient task offloading algorithm are the set of vehicle terminals, the task deadlines, the set of cloud virtual machines, and the set of fog-computing-layer virtual machines.
If the resource occupancy of the fog computing layer is low, tasks are assigned directly to virtual machines of the fog computing layer; the fog computing layer is then invoked to execute the task process, the position of the fog-layer virtual machine to which each task is assigned is returned according to the computing-resource occupancy of the fog-layer virtual machines in the network, the resource occupancy of the virtual machine executing the task is computed, and the computing-resource occupancy of all fog-layer virtual machines is updated.
If the resource occupancy of the fog computing layer is high, the task is offloaded to the cloud; the cloud computing layer is then invoked to execute the task process, and the position of the cloud virtual machine to which the task is assigned, the computing-resource occupancy of the virtual machine executing the task, and the computing-resource occupancy of all cloud virtual machines are returned.
The computing-resource occupancy of the fog computing layer is the sum of the computing-resource occupancies of all fog-layer virtual machines; similarly, the cloud's computing-resource occupancy is the sum of the computing-resource occupancies of all cloud virtual machines.
After the whole computation process ends, the service delay of the request task is returned.
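The fog-first, spill-to-cloud behavior of the scheduler described in this step can be sketched as follows; the occupancy threshold, the least-loaded VM choice, and the bookkeeping are illustrative assumptions rather than the patent's exact CRETOA pseudocode:

```python
def schedule(tasks, fog_vms, cloud_vms, fog_threshold=0.8):
    """Assign each task to the least-loaded fog VM; spill to the cloud when
    the fog layer's aggregate occupancy exceeds fog_threshold.

    tasks: list of per-task occupancy increments.
    fog_vms / cloud_vms: lists of current VM occupancies in [0, 1];
    the layer occupancy is the mean over its VMs (occupancies are summed,
    mirroring the layer-wide sum described in the text).
    Returns a list of ("fog" | "cloud", vm_index) placements.
    """
    placements = []
    for load in tasks:
        fog_occupancy = sum(fog_vms) / len(fog_vms)   # layer-wide occupancy
        if fog_occupancy < fog_threshold:
            k = min(range(len(fog_vms)), key=fog_vms.__getitem__)
            fog_vms[k] += load                        # update fog VM occupancy
            placements.append(("fog", k))
        else:
            k = min(range(len(cloud_vms)), key=cloud_vms.__getitem__)
            cloud_vms[k] += load                      # update cloud VM occupancy
            placements.append(("cloud", k))
    return placements
```

Because the occupancy lists are updated after each placement, a burst of tasks that saturates the fog layer automatically spills subsequent tasks to the cloud, which is the load-balancing behavior the algorithm targets.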
The above describes only preferred embodiments of the present invention; the scope of the invention is not limited to these embodiments, and all equivalent modifications or variations according to the present disclosure fall within the scope of the claims.

Claims (4)

1. A fog network-based vehicle big data calculation unloading method, characterized in that the method comprises the following steps:
step 1: providing a fog computing network system architecture;
the fog computing network system architecture is divided into three layers: an application layer, a fog computing layer and a cloud computing layer, wherein the fog computing layer comprises fog computing nodes and fog computing devices;
step 2: establishing a network delay model;
specifically, cloud and fog cooperation is used for computation offloading, and each task is offloaded either to the cloud or to the fog; in this scenario there are three data transmission phases, namely a wireless transmission phase, a wired transmission phase, and a computation-result return phase; the delay, i.e. the response time, is calculated for these data transmission phases;
in the step 2, after a request task is submitted to the fog computing layer or to the cloud for processing, the service delay, i.e. the response time, is represented by the sum of the transmission delay and the processing delay of the request task; d_vf and d_fc are the transmission delays of a single data packet from the vehicle terminal to the nearest device or node (access point) in the fog computing layer, and from the fog computing layer to the cloud computing layer, respectively;
the average transmission delay d_fog of the data packets of the N_i request task application instances running in the fog computing layer is given by the following formula:
[formula image FDA0004074670220000011]
where P_i and p_i are the total numbers of data packets sent by the N_i tasks to the fog computing layer and from the fog computing layer to the cloud data center, respectively, with P_i > p_i; b_r is the total number of data packets sent in response to r request tasks;
the average delay of request task application instances processed by the fog computing layer, [symbol image FDA0004074670220000012], is given by the formula:
[formula image FDA0004074670220000021]
the corresponding delay in the cloud computing layer is expressed as:
[formula image FDA0004074670220000022]
the delay of a request task application instance is determined by the number of request tasks processed by the server before it; the total number of request task application instances that can be processed simultaneously is:
[formula image FDA0004074670220000023]
the total bandwidth B is divided equally among the N request task application instances, so that the frequencies occupied by the users do not interfere with each other and data can be transmitted to the fog computing layer and the cloud simultaneously; thus:
[formula image FDA0004074670220000024]
[formula image FDA0004074670220000025]
respectively represent the uplink and downlink transmission rates of vehicle terminal V_i ∈ {V_1, V_2, ..., V_w}, where n_0 is the noise power spectral density, h_i is the channel gain between the base station and user N_i, and p_d,i and p_u,i are the downlink and uplink power of vehicle terminal V_i, respectively;
let δ(V_i, I_i) be the service delay of the request task application instance I_i running on vehicle terminal V_i and served by the fog computing layer; of the N_i request task application instances, assume that n_i (N_i > n_i) are redirected and offloaded to the cloud computing layer for execution; the total number of request task application instances processed by the cloud computing layer at time t is:
[formula image FDA0004074670220000031]
for each of these n_i request task application instances, the processing waiting time in the cloud is:
[formula image FDA0004074670220000032]
the average processing delay of the request task application instances running on V_i is:
[formula image FDA0004074670220000033]
the average service delay of all vehicle terminals V_i, [symbol image FDA0004074670220000034], is expressed as:
[formula image FDA0004074670220000035]
in the cloud computing layer, by contrast, all request task application instances running on the user side interact directly with the core computing module; the average processing delay of a request task application instance, [symbol image FDA0004074670220000036], is then given by the formula:
[formula image FDA0004074670220000037]
step 3: establishing a task generation model;
specifically, the data size and the task length are drawn from distributions appropriate to the selected task type, and the task length is chosen so as to reduce the number of task failures and the average network delay; task arrival times are modeled by a Poisson process, whose inter-arrival times are independent and identically exponentially distributed;
in the step 3, the Poisson distribution is modeled by the probability mass function:
P(X = x) = λ^x e^(−λ) / x!,  x = 0, 1, 2, ...
one feature of the Poisson distribution is that x takes independent discrete values, each with a probability that is independent of all previous values;
the main property of the exponential process is memorylessness, which characterizes the waiting time until the next task arrives;
since a vehicle terminal does not generate service requests continuously, an idle/active task-generation pattern is used to simulate a real scenario; in this pattern the user creates tasks during active periods and waits during idle periods, i.e. each terminal has a state machine and is either in the active state or in the idle state;
step 4: establishing a fog computing resource optimization model;
data are collected to obtain the input task set of each vehicle terminal, the set of virtual machines in the cloud computing layer, and the set of virtual machines in the fog computing layer; next, the delays of offloading each task to the virtual machines of the cloud computing layer and of the fog computing layer are obtained, together with the computing resource occupancy incurred when each vehicle terminal's tasks are executed on the cloud-layer and fog-layer virtual machines; by formulating an objective function and constraint equations, the two factors of delay and computing resource occupancy are balanced and the optimal computing resource allocation is obtained;
in the step 4, assume that the system model contains a set of vehicle terminals denoted M = {M_1, M_2, ..., M_m}; each vehicle terminal has a limited number of input tasks, and the input task set of the i-th vehicle terminal is represented as T_i = {t_i1, t_i2, ..., t_in_i}, where t_ij denotes the j-th input task of the i-th vehicle terminal and vehicle terminal M_i has n_i tasks; the set D_i = {d_i1, d_i2, ..., d_in_i} represents the deadlines of the i-th vehicle terminal, where d_ij is the deadline of task t_ij; the set VC = {vc_1, vc_2, ..., vc_p} represents the p virtual machines in the cloud computing layer, while the set VFDC = {vfdc_1, vfdc_2, ..., vfdc_q} represents the q virtual machines across all fog computing layers; there are k fog computing layers, and the i-th fog computing center has q_i virtual machines (VMs), where 1 ≤ i ≤ k;
meanwhile, two indicator variables X_ijk and Y_ijk are defined: X_ijk = 1 if task t_ij is assigned to the k-th virtual machine of the cloud computing layer, and X_ijk = 0 otherwise; Y_ijk = 1 if task t_ij is assigned to the k-th virtual machine of the fog computing layer, and Y_ijk = 0 otherwise;
assume that d_ijk^c and d_ijk^f are the delays generated by offloading task t_ij to the k-th virtual machine of the cloud computing layer and to the k-th virtual machine of the fog computing layer, respectively; the execution delay of task t_ij is then:
d(t_ij) = Σ_k ( X_ijk · d_ijk^c + Y_ijk · d_ijk^f )
the total delay in performing all tasks is then expressed as:
[formula image FDA0004074670220000056]
vehicle terminal M_i executing task t_ij on the k-th virtual machine of the fog computing layer occupies computing resources c_ijk, and executing task t_ij on the k-th virtual machine of the cloud computing layer occupies computing resources c'_ijk;
the computing resource occupancy c_i of vehicle terminal M_i comprises two parts: (1) the computing resources occupied by the fog-layer virtual machines to which its tasks are offloaded, and (2) the computing resources occupied by the cloud virtual machines to which its tasks are offloaded; it is expressed as:
[formula image FDA0004074670220000057]
therefore, the total computing resources occupied by all vehicle terminals in M are:
C = Σ_{i=1}^{m} c_i
where c_i is the computing resource occupancy of vehicle terminal M_i;
to minimize the computing resources in the fog computing environment, the following objective function and constraint equations are formulated:
[objective function, formula (21): image FDA0004074670220000061]
subject to
[assignment constraints, formulas (22) and (23): images FDA0004074670220000062 and FDA0004074670220000063]
[deadline constraint, formula (24): image FDA0004074670220000064]
[bandwidth constraint, formula (25): image FDA0004074670220000065]
the objective function, formula (21), uses η to maintain a tradeoff between computing resource occupancy and delay; formulas (22) and (23) state that each task is assigned to one virtual machine of the fog computing layer or of the cloud; formula (24) states that the total delay of any task cannot exceed the set task deadline; formula (25) states that the transmission bandwidth required by any task must not exceed the total bandwidth;
step 5: managing the computing resources of the fog computing network in a load-balanced manner, and distributing the computing tasks requested by road vehicle terminals to the optimal fog computing resources;
an algorithm is provided for distributing the computing tasks requested by road vehicle terminals to the optimal fog computing resources, namely a load-balanced, computing-resource-efficient task offloading algorithm; the computing resource demand is estimated using an expected resource demand matrix, according to which a virtual machine needs different amounts of time and energy to execute different tasks; the scheduler first offloads a task to the fog computing layer, and if the computing resource occupancy of the fog computing layer is too high, the task is offloaded to the cloud.
2. The fog network-based vehicle big data calculation unloading method according to claim 1, wherein: in the step 1, when a vehicle moves to a corresponding intersection and enters the coverage area of an access point signal, the vehicle terminal joins the corresponding wireless local area network, accesses the fog computing device, and sends its computing task to a fog computing node; in addition, if the vehicle terminal decides to offload a task to the cloud computing device, it accesses the cloud computing device using the WAN connection provided by the Wi-Fi access point; the fog computing layer is also connected to the cloud computing system; in the application layer, the vehicle terminal generates task requests for processing.
3. The fog network-based vehicle big data calculation unloading method according to claim 1, wherein: in the step 5, the input of the load-balanced, computing-resource-efficient task offloading algorithm comprises the vehicle terminal set, the task deadlines, the cloud virtual machine set, and the fog computing layer virtual machine set;
if the resource occupancy of the fog computing layer is low, the task is assigned directly to a virtual machine of the fog computing layer and the fog computing layer is invoked to execute the task; based on the computing resource occupancy of the fog-layer virtual machines in the network, the position of the fog-layer virtual machine to which the task is assigned is returned, the resource occupancy of the virtual machine executing the task is computed, and the computing resource occupancy of all fog-layer virtual machines is updated;
if the resource occupancy of the fog computing layer is high, the task is offloaded to the cloud and the cloud computing layer is invoked to execute the task; the position of the cloud-layer virtual machine to which the task is assigned, the computing resource occupancy of the virtual machine executing the task, and the computing resource occupancy of all cloud virtual machines are returned;
after the whole procedure finishes, the service delay of the request task is returned.
4. The fog network-based vehicle big data calculation unloading method according to claim 3, wherein: the computing resource occupancy of the fog computing layer is the sum of the computing resource occupancies of all virtual machines of the fog computing layer; the computing resource occupancy of the cloud is the sum of the computing resource occupancies of all virtual machines of the cloud.
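As an illustration of the idle/active Poisson task-generation model of step 3 in claim 1, the following sketch draws exponentially distributed (i.i.d.) inter-arrival times and keeps only the arrivals that fall inside an active period; the generation rate, period lengths, and function name are assumptions of this sketch, not values from the patent:

```python
# Illustrative sketch of the idle/active task-generation pattern (claim 1,
# step 3): arrivals form a Poisson process (exponential, i.i.d. inter-arrival
# times), but a terminal only emits tasks during its active periods.
# All numeric parameters below are assumed for illustration.
import random


def generate_arrivals(horizon, rate, active_len=10.0, idle_len=5.0, seed=0):
    """Return sorted task arrival times in [0, horizon) for one terminal."""
    rng = random.Random(seed)
    cycle = active_len + idle_len   # one active period followed by one idle
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)  # exponential inter-arrival time
        if t >= horizon:
            break
        if t % cycle < active_len:  # keep only active-period arrivals
            arrivals.append(t)
    return arrivals
```

Arrivals landing in an idle period are simply discarded, which reproduces the state-machine behaviour described in step 3: the terminal generates tasks only while in the active state.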
CN201910246291.5A 2019-03-29 2019-03-29 Fog network-based vehicle big data calculation unloading method Active CN109947574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910246291.5A CN109947574B (en) 2019-03-29 2019-03-29 Fog network-based vehicle big data calculation unloading method


Publications (2)

Publication Number Publication Date
CN109947574A CN109947574A (en) 2019-06-28
CN109947574B (en) 2023-05-30

Family

ID=67012724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910246291.5A Active CN109947574B (en) 2019-03-29 2019-03-29 Fog network-based vehicle big data calculation unloading method

Country Status (1)

Country Link
CN (1) CN109947574B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519776B (en) * 2019-08-07 2021-09-17 东南大学 Balanced clustering and joint resource allocation method in fog computing system
CN111124531B (en) * 2019-11-25 2023-07-28 哈尔滨工业大学 Method for dynamically unloading calculation tasks based on energy consumption and delay balance in vehicle fog calculation
CN111010434B (en) * 2019-12-11 2022-05-27 重庆工程职业技术学院 Optimized task unloading method based on network delay and resource management
CN112416603B (en) * 2020-12-09 2023-04-07 北方工业大学 Combined optimization system and method based on fog calculation
CN112685186B (en) * 2021-01-08 2023-04-28 北京信息科技大学 Method and device for unloading computing task, electronic equipment and storage medium
CN113015109B (en) * 2021-02-23 2022-10-18 重庆邮电大学 Wireless virtual network access control method in vehicle fog calculation
CN114363215A (en) * 2021-12-27 2022-04-15 北京特种机械研究所 Train communication network time delay analysis method based on supply and demand balance

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160182639A1 (en) * 2014-12-17 2016-06-23 University-Industry Cooperation Group Of Kyung-Hee University Internet of things network system using fog computing network
CN109343904A (en) * 2018-09-28 2019-02-15 燕山大学 A kind of mist calculating dynamic offloading method based on Lyapunov optimization


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Survey of Cloud Robotic Systems; Li Bo; Computer Engineering and Applications; 2017-12-30; full text *

Also Published As

Publication number Publication date
CN109947574A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
CN109947574B (en) Fog network-based vehicle big data calculation unloading method
CN110505099B (en) Service function chain deployment method based on migration A-C learning
CN108566659B (en) 5G network slice online mapping method based on reliability
Faraci et al. Fog in the clouds: UAVs to provide edge computing to IoT devices
CN107615792B (en) Management method and system for MTC event
CN112822050A (en) Method and apparatus for deploying network slices
CN107666448B (en) 5G virtual access network mapping method under time delay perception
Arzo et al. Study of virtual network function placement in 5G cloud radio access network
CN110069341A (en) What binding function configured on demand has the dispatching method of dependence task in edge calculations
EP4024212A1 (en) Method for scheduling interference workloads on edge network resources
CN109743751B (en) Resource allocation method and device for wireless access network
CN113114738A (en) SDN-based optimization method for internet of vehicles task unloading
US20160269297A1 (en) Scaling the LTE Control Plane for Future Mobile Access
CN112073237B (en) Large-scale target network construction method in cloud edge architecture
Huang et al. Distributed resource allocation for network slicing of bandwidth and computational resource
Cao et al. Towards tenant demand-aware bandwidth allocation strategy in cloud datacenter
Wu et al. A mobile edge computing-based applications execution framework for Internet of Vehicles
Chang et al. Adaptive replication for mobile edge computing
Chen et al. Latency minimization for mobile edge computing networks
CN114024970A (en) Power internet of things work load distribution method based on edge calculation
Tsukamoto et al. Feedback control for adaptive function placement in uncertain traffic changes on an advanced 5G system
Xu et al. Online learning algorithms for offloading augmented reality requests with uncertain demands in MECs
Ma et al. Mobility-aware delay-sensitive service provisioning for mobile edge computing
Laroui et al. Virtual mobile edge computing based on IoT devices resources in smart cities
Yao et al. Multi-agent reinforcement learning for network load balancing in data center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant