CN117255368A - Edge dynamic integration method for vehicle-mounted edge server and cooperative fixed edge server


Info

Publication number
CN117255368A
CN117255368A
Authority
CN
China
Prior art keywords
edge server
vehicle
traffic flow
grids
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311536696.5A
Other languages
Chinese (zh)
Other versions
CN117255368B (en)
Inventor
李湘儿
宋维
唐昕怡
唐梽海
常乐
蒋丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202311536696.5A priority Critical patent/CN117255368B/en
Publication of CN117255368A publication Critical patent/CN117255368A/en
Application granted granted Critical
Publication of CN117255368B publication Critical patent/CN117255368B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0958 Management thereof based on metrics or performance parameters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 Traffic data processing
    • G08G 1/0129 Traffic data processing for creating historical data or processing based on historical data
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0925 Management thereof using policies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0958 Management thereof based on metrics or performance parameters
    • H04W 28/0967 Quality of Service [QoS] parameters
    • H04W 28/0975 Quality of Service [QoS] parameters for reducing delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems


Abstract

The invention belongs to the field of wireless communication networks and aims to provide an edge dynamic integration method for a vehicle-mounted edge server cooperating with a fixed edge server, comprising the following steps: dividing a map of the area under study into at least 2 grids, deploying a user terminal in any grid, and transferring the tasks generated by the user terminal to a vehicle-mounted edge server; predicting the traffic flow of any grid with a prediction model to obtain traffic flow prediction data; deriving from the traffic flow prediction data the computing capacity that the grid's vehicle-mounted edge servers must provide within a set time; and, on the premise that each grid's vehicle-mounted computing-capacity requirement is met, making the moving paths of at least 2 vehicle-mounted edge servers among different grids shortest, so as to obtain the optimal dispatching result for the vehicle-mounted edge servers. The method of the invention realizes effective utilization of edge computing resources and reduces their energy consumption.

Description

Edge dynamic integration method for vehicle-mounted edge server and cooperative fixed edge server
Technical Field
The invention relates to a wireless communication network technology, in particular to an edge dynamic integration method of a vehicle-mounted edge server and a fixed edge server.
Background
As is well known, the internet of vehicles is a large system network which is composed of an in-vehicle network, an inter-vehicle network and a cloud network and is used for radio communication and information interaction.
In the prior art, regarding the spatio-temporal variation problem that fixed edge servers cannot solve when vehicle-mounted and fixed edge servers are deployed for communication in the Internet of Vehicles, how to select a suitable mobile-unit carrier and optimize the deployment of mobile units remains an open problem in the field.
Among known technical solutions, some introduce a distributed vehicular edge computing scheme in the fields of vehicular networks and edge computing, called autonomous vehicle edge (AVE). The key of this solution is to use vehicle-to-vehicle (V2V) communication to share available resources between adjacent vehicles; on this basis a more general online solution is proposed, namely the hybrid vehicular cloud (HVC). HVC effectively shares all accessible computing resources through a multi-access network, including roadside units (RSU) and clouds. This architecture allows resource sharing between vehicles, but it shares resources among vehicles only during access and can only be designed for static loads. Furthermore, systems have been built in which multiple unmanned aerial vehicles (UAV) assist mobile edge computing (MEC): the UAVs act as MEC nodes that provide computation-offloading services to ground Internet-of-Things nodes whose local computing capability is limited. However, UAVs have limited computing resources and suffer from endurance and safety problems.
It can be seen that the existing technology is mainly designed for static loads, assuming the load of each area on the map is essentially constant; it is difficult for it to adapt quickly to dynamic load changes, and likewise difficult to find a suitable carrier whose mounted edge server adapts to such changes. Particularly in cities, the rapid flow of vehicles makes the load fluctuate in time and space, inducing an unbalanced load distribution and unnecessary energy consumption of computing resources.
Based on this, there is a need to research and develop an edge dynamic integration method that fully considers the spatio-temporal dynamics of the load and takes into account factors such as vehicle position, moving distance, edge-server computing capacity and load conditions.
Disclosure of Invention
The invention aims to provide an edge dynamic integration method of a vehicle-mounted edge server and a fixed edge server, which is used for at least solving one technical problem in the prior art.
The technical scheme of the invention is as follows:
an edge dynamic integration method of a vehicle-mounted edge server and a fixed edge server in a cooperative manner comprises the following steps:
dividing a map of an area to be researched into at least 2 grids, arranging a user terminal in any grid, and transferring a task generated by the user terminal to a vehicle-mounted edge server;
predicting traffic flow of any grid through a prediction model to obtain traffic flow prediction data;
obtaining the calculation capacity of the vehicle-mounted edge server of the grid within a set time according to the traffic flow prediction data;
on the premise that the grids meet the calculation capacity of the vehicle-mounted edge servers, the moving paths of at least 2 vehicle-mounted edge servers among different grids are shortest, so that the optimal dispatching result of the vehicle-mounted edge servers is obtained.
And predicting the traffic flow of the area to be researched through a prediction model to obtain traffic flow prediction data, wherein the method comprises the following steps of:
constructing a neural network model with time consistency;
training the neural network model with the time consistency by using the known traffic flow data to obtain a prediction model;
and taking traffic flow data of all grids corresponding to each time slice every day in a first period before the time point to be predicted as input data, and inputting the input data into the prediction model to obtain traffic flow prediction data of each time slice every day in a second period of the time point to be predicted.
The constructing the neural network model with time consistency comprises the following steps:
constructing a feature extraction block through grouping convolution and point-by-point convolution;
and constructing an encoder and a decoder of the neural network model through the feature extraction block to obtain the neural network model.
For any feature map $X \in \mathbb{R}^{C_{in} \times H \times W}$ in the neural network, with convolution kernel $K \in \mathbb{R}^{C_{out} \times C_{in} \times k \times k}$ and output feature map $Y \in \mathbb{R}^{C_{out} \times H' \times W'}$, the standard convolution operation can be described as:

$Y = K * X$

where $*$ denotes the convolution operator, $C_{in}$ and $C_{out}$ denote the numbers of input and output channels of the feature map, $H$ and $W$ denote the length and width of the feature map, $k$ denotes the size of the convolution kernel, and $H'$ and $W'$ denote the length and width of the output feature map, with $H' = \lfloor (H + 2p - k)/s \rfloor + 1$ and $W' = \lfloor (W + 2p - k)/s \rfloor + 1$, where $s$ denotes the stride of the convolution kernel and $p$ denotes the padding length of the feature map.
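The output-size relation above, $H' = \lfloor (H + 2p - k)/s \rfloor + 1$, can be sanity-checked with a small helper (the function name is mine, not the patent's):

```python
def conv_out_size(n: int, k: int, s: int = 1, p: int = 0) -> int:
    """Output length of one spatial dimension after convolution:
    floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# A 3x3 kernel with stride 1 and padding 1 preserves a 32-cell grid dimension,
# which is why such "same" convolutions suit grid-to-grid traffic prediction.
h_out = conv_out_size(32, k=3, s=1, p=1)   # 32
w_out = conv_out_size(32, k=3, s=2, p=1)   # 16: stride 2 halves the map
```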
The feature extraction block is represented as:

$Y = \mathrm{PWConv}(\mathrm{GConv}(X))$

where $X$ and $Y$ represent the input and output of the feature extraction block, respectively, and $\mathrm{GConv}$ and $\mathrm{PWConv}$ denote the grouped and point-by-point convolutions from which the block is built.
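One motivation for building the block from grouped and point-by-point convolutions rather than one standard convolution is the parameter saving; a back-of-the-envelope sketch (channel counts and group number are illustrative, not from the patent):

```python
def params_standard(c_in: int, c_out: int, k: int) -> int:
    # weight count of a standard k x k convolution
    return c_in * c_out * k * k

def params_grouped_pointwise(c_in: int, c_out: int, k: int, g: int) -> int:
    # grouped k x k conv: channels split into g groups, convolved separately
    grouped = (c_in // g) * (c_out // g) * k * k * g
    # 1x1 point-by-point conv mixing the grouped output's channels
    pointwise = c_out * c_out
    return grouped + pointwise

std = params_standard(512, 512, 3)               # 2,359,296 weights
sep = params_grouped_pointwise(512, 512, 3, 8)   # 294,912 + 262,144 = 557,056
```

With 512 channels and 8 groups the factored block needs roughly a quarter of the weights, which is what makes the feature extraction block cheap enough for per-time-slice city-wide prediction.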
The map of the area to be studied is divided into at least 2 grids, a user terminal is deployed in any grid, and tasks generated by the user terminal are transferred to a vehicle-mounted edge server, including:
dividing the map of the area to be studied into an $m \times n$ two-dimensional grid map, with the index set of the grids denoted $\mathcal{N}$, where $N$ represents the total number of grids;
and deploying a fixed edge server, a user terminal and a vehicle-mounted edge server in any grid, transferring tasks generated by the user terminal to the fixed edge server and the vehicle-mounted edge server, and executing any task through the fixed edge server and/or the vehicle-mounted edge server.
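The gridding step's reduction of longitude/latitude to a grid index can be sketched as follows; the bounding-box coordinates, row-major numbering and 32 x 32 layout are illustrative assumptions, not taken from the patent:

```python
def grid_index(lat, lon, lat_min, lat_max, lon_min, lon_max,
               rows=32, cols=32):
    """Map a position to its cell in a rows x cols grid map,
    numbering cells row-major from 0 to rows*cols - 1."""
    r = min(int((lat - lat_min) / (lat_max - lat_min) * rows), rows - 1)
    c = min(int((lon - lon_min) / (lon_max - lon_min) * cols), cols - 1)
    return r * cols + c

# Illustrative bounding box roughly covering central Beijing
idx = grid_index(39.95, 116.40, 39.75, 40.05, 116.20, 116.60)
```

All vehicle positions and loads falling into the same cell are then aggregated, so every downstream quantity (traffic flow, demand, server capacity) is indexed per grid.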
The obtaining the calculation capacity of the vehicle-mounted edge server of the grid within the set time according to the traffic flow prediction data comprises the following steps:
according to the traffic flow prediction data, obtaining the average traffic flow and the coefficient of variation for different time periods; deploying the grids that rank in the top 10-30% by traffic flow and coefficient of variation as the main candidate grids for the fixed edge server and the vehicle-mounted edge server, while the other grids rely only on the fixed edge server;
detecting the user demand in any grid with an isolation forest algorithm to obtain the traffic flow surge period in the grid;
and taking the minimum traffic flow of the surge period in the grid as the maximum capacity of the fixed edge server to be deployed, and the difference between the maximum and minimum traffic flow of the surge period as the computing capacity of the vehicle-mounted edge server to be deployed.
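The capacity split in the last step can be sketched in pure Python; a simple z-score rule stands in here for the isolation forest named in the patent, and the hourly flow numbers and function names are illustrative:

```python
from statistics import mean, stdev

def surge_slices(flows, z=1.5):
    """Indices of time slices with anomalously high flow (a z-score
    stand-in for the patent's isolation-forest detector)."""
    m, s = mean(flows), stdev(flows)
    return [i for i, f in enumerate(flows) if (f - m) / s > z]

def capacity_split(flows):
    """FES capacity = minimum flow during the surge period;
    VES capacity  = surge maximum minus surge minimum."""
    surge = [flows[i] for i in surge_slices(flows)]
    return min(surge), max(surge) - min(surge)

# 24 hourly flows with a morning surge (illustrative data)
flows = [20, 18, 15, 14, 16, 25, 60, 90, 110, 95, 70, 40,
         35, 30, 32, 38, 45, 55, 50, 40, 33, 28, 24, 21]
fes_cap, ves_cap = capacity_split(flows)
```

The idea is that the fixed server absorbs the guaranteed baseline of the surge, while the mobile servers only need to cover the part of the peak that exceeds it.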
On the premise that the grids meet the calculation capacity of the vehicle-mounted edge servers, enabling the moving paths of at least 2 vehicle-mounted edge servers among different grids to be shortest, and comprising the following steps:
setting an initial solution as a scheduling scheme of the vehicle-mounted edge server, and setting iteration times;
clustering is carried out, and a random initial solution is generated for each cluster;
generating a path in the cluster and connecting the cluster paths;
and finally obtaining the shortest moving path of the vehicle-mounted edge server among different grids through iteration.
The generating the intra-cluster path and connecting the cluster path comprises the following steps:
for each cluster in the clusters, randomly generating a path in the clusters, and connecting the paths among different clusters to form an integral route as an initial solution of route planning;
adding noise points not classified into clusters to existing paths, or generating new paths independently of noise points not classified into clusters.
And finally obtaining the shortest moving path of the vehicle-mounted edge server among different grids through iteration, wherein the method comprises the following steps:
continuously improving an initial value by using an iterative local search method until reaching a preset maximum iterative count, generating a final solution, and outputting the current optimal vehicle-mounted edge server scheduling scheme;
and in the process of continuously improving the initial value by the iterated local search method, the algorithm randomly selects two nodes to swap, and introduces a penalty matrix to record frequently swapped node pairs.
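The iterated local search with random two-node swaps and a penalty matrix can be sketched as follows; this is a minimal illustration on a toy distance matrix, and all names, the penalty weight and the data are mine, not the patent's:

```python
import random

def route_len(route, dist):
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def iterated_local_search(dist, iters=2000, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    best = list(range(n))               # initial scheduling order
    best_len = route_len(best, dist)
    # penalty[i][j] counts how often pair (i, j) was swapped,
    # discouraging the search from repeating the same moves
    penalty = [[0] * n for _ in range(n)]
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)  # random two-node swap
        cand = best[:]
        cand[i], cand[j] = cand[j], cand[i]
        cand_len = route_len(cand, dist)
        if cand_len + 0.01 * penalty[cand[i]][cand[j]] < best_len:
            best, best_len = cand, cand_len
        penalty[cand[i]][cand[j]] += 1
        penalty[cand[j]][cand[i]] += 1
    return best, best_len

# 5 grid centres with symmetric pairwise travel costs (illustrative)
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
route, length = iterated_local_search(D)
```

Because a candidate is accepted only when it beats the incumbent even after its penalty surcharge, the tour length is monotonically non-increasing over iterations.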
The beneficial effects of the invention at least comprise:
according to the method, a map of the area to be studied is divided into at least 2 grids, a user terminal is deployed in any grid, and the tasks generated by the user terminal are transferred to a vehicle-mounted edge server. The traffic flow is then forecast with a prediction model to obtain traffic flow prediction data, from which the computing capacity required of the grid's vehicle-mounted edge servers within a set time is derived. On the premise that each grid's vehicle-mounted computing-capacity requirement is met, the moving paths of at least 2 vehicle-mounted edge servers among different grids are made shortest, yielding the optimal scheduling result for the vehicle-mounted edge servers. The method fully considers the spatio-temporal dynamics of the load and proactively allocates computing resources to prevent shortages at peak times; combined with an efficient scheduling algorithm that considers factors such as vehicle position, moving speed, the computing capacity of the vehicle-mounted edge servers and the load conditions, it designs the moving routes of the vehicle-mounted edge servers. This improves resource-use efficiency, reduces time delay, ensures successful task execution, adapts quickly to dynamic urban load changes, avoids unbalanced load distribution, realizes effective utilization of edge computing resources, and reduces their energy consumption.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a system model diagram;
FIG. 3 is a schematic illustration of convolution;
FIG. 4 is a graph showing the trend of workload of a grid over various time periods;
FIG. 5 is a workload unit diagram of a corresponding grid;
FIG. 6 is a schematic diagram of the assigned workload;
FIG. 7 is a grid numbering schematic diagram requiring the focused deployment of on-board edge servers;
FIG. 8 is a geographic distribution of a surge grid over three time periods;
FIG. 9 is a diagram illustrating the computing resource requirements of user terminals in grid 177;
FIG. 10 is a schematic diagram of the number of average on-board edge servers required to select computing resources for a grid;
FIG. 11 is a graph comparing average computing resource utilization per hour in a static deployment method, a hybrid static dynamic deployment method, and an edge dynamic integrated deployment method;
FIG. 12 is a graph comparing total aggregate resource usage for static deployment methods, hybrid static dynamic deployment methods, and edge dynamic integrated deployment methods.
Detailed Description
To meet the demands of wireless communications, such as autopilot, multi-car collaborative lane change scheduling, and advanced driving assistance systems, more computing resources and shorter response times are urgently needed. However, the computational power of the vehicle is limited and it is difficult to support the above application. Thus, edge computing has evolved, which allows computing tasks to be migrated from vehicles to computationally powerful platforms such as access network base stations, road side units, etc., ensuring that the internet of vehicles is adequately supported for computation. Meanwhile, the traditional centralized cloud computing is difficult to meet the requirements of the internet of vehicles on real time and reliability due to limited access bandwidth, high delay and dependence on stable connection. In comparison, edge computing places computing tasks at the edge of the network, close to the data source, to better meet the user's expectations for computationally intensive and delay-sensitive applications.
The deployment of the edge computing server is crucial to the construction of the automatic driving digital traffic infrastructure, is a comprehensive system engineering, and relates to a plurality of factors such as deployment carrier, position selection, service range, optimization target, vehicle behavior and the like. Accurate prediction of user position and movement trajectory is particularly important for optimizing edge server deployment, establishing a reasonable dynamic resource allocation mechanism, and designing efficient scheduling algorithms.
At present, research on and application of this problem at home and abroad are still at an initial stage, and many theoretical and practical issues remain to be solved. The main challenge is the frequent change of user position and movement pattern caused by the high-speed movement of vehicles. In cities, the policy of deploying edge servers on network base stations and roadside units is commonly employed, known as the fixed edge server deployment method. Although this method is relatively easy to implement, the server's position is fixed once deployment is completed, and its computing capacity is difficult to adjust in real time. Internet-of-Vehicles end users are vehicles moving on the road, and their computing demand is naturally spatio-temporally dynamic, so the edge computing load exhibits the same dynamic characteristic. However, when these fixed edge servers serve Internet-of-Vehicles users, their fixed location, fixed coverage and hard-to-adjust computing capacity may result in an inefficient response to dynamic loads.
Therefore, the present embodiment introduces a co-deployment of the in-vehicle edge server and the fixed edge server. The vehicle-mounted edge server route planning is based on urban traffic planning, and vehicles carrying the edge servers can pass through all urban arterial roads. And a route can be planned on the branch according to the traffic density, so that good urban coverage rate is ensured. Meanwhile, the change of future demands can be predicted by analyzing the historical workload through a prediction algorithm. Active computing resource pushing is started based on the prediction result, and resource shortage during peak time can be avoided. And an efficient scheduling algorithm is adopted, various factors such as the vehicle position, the moving speed, the computing capacity of the vehicle-mounted edge server, the load state and the like are considered, the resource utilization rate is improved, the delay is reduced, and the task completion is ensured. In addition, considering the problems that the vehicle-mounted edge server consumes resources in the moving process, such as limited computing capacity and energy storage, the vehicle-mounted edge server must plan a reasonable moving route, minimize the total moving distance to save energy consumption, and achieve the purpose of effectively utilizing the edge computing resources.
Edge server: is a server deployed at the edge of the network; like ordinary servers, edge servers may provide computing, networking, and storage functions. And the edge server receives the calculation request of the terminal user, and sends the result back to the terminal user after the corresponding calculation is completed, namely calculation unloading is performed. The edge servers are physically close to the end user and field applications, so the speed of processing requests is faster than centralized servers, such as cloud servers, which can provide low latency, high bandwidth, mass access services to the end user.
Fixed edge server: an edge server deployed at a fixed place such as a base station or roadside unit; once deployed it cannot move, and its capacity cannot easily be adjusted in real time.
vehicle-mounted edge server: any vehicle is carried with an edge server to form a vehicle-mounted edge server, a calculation unloading service is provided for terminal vehicles in the coverage range of the vehicle in real time in the running process, and calculation requests generated by surrounding terminal nodes, namely vehicles, are processed; the vehicle-mounted edge server can also be communicated with a remote cloud computing and vehicle networking control center, and is used for uploading refined data for summarizing and receiving configuration instructions of the control center.
The general idea of the invention is as follows:
first, a base is establishedUNetAnd predicting traffic volume of each region of each time slice. Next, based on the predicted data, an orphan forest algorithm is applied between the fixed edge server and the onboard edge server to allocate respective working quotas. And finally, designing an optimization algorithm for dispatching the vehicle-mounted edge server, planning a moving path of the vehicle-mounted edge server, and compensating an overload area of the fixed edge server in the running process so as to fully exert flexibility of the fixed edge server and realize efficient dispatching of limited computing resources.
The present application will be further described with reference to the accompanying drawings.
Specific example 1:
referring to fig. 1, an edge dynamic integration method of a vehicle-mounted edge server and a fixed edge server specifically includes the following steps:
step one, data gridding:
the data within a certain geographic range is gridded, namely, the map is divided into a plurality of grids, each grid represents an independent geographic area, and the vehicle position and load are induced from longitude and latitude to the corresponding grid. For example: dividing Beijing city intoTwo-dimensional map of a grid, the index set of the grid being recorded asWhereinRepresenting the total number of grids, 1024; in one grid, the following components may be deployed:
as shown in figure 2 of the drawings,UEUserEquipment) Representing user terminals, i.e.IoVUser equipment or vehicle nodes. The user terminal will generate computing tasks and transfer these to the on-board edge server.BSBaseStation) Representing base stations, the main responsibility of which is to establish and maintain task schedulers, on-board edge servers and user terminalsUEA communication link between them.TSTaskScheduler) On behalf of the task scheduler, computing tasks may be collected by the base station and each task may be performed by selecting an on-board edge server.FES(FixedEdgeSever) For a fixed edge server, it can receive and executeTSAnd (3) distributing calculation tasks.VESVehicle-mountedEdgeServer) As the vehicle-mounted edge server, due to the mobility of the vehicle in urban environments,FESmay not be stable. Therefore, in-vehicle edge servers are introduced to increase the elastic computing resources to accommodate the high space-time dynamic demands of users.
Step two, predicting urban traffic flow:
the present embodiment employs a neural network model that is designed specifically for predicting traffic flow in urban environments. Different from the traditional method, the model firstly uses known, such as last years, traffic flow data to train out a prediction model, then inputs the last week data of the data to be predicted, and predicts the same time point realized in seven days in the future. That is, the trained prediction model is used to input the traffic data of all grids of each time slice every day in the previous week, for example, 30 minutes is a time slice, and one day is divided into 48 time slices, so as to predict the traffic data of each time slice every day in the next week. By the time slice alignment strategy, the prediction result is accurate to each time slice, resources can be deployed more effectively, and load mode fluctuation of different dates is extremely small in the same time slice, so that the time consistency can be selectedUNetArchitecture.
The UNet architecture mainly comprises an encoder and a decoder, both of which consist of feature extraction blocks. For traffic flow prediction, this embodiment introduces two special convolutions, grouped convolution and point-by-point convolution, and the feature extraction block consists of the two. In this network architecture, as shown in FIG. 3, for any feature map $X \in \mathbb{R}^{C_{in} \times H \times W}$ in the neural network, with convolution kernel $K \in \mathbb{R}^{C_{out} \times C_{in} \times k \times k}$ and output feature map $Y \in \mathbb{R}^{C_{out} \times H' \times W'}$, the standard convolution operation can be described as:

$Y = K * X$

where $*$ denotes the convolution operator, $C_{in}$ and $C_{out}$ denote the numbers of input and output channels of the feature map, $H$ and $W$ denote the length and width of the feature map, $k$ denotes the size of the convolution kernel, and the length $H'$ and width $W'$ of the output feature map can be calculated as:

$H' = \lfloor (H + 2p - k)/s \rfloor + 1, \quad W' = \lfloor (W + 2p - k)/s \rfloor + 1$

where $s$ denotes the stride of the convolution kernel and $p$ denotes the padding length of the feature map.
In particular, the present embodiment combines two special convolutions to form the feature extraction block: point-by-point convolution and grouped convolution. Point-by-point convolution is a special form of the standard convolution with kernel size $k = 1$. For grouped convolution, if the number of groups is $g$, then the channels of the input feature $X$ are divided into $g$ parts of $C_{in}/g$ channels each; each part is convolved separately and the corresponding output feature maps are concatenated along the channel dimension.
Based on these two special convolutions, the feature extraction block of a feature map is constructed as:

$Y = \mathrm{PWConv}(\mathrm{GConv}(X))$
such as in the above modelAndare allWherein, the data of the data set is recorded,traffic data representing a particular time slice for seven consecutive days,beijing City is divided intoA grid; the input isHistory data of the last week, i.e. the vehicle inflow and outflow data by deep learningUNetModel training with internal known data such as Beijing four year historical traffic data, determining internal parameters, and outputtingData of (a), i.e. a future weekTraffic flow prediction data of a certain time slice of each grid;
Specifically, when the model is used for traffic flow prediction, the hidden feature dimension of the network can be set to 512 and a batch size of 16 used. The Lion optimizer performs training with an initial learning rate of 0.002, and a cosine annealing strategy dynamically adjusts the learning rate to improve training performance. To train the model, the mean squared error loss (MSELoss) can be used to minimize the difference between predicted and actual values.
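The cosine annealing schedule mentioned above can be sketched as follows (a minimal implementation under our own naming; the patent does not specify the exact annealing variant or step counts):

```python
import math

def cosine_annealing_lr(step: int, total_steps: int,
                        lr_max: float = 0.002, lr_min: float = 0.0) -> float:
    """Learning rate decayed from lr_max to lr_min over total_steps,
    following half a cosine period."""
    cos = math.cos(math.pi * step / total_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + cos)

print(cosine_annealing_lr(0, 100))    # 0.002 (the initial rate used above)
print(cosine_annealing_lr(50, 100))   # 0.001
print(cosine_annealing_lr(100, 100))  # ~0.0
```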
To verify the above prediction model, a comprehensive comparative study was performed on the TaxiBJ dataset against the current state-of-the-art methods TAU and SimVP. To ensure a fair comparison, predictions were likewise made using data from the same first 7 days, so that the experimental settings matched. In addition, to verify the validity of the prediction model, an Avg baseline was constructed as a blank reference group: it performs no prediction and simply takes the average of the data at the same time points to produce the data for the same time points of the next 7 days. The specific results are given in table 1:
table 1: calculation cost and mean square error table of various prediction methods
GICUNe TAU SimVP STResNet
Cost of calculation 1.37 6.12 9.42 3.14
Mean square error 0.0155 0.0185 0.0190 0.0223
As can be seen from table 1, the prediction model disclosed in this embodiment outperforms the other methods in traffic flow prediction, and it maintains superior performance even on Sundays, when traffic flow is relatively anomalous. In addition, in terms of computational cost, the highest cost recorded by the method of this embodiment is only 1.37 G, significantly lower than TAU's 6.12 G, SimVP's 9.42 G, and STResNet's 3.14 G. Meanwhile, the prediction model still maintains high prediction accuracy, with an average mean squared error of 0.0155, clearly better than TAU's 0.0185, SimVP's 0.0190, and STResNet's 0.0223;
In summary, the prediction model disclosed in this embodiment not only achieves superior prediction accuracy but also markedly reduces computational cost, providing accurate traffic flow prediction data for the subsequent deployment of vehicle-mounted edge servers.
Step three, quota planning of the fixed edge server and the vehicle-mounted edge server:
According to the traffic flow prediction data for the coming week, the average traffic flow and the coefficient of variation over different time periods are calculated for each grid to capture the scale and variability of traffic. Grids with higher traffic flow and coefficients of variation are determined as the primary candidate grids for deploying both fixed edge servers and vehicle-mounted edge servers; optionally, the grids ranking in the top 10%-30% (preferably the top 20%) by traffic flow and coefficient of variation are selected, while the other grids rely solely on fixed edge servers. This embodiment picks 48 primary candidate grids for deploying fixed and vehicle-mounted edge servers for focused study, and uses the isolation forest algorithm to detect the time slices within these grids where user demand increases significantly; these time slices are referred to as surge periods. The isolation forest algorithm effectively separates anomalous data from normal data by recursively partitioning the data space: data points isolated at shallower depths in the trees are more easily identified as anomalies. A time slice whose data point is isolated at a shallow depth and whose traffic flow exceeds the 75th percentile of the grid's flow is defined as a surge period. This condition ensures that periods of significantly increased traffic relative to normal periods are captured, so that vehicle-mounted edge servers can be scheduled to meet user demand during those periods. For example, FIG. 4 shows one month of traffic flow for grid number 174, with the traffic surge periods marked with dots.
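The surge-period rule above (anomalous by isolation forest AND above the 75th percentile of the grid's flow) can be sketched as follows. This assumes scikit-learn's IsolationForest; the contamination value and the synthetic flow data are illustrative, not from the patent:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def surge_periods(flow: np.ndarray, contamination: float = 0.1) -> np.ndarray:
    """Indices of surge time slices: flagged anomalous by an isolation
    forest AND above the 75th percentile of the grid's flow."""
    forest = IsolationForest(contamination=contamination, random_state=0)
    labels = forest.fit_predict(flow.reshape(-1, 1))   # -1 marks anomalies
    threshold = np.percentile(flow, 75)
    return np.where((labels == -1) & (flow > threshold))[0]

rng = np.random.default_rng(0)
flow = rng.normal(100, 5, size=48)   # one day of 48 time slices
flow[[10, 11, 35]] += 80             # inject three demand surges
print(surge_periods(flow))           # should include indices 10, 11 and 35
```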
After the grid traffic surge periods have been selected, the deployment capacities of the fixed edge server and the vehicle-mounted edge server are determined. This embodiment takes the minimum traffic flow within the selected surge periods as the maximum capacity of the fixed edge server to be deployed, and the difference between the maximum and minimum traffic flows within the surge periods as the computing capacity of the vehicle-mounted edge servers to be deployed. This key step ensures that the fixed edge server can meet the basic computational requirements of each grid most of the time, while during periods of significantly increased load the computing resources of the vehicle-mounted edge servers and the fixed edge server can cooperate to handle computing tasks that the fixed edge server cannot complete alone.
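The quota rule above reduces to simple arithmetic over a grid's surge-period workloads (a sketch; the illustrative values are chosen to match the grid-177 example in this section, where the fixed capacity is 221 units and the peak is 330):

```python
def plan_capacity(surge_flows):
    """Capacity quota from the surge-period workloads of one grid:
    the fixed edge server (FES) covers the minimum surge workload,
    and vehicle-mounted edge servers (VES) cover the remainder."""
    fes_capacity = min(surge_flows)
    ves_capacity = max(surge_flows) - min(surge_flows)
    return fes_capacity, ves_capacity

# Illustrative surge workloads ranging from 221 up to the 330-unit peak.
fes, ves = plan_capacity([221, 260, 295, 330])
print(fes, ves)   # 221 109
```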
The following is a detailed description by way of specific examples:
First, the average traffic flow was calculated for all grids over the 1488 time intervals of March 2016, and the grids were sorted in descending order of traffic flow. After selecting the top 300 grids with the highest traffic flow, the coefficients of variation of these grids were further calculated, and the 48 grids with the highest coefficients of variation were selected. These grids not only exhibit a high concentration of vehicle traffic but also show significant fluctuation across different time intervals.
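The two-stage selection described above (top grids by mean flow, then top grids among those by coefficient of variation) can be sketched as follows; the function name and the synthetic traffic matrix are ours, for illustration only:

```python
import numpy as np

def select_candidate_grids(flows: np.ndarray, top_flow: int, top_cv: int) -> np.ndarray:
    """flows: (num_grids, num_slices) traffic matrix. Keep the top_flow
    grids by mean flow, then pick the top_cv of those by coefficient of
    variation (std / mean)."""
    mean = flows.mean(axis=1)
    busiest = np.argsort(mean)[::-1][:top_flow]
    cv = flows[busiest].std(axis=1) / flows[busiest].mean(axis=1)
    return busiest[np.argsort(cv)[::-1][:top_cv]]

rng = np.random.default_rng(1)
# Synthetic month of traffic: 1024 grids x 1488 half-hour slices.
flows = rng.gamma(shape=5.0, scale=20.0, size=(1024, 1488))
picked = select_candidate_grids(flows, top_flow=300, top_cv=48)
print(len(picked))   # 48
```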
Next, edge computing resources are optimized and allocated for these highly dynamic, highly variable grids so as to achieve the most efficient deployment under limited FES and VES resources. For each region, the FES is allocated sufficient computing resources to meet the maximum workload occurring during regular time periods. In the special case of a sudden increase in the number of user terminals (UE), the FES is required to fully exploit its computational potential to cope with the peak workload, while appropriately transferring the workload beyond its capacity to the VES for processing.
Specifically, as shown in fig. 4, the graph shows the workload trend of grid 177 over the time periods of March 2016, with the workload peaking at 330 in the 799th time period. When determining the computing resources of the largest fixed edge server required for the grid, the surge periods in the number of user terminals (UE) should first be excluded, and the maximum workload among the remaining time slices then taken as the fixed capacity; as shown in fig. 5, this value is 221 workload units. For the specific periods in which the number of UEs increases significantly, this embodiment appropriately allocates part of the workload to the vehicle-mounted edge server (VES); as shown in fig. 6, the allocated workload is 109 units. This approach adapts quickly to dynamic load changes.
Step four, mobile scheduling optimization of the vehicle-mounted edge server:
After the computing capacity and the amount of computation of the corresponding vehicle-mounted edge servers have been determined for each time slice of each grid, the moving paths of the vehicle-mounted edge servers between different grids are planned so as to satisfy two conditions: 1. the demand of every grid for vehicle-mounted edge servers is met, i.e. users are served; 2. the total moving path is the shortest, i.e. energy is saved.
As shown in fig. 7, the ordinate in the figure is the grid number selected for key deployment of vehicle-mounted edge servers, and the abscissa is the 48 time slices of a day; the figure shows the areas where vehicle-mounted edge servers are deployed with emphasis and the amount of computing resources to be supplemented in each time slice. The arrow in the figure indicates the planned path of one vehicle-mounted edge server: at 10:30 it moves from grid 177 to grid 179, and then stays on grid 179 for all subsequent time slices. The problem is to find the paths of all vehicle-mounted edge servers while meeting the two requirements listed in the previous paragraph.
The scheduling problem of the vehicle-mounted edge servers is how to optimize their paths so as to meet the edge computing resource requirements of different areas within specific time windows while reducing energy consumption as much as possible. This embodiment generalizes the problem to a vehicle routing problem with hard time windows, involving strict time constraints. To solve it, this embodiment proposes a cluster-first route-second (CFRS) iterative local search algorithm, which comprises the following steps:
1. Set an initial solution as the scheduling scheme of the vehicle-mounted edge servers, and set an iteration counter to 0, which controls the number of subsequent iterations of the algorithm;
2. Cluster with the DBSCAN algorithm and generate a random initial solution for each cluster: the algorithm clusters the deployment nodes A of the vehicle-mounted edge servers, obtaining clusters such as C1, C2, ..., CK together with some noise points N not assigned to any cluster;
3. Generate intra-cluster paths and connect the cluster paths: for each cluster, a path is randomly generated inside the cluster, and the paths between different clusters are connected to form an overall route, which serves as the initial solution for route planning. Noise points not assigned to any cluster are added to existing paths, or new paths are generated for them independently. This embodiment observes that the areas to be deployed are geographically clustered, as shown in FIG. 8, so the initial solution constructed in the CFRS stage is expected to increase the convergence speed of the algorithm.
4. Iterative optimization. This embodiment applies an iterative local search method to continuously improve the initial solution. During each iteration, two nodes are randomly selected and exchanged to improve the current path; the lengths of the new path and the original path are compared, and if the new path is shorter the change is accepted and the new path becomes the current path, otherwise the original path is kept unchanged. At the same time, a penalty matrix is introduced to record frequently exchanged node pairs: node pairs exchanged with high frequency that led to poor results in past computations are assigned higher penalty values. Node pairs are then selected according to the penalty values, with a lower selection probability for pairs with higher penalties. This mechanism reduces the chance of reselecting node pairs that were frequently exchanged in the past with suboptimal results, helping the search escape local optima and find a better global solution. These steps are repeated until the maximum iteration count is reached, producing the final solution: the best vehicle-mounted edge server scheduling scheme found so far is output.
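The cluster-first stage (step 2 above) can be sketched with scikit-learn's DBSCAN; the coordinates, eps, and min_samples values below are illustrative assumptions, not parameters from the patent:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Deployment nodes A as 2-D grid coordinates: two geographic clusters
# plus one far-away point that should end up as noise.
A = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 5.1],
              [20.0, 20.0]])

labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(A)
print(labels)   # label -1 marks a noise point N, others are clusters C1..CK

# Group the nodes by cluster, excluding noise, to seed per-cluster routes.
clusters = {k: A[labels == k] for k in set(labels) if k != -1}
print(sorted(clusters))
```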
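The iterative-optimization stage (step 4 above) can be sketched as a simplified iterated local search with random node swaps and a penalty matrix; the acceptance rule, the penalty-to-probability mapping, and the distance matrix are our own illustrative choices:

```python
import random

def path_length(path, dist):
    return sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))

def iterated_local_search(path, dist, iterations=2000, seed=0):
    """Improve a route by random two-node swaps; a penalty matrix lowers
    the chance of re-trying node pairs whose past swaps did not help."""
    rng = random.Random(seed)
    n = len(path)
    penalty = [[0] * n for _ in range(n)]
    best = list(path)
    best_len = path_length(best, dist)
    for _ in range(iterations):
        i, j = rng.sample(range(n), 2)
        # Frequently penalised pairs are skipped with growing probability.
        if rng.random() < penalty[i][j] / (penalty[i][j] + 5):
            continue
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        cand_len = path_length(cand, dist)
        if cand_len < best_len:
            best, best_len = cand, cand_len   # accept the shorter path
        else:
            penalty[i][j] += 1                # record the unhelpful swap
            penalty[j][i] += 1
    return best, best_len

# Illustrative symmetric distance matrix over 5 grids.
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
route, length = iterated_local_search([0, 2, 4, 1, 3], dist)
print(length)   # no longer than the initial route's length of 21
```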
Verification: the effectiveness of the method described in this embodiment was verified by comparative experiments against static deployment and hybrid static-dynamic deployment. Static deployment: for each grid, the FES is allocated a sufficient amount of computing resources to satisfy potential computation offloading requests under the maximum-workload assumption. Hybrid static-dynamic deployment: the fixed edge server and mobile units jointly handle the workload of the grid; the key difference is that in hybrid static-dynamic deployment each VES returns to its base after completing a single scheduling task and then continues to execute a new scheduling task. Edge dynamic integration deployment: the method provided by this embodiment, which fully considers the spatio-temporal dynamics of the load and performs dynamic resource scheduling taking into account factors such as vehicle position, moving distance, the computing power of the vehicle-mounted edge server, and the load conditions.
analysis of the grid 177 as in fig. 7 results in the result as in fig. 9, wherein the user terminals in the grid 177UEIs a hybrid static dynamic systemComputing resource allocation under deployment and static deployment schemes; the static deployment scheme cannot flexibly allocate computing resources according to dynamic changes of terminal requirements due to a static edge server placement strategy, so that the resource utilization rate is low in most cases. In contrast, the edge dynamic integration deployment has flexible resource allocation strategy, and realizes higher resource utilization rate; wherein, static deployment is used in FIG. 9SDA representation; hybrid static dynamic deploymentMSDDA representation; edge dynamic integration deploymentIVAMENAnd (5) expressing.
Meanwhile, FIG. 10 shows the average number of vehicle-mounted edge servers (VES) required to supplement the computing resources of the 48 selected grids at different times of the day. It can clearly be seen that there are about three peak periods during which more VESs need to be deployed to meet the computing requirements of the terminals. In the hybrid static-dynamic deployment method, each VES returns to its base after completing a single scheduling task, which results in unnecessary energy waste. FIG. 11 compares the three deployment methods in terms of average computing resource utilization per hour; the results show that the edge dynamic integration deployment method is significantly better than the other two during periods of high terminal computing demand. In FIGS. 10-11, static deployment is denoted SD, hybrid static-dynamic deployment MSDD, and edge dynamic integration deployment IVAMEN.
Finally, counting the total computing resource usage, as shown in fig. 12, the edge dynamic integration deployment method achieves a total computing resource utilization about 4.83% higher than the hybrid static-dynamic deployment method and about 10.6% higher than the static deployment method. In short, the edge dynamic integration deployment method not only meets the requirements of the user terminals but also improves the utilization of computing resources and markedly reduces total computing resource usage, offering significant advantages from an economic point of view. It successfully addresses the fluctuation in computing resource demand caused by spatio-temporal factors in the Internet of Vehicles setting and provides an economically feasible solution for deploying vehicle-mounted edge servers. In FIG. 12, static deployment is denoted SD, hybrid static-dynamic deployment MSDD, and edge dynamic integration deployment IVAMEN.
In short, the invention provides an edge computing dynamic integration scheme for vehicle-mounted edge servers and fixed edge servers based on urban traffic planning. It establishes a UNet-based traffic prediction model that predicts the traffic volume of each time slice of each grid in a time-slice-aligned manner; based on the predicted data, it reasonably distributes the workload between the fixed edge servers and the vehicle-mounted edge servers using the isolation forest algorithm; finally, it uses vehicle-mounted edge server scheduling and path planning to achieve efficient scheduling of limited computing resources, so that the fixed edge servers provide basic coverage and the vehicle-mounted edge servers provide dynamic compensation. The solution provides a comprehensive method for dynamically adjusting edge computing resources in an Internet of Vehicles environment, realizes effective utilization of edge computing resources, and reduces their energy consumption. The method features high security, large load capacity, and low cost.
The foregoing disclosure is merely illustrative of some embodiments of the invention, and the invention is not limited thereto; modifications may be made by those skilled in the art without departing from the scope of the invention. The above sequence numbers are merely for description and do not represent the advantages or disadvantages of the implementation scenarios.

Claims (10)

1. The edge dynamic integration method of the vehicle-mounted edge server and the fixed edge server is characterized by comprising the following steps:
dividing a map of an area to be researched into at least 2 grids, arranging a user terminal in any grid, and transferring a task generated by the user terminal to a vehicle-mounted edge server;
predicting traffic flow of any grid through a prediction model to obtain traffic flow prediction data;
obtaining the calculation capacity of the vehicle-mounted edge server of the grid within a set time according to the traffic flow prediction data;
on the premise that the grids meet the calculation capacity of the vehicle-mounted edge servers, the moving paths of at least 2 vehicle-mounted edge servers among different grids are shortest, so that the optimal dispatching result of the vehicle-mounted edge servers is obtained.
2. The method for dynamically integrating the edges of the vehicle-mounted edge server and the fixed edge server according to claim 1, wherein the predicting the traffic flow of the area to be studied by the prediction model to obtain the traffic flow prediction data comprises the following steps:
constructing a neural network model with time consistency;
training the neural network model with the time consistency by using the known traffic flow data to obtain a prediction model;
and taking traffic flow data of all grids corresponding to each time slice every day in a first period before the time point to be predicted as input data, and inputting the input data into the prediction model to obtain traffic flow prediction data of each time slice every day in a second period of the time point to be predicted.
3. The method for dynamically integrating the edge of the on-vehicle edge server and the fixed edge server according to claim 2, wherein the constructing the neural network model with time consistency comprises the following steps:
constructing a feature extraction block through grouping convolution and point-by-point convolution;
and constructing a neural network model through the feature extraction block.
4. The edge dynamic integration method of the on-vehicle edge server and the fixed edge server according to claim 3, wherein the method comprises the following steps:
for any feature map X in a neural network, with convolution kernel K and output feature map Y, the standard convolution operation can be described as: Y = K * X
wherein * represents the convolution operator, C_in and C_out represent the numbers of input and output channels of the feature map, H and W represent the length and width of the feature map, k represents the size of the convolution kernel, and H' and W' represent the length and width of the output feature map, calculated as H' = (H + 2p - k)/s + 1 and W' = (W + 2p - k)/s + 1; wherein s represents the stride of the convolution kernel movement and p represents the padding length of the feature map.
5. The edge dynamic integration method of the on-vehicle edge server and the fixed edge server according to claim 3, wherein the method comprises the following steps:
the feature extraction block maps an input X_in to an output X_out by applying the grouping convolution followed by the point-by-point convolution, wherein X_in and X_out represent the input and output of the feature extraction module, respectively.
6. The method for edge dynamic integration of an on-vehicle edge server and a fixed edge server according to claim 1, wherein the dividing the map of the area to be studied into at least 2 grids and deploying a user terminal in any one of the grids, transferring the task generated by the user terminal to the edge server comprises:
dividing a map of an area to be studied intoTwo-dimensional map of grid and index set of grid asWherein->Representing the total number of grids;
and deploying a fixed edge server, a user terminal and a vehicle-mounted edge server in any grid, transferring tasks generated by the user terminal to the fixed edge server and the vehicle-mounted edge server, and executing any task through the fixed edge server and/or the vehicle-mounted edge server.
7. The method for dynamically integrating the edge of the fixed edge server in cooperation with the vehicle-mounted edge server according to claim 1, wherein the obtaining the computing capacity of the vehicle-mounted edge server of the grid within a set time according to the traffic flow prediction data comprises the following steps:
according to the traffic flow prediction data, obtaining the average traffic flow and the coefficient of variation for different time periods, and selecting the grids ranking in the top 10%-30% by traffic flow and coefficient of variation as the primary candidate grids for deploying the fixed edge server and the vehicle-mounted edge server, the other grids relying only on the fixed edge server;
detecting the user demand in any grid by using an isolated forest algorithm to obtain a vehicle flow surge period in the grid;
and taking the minimum traffic flow of the traffic flow surge period in the grid as the maximum capacity of the fixed edge server to be deployed, and taking the difference between the maximum traffic flow and the minimum traffic flow of the traffic flow surge period in the grid as the calculation capacity of the vehicle-mounted edge server to be deployed.
8. The method for dynamically integrating edges of on-board edge servers in cooperation with fixed edge servers according to claim 1, wherein said minimizing a moving path of at least 2 on-board edge servers between different grids on the premise that said grids satisfy a calculation capacity of the on-board edge servers comprises:
setting an initial solution as a scheduling scheme of the vehicle-mounted edge server, and setting iteration times;
clustering is carried out, and a random initial solution is generated for each cluster;
generating a path in the cluster and connecting the cluster paths;
and finally obtaining the shortest moving path of the vehicle-mounted edge server among different grids through iteration.
9. The method for dynamically integrating edges of an on-vehicle edge server and a fixed edge server according to claim 8, wherein generating an intra-cluster path and connecting cluster paths comprises:
for each cluster in the clusters, randomly generating a path in the clusters, and connecting the paths among different clusters to form an integral route as an initial solution of route planning;
adding noise points not classified into clusters to existing paths, or generating new paths independently of noise points not classified into clusters.
10. The method for dynamically integrating the edge of the on-vehicle edge server in cooperation with the fixed edge server according to claim 8, wherein the step of obtaining the shortest moving path of the on-vehicle edge server between different grids through iteration includes:
continuously improving an initial value by using an iterative local search method until reaching a preset maximum iterative count, generating a final solution, and outputting the current optimal vehicle-mounted edge server scheduling scheme;
and in the process of continuously improving the initial value by the iterative local search method, two nodes are randomly selected and exchanged, and a penalty matrix is introduced to record frequent node-pair exchanges.
CN202311536696.5A 2023-11-17 2023-11-17 Edge dynamic integration method for vehicle-mounted edge server and cooperative fixed edge server Active CN117255368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311536696.5A CN117255368B (en) 2023-11-17 2023-11-17 Edge dynamic integration method for vehicle-mounted edge server and cooperative fixed edge server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311536696.5A CN117255368B (en) 2023-11-17 2023-11-17 Edge dynamic integration method for vehicle-mounted edge server and cooperative fixed edge server

Publications (2)

Publication Number Publication Date
CN117255368A true CN117255368A (en) 2023-12-19
CN117255368B CN117255368B (en) 2024-02-27

Family

ID=89128050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311536696.5A Active CN117255368B (en) 2023-11-17 2023-11-17 Edge dynamic integration method for vehicle-mounted edge server and cooperative fixed edge server

Country Status (1)

Country Link
CN (1) CN117255368B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117573376A (en) * 2024-01-16 2024-02-20 杭州天舰信息技术股份有限公司 Data center resource scheduling monitoring management method and system
CN117952285A (en) * 2024-03-27 2024-04-30 广东工业大学 Dynamic scheduling method for unmanned aerial vehicle mobile charging station
CN117992230A (en) * 2024-02-21 2024-05-07 北京驭达科技有限公司 Vehicle-mounted edge calculation method and system based on autonomous learning

Citations (8)

Publication number Priority date Publication date Assignee Title
CN113391647A (en) * 2021-07-20 2021-09-14 中国人民解放军国防科技大学 Multi-unmanned aerial vehicle edge computing service deployment and scheduling method and system
WO2022193511A1 (en) * 2021-03-18 2022-09-22 湖北亿咖通科技有限公司 Map data transmission method and system, edge server, and storage medium
CN115361689A (en) * 2022-08-08 2022-11-18 广东工业大学 Cooperative deployment method for fixed station and unmanned aerial vehicle carrying edge server
WO2023108718A1 (en) * 2021-12-16 2023-06-22 苏州大学 Spectrum resource allocation method and system for cloud-edge collaborative optical carrier network
CN116339748A (en) * 2023-03-06 2023-06-27 南京航空航天大学 Self-adaptive application program deployment method in edge computing network based on mobility prediction
CN116600347A (en) * 2023-05-25 2023-08-15 上海电器科学研究所(集团)有限公司 Edge calculation dynamic adjustment and unloading method based on path prediction
CN116828515A (en) * 2023-06-15 2023-09-29 浙江大学 Edge server load prediction method based on space-time diagram convolution under Internet of vehicles
CN116866931A (en) * 2023-07-18 2023-10-10 广东工业大学 Urban mobile edge server deployment method

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
WO2022193511A1 (en) * 2021-03-18 2022-09-22 湖北亿咖通科技有限公司 Map data transmission method and system, edge server, and storage medium
CN113391647A (en) * 2021-07-20 2021-09-14 中国人民解放军国防科技大学 Multi-unmanned aerial vehicle edge computing service deployment and scheduling method and system
WO2023108718A1 (en) * 2021-12-16 2023-06-22 苏州大学 Spectrum resource allocation method and system for cloud-edge collaborative optical carrier network
CN115361689A (en) * 2022-08-08 2022-11-18 广东工业大学 Cooperative deployment method for fixed station and unmanned aerial vehicle carrying edge server
CN116339748A (en) * 2023-03-06 2023-06-27 南京航空航天大学 Self-adaptive application program deployment method in edge computing network based on mobility prediction
CN116600347A (en) * 2023-05-25 2023-08-15 上海电器科学研究所(集团)有限公司 Edge calculation dynamic adjustment and unloading method based on path prediction
CN116828515A (en) * 2023-06-15 2023-09-29 浙江大学 Edge server load prediction method based on space-time diagram convolution under Internet of vehicles
CN116866931A (en) * 2023-07-18 2023-10-10 广东工业大学 Urban mobile edge server deployment method

Non-Patent Citations (2)

Title
HONG ZHANG,SHENG JIN,ZHIHAI TANG,LE CHANG: "Joint Offloading with Fixed-Site and UAV-Mounted Edge Servers Based on Particle Swarm Optimization", 2023 9TH INTERNATIONAL CONFERENCE ON CONTROL SCIENCE AND SYSTEMS ENGINEERING (ICCSSE) *
蒋丽,谢胜利,田辉: "面向数字孪生边缘网络的区块链分片及资源自适应优化机制", 通信学报 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN117573376A (en) * 2024-01-16 2024-02-20 杭州天舰信息技术股份有限公司 Data center resource scheduling monitoring management method and system
CN117573376B (en) * 2024-01-16 2024-04-05 杭州天舰信息技术股份有限公司 Data center resource scheduling monitoring management method and system
CN117992230A (en) * 2024-02-21 2024-05-07 北京驭达科技有限公司 Vehicle-mounted edge calculation method and system based on autonomous learning
CN117952285A (en) * 2024-03-27 2024-04-30 广东工业大学 Dynamic scheduling method for unmanned aerial vehicle mobile charging station

Also Published As

Publication number Publication date
CN117255368B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN117255368B (en) Edge dynamic integration method for vehicle-mounted edge server and cooperative fixed edge server
Yadav et al. Energy-latency tradeoff for dynamic computation offloading in vehicular fog computing
Ning et al. Deep reinforcement learning for intelligent internet of vehicles: An energy-efficient computational offloading scheme
Xu et al. A computation offloading method for edge computing with vehicle-to-everything
CN110650457B (en) Joint optimization method for task unloading calculation cost and time delay in Internet of vehicles
Yang et al. Learning based channel allocation and task offloading in temporary UAV-assisted vehicular edge computing networks
Zheng et al. Dynamic performance analysis of uplink transmission in cluster-based heterogeneous vehicular networks
CN106341826B (en) The resource optimal distribution method towards wireless power private network based on virtualization technology
CN111182495A (en) 5G internet of vehicles partial calculation unloading method
Liu et al. Energy-efficiency computation offloading strategy in UAV aided V2X network with integrated sensing and communication
Wang et al. Complex network theoretical analysis on information dissemination over vehicular networks
Wang et al. Radio resource allocation for bidirectional offloading in space-air-ground integrated vehicular network
Wang et al. QoS‐enabled resource allocation algorithm in internet of vehicles with mobile edge computing
Kovalenko et al. Robust resource allocation using edge computing for vehicle to infrastructure (v2i) networks
CN106060145A (en) Profit based request access control method in distributed multi-cloud data center
CN112055335A (en) Uplink vehicle-mounted communication resource allocation method and system based on NOMA
Wang et al. Vehicular computation offloading in UAV-enabled MEC systems
Xia et al. Location-aware and delay-minimizing task offloading in vehicular edge computing networks
Naren et al. A survey on computation resource allocation in IoT enabled vehicular edge computing
Mirza et al. MCLA task offloading framework for 5G-NR-V2X-based heterogeneous VECNs
CN116781144A (en) Method, device and storage medium for carrying edge server by unmanned aerial vehicle
Tian et al. Deep Reinforcement Learning‐Based Dynamic Offloading Management in UAV‐Assisted MEC System
CN114741191B (en) Multi-resource allocation method for correlation of computationally intensive tasks
Cui et al. Load balancing mechanisms of unmanned surface vehicle cluster based on marine vehicular fog computing
Ko et al. Towards efficient data services in vehicular networks via cooperative infrastructure-to-vehicle and vehicle-to-vehicle communications

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant