CN115883568B - Tunnel edge computing node deployment method and system - Google Patents

Tunnel edge computing node deployment method and system

Info

Publication number: CN115883568B (application CN202310148605.4A)
Authority: CN (China)
Prior art keywords: edge computing, tunnel, tunnel edge, computing node, constraint
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN115883568A
Inventors: 李朋, 李浩, 陈志涛, 韩凯旋, 付帅, 赵倩, 罗承成, 陆艳铭
Current Assignee: BROADVISION ENGINEERING CONSULTANTS
Original Assignee: BROADVISION ENGINEERING CONSULTANTS
Application filed by BROADVISION ENGINEERING CONSULTANTS
Priority to CN202310148605.4A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a method and a system for deploying tunnel edge computing nodes. The method comprises: setting tunnel edge computing nodes, setting a penalty function according to the constraint conditions, defining an objective function, and searching the feasible solution space with a simulated annealing algorithm for the optimal number of nodes, so that the internal energy is minimized. The method and the system minimize the deployment cost of the tunnel edge computing nodes, substantially improve the resource utilization of the tunnel edge network, achieve load balancing across the tunnel edge computing nodes, and provide theoretical support for their deployment.

Description

Tunnel edge computing node deployment method and system
Technical Field
The invention relates to the field of tunnel edge computing, in particular to a method and a system for deploying tunnel edge computing nodes.
Background
A tunnel system generally comprises tunnel body structures in two directions, and whether a power substation is provided depends on the length of the tunnel. The core control system of the tunnel is generally laid out according to the tunnel's structural characteristics and equipment distribution. A control system cabinet is installed in the tunnel every 500 meters, and the surrounding field devices connect into the cabinet. The control cabinets then form an optical fiber ring network through the switches inside them, linking the cabinets in the two directional bores with the power substation. Meanwhile, the substation switch connects to the higher-level monitoring center through an optical fiber interface.
At present, tunnel controller deployment relies largely on traditional empirical values and lacks comprehensive consideration of the influencing factors, which include coverage constraints, capacity allocation, delay constraints, and the service diversity of front-end equipment. The result is inconsistent with the actual application requirements of the tunnel and wastes resources.
Edge computing nodes are deployed in the tunnel environment to manage the large number of front-end devices there and to perform data collection, processing and control, meeting requirements such as low delay and high reliability. While deploying edge computing nodes brings many benefits, it also presents many challenges.
First, to guarantee ultra-low delay in the tunnel edge network, each node must provide computing and storage resources to the intelligent terminals adjacent to it, so the placement and number of the nodes become very important. Second, considering the distribution characteristics and deployment quantity of the nodes, their purchase, construction, operation and other costs become a key problem.
Since the capacity of a single node (i.e. the processing power of its CPU) is very limited, the capacity limit of a tunnel edge computing node must be taken into account alongside the delay and coverage constraints. Finally, because different terminal devices in the tunnel carry different service types, the services must be classified by real-time requirement, accuracy and priority, and service diversity must be considered when studying the node deployment scheme. Given the many influencing factors, it is worth researching how to develop a complete set of node deployment strategies that achieve load balancing on the tunnel edge side while accounting for the service diversity of different terminal devices, user delay limits, coverage constraints and edge gateway capacity limits, improving the resource utilization of tunnel electromechanical facilities and minimizing deployment cost.
Disclosure of Invention
To solve the above technical problems, the invention provides a method and a system for deploying tunnel edge computing nodes based on an adaptive external-penalty simulated annealing algorithm, which minimize the deployment cost of the tunnel edge computing nodes, substantially improve the resource utilization of the tunnel edge network, achieve load balancing across the tunnel edge computing nodes, and provide theoretical support for their deployment.
The technical scheme of the invention is as follows:
a tunnel edge computing node deployment method, comprising: setting a tunnel edge computing node deployment constraint condition, setting a penalty function according to the constraint condition, and defining an objective function E (X);
setting deployment constraint conditions of tunnel edge computing nodes to minimize deployment cost, and guaranteeing the targets of meeting delay constraint, capacity constraint and coverage constraint:
in terms of coverage constraint, any one tunnel edge computing nodeeCovering a plurality of front-end flow terminalsfEach front-end flow terminalfAlso at multiple tunnel edge computing nodeseIn the coverage area of each front-end flow terminalfNode can only be calculated by nearest tunnel edgeeServed;
in terms of time delay limitation, front-end flow terminalfTransmitting a calculation task from a front end to a tunnel edge calculation node uplink data rate through a cable channel in a tunnelR f The transmission rate is determined by the front-end flow terminalfIs determined by the type of device; offloading computing tasks to tunnel edge computing nodes according to tunnel control typeeIs a time delay constraint of (1);
in terms of capacity constraints, tunnel edge computing nodeseAll pre-flow terminals served in unit timefThe total number of CPU cycles required by the tasks of the (E) is not more than the processing capacity of the CPU of the node;
for the penalty function, a pre-traffic terminalf The time delay of unloading the computing task to the node exceeds the maximum tolerable time delay of the transmission service according to violationIncreasing an objective function under the time delay constraint condition;
when tunnel edge computing nodeeAll front-end traffic terminals servedfWhen the required resources exceed the capacity of the target function, the target function is increased according to the violation of the capacity constraint condition;
by usingE(X)Representing configuration of a single tunnel edge computing nodeeAnd the number of the internal energy is X, the internal energy is defined as the value of an objective function, and the optimal number of the tunnel edge calculation nodes is searched in a feasible solution space through a simulated annealing algorithm, so that the internal energy is minimized.
Further, in terms of the coverage constraint, the following condition is set:

\sum_{e \in E} a_{ef}\,\mu_e \ge 1, \quad \forall f \in F

where f is a front-end flow terminal and the set of front-end flow terminals is F = {f}; the candidate set of tunnel edge computing nodes e is E = {e}, and the total number of candidate positions is |E|. Subject to the coverage constraint, the delay constraint and the capacity constraint, the tunnel edge computing nodes e are optimally deployed by selecting the smallest satisfying number from the candidate position set E. A binary variable \mu_e is set as follows:

\mu_e = \begin{cases} 1, & \text{a tunnel edge computing node is deployed at candidate position } e \\ 0, & \text{otherwise} \end{cases}

Each front-end flow terminal f is covered by at least one tunnel edge computing node e. Let d_{ef} be the distance between node e and terminal f, and let r be the coverage radius of node e. A binary variable a_{ef}, indicating whether terminal f is covered by node e, is set as follows:

a_{ef} = \begin{cases} 1, & d_{ef} \le r \\ 0, & \text{otherwise} \end{cases}

A binary variable I_{fe} indicates whether terminal f is served by tunnel edge computing node e, and n is the number of nodes e covering terminal f, expressed as follows:

I_{fe} \in \{0, 1\}, \quad \sum_{e \in E} I_{fe} = 1, \quad I_{fe} \le a_{ef}\,\mu_e, \quad n = \sum_{e \in E} a_{ef}\,\mu_e
Further, in terms of the delay constraint, the delay of offloading a tunnel task to a tunnel edge computing node e is calculated as follows:
the computing task of a single front-end flow terminal f in the tunnel is modeled as

A_f = (D_f, C_f)

and is offloaded from terminal f to node e, where D_f denotes the size of the computing input data and C_f denotes the number of CPU cycles required to complete terminal f's computing task; the computing capacity of node e, in CPU cycles per second, is O_e;
terminal f transmits the input data of its computing task to node e by wireless access, generating a transmission delay; the transmission delay of offloading a task of size D_f to node e is

D_f / R_f

where R_f is the uplink data transmission rate;
the total delay of terminal f offloading its computing task to node e is then:

T_f = \frac{D_f}{R_f} + \frac{C_f}{O_e}
further, for different types of front-end traffic terminalsfInvolving tunnel edge computing nodeseFor front-end flow terminalfConstructing a downlink control characteristic analysis table;
the downlink control business of the tunnel is divided into z types byu,u=1,2,3 …, z mark, front-end flow terminalfIs of the traffic class of (2)uThe maximum time delay is tolerated is
Figure SMS_10
Each front-end flow terminalfAll have self-fixed traffic classes, and the computing tasks are offloaded to the tunnel edge computing nodeseIs a time delay constraint of (1):
Figure SMS_11
in terms of capacity constraints, tunnel edge computing nodeseAll pre-flow terminals served in unit timefThe total number of CPU cycles required for the task of (a) cannot exceed the tunnel edge computing node CPU processing power, and the capacity constraint represents that:
Figure SMS_12
Further, the objective function is:

E(X) = \min \sum_{e \in E} \mu_e + \sigma\,P(X)

where \min \sum_{e \in E} \mu_e expresses that the number of tunnel edge computing nodes e is minimized, P(X) is the penalty function, and \sigma is the penalty factor:

P(X) = \sum_{f \in F} \max\!\left(0, \frac{T_f - T_u^{max}}{T_u^{max}}\right) + \sum_{e \in E} \max\!\left(0, \frac{\sum_{f \in F} I_{fe} C_f - O_e}{O_e}\right)

X = (\mu_1, \mu_2, \ldots, \mu_{|E|}) indicates, for each tunnel edge computing node e, whether it is deployed; the set of such vectors makes up all possible solutions for a single tunnel:

\Omega = \{ X \mid \mu_e \in \{0, 1\},\ e \in E \}
further, by simulating an annealing algorithm, the optimal number is found in a feasible solution space, so that the internal energy is minimized, and the method specifically comprises the following steps:
step (1) setting the current temperature as T k The outer layer circulation steps arekAt this timek=0, the current annealing initial temperature value is
Figure SMS_20
Each is provided withxThe maximum number of iterations is iter max I.e. the number of internal cycles is iter max Initial step s=0, randomly generating a corresponding node configuration state value x 0 (0) E, omega, the value is initially defined as a history optimal solution and a current solution;
in the step (2) of simulated annealing, updating is carried out according to the following rules:
Figure SMS_21
wherein a is an annealing rate, and the temperature is gradually reduced to a target value T along with k & gtto + & lt, in order to obtain a constant with a value close to 1 end Wherein k=k+1;
step (3) assume T k The current solution is as follows
Figure SMS_22
At any step s is more than or equal to 0, the previous state is disturbed according to a preset neighborhood function, and the corresponding solution is thatx k (s)
If the internal energy is reduced, the current solution is updated, and if the internal energy is increased, the new solution is accepted as the current solution in the step s with a certain probability, wherein c is a Boltzmann constant, and the probability is valued as follows:
Figure SMS_23
if the new solution is not accepted, then the solution of step s-1 is retained;
step (4) at T k The maximum iteration number is iter max Repeating the steps (2) and (3) max Secondly, when the state is stable, the current solution is the optimal solution of the current state, and the temperature is reduced to the next temperature at the moment, and iteration is continued;
step (5) continuously calculating and keeping T k Reaching the target temperature T end Otherwise, go to step (2).
The invention also relates to a terminal, which offloads its tasks to tunnel edge computing nodes in the adjacent edge network within the tunnel for processing, and which, subject to the coverage constraint, capacity limit and delay limit of the tunnel edge computing nodes, adjusts the number and positions of the deployed nodes so that the deployment cost is minimized, according to the above method.
The invention also relates to a computer system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
The invention also relates to an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
Therefore, to solve the prior-art problems of high deployment cost driven by the number of tunnel edge computing nodes and of deployment based purely on empirical values, the method first analyzes how the tunnel edge computing nodes cover and serve the front-end flow terminals, analyzes the factors in the computing-task offloading delay and the constraints that balanced capacity allocation across the nodes must satisfy, then proposes the optimization objective function for tunnel edge computing node deployment, and finally proposes an adaptive external penalty function, solving for the minimum number of tunnel edge computing nodes with a simulated annealing algorithm to obtain the optimal deployment scheme for the tunnel.
The invention achieves the smallest deployment quantity of tunnel edge computing nodes with optimal effect, saving the construction cost of the tunnel edge computing nodes.
Drawings
FIG. 1 is a control architecture diagram of a typical tunnel monitoring system of the prior art;
FIG. 2 is a tunnel "cloud-edge-end" architecture of an embodiment of the present invention;
fig. 3 is an algorithm flow chart of an embodiment of the present invention.
Detailed Description
The following describes the embodiments clearly and completely with reference to the accompanying drawings; evidently, the embodiments described are only some, not all, of the embodiments of the present application. Based on these embodiments, all other embodiments obtained by one of ordinary skill in the art without inventive effort fall within the scope of the present application.
Unless otherwise defined, technical or scientific terms used in the embodiments of the present application have the ordinary meaning understood by one of ordinary skill in the art. The terms "first", "second" and the like used in the embodiments do not denote any order, quantity or importance, but merely distinguish one element from another. Words such as "comprising" or "comprises" mean that the element or item preceding the word includes the elements or items listed after the word and their equivalents, without excluding other elements or items. "Mounted", "connected" and "coupled" are to be construed broadly: for example, fixedly connected, detachably connected or integrally connected; directly connected or indirectly connected through an intermediate medium; or communication between two elements. "Upper", "lower", "left", "right", "transverse", "vertical" and the like are used only with respect to the orientation of the components in the drawings; these directional terms are relative, serve descriptive clarity, and may change accordingly with the orientation in which the components are depicted.
In the prior art, a typical tunnel monitoring system control architecture is shown in Fig. 1: each control node (field area controller) can individually control field devices and exchange information with the substation main controller over the network. The substation main controller coordinates the control nodes through pre-stored logic plans and trigger signals, controlling the scenes required across the whole tunnel. The main controller communicates with the higher-level system to receive control commands from the upper computer and to feed back field device states. Each tunnel connects to the monitoring center platform through its own optical fiber as a separate physical ring network topology.
This embodiment provides a tunnel edge computing node deployment method based on an adaptive external-penalty simulated annealing algorithm.
First, in the tunnel edge network, the terminals that generate large numbers of dense and delay-sensitive requests, such as lane indication, air quality monitoring, tunnel ventilation, tunnel lighting and tunnel monitoring, need to offload their tasks to neighboring tunnel edge computing nodes to reduce the response delay of acquiring service; these are defined as front-end flow terminals (Front Flow Terminal, FFT).
The model of this embodiment adopts edge-end coordination within a "cloud-edge-end" architecture, as shown in Fig. 2. The cloud is the central node of traditional cloud computing and the management end of edge computing, i.e. the monitoring platform deployed by the regional center or the tunnel administration: the regional center deploys a central cloud platform, and the tunnel administration deploys an edge cloud platform. The edge is the on-site management and control brain of cloud computing in the tunnel, i.e. the tunnel edge computing nodes. The end is the terminal equipment, i.e. the front-end flow terminals in the tunnel.
A front-end flow terminal first offloads its tasks to tunnel edge computing nodes in the adjacent edge network within the tunnel for processing. Subject to the coverage constraint, capacity limit and delay limit of the tunnel edge computing nodes, the number and positions of the deployed nodes are adjusted so that the deployment cost is minimized, the resource utilization of the edge network is improved, and load balancing is achieved.
In a single tunnel, the set of front-end flow terminals is F = {f}, with |F| terminals in total; the candidate set of tunnel edge computing nodes e is E = {e}, with |E| candidate positions in total. Subject to the coverage constraint, the delay constraint and the capacity constraint, the tunnel edge computing nodes e are optimally deployed by selecting the smallest satisfying number from the candidate position set E. A binary variable \mu_e is set as follows:

\mu_e = \begin{cases} 1, & \text{a tunnel edge computing node is deployed at candidate position } e \\ 0, & \text{otherwise} \end{cases}

Each front-end flow terminal f is covered by at least one tunnel edge computing node e. Let d_{ef} be the distance between node e and terminal f, and let r be the coverage radius of node e. A binary variable a_{ef}, indicating whether terminal f is covered by node e, is set as follows:

a_{ef} = \begin{cases} 1, & d_{ef} \le r \\ 0, & \text{otherwise} \end{cases}

A binary variable I_{fe} indicates whether terminal f is served by tunnel edge computing node e, and n is the number of nodes e covering terminal f, expressed as follows:

I_{fe} \in \{0, 1\}, \quad \sum_{e \in E} I_{fe} = 1, \quad I_{fe} \le a_{ef}\,\mu_e, \quad n = \sum_{e \in E} a_{ef}\,\mu_e
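The coverage and nearest-node service rules above can be sketched in Python; the helper names, positions and radius below are illustrative assumptions, not values from the patent:

```python
from math import dist

def coverage_indicators(terminals, candidates, r):
    """a_ef = 1 if candidate node e covers terminal f, i.e. d_ef <= r."""
    return {(e, f): 1 if dist(candidates[e], terminals[f]) <= r else 0
            for e in range(len(candidates)) for f in range(len(terminals))}

def nearest_serving_node(f, terminals, candidates, mu, a):
    """Each terminal f is served only by the nearest deployed node e that covers it."""
    covering = [e for e in range(len(candidates)) if mu[e] == 1 and a[(e, f)] == 1]
    if not covering:
        return None  # coverage constraint violated: no deployed node covers f
    return min(covering, key=lambda e: dist(candidates[e], terminals[f]))

# Toy single-tunnel layout: terminals every 100 m, two candidate nodes, r = 300 m
terminals = [(x, 0.0) for x in range(0, 1100, 100)]
candidates = [(250.0, 0.0), (750.0, 0.0)]
a = coverage_indicators(terminals, candidates, r=300.0)
mu = [1, 1]  # deploy both candidate nodes
assignment = {f: nearest_serving_node(f, terminals, candidates, mu, a)
              for f in range(len(terminals))}
```

With both candidates deployed, every terminal in this toy layout is covered and assigned to its nearest node, which is exactly the feasibility the coverage constraint demands.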
The constraint conditions of this embodiment are set as follows:
in deploying the tunnel edge computing nodes e, the goal is to minimize deployment cost while meeting the delay constraint, the capacity constraint and the coverage constraint; the model constraints are constructed as follows:
in terms of the coverage constraint, any tunnel edge computing node e can cover several front-end flow terminals f, and each terminal f may simultaneously lie within the coverage of several nodes e, but each terminal f can only be served by its nearest node e. From the standpoint of coverage reliability and coverage service reliability, the following constraint is set:

\sum_{e \in E} a_{ef}\,\mu_e \ge 1, \quad \forall f \in F
In terms of the delay constraint, the placement of the tunnel edge computing nodes e affects capacity allocation and load, and thereby the delay of the computing tasks, which comprises the transmission delay from terminal f to node e and the processing delay at node e.
The delay of offloading a tunnel task to node e is calculated as follows:
terminal f transmits its computing task from the front end to a tunnel edge computing node over a cable channel in the tunnel at an uplink data rate R_f. The transmission rate is determined by the device type of terminal f; it is set for network-port, serial-port and switching-value signal transmission rates and follows the national tunnel electromechanical design standard.
The computing task of a single front-end flow terminal f in the tunnel is modeled as

A_f = (D_f, C_f)

which can be offloaded from terminal f to node e, where D_f denotes the size of the computing input data, such as the size of the input parameters and the program code, and C_f denotes the number of CPU cycles required to complete f's computing task. The computing capacity of node e, in CPU cycles per second, is O_e.
Terminal f transmits the input data of its computing task to node e by wireless access, generating a transmission delay; the transmission delay of offloading a task of size D_f to node e is D_f / R_f, where R_f is the uplink data transmission rate.
The total delay of terminal f offloading its computing task to node e is then:

T_f = \frac{D_f}{R_f} + \frac{C_f}{O_e}
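The total offloading delay can be illustrated with a short sketch; the data size, rate and cycle figures below are invented for illustration, not taken from the patent:

```python
def total_offload_delay(D_f, R_f, C_f, O_e):
    """T_f = D_f / R_f (transmission delay) + C_f / O_e (processing delay at node e)."""
    return D_f / R_f + C_f / O_e

# Example: 2 Mbit of input data over a 10 Mbit/s uplink, and a task of
# 5e7 CPU cycles on a node that executes 1e9 cycles per second.
t = total_offload_delay(D_f=2e6, R_f=1e7, C_f=5e7, O_e=1e9)
# 0.2 s transmission + 0.05 s processing
```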
For the different types of front-end flow terminals f, the downlink control exerted by the tunnel edge computing node e on the terminal f is involved, for example lane indication commands, fan control commands and lighting control commands, so the delay in the downstream direction of the data is considered synchronously.
A downlink control characteristic analysis table was constructed as Table 1 below:
TABLE 1
(Table 1 appears as an image in the source; it presents the downlink control characteristic analysis of the seven service classes.)
The downlink control services of the tunnel are divided into 7 classes, marked u, u = 1, 2, 3, ..., 7; the maximum tolerable delay of a terminal f of service class u is T_u^{max}.
The delay constraint for offloading the computing task to node e is therefore:

\frac{D_f}{R_f} + \frac{C_f}{O_e} \le T_u^{max}

In terms of the capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end flow terminals f served by node e cannot exceed the node's CPU processing capacity; the capacity constraint is expressed as:

\sum_{f \in F} I_{fe}\,C_f \le O_e, \quad \forall e \in E
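As a compact feasibility check, the delay and capacity constraints above can be sketched in Python; all numeric values below are illustrative, not from the patent:

```python
def delay_ok(D_f, R_f, C_f, O_e, T_max_u):
    """Delay constraint: D_f/R_f + C_f/O_e <= T_max_u for f's service class u."""
    return D_f / R_f + C_f / O_e <= T_max_u

def capacity_ok(served_cycles, O_e):
    """Capacity constraint: total CPU cycles demanded per unit time <= O_e."""
    return sum(served_cycles) <= O_e

# A node with O_e = 1e9 cycles/s serving three terminals (cycle demands invented)
demands = [3e8, 4e8, 2e8]
cap = capacity_ok(demands, O_e=1e9)                              # 9e8 <= 1e9
d = delay_ok(D_f=1e6, R_f=1e7, C_f=1e8, O_e=1e9, T_max_u=0.5)    # 0.1 + 0.1 <= 0.5
```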
The algorithm modeling process of this embodiment is as follows:
First step: formulating the node problem
For the optimal arrangement of the tunnel edge computing nodes in the tunnel, on the premise that the distribution and task demands of all front-end flow terminals f in the tunnel are known, the positions and number of the nodes and the serve-or-not relation between each node and the terminals f it manages are optimized, in combination with the constraint conditions. The problem is akin to an NP-hard problem, so the multi-constraint optimization problem is converted into an optimization problem with simple constraints and solved with an adaptive external-penalty simulated annealing algorithm.
Second step: setting the penalty function
Let \Omega be the feasible solution space formed by all constraint conditions; it is determined by the tunnel length and the number of front-end flow terminals f. The goal is to search the feasible solution space for the minimum number of tunnel edge computing nodes in a single tunnel, i.e. the minimizing configuration x. As the feasible region shrinks, the effectiveness of the simulated annealing algorithm drops: the algorithm explores the cost function across several regions, but the result may not satisfy the strict constraints. The equality and inequality constraints are therefore handled with the exterior-point penalty function method, which extends the search into the region outside the feasible solution space \Omega.
The constraints are converted into a penalty function: each penalty term is multiplied by a penalty factor and added to the objective function. The original target problem is:

\min \sum_{e \in E} \mu_e \quad \text{s.t. the coverage, delay and capacity constraints}

Using the exterior-point penalty function method, it is converted into:

E(X) = \sum_{e \in E} \mu_e + \sigma\,P(X)

where P(X) is the penalty function and \sigma is a very large constant. A common penalty function has the form:

P(x) = \sum_i \left[\max(0, g_i(x))\right]^2

where g_i(x) \le 0 are the inequality constraints. When x is not a feasible solution, the penalty value increases the objective value, punishing configurations far from the feasible region \Omega and forcing the configuration x toward the feasible region during optimization; if x is a feasible solution, the penalty value is 0. Searching outside the feasible region shortens the path from the initial configuration to the optimal configuration.
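A minimal sketch of the exterior-point penalty method described above, assuming the common quadratic penalty form and the convention g_i(x) <= 0 for inequality constraints; the toy problem is purely illustrative:

```python
def exterior_penalty(objective, inequality_constraints, sigma):
    """Build a penalized objective E(x) = f(x) + sigma * sum(max(0, g_i(x))**2).

    Inside the feasible region (all g_i(x) <= 0) the penalty is zero;
    outside it, the penalty grows with the violation, pushing the search back in.
    """
    def penalized(x):
        p = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return objective(x) + sigma * p
    return penalized

# Toy problem: minimize x subject to x >= 3, i.e. g(x) = 3 - x <= 0
E = exterior_penalty(lambda x: x, [lambda x: 3 - x], sigma=100.0)
# E(4): feasible, no penalty; E(2): infeasible, penalized by 100 * 1**2
```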
For the logical deployment problem of the tunnel edge computing nodes, an adaptive external penalty function is adopted under the constraint conditions, expressed as:

P(X) = \sum_{f \in F} \max\!\left(0, \frac{T_f - T_u^{max}}{T_u^{max}}\right) + \sum_{e \in E} \max\!\left(0, \frac{\sum_{f \in F} I_{fe} C_f - O_e}{O_e}\right)

In this external penalty function, the first term enforces the delay constraint: when the delay of offloading terminal f's computing task to node e exceeds the maximum tolerable delay of the transmitted service, the objective function is increased according to the violation of the delay constraint. By the capacity constraint, when the resources required by all terminals f served by node e exceed the node's capacity, the objective function is increased according to the violation of the capacity constraint.
Here \sigma is the penalty factor, set to a constant. If \sigma is too large, minimizing the objective function becomes more difficult; the smaller \sigma is, the farther the minimum point of the penalty function lies from the optimal solution of the feasible solution space, and the solution efficiency drops sharply. X indicates, for each tunnel edge computing node e, whether it is deployed; the set of such vectors makes up all possible solutions for a single tunnel:

X = (\mu_1, \mu_2, \ldots, \mu_{|E|})

The penalty function explores optimal solutions outside the feasible region \Omega: the farther a configuration lies from the feasible region, the larger the penalty value. The penalty effect is determined by the percentage of the violated constraint, which is multiplied into the objective and added to it, keeping the constraint terms and the objective function E(X) on the same order of magnitude. This keeps the penalty function proportional to the objective function E(X); when the values are close to those of the feasible solution space, their validity can be checked: for example, if the number of configured nodes e is significantly lower than the number in existing single-tunnel deployments, a perturbation is required using the penalty function.
Third step: defining the objective function
The objective function of the tunnel edge node deployment model is therefore:

E(X) = \min \sum_{e \in E} \mu_e + \sigma\,P(X)

i.e. the penalty function is added on top of the original objective of minimizing the number of tunnel edge computing nodes.
Fourth step: fusing the simulated annealing algorithm
As shown in FIG. 3, E(x) denotes the internal energy when the number of tunnel edge computing nodes e configured for a single tunnel is x, defined as the objective function value of the third step. The optimal number is searched in the feasible solution space so as to minimize the internal energy; the calculation process is as follows:
Step (1): set the current temperature to T_k, where k is the outer-loop step; when k = 0, the current annealing initial temperature value is T_0. Set the maximum number of iterations for each x to iter_max, i.e. the number of inner cycles is iter_max, with initial step s = 0. Randomly generate a corresponding configuration state value x_0(0) ∈ Ω of the tunnel edge computing node e; this value is initially defined as both the historical optimal solution and the current solution.
Step (2): in the simulated annealing implementation, update according to the following rule:

T_k = T_0·α^k, α < 1

where α is the annealing rate, a constant with a value close to 1; as k → +∞, the temperature gradually decreases to the target value T_end, with k = k + 1.
Step (3): assume the current solution at T_k is x_k(s−1). At any step s ≥ 0, the previous state is perturbed according to a preset neighborhood function, and the corresponding solution is x_k(s).
If the internal energy decreases, the current solution is updated; if the internal energy increases, the new solution is accepted as the current solution at step s with a certain probability, where c is the Boltzmann constant. The probability takes the value:

P = exp(−(E(x_k(s)) − E(x_k(s−1))) / (c·T_k))
if the new solution is not accepted, the solution of step s-1 is retained.
Step (4): at T_k, with maximum iteration number iter_max, repeat steps (2) and (3) iter_max times. When the state is stable, the current solution is the optimal solution of the current state; the temperature is then reduced to the next temperature and iteration continues.
Step (5): continue calculating until T_k reaches the target temperature T_end; otherwise, go to step (2).
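Steps (1) through (5) above can be sketched as a doubly nested loop. This is a minimal illustration under assumptions: the parameter values (T_0, T_end, α, iter_max, c) and the toy energy and neighborhood functions are placeholders, not values from the patent.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0,
                        t0=100.0, t_end=0.1, alpha=0.95,
                        iter_max=50, c=1.0, seed=42):
    """Geometric cooling T_k = T_0 * alpha^k with Metropolis
    acceptance at each temperature. `energy` plays the role of the
    objective E(x); `neighbor` is the preset neighborhood function
    that perturbs the previous state."""
    rng = random.Random(seed)
    current = best = x0
    t = t0
    while t > t_end:                  # outer loop over temperatures
        for _ in range(iter_max):     # inner loop: iter_max steps at T_k
            cand = neighbor(current, rng)
            delta = energy(cand) - energy(current)
            # Accept downhill moves always; uphill moves with
            # Metropolis probability exp(-delta / (c * T_k)).
            if delta <= 0 or rng.random() < math.exp(-delta / (c * t)):
                current = cand
                if energy(current) < energy(best):
                    best = current
        t *= alpha                    # cool to the next temperature
    return best

# Toy usage: minimize (x - 7)^2 over integers 0..20, standing in for
# the search over node counts x in the feasible solution space.
result = simulated_annealing(
    energy=lambda x: (x - 7) ** 2,
    neighbor=lambda x, rng: min(20, max(0, x + rng.choice([-1, 1]))),
    x0=0)
```

The historical best solution is tracked separately from the current solution, as in step (1), so accepted uphill moves never lose the best state found so far.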
In order to implement the above method, the present embodiment further relates to a front-end traffic terminal, which offloads its tasks to edge computing nodes in adjacent edge networks in the tunnel for processing.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a readable storage medium or transmitted from one readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner. The readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
Optionally, the embodiment of the present application further provides a storage medium in which instructions are stored; when the instructions are executed on a computer, they cause the computer to perform the method of the foregoing embodiments.
Optionally, the embodiment of the present application further provides a chip for executing the instruction, where the chip is used to perform the method of the foregoing embodiment.
The method of this embodiment is mainly oriented to expressway tunnels, and is particularly suitable for long tunnels or extra-long tunnels with a length of more than 1000 meters. With this method, the minimum deployment number of tunnel edge computing nodes under the edge-end model can be calculated, minimizing cost.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (6)

1. A method for deploying tunnel edge computing nodes, characterized by comprising the following steps: setting tunnel edge computing node deployment constraint conditions, setting a penalty function according to the constraint conditions, and defining an objective function E(X);
setting deployment constraint conditions of tunnel edge computing nodes to minimize deployment cost, and guaranteeing the targets of meeting delay constraint, capacity constraint and coverage constraint:
in terms of coverage constraint, any one tunnel edge computing node e covers a plurality of front-end traffic terminals f; each front-end traffic terminal f is also simultaneously within the coverage area of a plurality of tunnel edge computing nodes e, and each front-end traffic terminal f can only be served by the nearest tunnel edge computing node e;
in terms of delay limitation, the front-end traffic terminal f transmits a calculation task from the front end to the tunnel edge computing node through a cable channel in the tunnel at an uplink data rate R_f, the transmission rate being determined by the equipment type of the front-end traffic terminal f; according to the tunnel control type, the delay constraint for offloading the task to the tunnel edge computing node e is calculated;
in terms of capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end traffic terminals f served by the tunnel edge computing node e does not exceed the CPU processing capacity of the tunnel edge computing node;
for the penalty function, when the delay of offloading the computation task of the front-end traffic terminal f to the tunnel edge computing node e exceeds the maximum tolerable delay of the transmission service, the objective function is increased according to the violation of the delay constraint;
when the resources required by all the front-end traffic terminals f served by the tunnel edge computing node e exceed its capacity, the objective function is increased according to the violation of the capacity constraint;
E(X) represents the internal energy when the number of single-tunnel edge computing nodes e is X; the internal energy is defined as the value of the objective function, and the optimal number of tunnel edge computing nodes is searched in the feasible solution space through a simulated annealing algorithm;
in the constraint conditions, in terms of delay limitation, the delay for offloading the tunnel task to the tunnel edge computing node e is calculated as follows:
the computation task of a single front-end traffic terminal f in a tunnel is modeled as P_f = (D_f, C_f), offloaded from the front-end traffic terminal f to a tunnel edge computing node e; D_f represents the size of the computation input data, and C_f represents the CPU cycles required to complete the computation task of the front-end traffic terminal f; the computing capacity of the tunnel edge computing node e is O_e CPU cycles per second;
the front-end traffic terminal f transmits the input data of the computation task to the tunnel edge computing node e through wireless access, producing a transmission delay; the transmission delay for offloading input data of size D_f from the front-end traffic terminal f onto the tunnel edge computing node e is D_f/R_f, where R_f is the uplink transmission rate of the data;
the total delay for the front-end traffic terminal f to offload the computation task to the tunnel edge computing node e is calculated as:

D_f/R_f + C_f/O_e
for different types of front-end traffic terminals f, the control of the tunnel edge computing node e over the front-end traffic terminals f is involved, and a downlink control characteristic analysis table is constructed;
the downlink control traffic of the tunnel is divided into z classes, marked by u, u = (1, 2, 3, …, z); the traffic class of the front-end traffic terminal f is u, and its maximum tolerated delay is t_u^max; each front-end traffic terminal f has a fixed service class, and the delay constraint for offloading the computation task to the tunnel edge computing node e is:

D_f/R_f + C_f/O_e ≤ t_u^max
the objective function is:

min F(μ) + γH(μ)
wherein min F(μ) represents that the number of tunnel edge computing nodes e deployed in the tunnel is minimized;
γH(μ) is the penalty function, and γ represents the penalty factor;
Figure FDA0004175476290000026
Figure FDA0004175476290000027
f (mu) represents the actual number of single tunnel deployment tunnel edge computing nodes e;
μ indicates whether the tunnel edge computing node e is deployed, constituting the vector set of all possible solutions for a single tunnel:

μ = (μ_1, μ_2, …, μ_|E|).
2. The tunnel edge computing node deployment method of claim 1, wherein: among the constraint conditions, the following are set for the coverage constraint:
Figure FDA0004175476290000028
Figure FDA0004175476290000029
Figure FDA00041754762900000210
wherein |F| is the total number of front-end traffic terminals f, and the set of front-end traffic terminals f is F = {f}; the candidate set of tunnel edge computing nodes e is E = {e}, and the total number of candidate positions of tunnel edge computing nodes e is |E|; under the coverage, delay, and traffic constraints, the minimum number of tunnel edge computing nodes e for optimized deployment is selected from the candidate position set E; a binary variable μ_e is set, expressed as follows:

μ_e = 1 if a tunnel edge computing node is deployed at candidate position e, and μ_e = 0 otherwise;
wherein each front-end traffic terminal f is covered by at least one tunnel edge computing node e; the distance between the tunnel edge computing node e and the front-end traffic terminal f is set as d_ef, and the coverage radius of the tunnel edge computing node e is r; a variable a_ef is set to indicate whether the front-end traffic terminal f is covered by the tunnel edge computing node e, as follows:

a_ef = 1 if d_ef ≤ r, and a_ef = 0 otherwise;
wherein ,/>
Figure FDA0004175476290000033
a binary variable I_fe is set, indicating whether the front-end traffic terminal f is served by the tunnel edge computing node e, as follows:

I_fe = 1 if the tunnel edge computing node e is the nearest tunnel edge computing node covering the front-end traffic terminal f, and I_fe = 0 otherwise;
wherein ,/>
Figure FDA0004175476290000035
wherein d_fn is the distance between the nth tunnel edge computing node covering the front-end traffic terminal f and the front-end traffic terminal f.
3. The tunnel edge computing node deployment method of claim 1, wherein:
in terms of capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end traffic terminals f served by the tunnel edge computing node e cannot exceed the CPU processing capacity of the tunnel edge computing node, represented as follows:
Σ_{f∈F} I_fe·C_f ≤ O_e.
4. the tunnel edge computing node deployment method of claim 1, wherein: the optimal number is found in the feasible solution space through a simulated annealing algorithm, so that the internal energy is minimized, and the method is specifically carried out as follows:
step (1): set the current temperature to T_k, with outer-loop step k; when k = 0, the current annealing initial temperature value is T_0; set the maximum number of iterations for each x to iter_max, i.e. the number of inner cycles is iter_max, with initial step s = 0; randomly generate a corresponding configuration state value x_0(0) ∈ Ω of the tunnel edge computing node e; this value is initially defined as both the historical optimal solution and the current solution; Ω is the feasible solution space formed by all constraint conditions;
in the step (2) of simulated annealing, updating is carried out according to the following rules:
T_k = T_0·α^k, α < 1;
wherein α is the annealing rate, a constant with a value close to 1; as k → +∞, the temperature gradually decreases to the target value T_end, with k = k + 1;
step (3): assume the current solution at T_k is x_k(s−1); at any step s ≥ 0, the previous state is perturbed according to a preset neighborhood function, and the corresponding solution is x_k(s);
if the internal energy decreases, the current solution is updated; if the internal energy increases, the new solution is accepted as the current solution at step s with a certain probability, where c is the Boltzmann constant, and the probability takes the value:

P = exp(−(E(x_k(s)) − E(x_k(s−1))) / (c·T_k));
if the new solution is not accepted, then the solution of step s-1 is retained;
step (4): at T_k, with maximum iteration number iter_max, repeat steps (2) and (3) iter_max times; when the state is stable, the current solution is the optimal solution of the current state; the temperature is then reduced to the next temperature and iteration continues;
step (5): continue calculating until T_k reaches the target temperature T_end; otherwise, go to step (2).
5. A terminal, characterized by: offloading respective tasks to tunnel edge computing nodes in adjacent edge networks within a tunnel for processing, adjusting the number and location of deployed tunnel edge computing nodes to minimize deployment costs, while satisfying coverage constraints, capacity constraints, and delay constraints of the tunnel edge computing nodes, according to the method of any one of claims 1 to 4.
6. A computer system comprising a memory, a processor, and a computer program on the memory and executable on the processor, characterized by: the processor, when executing the computer program, implements the steps of the method of any of the preceding claims 1 to 4.
CN202310148605.4A 2023-02-22 2023-02-22 Tunnel edge computing node deployment method and system Active CN115883568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310148605.4A CN115883568B (en) 2023-02-22 2023-02-22 Tunnel edge computing node deployment method and system


Publications (2)

Publication Number Publication Date
CN115883568A CN115883568A (en) 2023-03-31
CN115883568B true CN115883568B (en) 2023-06-02

Family

ID=85761506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310148605.4A Active CN115883568B (en) 2023-02-22 2023-02-22 Tunnel edge computing node deployment method and system

Country Status (1)

Country Link
CN (1) CN115883568B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 A method of calculating task unloading is carried out based on edge calculations and cloud computing collaboration
CN110717302A (en) * 2019-09-27 2020-01-21 云南电网有限责任公司 Edge computing terminal equipment layout method for real-time online monitoring service of power grid
CN111586762A (en) * 2020-04-29 2020-08-25 重庆邮电大学 Task unloading and resource allocation joint optimization method based on edge cooperation
CN115081682A (en) * 2022-05-26 2022-09-20 西南交通大学 Traffic organization optimization method for long and large tunnel construction and computer device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065376B2 (en) * 2003-03-20 2006-06-20 Microsoft Corporation Multi-radio unification protocol
US9072102B2 (en) * 2007-11-27 2015-06-30 Qualcomm Incorporated Interference management in a wireless communication system using adaptive path loss adjustment
CN112052086B (en) * 2020-07-28 2023-06-20 西安交通大学 Multi-user safety energy-saving resource allocation method in mobile edge computing network
CN114126066B (en) * 2021-11-27 2022-07-19 云南大学 MEC-oriented server resource allocation and address selection joint optimization decision method
CN114363962A (en) * 2021-12-07 2022-04-15 重庆邮电大学 Collaborative edge server deployment and resource scheduling method, storage medium and system
CN114500560B (en) * 2022-01-06 2024-04-26 浙江鼎峰科技股份有限公司 Edge node service deployment and load balancing method for minimizing network delay
CN115454650A (en) * 2022-10-11 2022-12-09 广东电网有限责任公司 Resource allocation method, device, terminal and medium for microgrid edge computing terminal


Also Published As

Publication number Publication date
CN115883568A (en) 2023-03-31

Similar Documents

Publication Publication Date Title
Dai et al. UAV-assisted task offloading in vehicular edge computing networks
CN109818865B (en) SDN enhanced path boxing device and method
CN113810233B (en) Distributed computation unloading method based on computation network cooperation in random network
CN110290011A (en) Dynamic Service laying method based on Lyapunov control optimization in edge calculations
Ouyang et al. Adaptive user-managed service placement for mobile edge computing via contextual multi-armed bandit learning
CN104158855A (en) Mobile service combined calculation discharge method based on genetic algorithm
CN108650131B (en) Processing system for multi-controller deployment in SDN network
Bayrakdar et al. Artificial bee colony–based spectrum handoff algorithm in wireless cognitive radio networks
CN111988787B (en) Task network access and service placement position selection method and system
Ebrahim et al. A deep learning approach for task offloading in multi-UAV aided mobile edge computing
Wang et al. Reinforcement learning-based optimization for mobile edge computing scheduling game
CN108471357A (en) A kind of terminal access scheduling method and device based on narrowband Internet of Things
Cheng et al. Resilient edge service placement under demand and node failure uncertainties
Chiang et al. Deep Q-learning-based dynamic network slicing and task offloading in edge network
Zheng et al. Data synchronization in vehicular digital twin network: A game theoretic approach
CN108834173B (en) Centralized optimization distribution method of wireless multi-hop network
Li et al. Adaptive controller placement in software defined wireless networks
US20240086715A1 (en) Training and using a neural network for managing an environment in a communication network
CN115883568B (en) Tunnel edge computing node deployment method and system
Lin et al. Joint Optimization of Offloading and Resource Allocation for SDN‐Enabled IoV
US10182433B2 (en) System and method for overlapping rate region zoning
Meng et al. Intelligent routing orchestration for ultra-low latency transport networks
Cao et al. An optimization method for mobile edge service migration in cyberphysical power system
Zhao et al. Deep Q-network for user association in heterogeneous cellular networks
Belkout et al. A load balancing and routing strategy in fog computing using deep reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant