CN115883568A - Tunnel edge computing node deployment method and system - Google Patents


Info

Publication number
CN115883568A
Authority
CN
China
Prior art keywords
tunnel
tunnel edge
constraint
edge computing
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310148605.4A
Other languages
Chinese (zh)
Other versions
CN115883568B (en)
Inventor
李朋
李�浩
陈志涛
韩凯旋
付帅
赵倩
罗承成
陆艳铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BROADVISION ENGINEERING CONSULTANTS
Original Assignee
BROADVISION ENGINEERING CONSULTANTS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BROADVISION ENGINEERING CONSULTANTS filed Critical BROADVISION ENGINEERING CONSULTANTS
Priority to CN202310148605.4A priority Critical patent/CN115883568B/en
Publication of CN115883568A publication Critical patent/CN115883568A/en
Application granted granted Critical
Publication of CN115883568B publication Critical patent/CN115883568B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention relates to a tunnel edge computing node deployment method and system. The invention minimizes the deployment cost of tunnel edge computing nodes, substantially improves the resource utilization of the tunnel edge network, achieves load balancing among the tunnel edge computing nodes, and provides theoretical support for their deployment.

Description

Tunnel edge computing node deployment method and system
Technical Field
The invention relates to the field of tunnel edge computing, and in particular to a method and a system for deploying tunnel edge computing nodes.
Background
A typical tunnel is a two-bore structure, and whether a substation is provided depends on the tunnel length. The tunnel's core control system is generally laid out according to the tunnel's structural characteristics and equipment distribution. A control system cabinet is installed in the tunnel every 500 meters, and the peripheral field devices are connected into it. The control cabinets then form a fiber-optic ring network through the switches inside the cabinets, and the two bores are connected to the control cabinet of the substation. In turn, the substation switch connects to the higher-level monitoring center through a fiber-optic interface.
At present, the deployment of tunnel controllers relies largely on traditional empirical values and lacks comprehensive consideration of influencing factors such as coverage constraints, capacity allocation, delay constraints, and the service diversity of front-end devices. The resulting mismatch with the actual application requirements of tunnels leads to wasted resources.
Edge computing nodes are deployed in the tunnel environment to manage the large number of front-end devices there and to perform data collection, processing, and control, meeting requirements such as low delay and high reliability. While deploying edge computing nodes brings many benefits, it also presents many challenges.
First, to guarantee ultra-low latency in the tunnel edge network, each node must provide computing and storage resources to the intelligent terminals adjacent to it, so the placement positions and the number of nodes become very important. Second, given the distribution characteristics and deployment quantity of the nodes, the costs of purchase, construction, and operation become key problems.
Since the capacity of a single node (i.e., the processing power of its CPU) is very limited, the capacity limit of a tunnel edge computing node must be considered alongside the delay limit and the coverage constraint. Finally, since different terminal devices in the tunnel carry different service types, services must be classified by real-time requirement, accuracy, and priority, and service diversity must be considered when studying a node deployment scheme. With so many influencing factors, how to develop a complete node deployment strategy that achieves load balancing at the tunnel edge side while accounting for the service diversity, user delay limits, coverage constraints, and edge gateway capacity limits of different terminal devices is worth researching, so as to improve the resource utilization of tunnel electromechanical facilities and minimize deployment cost.
Disclosure of Invention
To solve the above technical problems, the invention provides a tunnel edge computing node deployment method and system based on a simulated annealing algorithm with an adaptive exterior penalty, which minimizes the deployment cost of tunnel edge computing nodes, substantially improves the resource utilization of the tunnel edge network, achieves load balancing among the tunnel edge computing nodes, and provides theoretical support for their deployment.
The technical scheme of the invention is as follows:
a tunnel edge computing node deployment method comprises the following steps: setting deployment constraint conditions for the tunnel edge computing nodes, setting a penalty function according to the constraint conditions, and defining an objective function E(X);
the deployment constraints for the tunnel edge computing nodes take minimizing deployment cost as the goal while guaranteeing that the delay limit, capacity constraint, and coverage constraint are met:
in terms of the coverage constraint, any tunnel edge computing node e covers multiple front-end traffic terminals f; each front-end traffic terminal f may simultaneously lie within the coverage of multiple tunnel edge computing nodes e, but each front-end traffic terminal f can only be served by the nearest tunnel edge computing node e;
in terms of the delay limit, a front-end traffic terminal f transmits computing tasks from the front end to a tunnel edge computing node over in-tunnel cable channels at the uplink data rate R_f, which is determined by the device type of the front-end traffic terminal f; offloading tunnel-control computing tasks to a tunnel edge computing node e is subject to a delay constraint;
in terms of the capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end traffic terminals f served by a tunnel edge computing node e must not exceed the processing capacity of that node's CPU;
for the penalty function: when the delay of a computing task offloaded from a front-end traffic terminal f to a node exceeds the maximum tolerable delay of the transmission service, the objective function is increased according to the degree of delay-constraint violation;
when the resources required by all front-end traffic terminals f served by a tunnel edge computing node e exceed that node's capacity, the objective function is increased according to the degree of capacity-constraint violation;
E(X) denotes the internal energy when the tunnel edge computing nodes e of a single tunnel are configured in number X, defined as the value of the objective function; the optimal number of tunnel edge computing nodes is searched for in the feasible solution space by a simulated annealing algorithm so that the internal energy is minimized.
Further, in the constraint conditions, in terms of coverage constraint, the following constraint conditions are set:
$$\sum_{e \in E} \lambda_{fe}\,\mu_e \ge 1, \quad \forall f \in F$$

wherein |F| is the total number of front-end traffic terminals f, and the set of front-end traffic terminals f is F = {f}; the candidate set of tunnel edge computing nodes e is E = {e}, and the total number of candidate positions for the tunnel edge computing nodes e is |E|. Under the coverage constraint, delay limit, and capacity constraint, the deployment positions of tunnel edge computing nodes e are selected from the candidate set E so that their number is minimized. A binary variable μ_e is set as follows:

$$\mu_e = \begin{cases} 1, & \text{if a tunnel edge computing node is deployed at candidate position } e \\ 0, & \text{otherwise} \end{cases}$$

wherein each front-end traffic terminal f is covered by at least one tunnel edge computing node e. Let d_ef be the distance between a tunnel edge computing node e and a front-end traffic terminal f, and let r be the coverage radius of a tunnel edge computing node e. A binary variable λ_fe indicates whether the front-end traffic terminal f is covered by the tunnel edge computing node e:

$$\lambda_{fe} = \begin{cases} 1, & \text{if } d_{ef} \le r \\ 0, & \text{otherwise} \end{cases}$$

A binary variable I_fe indicates whether the front-end traffic terminal f is served by the tunnel edge computing node e, with n the number of tunnel edge computing nodes e covering the front-end traffic terminal f:

$$I_{fe} = \begin{cases} 1, & \text{if } e = \arg\min_{e'} \{\, d_{e'f} \mid \lambda_{fe'} = 1 \,\} \\ 0, & \text{otherwise} \end{cases}$$
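The coverage and nearest-node service rules above can be sketched in a few lines; the node and terminal positions, identifiers, and the coverage radius below are illustrative assumptions, not values from the patent:

```python
from math import dist

r = 500.0  # assumed coverage radius in metres
nodes = {"e1": (0.0, 0.0), "e2": (800.0, 0.0)}        # candidate node positions
terminals = {"f1": (100.0, 0.0), "f2": (750.0, 0.0)}  # front-end traffic terminals

def covers(e, f):
    """lambda_fe: node e covers terminal f when d_ef <= r."""
    return dist(nodes[e], terminals[f]) <= r

def assign(deployed):
    """I_fe: each terminal is served by its nearest deployed covering node."""
    assignment = {}
    for f in terminals:
        candidates = [e for e in deployed if covers(e, f)]
        if candidates:  # coverage constraint: at least one covering node
            assignment[f] = min(candidates, key=lambda e: dist(nodes[e], terminals[f]))
    return assignment

print(assign(["e1", "e2"]))  # {'f1': 'e1', 'f2': 'e2'}
```

With both candidates deployed, f1 falls only in e1's radius and f2 only in e2's, so each terminal is served by its unique covering node.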
further, among the constraint conditions, in terms of the delay limit, the delay of offloading a tunnel task to a tunnel edge computing node e is calculated as follows:
the computing task of a single front-end traffic terminal f in the tunnel is modeled as

$$W_f = (D_f, C_f)$$

which is offloaded from the front-end traffic terminal f to a tunnel edge computing node e, where D_f represents the input data size of the computing task and C_f represents the CPU cycles required to complete the computing task of the front-end traffic terminal f; the computing capability of a tunnel edge computing node e is O_e CPU cycles per second.
The front-end traffic terminal f transmits the input data of the computing task to the tunnel edge computing node e over the in-tunnel transmission channel, generating a transmission delay; the transmission delay for the front-end traffic terminal f to offload input data of size D_f to the tunnel edge computing node e is

$$D_f / R_f$$

where R_f is the uplink data transmission rate;
the total delay for the front-end traffic terminal f to offload its computing task to the tunnel edge computing node e is calculated as:

$$T_f = \frac{D_f}{R_f} + \frac{C_f}{O_e}$$
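The total-delay formula (uplink transmission plus processing) can be checked numerically; the data size, rate, and CPU figures below are illustrative assumptions, not values from the patent:

```python
def total_delay(D_f, R_f, C_f, O_e):
    """T_f = D_f / R_f (uplink transmission) + C_f / O_e (processing at node e)."""
    return D_f / R_f + C_f / O_e

# assumed figures: 2 Mbit of input data over a 10 Mbit/s uplink, a task of
# 4e8 CPU cycles on a node providing 2e9 cycles per second
print(total_delay(2e6, 10e6, 4e8, 2e9))  # 0.4
```

Here transmission and processing each contribute 0.2 s, giving a total offloading delay of 0.4 s.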
further, for the different types of front-end traffic terminals f, the control of the front-end traffic terminal f by the tunnel edge computing node e is involved, and a downlink control characteristic analysis table is constructed;
the tunnel downlink control services are divided into z types, identified by u, u = 1, 2, 3, …, z; the maximum delay tolerated by a front-end traffic terminal f whose service class is u is τ_u.
Each front-end traffic terminal f has a fixed service type, and offloading its computing task to a tunnel edge computing node e satisfies the delay constraint:

$$T_f = \frac{D_f}{R_f} + \frac{C_f}{O_e} \le \tau_u$$

In terms of the capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end traffic terminals f served by a tunnel edge computing node e cannot exceed the CPU processing capacity of the tunnel edge computing node; the capacity constraint is represented as follows:

$$\sum_{f \in F} I_{fe}\, C_f \le O_e, \quad \forall e \in E$$
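The capacity constraint is a simple budget check per node; the cycle counts below are assumed for illustration:

```python
def capacity_ok(task_cycles, O_e):
    """Capacity constraint: total CPU cycles demanded per unit time must not exceed O_e."""
    return sum(task_cycles) <= O_e

# assumed demands of the served terminals vs. an assumed node capacity of 2e9 cycles/s
print(capacity_ok([4e8, 6e8], 2e9))    # True  (1.0e9 <= 2e9)
print(capacity_ok([1.5e9, 8e8], 2e9))  # False (2.3e9 >  2e9)
```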
further, the objective function is:

$$E(X) = \sum_{e \in E} \mu_e + \sigma P(x)$$

wherein \(\min \sum_{e \in E} \mu_e\) represents minimizing the number of tunnel edge computing nodes e; P(x) is the penalty function and σ represents the penalty factor; x records, for each candidate position, whether a tunnel edge computing node e is deployed, and these vectors form the set of all possible solutions for a single tunnel:

$$X = \{\, x = (\mu_1, \mu_2, \dots, \mu_{|E|}) \mid \mu_e \in \{0, 1\} \,\}$$
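The objective — deployed-node count plus a weighted penalty — can be sketched directly; the penalty factor and the example values are assumptions for illustration:

```python
SIGMA = 1.0  # assumed penalty factor (sigma)

def objective(mu, penalty_value, sigma=SIGMA):
    """Internal energy E(X): deployed-node count plus weighted penalty P(x)."""
    return sum(mu) + sigma * penalty_value

# three deployed candidate positions plus a small constraint-violation penalty
print(objective([1, 0, 1, 1], 0.45))  # 3.45
```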
further, the optimal number is searched for in the feasible solution space by a simulated annealing algorithm so that the internal energy is minimized, specifically according to the following steps:
step (1): let the current temperature be T_k, with outer-loop step k; at this time k = 0, so the current annealing initial temperature value is T_0. Set the maximum number of iterations for each x to iter_max, i.e., the number of inner-loop cycles is iter_max, with initial step s = 0; randomly generate a corresponding node configuration state value x_0(0) ∈ Ω, which is initially taken as both the historical optimal solution and the current solution;
step (2): during the simulated annealing, update according to the following rule:

$$T_{k+1} = a\,T_k$$

wherein a is the annealing rate, a constant close to 1, and as k → +∞ the temperature gradually decreases to the target value T_end, with k = k + 1;
step (3): suppose the current solution at T_k is x_k(s−1); at any step s ≥ 0, the previous state is perturbed according to a preset neighborhood function, giving the corresponding solution x_k(s);
if the internal energy decreases, the current solution is updated; if the internal energy increases, the new solution is accepted as the current solution of step s with a certain probability, where c is the Boltzmann constant and the probability is taken as:

$$P = \exp\!\left(-\frac{E(x_k(s)) - E(x_k(s-1))}{c\,T_k}\right)$$

if the new solution is not accepted, the solution of step s−1 is retained;
step (4): at T_k, with maximum iteration count iter_max, repeat steps (2) and (3); after iter_max iterations, when the state is stable, the current solution is the optimal solution of the current state; the temperature is then lowered to the next value and the iteration continues;
step (5): continue computing until T_k falls to the target temperature T_end; otherwise, go to step (2).
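The annealing loop of steps (1)–(5) can be sketched generically; the schedule parameters, seed, and the toy objective below are assumptions, not values from the patent:

```python
import math
import random

def simulated_annealing(E, neighbor, x0, T0=100.0, T_end=1e-3, a=0.95,
                        iter_max=100, c=1.0, seed=0):
    """Minimise E over states produced by `neighbor`, cooling T by factor a."""
    rng = random.Random(seed)
    x, best = x0, x0
    T = T0
    while T > T_end:                       # outer loop: cooling schedule
        for _ in range(iter_max):          # inner loop at temperature T
            x_new = neighbor(x, rng)       # perturb via neighbourhood function
            dE = E(x_new) - E(x)
            # accept downhill moves always, uphill with probability exp(-dE/(c*T))
            if dE < 0 or rng.random() < math.exp(-dE / (c * T)):
                x = x_new
                if E(x) < E(best):
                    best = x
        T *= a                             # T_{k+1} = a * T_k
    return best

# toy objective: minimise (n - 3)^2 over positive integers, moving by +/-1
best = simulated_annealing(lambda n: (n - 3) ** 2,
                           lambda n, rng: max(1, n + rng.choice([-1, 1])),
                           x0=10)
print(best)  # 3
```

At high temperature the walk explores freely; as T decays toward T_end only improving moves survive, so the run settles at the minimiser n = 3.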
The invention also relates to a terminal, which offloads its tasks to tunnel edge computing nodes in the adjacent edge network within the tunnel for processing; under the coverage constraint, capacity limit, and delay limit of the tunnel edge computing nodes, the number and positions of the deployed tunnel edge computing nodes are adjusted, thereby minimizing deployment cost.
The invention also relates to a computer system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the above method are implemented.
The invention also relates to an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the above method are implemented.
It can be seen that, to solve the high deployment cost caused by the number of tunnel edge computing nodes and by the traditional practice of deploying simply according to empirical values, the invention first analyzes how tunnel edge computing nodes cover and serve the front-end traffic terminals, analyzes the factors in task-offloading delay and the constraints that balanced capacity allocation across tunnel edge computing nodes must satisfy, then proposes an optimization objective function for deploying tunnel edge computing nodes, and finally proposes an adaptive exterior penalty function, solving for the minimum number of tunnel edge computing nodes in combination with a simulated annealing algorithm to obtain the optimal deployment scheme for the tunnel.
The invention achieves the smallest deployment quantity of tunnel edge computing nodes with optimal effect, saving the construction cost brought by tunnel edge computing nodes.
Drawings
FIG. 1 is a diagram of a typical tunnel monitoring system control architecture of the prior art;
FIG. 2 is a tunnel "cloud-edge-end" architecture of an embodiment of the present invention;
FIG. 3 is an algorithmic flow diagram of an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples without making any creative effort, shall fall within the protection scope of the present application.
Unless otherwise defined, technical or scientific terms used in the embodiments of the present application have the ordinary meaning understood by those of ordinary skill in the art. The use of "first," "second," and similar terms in the present embodiments does not denote any order, quantity, or importance, but serves only to distinguish one element from another. The word "comprising" or "comprises" means that the element or item preceding the word covers the elements or items listed after it and their equivalents, without excluding other elements or items. "Mounted," "connected," and "coupled" are to be construed broadly and may, for example, denote fixed, detachable, or integral connections; connections may be direct or through intervening media, or may be internal communication between two elements. "Upper," "lower," "left," "right," "lateral," "vertical," and the like are used solely in relation to the orientation of the components in the figures; these directional terms are relative, serve description and clarity, and may vary according to the orientation in which the components are placed in the figures.
In the prior art, a typical tunnel monitoring system control architecture is shown in FIG. 1. In the existing system architecture, each control node (site area controller) can individually control its field devices and exchange information with the substation main controller through the network. The substation main controller coordinates the control nodes through pre-stored logic plans and trigger signals, controlling the scenes required across the whole tunnel. The main controller communicates with the upper computer to receive its control commands and to report the state feedback of the field devices. Each tunnel, as an independent physical ring network topology, is connected to the monitoring center platform through its own optical fiber.
The tunnel edge calculation node deployment method based on the adaptive external penalty simulated annealing algorithm is disclosed by the embodiment.
First, in the tunnel edge network, terminals generate a large number of intensive and delay-sensitive requests, such as lane indication, air quality monitoring, tunnel ventilation, tunnel lighting, and tunnel monitoring; these tasks must be offloaded to adjacent tunnel edge computing nodes to reduce the response delay of acquiring the service. These terminals are defined as Front Flow Terminals (FFT), i.e., front-end traffic terminals.
The model of this embodiment adopts edge-end cooperation within a "cloud-edge-end" architecture, as shown in FIG. 2. The cloud is the central node of conventional cloud computing and the control end of edge computing, namely the monitoring platforms deployed for the regional center and for tunnel management: a central cloud platform for the regional center and an edge cloud platform for tunnel management. The edge is the on-site control brain of cloud computing in the tunnel, namely the tunnel edge computing nodes. The end refers to the terminal devices, namely the front-end traffic terminals in the tunnel.
The front-end traffic terminals first offload their tasks to tunnel edge computing nodes in the adjacent edge network within the tunnel for processing; under the coverage constraint, capacity limit, and delay limit of the tunnel edge computing nodes, the number and positions of the deployed tunnel edge computing nodes are adjusted, so that deployment cost is minimized, the resource utilization of the edge network is improved, and load balancing is achieved.
In a single tunnel, let the set of front-end traffic terminals be F = {f}, with |F| the total number of front-end traffic terminals; the candidate set of tunnel edge computing nodes e is E = {e}, and the total number of candidate positions for tunnel edge computing nodes e is |E|. Under the coverage constraint, delay constraint, and capacity constraint, the deployment positions of tunnel edge computing nodes e are selected from the candidate set E so that their number is minimized. A binary variable μ_e is set as follows:

$$\mu_e = \begin{cases} 1, & \text{if a tunnel edge computing node is deployed at candidate position } e \\ 0, & \text{otherwise} \end{cases}$$

wherein each front-end traffic terminal f is covered by at least one tunnel edge computing node e. Let d_ef be the distance between a tunnel edge computing node e and a front-end traffic terminal f, and let r be the coverage radius of a tunnel edge computing node e. A binary variable λ_fe indicates whether the front-end traffic terminal f is covered by the tunnel edge computing node e:

$$\lambda_{fe} = \begin{cases} 1, & \text{if } d_{ef} \le r \\ 0, & \text{otherwise} \end{cases}$$

A binary variable I_fe indicates whether the front-end traffic terminal f is served by the tunnel edge computing node e, with n the number of tunnel edge computing nodes e covering the front-end traffic terminal f:

$$I_{fe} = \begin{cases} 1, & \text{if } e = \arg\min_{e'} \{\, d_{e'f} \mid \lambda_{fe'} = 1 \,\} \\ 0, & \text{otherwise} \end{cases}$$
the present embodiment specifically sets the constraint conditions as follows:
When deploying tunnel edge computing nodes e, minimizing deployment cost is taken as the goal while guaranteeing that the delay limit, capacity constraint, and coverage constraint are met, and the model constraints are constructed:
In terms of the coverage constraint, any tunnel edge computing node e can cover multiple front-end traffic terminals f; each front-end traffic terminal f may simultaneously lie within the coverage of multiple tunnel edge computing nodes e, but each front-end traffic terminal f can only be served by the nearest tunnel edge computing node e. From the viewpoint of coverage reliability and coverage service reliability, the following constraint is set:

$$\sum_{e \in E} \lambda_{fe}\,\mu_e \ge 1, \quad \forall f \in F$$
in terms of the delay limit, the number of tunnel edge computing nodes e affects capacity allocation and load, and therefore the delay of computing tasks; the delay includes the transmission delay from a front-end traffic terminal f to a tunnel edge computing node e and the processing delay at the tunnel edge computing node e.
The delay of offloading a tunnel task to a tunnel edge computing node e is calculated as follows:
The front-end traffic terminal f transmits computing tasks from the front end to the tunnel edge computing nodes over in-tunnel cable channels at the uplink data rate R_f, which is determined by the device type of the front-end traffic terminal f and is set according to the national tunnel electromechanical design standards for network-port, serial-port, and switching-value signal transmission rates.
The computing task of a single front-end traffic terminal f in the tunnel is modeled as

$$W_f = (D_f, C_f)$$

which can be offloaded from the front-end traffic terminal f to a tunnel edge computing node e, where D_f indicates the size of the input data to be processed, e.g., the size of the input parameters and the program code, and C_f indicates the CPU cycles required to complete the computing task of f. The computing capability of a tunnel edge computing node e is O_e CPU cycles per second.
The front-end traffic terminal f transmits the input data of the computing task to the tunnel edge computing node e, generating a transmission delay; the transmission delay for the front-end traffic terminal f to offload input data of size D_f to the tunnel edge computing node e is D_f / R_f, where R_f is the uplink data transmission rate.
The total delay for the front-end traffic terminal f to offload its computing task to the tunnel edge computing node e is calculated as:

$$T_f = \frac{D_f}{R_f} + \frac{C_f}{O_e}$$
for the different types of front-end traffic terminals f, the control of the front-end traffic terminal f by the tunnel edge computing node e is involved, and the delay in the downlink data direction must be considered as well, e.g., for lane indication commands, fan control commands, and lighting control commands.
A downlink control characteristic analysis table is constructed as Table 1:
TABLE 1
[Table 1 is an image in the original, listing the tunnel downlink control service types and the maximum delay each tolerates; its values are not reproduced here.]
The tunnel downlink control services are divided into 7 types, identified by u, u = 1, 2, 3, …, 7; the maximum delay tolerated by a front-end traffic terminal f whose service class is u is τ_u.
Thus, offloading computing tasks to a tunnel edge computing node e satisfies the following delay constraint:

$$T_f = \frac{D_f}{R_f} + \frac{C_f}{O_e} \le \tau_u$$
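The per-class delay check can be sketched as a lookup plus a comparison. Because Table 1 is an image in the original, the per-class bounds below are illustrative placeholders, not the patent's actual values:

```python
# Assumed tau_u values in seconds for the 7 service classes (placeholders only)
TAU = {1: 0.05, 2: 0.1, 3: 0.2, 4: 0.5, 5: 1.0, 6: 2.0, 7: 5.0}

def meets_delay_constraint(T_f, u):
    """Delay constraint for a terminal of service class u: T_f <= tau_u."""
    return T_f <= TAU[u]

print(meets_delay_constraint(0.4, 4))  # True  (0.4 <= 0.5)
print(meets_delay_constraint(0.4, 2))  # False (0.4 >  0.1)
```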
in terms of the capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end traffic terminals f served by a tunnel edge computing node e cannot exceed the CPU processing capacity of the tunnel edge computing node; the capacity constraint is represented as follows:

$$\sum_{f \in F} I_{fe}\, C_f \le O_e, \quad \forall e \in E$$
the algorithmic modeling process of this embodiment is as follows:
First step: formulating the node problem
For the optimal arrangement of tunnel edge computing nodes in the tunnel, on the premise that the distribution and task requirements of all front-end traffic terminals f in the tunnel are known, the positions and number of the tunnel edge computing nodes and the front-end traffic terminals f they govern are optimized. The problem is akin to an NP-hard problem; the multi-constraint optimization problem is converted into an approximately unconstrained optimization problem and solved with a simulated annealing algorithm with an adaptive exterior penalty.
Second step: setting the penalty function
Let the feasible solution space formed by all the constraints be Ω, determined by the tunnel length and the number of front-end traffic terminals f. The goal is to find, within the feasible solution space, the configuration x that minimizes the number of tunnel edge computing nodes of a single tunnel. A reduced feasible-region space weakens the effectiveness of the simulated annealing algorithm: although the search traverses many regions of the cost function, the computed result may fail to meet the strict constraints. The equality and inequality constraints are therefore handled with the exterior-point penalty function method, expanding the search into the region beyond the feasible solution space Ω.
Converting the constraint into a penalty function, multiplying the penalty function by a penalty factor to be used as a single penalty item to be added to the target function, and setting the original target problem as follows:
Figure SMS_36
the method adopts an external point penalty function method to convert the method as follows:
Figure SMS_37
Figure SMS_38
is a penalty function in which>
Figure SMS_39
Is a very large constant, a common penalty function has the following form:
Figure SMS_40
when the temperature is higher than the set temperaturexPenalty function value increases objective function value when it is not a feasible solution, and is a penalty for omega away from feasible domain, forcing configurationxClose to the feasible region in the optimization process, the penalty function value is 0 provided x is a feasible solution. And the external search is used for ensuring the shortening of the path from the initial configuration to the optimal configuration.
Aiming at the logic deployment problem of the tunnel edge computing node, under the constraint condition, an adaptive external penalty function is adopted and expressed as follows:
Figure SMS_41
aiming at the external penalty function, the first item mainly follows a time delay constraint condition and is currently provided with a flow terminalfTo the tunnel edge compute nodeeThe time delay exceeds the maximum tolerable time delay of the transmission service, the objective function is increased according to the condition of violating the time delay constraint, the second item mainly complies with the capacity constraint condition, and when the node is calculated at the edge of the tunneleAll served head endfWhen the required resource exceeds its capacity, the objective function is increased in violation of the capacity constraint.
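Since the exact penalty expression is rendered as an image in the original, the sketch below follows the surrounding description — each term is the fraction by which its constraint is violated, keeping the penalty on the objective's order of magnitude; the names and example values are assumptions:

```python
def penalty(delays, taus, loads, capacities):
    """Adaptive exterior penalty: sum of violated fractions of each constraint."""
    p = 0.0
    for T_f, tau_u in zip(delays, taus):          # delay-constraint term
        p += max(0.0, (T_f - tau_u) / tau_u)
    for load_e, O_e in zip(loads, capacities):    # capacity-constraint term
        p += max(0.0, (load_e - O_e) / O_e)
    return p

# one terminal within its delay bound, one 20% over; one node 25% over capacity
print(penalty([0.4, 0.6], [0.5, 0.5], [2.5e9], [2e9]))  # ~0.45
```

A fully feasible configuration yields a penalty of exactly 0, so the objective reduces to the plain node count.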
wherein the penalty factor σ is set to a constant: if σ is too large, minimizing the objective function becomes more difficult; if σ is too small, the minimum point of the penalized function lies far from the optimal solution of the feasible solution space, and solving efficiency drops sharply. The variable μ_e indicates whether a tunnel edge computing node e is deployed at each candidate position, and these variables form the vector set of all possible solutions of a single tunnel:

$$X = \{\, x = (\mu_1, \mu_2, \dots, \mu_{|E|}) \mid \mu_e \in \{0, 1\} \,\}$$

The penalty function is used to explore optimal solutions outside the feasible region Ω: the farther a point lies from the feasible region, the larger the penalty function value. The penalty effect is determined by the percentage of each constraint that is violated; this violated proportion is what enters the penalty term added to the objective function, which keeps the penalty and the objective function \(\sum_{e \in E} \mu_e\) on the same order of magnitude and in proportion. When values just outside the feasible solution space differ little from those inside, their validity can be checked; for example, when the configured number of tunnel edge computing nodes e is clearly lower than the number that should be deployed, a perturbation is required and the penalty function takes effect.
Thirdly, defining an objective function
Therefore, the objective function for the tunnel edge node deployment model is:
Figure SMS_49
adding a penalty function on the basis of an original objective function for realizing the minimum number of tunnel edge calculation nodes.
Fourth step, fusion simulation annealing algorithm
As shown in FIG. 3, the single tunnel edge computing node is configured as E (x)eThe internal energy when the number is x is defined as the value of the objective function in the third step. Finding the optimal number in a feasible solution space to minimize the internal energy, wherein the calculation process is as follows:
Step (1): set the current temperature to T_k, with outer-loop step k; at k = 0, the current annealing initial temperature value is
Figure SMS_50
The maximum number of iterations for each x is iter_max, i.e. the number of inner-loop cycles is iter_max. At the initial step s = 0, a corresponding tunnel edge computing node e configuration state value x_0(0) ∈ Ω is randomly generated and initially taken as both the historical optimal solution and the current solution.
Step (2): during the simulated annealing, the temperature is updated according to the following rule:
T_{k+1} = a · T_k, with k = k + 1
where a is the annealing rate, a constant with a value close to 1, so that the temperature gradually decreases to the target value T_end as k → +∞.
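A minimal sketch of this geometric cooling rule follows; the values of T_0, a, and T_end are illustrative only, not taken from the patent:

```python
def cooling_schedule(t0, a, t_end):
    """Yield T_k, with T_{k+1} = a * T_k, until the target temperature is reached."""
    t = t0
    while t > t_end:
        yield t
        t = a * t  # annealing rate a: a constant close to 1

# Temperatures decrease geometrically from t0 toward t_end.
temps = list(cooling_schedule(t0=100.0, a=0.95, t_end=1.0))
```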
Step (3): suppose the current solution at temperature T_k is
Figure SMS_52
At any step s ≥ 0, the previous state is perturbed according to a preset neighborhood function, and the corresponding solution is x_k(s).
If the internal energy decreases, the current solution is updated; if the internal energy increases, the new solution is accepted as the current solution of step s with probability
P = exp( -(E(x_k(s)) - E(x_k(s-1))) / (c · T_k) )
where c is the Boltzmann constant. If the new solution is not accepted, the solution of step s-1 is retained.
Step (4): at temperature T_k, with maximum number of iterations iter_max, steps (2) and (3) are repeated iter_max times; after the state is stable, the current solution is the optimal solution of the current state; the temperature is then reduced to the next value and the iteration continues.
Step (5): the calculation continues until T_k reaches the target temperature T_end; otherwise, return to step (2).
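Putting steps (1) to (5) together, a minimal Python sketch of the annealing loop might look as follows; the toy energy function, neighborhood function, and parameter values are illustrative stand-ins, not the patent's actual deployment model:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=100.0, t_end=1e-3,
                        a=0.95, iter_max=50, c=1.0, seed=0):
    """Minimize `energy` over configuration states, following steps (1)-(5).

    energy:   objective E(x), including the penalty term from the third step.
    neighbor: preset neighborhood function perturbing the previous state.
    c:        plays the role of the Boltzmann constant in the acceptance rule.
    """
    rng = random.Random(seed)
    current = best = x0                            # step (1): initial solution
    t = t0
    while t > t_end:                               # step (5): stop at target temperature
        for _ in range(iter_max):                  # step (4): inner loop of iter_max steps
            candidate = neighbor(current, rng)     # step (3): perturb the previous state
            delta = energy(candidate) - energy(current)
            # Accept downhill moves always; uphill moves with probability
            # exp(-delta / (c * T_k)), the acceptance rule of step (3).
            if delta <= 0 or rng.random() < math.exp(-delta / (c * t)):
                current = candidate
            if energy(current) < energy(best):
                best = current                     # track the historical optimum
        t = a * t                                  # step (2): T_{k+1} = a * T_k
    return best

# Toy check: minimize (x - 7)^2 over positive integers with +/-1 perturbations.
best = simulated_annealing(
    energy=lambda x: (x - 7) ** 2,
    neighbor=lambda x, rng: max(1, x + rng.choice([-1, 1])),
    x0=20,
)
```

In the deployment problem, the state would be the node configuration vector and `energy` the penalized objective E(X); the toy check above merely verifies that the loop converges toward the minimum.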
To implement the foregoing method, this embodiment further relates to a front-end traffic terminal, which offloads its tasks to edge computing nodes in the edge network adjacent to the tunnel for processing.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a readable storage medium or transmitted from one readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire. The readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media.
Optionally, an embodiment of the present application further provides a storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the method of the above-described embodiments.
Optionally, an embodiment of the present application further provides a chip for executing instructions, where the chip is configured to perform the method of the embodiments illustrated above.
The method of this embodiment is mainly oriented to highway tunnels and is particularly suitable for long or extra-long tunnels of more than 1000 meters. Using the method, the minimum number of tunnel edge computing nodes to deploy under the edge-end model can be calculated, thereby minimizing cost.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A tunnel edge computing node deployment method, characterized in that the method comprises the following steps: setting deployment constraint conditions for tunnel edge computing nodes, setting a penalty function according to the constraint conditions, and defining an objective function E(X);
the deployment constraint conditions of the tunnel edge computing nodes are set with the aim of minimizing deployment cost while ensuring that the time delay limitation, capacity constraint and coverage constraint are met:
in terms of the coverage constraint, any tunnel edge computing node e covers multiple front-end traffic terminals f, and each front-end traffic terminal f may lie within the coverage of multiple tunnel edge computing nodes e at the same time, but each front-end traffic terminal f can only be served by the nearest tunnel edge computing node e;
in terms of the time delay limitation, the front-end traffic terminal f transmits computing tasks to the tunnel edge computing nodes through the cable channel within the tunnel at an uplink data transmission rate R_f, the transmission rate being determined by the device type of the front-end traffic terminal f; the delay constraint for offloading a computing task to a tunnel edge computing node e is set according to the tunnel control type;
in terms of the capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end traffic terminals f served by a tunnel edge computing node e does not exceed the CPU processing capacity of the tunnel edge computing node;
for the penalty function, when the delay from a front-end traffic terminal f to a tunnel edge computing node e exceeds the maximum delay tolerated by the transmission service, the objective function is increased according to the violation of the delay constraint;
and when the resources required by all front-end traffic terminals f served by a tunnel edge computing node e exceed the resource capacity, the objective function is increased according to the violation of the capacity constraint;
E(X) expresses the internal energy of a single tunnel when the number of configured tunnel edge computing nodes e is X, defined as the value of the objective function, and the optimal number of tunnel edge computing nodes is searched for in the feasible solution space through a simulated annealing algorithm.
2. The tunnel edge computing node deployment method of claim 1, wherein the following constraint conditions are set for the coverage constraint:
Figure QLYQS_1
Figure QLYQS_2
Figure QLYQS_3
wherein,
Figure QLYQS_4
is the total number of front-end traffic terminals f, and the set of front-end traffic terminals f is
Figure QLYQS_5
; the set of tunnel edge computing nodes e is
Figure QLYQS_6
, and the total number of candidate positions of tunnel edge computing nodes e is
Figure QLYQS_7
under the conditions of ensuring the coverage constraint, time delay limitation and flow constraint, the optimal tunnel edge computing nodes e to deploy are selected from the candidate position set E such that their number is minimized; a binary variable
Figure QLYQS_8
is set and expressed as follows:
Figure QLYQS_9
wherein each front-end traffic terminal f is covered by at least one tunnel edge computing node e; the distance between a tunnel edge computing node e and a front-end traffic terminal f is
Figure QLYQS_10
and the coverage radius of a tunnel edge computing node e is r; a variable
Figure QLYQS_11
is set to indicate whether a front-end traffic terminal f is covered by a tunnel edge computing node e, expressed as follows:
Figure QLYQS_12
a binary variable
Figure QLYQS_13
is set to indicate whether a front-end traffic terminal f is served by a tunnel edge computing node e, where n is the number of tunnel edge computing nodes covering the front-end traffic terminal f, expressed as follows:
Figure QLYQS_14
3. The tunnel edge computing node deployment method of claim 2, wherein, in terms of the time delay limitation, the delay for offloading a tunnel task to a tunnel edge computing node e is calculated as follows:
the calculation task of a single prepositive flow terminal f in the tunnel is modeled as
Figure QLYQS_15
From a front-end traffic terminalf Offloading to tunnel edge compute nodeseUp and/or>
Figure QLYQS_16
Indicates that the calculated input data is large or small, and>
Figure QLYQS_17
indicating completion of pre-flow terminationf The computing task of (2) requires a CPU cycle; tunnel edge computing nodeeCPU cycle per second of computing power @>
Figure QLYQS_18
the front-end traffic terminal f transmits the input data of the computing task to the tunnel edge computing node e over wireless access, generating a transmission delay; the transmission delay for the front-end traffic terminal f to offload input data of size
Figure QLYQS_19
to the tunnel edge computing node e is calculated as
Figure QLYQS_20
, wherein
Figure QLYQS_21
is the data uplink transmission rate;
the total delay for the front-end traffic terminal f to offload a computing task to the tunnel edge computing node e is then calculated as:
Figure QLYQS_22
4. The tunnel edge computing node deployment method of claim 3, wherein, for different types of front-end traffic terminals f, a downlink control characteristic analysis table is constructed for the links from the tunnel edge computing nodes e to the front-end traffic terminals f;
the tunnel downlink control services are classified into z types marked u, u = 1, 2, 3, …, z; a front-end traffic terminal f whose traffic class is u has a maximum tolerated delay of
Figure QLYQS_23
each front-end traffic terminal f has a fixed service type, and the delay constraint for offloading its computing task to a tunnel edge computing node e is:
Figure QLYQS_24
in terms of the capacity constraint, the total number of CPU cycles required per unit time by the tasks of all front-end traffic terminals f served by a tunnel edge computing node e cannot exceed the CPU processing capacity of the tunnel edge computing node; the capacity constraint is expressed as follows:
Figure QLYQS_25
5. the tunnel edge computing node deployment method of claim 4, wherein: the objective function is:
Figure QLYQS_26
wherein,
Figure QLYQS_27
represents minimizing the number of tunnel edge computing nodes e;
Figure QLYQS_28
is the penalty function; and
Figure QLYQS_29
represents the penalty factor;
Figure QLYQS_30
Figure QLYQS_31
indicates, for each tunnel edge computing node e, whether it is deployed in the solution; these form the vector set of all possible solutions of a single tunnel:
Figure QLYQS_32
6. The tunnel edge computing node deployment method of claim 1, wherein the optimal number is searched for in the feasible solution space through a simulated annealing algorithm so that the internal energy is minimized, specifically comprising the following steps:
step (1): the current temperature is set to T_k, with outer-loop step k; at k = 0, the current annealing initial temperature value is T_0; the maximum number of iterations for each x is set to iter_max, i.e. the number of inner-loop cycles is iter_max; at the initial step s = 0, a corresponding tunnel edge computing node e configuration state value x_0(0) ∈ Ω is randomly generated and initially determined as both the historical optimal solution and the current solution;
step (2): during the simulated annealing, updating is performed according to the following rule:
Figure QLYQS_33
wherein a is the annealing rate, a constant with a value close to 1, so that the temperature gradually decreases to the target value T_end as k → +∞, with k = k + 1;
step (3): suppose the current solution at temperature T_k is
Figure QLYQS_34
at any step s ≥ 0, the previous state is perturbed according to a preset neighborhood function, and the corresponding solution is x_k(s);
if the internal energy decreases, the current solution is updated; if the internal energy increases, the new solution is accepted as the current solution of step s with a certain probability, where c is the Boltzmann constant and the probability is taken as follows:
Figure QLYQS_35
if the new solution is not accepted, retaining the solution of step s-1;
step (4): at temperature T_k, with maximum number of iterations iter_max, steps (2) and (3) are repeated iter_max times; after the state is stable, the current solution is the optimal solution of the current state; the temperature is then reduced to the next value and the iteration continues;
step (5): the calculation continues until T_k reaches the target temperature T_end; otherwise, return to step (2).
7. A terminal, characterized in that: it offloads its tasks to edge computing nodes in the adjacent edge network in a tunnel for processing, and adjusts the number and positions of the deployed nodes so as to minimize the deployment cost while meeting the coverage constraint, capacity constraint and delay constraint of the tunnel edge computing nodes, according to the method of any one of claims 1 to 6.
8. A computer system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
CN202310148605.4A 2023-02-22 2023-02-22 Tunnel edge computing node deployment method and system Active CN115883568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310148605.4A CN115883568B (en) 2023-02-22 2023-02-22 Tunnel edge computing node deployment method and system


Publications (2)

Publication Number Publication Date
CN115883568A true CN115883568A (en) 2023-03-31
CN115883568B CN115883568B (en) 2023-06-02

Family

ID=85761506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310148605.4A Active CN115883568B (en) 2023-02-22 2023-02-22 Tunnel edge computing node deployment method and system

Country Status (1)

Country Link
CN (1) CN115883568B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040185887A1 (en) * 2003-03-20 2004-09-23 Microsoft Corporation Multi-radio unification protocol
EP2755414A1 (en) * 2007-11-27 2014-07-16 Qualcomm Incorporated Interface management in a wireless communication system using subframe time reuse
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 A method of calculating task unloading is carried out based on edge calculations and cloud computing collaboration
CN110717302A (en) * 2019-09-27 2020-01-21 云南电网有限责任公司 Edge computing terminal equipment layout method for real-time online monitoring service of power grid
CN111586762A (en) * 2020-04-29 2020-08-25 重庆邮电大学 Task unloading and resource allocation joint optimization method based on edge cooperation
CN112052086A (en) * 2020-07-28 2020-12-08 西安交通大学 Multi-user safe energy-saving resource allocation method in mobile edge computing network
CN114126066A (en) * 2021-11-27 2022-03-01 云南大学 MEC-oriented server resource allocation and address selection joint optimization decision method
CN114363962A (en) * 2021-12-07 2022-04-15 重庆邮电大学 Collaborative edge server deployment and resource scheduling method, storage medium and system
CN114500560A (en) * 2022-01-06 2022-05-13 浙江鼎峰科技股份有限公司 Edge node service deployment and load balancing method for minimizing network delay
CN115081682A (en) * 2022-05-26 2022-09-20 西南交通大学 Traffic organization optimization method for long and large tunnel construction and computer device
CN115454650A (en) * 2022-10-11 2022-12-09 广东电网有限责任公司 Resource allocation method, device, terminal and medium for microgrid edge computing terminal


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AIGERIM OSPANOVA; BEHROUZ MAHAM: "Delay-Outage Probability of Capacity Achieving-Based Task Offloading for Mobile Edge Computing", 2022 IEEE INTERNATIONAL CONFERENCES ON INTERNET OF THINGS (ITHINGS) AND IEEE GREEN COMPUTING & COMMUNICATIONS (GREENCOM) AND IEEE CYBER, PHYSICAL & SOCIAL COMPUTING (CPSCOM) AND IEEE SMART DATA (SMARTDATA) AND IEEE CONGRESS ON CYBERMATICS (CYBERMATIC *
GUO MIN: "Research on task offloading methods for mobile edge computing oriented to the Industrial Internet", Information Science and Technology series *

Also Published As

Publication number Publication date
CN115883568B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
Chen et al. Energy-efficient offloading for DNN-based smart IoT systems in cloud-edge environments
CN110247793B (en) Application program deployment method in mobile edge cloud
CN112286677B (en) Resource-constrained edge cloud-oriented Internet of things application optimization deployment method
CN109151864B (en) Migration decision and resource optimal allocation method for mobile edge computing ultra-dense network
CN113810233B (en) Distributed computation unloading method based on computation network cooperation in random network
Wang et al. An efficient service function chain placement algorithm in a MEC-NFV environment
CN108650131B (en) Processing system for multi-controller deployment in SDN network
CN111988787B (en) Task network access and service placement position selection method and system
CN111953547B (en) Heterogeneous base station overlapping grouping and resource allocation method and device based on service
CN109639833A (en) A kind of method for scheduling task based on wireless MAN thin cloud load balancing
Gu et al. A multi-objective fog computing task scheduling strategy based on ant colony algorithm
Na et al. An evolutionary game approach on IoT service selection for balancing device energy consumption
Jurenoks et al. Sensor network information flow control method with static coordinator within internet of things in smart house environment
Afrin et al. Robotic edge resource allocation for agricultural cyber-physical system
Cheng et al. Resilient edge service placement under demand and node failure uncertainties
Rui et al. Load balancing in the internet of things using fuzzy logic and shark smell optimization algorithm
Li et al. Adaptive controller placement in software defined wireless networks
Hans et al. Controller placement in software defined Internet of Things using optimization algorithm
Sadegh et al. A two-phase virtual machine placement policy for data-intensive applications in cloud
Wu et al. Resource allocation optimization in the NFV-enabled MEC network based on game theory
Wang Collaborative task offloading strategy of UAV cluster using improved genetic algorithm in mobile edge computing
Li et al. Optimal service selection and placement based on popularity and server load in multi-access edge computing
Henna et al. Distributed and collaborative high-speed inference deep learning for mobile edge with topological dependencies
Yujie et al. An effective controller placement algorithm based on clustering in SDN
CN115883568A (en) Tunnel edge computing node deployment method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant