LU508342B1 - DTS deployment and resource allocation method based on edge-cloud collaboration - Google Patents

DTS deployment and resource allocation method based on edge-cloud collaboration Download PDF

Info

Publication number
LU508342B1
Authority
LU
Luxembourg
Prior art keywords
edge
deployment
resource allocation
task
digital twin
Prior art date
Application number
LU508342A
Other languages
German (de)
Inventor
Tong Liu
Original Assignee
Chongqing Vocational Inst Eng
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Vocational Inst Eng filed Critical Chongqing Vocational Inst Eng
Priority to LU508342A priority Critical patent/LU508342B1/en
Application granted granted Critical
Publication of LU508342B1 publication Critical patent/LU508342B1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0806 Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a DTS deployment and resource allocation method based on edge-cloud collaboration, which comprises the following steps: constructing a digital twin network system model of the physical communication network based on edge-cloud collaboration and generating various types of digital twin services; establishing an objective function for minimizing traffic overhead and network delay according to the model; using an intelligent algorithm based on decision trees and DDPG to classify and deploy the digital twin services; then receiving task-solving requests sent by users, allocating communication resources, and feeding the solution results back to the users after resource allocation. The method effectively remedies the low resource utilization, backbone network traffic overhead, and long network response time caused by differentiated deployment of DTS in the cloud and at the edge.

Description

DTS deployment and resource allocation method based on edge-cloud collaboration

Technical field

The present invention relates to the field of digital twin technology, and specifically to a DTS deployment and resource allocation method based on edge-cloud collaboration.

Background art
The purpose of building a DTN (digital twin network) is to provide various types of DTS (digital twin services) for the physical network. Generally speaking, these services can be divided into dedicated services and general services. Universal DTS are services that can be applied in a variety of scenarios. For example, an edge robot DT system based on the idea of DT (digital twin) as a service has been proposed; it supports flexible deployment and switching among different access methods such as 4G, 5G, and wireless LAN, and provides an ideal venue for testing the performance of different network access technologies. Another work designed a three-dimensional DT reference architecture model suitable for cross-industry applications and applied it in the fields of electromechanical products, medical care, construction, transportation, aerospace, and energy to verify its versatility. Universal DTS also covers services that support multiple application types; in some models, a random offloading scheme is used to reduce the long-term energy consumption of the system, and the proposed offloading scheme is suitable for video analysis, data mining, intelligent algorithm training, and similar workloads. Dedicated DTS, on the other hand, mainly refers to services applied in specific fields or to a specific application.
In addition, a DT reconstruction and application method for inland waterways based on 3D video fusion has been proposed; DTS based on this method can solve the problems of video fragmentation, dispersed subsystem data, and untimely emergency response in inland waterway safety monitoring. It is worth noting that not all services provided by DTS are novel. Part of DTS is still used to solve traditional problems, but compared with the original solutions, DTS can effectively improve performance. For example, DT has been used to solve the traffic flow prediction problem in the field of IoV (Internet of Vehicles); although other literature has also addressed this problem, the traffic flow prediction scheme based on DTN outperforms traditional solutions in reducing network delay.
Overall, DTS can improve network operation efficiency, improve network autonomy and intelligence, and promote smooth network evolution and functional upgrades. However, the deployment of DTS faces a series of challenges and problems: (1) Differentiated deployment of DTS in the cloud and at the edge can easily lead to resource waste and additional resource consumption.
DTS is generally generated in a CC (cloud computing center) with powerful computing capabilities. Deploying DTS in the cloud can effectively reduce the traffic overhead of the backbone network, but it ignores the effectiveness of edge-cloud collaboration and edge-edge collaboration in supporting small-scale DTS deployment, resulting in a waste of resources in the edge system. At the same time, the distance between the CC and users is relatively large, and deploying DTS in the cloud increases the time it takes for users to obtain services. On the other hand, deploying DTS at the edge can provide users with fast network response and improve user satisfaction; however, delivering DTS from the CC to edge nodes causes additional traffic overhead and increases the transmission pressure on the backbone network. (2) The joint optimization of DTS deployment and service result delivery increases the difficulty of resource management.
There is a close relationship between the deployment of DTS and the delivery of service results. The deployment location and deployment method of DTS have an important impact on user task transmission time, calculation time, and power consumption. Therefore, the deployment of DTS must comprehensively consider service result delivery. When making joint optimization decisions, it is necessary to fully consider the storage resources required by DTS, combined with the communication and computing resource status of the edge system, as well as factors such as service type, service object, and user delay requirements, in order to decide how to achieve joint optimization of multi-dimensional resources. The increasing number of constraints raises the difficulty of developing resource allocation strategies.
Therefore, a DTS deployment and resource allocation method is urgently needed to solve the problem of low resource utilization caused by differentiated deployment of DTS in the cloud and at the edge, which increases backbone network traffic overhead and network response time.

Summary of the invention
In view of the shortcomings of the existing technology, the purpose of the present invention is to provide a DTS deployment and resource allocation method based on edge-cloud collaboration: deploy DTS in an edge-cloud collaborative manner to reduce the backbone network traffic overhead of deploying DTS in the cloud and at the edge; conduct mathematical modeling of traffic and delay consumption; analyze the effectiveness of edge-cloud collaboration in deploying DTS; and, combining DTS service delivery with user task offloading, design efficient communication and computing resource allocation plans to improve system resource utilization, thereby solving the technical problems of low resource utilization, increased backbone network traffic overhead, and increased network response time caused by differentiated deployment of DTS in the cloud and at the edge.
In order to achieve the above purpose, the technical solution adopted in the present invention is as follows.

A DTS deployment and resource allocation method based on edge-cloud collaboration, characterized in that it includes the following steps:
Step 1: obtain historical operation data of the physical communication network, construct a digital twin network system model of the physical communication network based on edge-cloud collaboration in the cloud computing center, and generate various types of digital twin services;

Step 2: establish an objective function for minimizing traffic overhead and network latency based on the service deployment decision and resource allocation decision of the said digital twin network system model based on edge-cloud collaboration; and

Step 3: use the intelligent algorithm based on decision trees and DDPG to obtain a classified deployment strategy for the digital twin services that satisfies the objective function, and deploy the digital twin services accordingly; then, upon receiving a task-solving request sent by a user, obtain the optimal allocation strategy of communication resources and computing resources that satisfies the objective function in the current state, solve the task after allocating communication and computing resources, and feed the solution results back to the user.
Further, the objective function described in step 2 is constructed as follows:

Step 2.1: analyze the service deployment of the said edge-cloud collaboration-based digital twin network system model to obtain digital twin service deployment decisions and task solving decisions;

Step 2.2: model the backbone network traffic overhead based on the digital twin service deployment decisions;

Step 2.3: analyze the resource allocation of the said digital twin network system model based on edge-cloud collaboration to obtain a wireless communication resource allocation decision and a collaborative system computing resource allocation decision;

Step 2.4: establish a model of the time consumed for task solving based on the task solving decision, the wireless communication resource allocation decision, and the collaborative system computing resource allocation decision;

Step 2.5: compose a cost function based on the said backbone network traffic overhead model and the task solving time consumption model; and

Step 2.6: construct the said objective function based on the cost function.
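The composition in Steps 2.5 and 2.6 can be sketched as follows. The weighted linear form of the cost function and the weight value are illustrative assumptions for this sketch, not taken from the patent text.

```python
def cost(traffic_overhead, solving_delay, w=0.5):
    # Step 2.5: joint cost characterized by backbone traffic overhead
    # and task-solving delay; the weight w is an assumed trade-off knob.
    return w * traffic_overhead + (1 - w) * solving_delay

def objective(candidates, w=0.5):
    # Step 2.6: minimize the cost over candidate (deployment, allocation)
    # decisions, each summarized here by its (traffic, delay) outcome.
    return min(cost(t, d, w) for t, d in candidates)
```

For example, a candidate with traffic 10 and delay 2 costs 6.0 under equal weighting, and the objective picks the cheapest candidate.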
Further, the intelligent algorithm based on decision trees and DDPG includes a decision tree-based digital twin service classification deployment algorithm model, a DDPG-based wireless communication resource allocation algorithm model, and a DDPG-based computing resource allocation algorithm model.
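As a rough illustration of what the decision tree-based classified deployment model decides, the sketch below encodes the rule suggested later in the embodiment: frequently requested, self-contained services go to the edge, while services needing many interacting DT models stay in the cloud. The function name and thresholds are hypothetical assumptions, not the patent's trained model.

```python
def classify_deployment(request_rate, dt_models_needed,
                        rate_threshold=10.0, model_threshold=2):
    """Hypothetical stand-in for the decision-tree deployment classifier."""
    if dt_models_needed > model_threshold:
        return "cloud"   # heavy cross-model interaction stays in the CC
    if request_rate >= rate_threshold:
        return "edge"    # frequent, self-contained services go to the MES
    return "cloud"       # rarely called services need not occupy the edge
```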
Further, in step 3, the process of solving the task based on the intelligent algorithm of decision trees and DDPG is as follows:

Step 3.1: obtain a classified deployment strategy for the digital twin services using the decision tree-based digital twin service classified deployment algorithm model, and deploy the digital twin services according to this strategy;

Step 3.2: receive the task-solving request sent by the user;

Step 3.3: input the channel state between the user and the edge computing server, use the DDPG-based wireless communication resource allocation algorithm model to obtain the optimal communication resource allocation strategy that satisfies the objective function in the current state, and allocate communication resources accordingly;

Step 3.4: the user offloads the task to the digital twin network system model based on edge-cloud collaboration;

Step 3.5: the digital twin network system model based on edge-cloud collaboration determines, from the task, where the digital twin service that solves it is deployed. If it is deployed in the cloud, the task is offloaded to the cloud, and the solution result is fed back to the user after the task is solved. If it is deployed at an edge node, then, based on the user's demand for computing resources and the computing resources that each edge computing server in the edge node can provide, the DDPG-based computing resource allocation algorithm model is used to obtain the optimal edge node computing resource allocation strategy that satisfies the objective function in the current state, and computing resources are allocated accordingly; and

Step 3.6: the edge nodes collaborate to solve the task and feed the results back to the user.
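The request flow of Steps 3.2 to 3.6 can be sketched as a minimal dispatch function. The callables passed in are hypothetical stand-ins for the two DDPG policies and the cloud/edge solvers; none of these names come from the patent.

```python
def handle_request(task, deployed_at_edge, edge_has_capacity,
                   allocate_bandwidth, allocate_compute,
                   solve_at_edge, solve_in_cloud):
    allocate_bandwidth(task)                  # Step 3.3: bandwidth policy
    if deployed_at_edge(task) and edge_has_capacity(task):
        allocate_compute(task)                # Step 3.5: compute policy
        return solve_at_edge(task)            # Step 3.6: edge collaboration
    return solve_in_cloud(task)               # otherwise offload to the CC
```

The branch mirrors Step 3.5: edge solving requires both local deployment of the service and sufficient edge resources; every other case falls back to the cloud.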
The significant effects of the present invention are: (1) A solution, DDRAS, for joint DTS deployment and service result delivery based on edge-cloud collaboration is proposed. This solution makes full use of the advantages of cloud computing and MEC (mobile edge computing) to classify and deploy DTS according to its characteristics. (2) A cost model covering the creation, update, deployment, and invocation of DTS is constructed. The backbone network traffic overhead involved in building the DT model, updating the DT model, and deploying DTS is mathematically modeled; the time consumed when users call DTS and when DTS results are delivered is analyzed; and a joint cost model characterized by backbone network traffic overhead and delay consumption is constructed. (3) An objective function aimed at minimizing the cost function is constructed. With this objective, a mathematical optimization model is built around the optimal allocation of communication and computing resources. Combined with the service process of DTN, communication resource allocation and computing resource allocation are described as an MDP, and an intelligent algorithm based on decision trees and DDPG is proposed. It effectively remedies the low resource utilization caused by differentiated deployment of DTS in the cloud and at the edge, and the resulting increase in backbone network traffic overhead and network response time.
Brief description of the drawings
Figure 1 shows a flowchart of the method of the present invention;
Figure 2 shows a schematic diagram of the structure of the digital twin network system model based on edge-cloud collaboration in the present invention;
Figure 3 is a flow chart of the intelligent algorithm based on decision trees and DDPG;
Figure 4 shows the comparison of traffic overhead for different digital twin service deployment strategies;
Figure 5 shows the comparison of transmission time consumed by different communication resource allocation algorithms with different total bandwidths;
Figure 6 shows a comparison of the transmission time consumed by different communication resource allocation algorithms under the condition of incremental user bandwidth demand;
Figure 7 shows a comparison of the computation time consumed by different computational resource allocation schemes with different total computational resources;
Figure 8 shows a comparison of the computation time consumed by different computing resource allocation schemes under the condition of growth in users' computational resource demand;
Figure 9 shows a comparison of the cost function values for the different scenarios.
Specific implementations
The specific embodiments of the present invention and the working principle are described in further detail below in conjunction with the accompanying drawings.
As shown in Figure 1, a DTS deployment and resource allocation method based on edge-cloud collaboration comprises the following specific steps:
Step 1: Obtain historical operation data of the physical communication network, construct a digital twin network system model based on edge-cloud collaboration of the physical communication network in the cloud computing center, and generate various types of digital twin services;
The digital twin network system model based on edge-cloud collaboration is shown in Figure 2. The lower left part in Figure 2 is the physical communication network, which is composed of the user layer, edge layer, backbone network and cloud layer. Terminals in the user layer include mobile phones, tablets, laptops, etc. Each TU (end user) uses 5G or wireless LAN to access the network. The upper layer of the user layer is the network edge layer, which is composed of WAP (wireless access point) and MES (edge computing server). WAP provides wireless access services to TU and is responsible for routing the original data transmitted by TU to the cloud through the backbone network. The MES in the edge layer includes computing servers and storage servers, which provide task solving and data storage services for TU. In order to shorten the distance between users and network resources, WAP and MES are installed in the same location and connected to each other through optical fiber. The edge layer and the cloud layer are connected through the backbone network. The backbone network consists of a series of high-performance switches and routers, and these network elements are also connected through high-speed optical fibers. Extremely powerful computing and storage servers are deployed in the cloud layer of the physical network. Although these servers are geographically far away from TU, they can provide TU with abundant computing resources and storage resources.
Since the computing and storage servers deployed in CC are extremely powerful, the digital twin network DTN is considered to be built in the cloud.
The DTN created in the CC is shown in the lower right part of Figure 2. It includes digital models of physical network elements such as the user terminals TU, the edge computing servers MES, switches, and routers in the digital twin space. These digital models are connected through virtual channels to form a DTN that is highly consistent with the real physical network. At the same time, the DTN and the physical network are connected through virtual-real interactive channels, and updated data from the physical communication network is transmitted to the DTN through these channels, enabling rapid updates of the digital models in the DTN. The construction process of the digital twin service DTS is shown in the upper part of Figure 2. The cloud computing center CC cleans, filters, and mines the twin data sent by users and extracts valuable data to build virtual twin models of the physical network elements. On this basis, with the help of historical operational data of the physical communication network, the interaction models between the various virtual models are created in the DTN, yielding a digital twin network that can accurately reproduce the physical network. In the DTN, various types of digital twin services DTS can be generated with the help of intelligent algorithms such as neural networks, logistic regression, and random forests.
In the embodiment of the present invention, the digital twin service DTS created in the CC can assist the physical network in achieving efficient resource configuration and management. It can also provide various services to physical network elements, such as assisting TUs in optimizing task offloading. However, due to the long distance between the CC and the TUs, calling DTS in the CC increases the data transmission time. On the other hand, deploying DTS in MES is limited by memory capacity and also causes additional backbone network traffic. Therefore, this embodiment stores the digital twin services DTS in an edge-cloud collaborative manner, based on the attribute differences of the DTS and combining the respective advantages of cloud computing and edge computing.
The model considered in this embodiment can easily be extended to many specific application scenarios. Taking the Internet of Vehicles (IoV) as an example, cameras, sensors, RSUs (roadside units), vehicles, and other equipment installed on the car body and at the roadside can be regarded as TUs. They jointly collect basic data in the IoV, such as vehicle engine status, road width, and traffic flow at intersections, and transmit these data to the CC, where the DT of the IoV is built. The RSU in the IoV supports the access of various TUs; its function is equivalent to the WAP in the system model. MES installed near the RSUs provide various edge services to users. Take 3D visual navigation as an example.
Since calling this DTS does not involve data interaction between multiple DT models, and users generally request the service frequently, it can be deployed at the edge of the network in advance to reduce the traffic overhead caused by repeated requests in the backbone network. On the other hand, due to storage space limitations, it is impossible to deploy many types of DT models in MES. If the service requested by the user requires calling multiple DT models with a large amount of interactive information between them, deploying such a service at the edge will leave the user's request unanswered. Take a vehicle requesting multi-vehicle collaborative intelligent driving services from the DTN as an example: this service not only needs to retrieve the DT model of each collaborating vehicle and of multiple roads, but also needs to realize the transmission of various communication and control information between vehicles. The limitations of MES in storage capability and global information make these operations difficult to achieve. However, since a large number of DT models are stored in the CC, together with the interactive information between them, such DTS should be deployed in the cloud.
Step 2: Establish an objective function for minimizing traffic overhead and network delay based on the service deployment decision and resource allocation decision of the digital twin network system model based on edge-cloud collaboration, as follows.
Step 2.1: analyze the service deployment of the said edge-cloud collaboration-based digital twin network system model to obtain digital twin service deployment decisions and task solving decisions.
For the digital twin service deployment decisions:
Let there be a total of m edge computing servers MES in the physical communication network, forming the MES set M = {1, 2, ..., m}. Since the wireless access points WAP and the edge computing servers MES are installed in pairs, the number of WAPs is also m. Assume that the number of TUs in the network is n, which together constitute the user set N = {1, 2, ..., n}. u_{m,n} is used to denote user n who accesses the m-th WAP and offloads tasks to the m-th MES.

The amount of data sent by user u_{m,n} to the CC to build the DTN is denoted F_{m,n}. It follows that the total amount of data sent to the CC for building the DTN is Σ_{m∈M} Σ_{n∈N} F_{m,n}. After receiving these data, the CC extracts valuable data from the mass of raw data to create the DT models, and combines the interactive data between the digital twin models to build a DTN that is highly consistent with the real network. The constructed DTN is expressed as:

DT = f(F_{m,n}, S_{m,n}), ∀m ∈ M, n ∈ N    (1)

where F_{m,n} represents the raw data used to construct the DTN and S_{m,n} represents the DTS provided by the DTN; the number of DTS generated in the DTN is assumed to be n_s.

The data size of S_{m,n} is denoted H_{m,n}. It is worth noting that after the initial modeling is completed, the DT models in the DTN, the virtual interactive channels used to connect them, and the DTS are updated synchronously with changes in the physical network. The size of the data used to update service S_{m,n} is denoted G_{m,n}. If S_{m,n} is deployed in the cloud, the update data are transmitted to the CC through the edge layer and the backbone network; if S_{m,n} is deployed at the edge, updating S_{m,n} becomes easier. For convenience of analysis, the update data are set to their nominal value per unit time slot.

Further, a binary variable a_{m,n} denotes the deployment decision for S_{m,n}. If the service S_{m,n} is saved in the cloud, a_{m,n} takes the value 0; if S_{m,n} is deployed on MES m, then a_{m,n} = 1. Based on the above analysis, a_{m,n} can be expressed as:

a_{m,n} = { 1, if S_{m,n} is deployed at the edge; 0, if S_{m,n} is stored in the cloud }    (2)

It is worth noting that even if the service S_{m,n} is deployed at the edge, a copy of S_{m,n} is also saved in the CC. The deployment decision document can then be expressed as a = {a_{m,n}}_{m∈M, n∈N}.    (3)
For task solving decisions.
Deploying the digital twin service DTS in the cloud computing center means that tasks can only be solved in the cloud. In contrast, there are two possible cases when DTS is deployed at the edge: if the computing resources provided by the edge node are sufficient to support task solving, and the storage space of the edge node can hold the task, the task can be solved directly at the edge; when either condition cannot be met, the task can only be offloaded to the CC for solution. To facilitate analysis, the task solving decision is defined as b_{m,n}: if the task is solved at the edge, b_{m,n} equals 1; conversely, b_{m,n} equal to 0 means the task is solved in the CC.

According to the above analysis, the expression for b_{m,n} is:

b_{m,n} = { 1, if the task is solved at the edge; 0, if the task is solved in the CC }

b = {b_{m,n}}_{m∈M, n∈N} is used to represent the decision document for solving each task.
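A small numeric sketch of the two decision documents, using plain nested lists. The sizes and entries are illustrative assumptions; the final check reflects the text above: a task can only be solved at the edge when its service is deployed there.

```python
M, N = 3, 4                        # illustrative network dimensions
a = [[0] * N for _ in range(M)]    # Eq. (2): a[m][n] = 1 iff S_{m,n} on MES m
b = [[0] * N for _ in range(M)]    # b[m][n] = 1 iff task solved at the edge

a[1][2] = 1                        # deploy one service at the edge ...
b[1][2] = 1                        # ... so its task may be solved there

# Edge solving requires edge deployment: b[m][n] = 1 implies a[m][n] = 1.
assert all(b[m][n] <= a[m][n] for m in range(M) for n in range(N))
```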
Step 2.2: Model backbone network traffic overhead based on digital twin service deployment decisions.
For convenience of analysis, it is assumed that the user terminal TU has only one task to solve at a given time, and the task is expressed as:

w_{m,n} = {P_{m,n}, C_{m,n}, T_{m,n}, O_{m,n}, S_{m,n}}, ∀m ∈ M, n ∈ N    (4)

where P_{m,n} denotes the raw input data needed for the task; C_{m,n} represents the computing resources needed to solve it; T_{m,n} is the maximum delay tolerated by the TU (if the solving time exceeds this value, the task fails); O_{m,n} denotes the data size of the solution result; and S_{m,n} represents the digital twin service DTS required to solve the task.

The data stored in the CC memory include: the data F_{m,n} sent by the TU to the CC; the service S_{m,n} generated in the CC, whose data size is H_{m,n}; the data for updating S_{m,n}, whose size is G_{m,n}; the task raw data of size P_{m,n}; and the solution result of size O_{m,n}. The storage space capacity that the CC needs to provide is therefore:

D_{CC} = Σ_{m∈M} Σ_{n∈N} (F_{m,n} + H_{m,n} + G_{m,n} + P_{m,n} + O_{m,n})    (5)

When the digital twin service DTS is deployed in the CC, the traffic overhead of the backbone network includes: the TU data F_{m,n} transmitted to the cloud for building the DTN; the update data G_{m,n} for S_{m,n}; the task raw data P_{m,n} sent by the TU to the CC; and, after the CC successfully solves the task, the solution result O_{m,n} fed back to the TU. Based on the above analysis, the backbone network traffic overhead in the CC deployment mode is:

B_{CC} = Σ_{m∈M} Σ_{n∈N} [F_{m,n} + (1 − a_{m,n})(G_{m,n} + P_{m,n} + O_{m,n})]    (6)

On the other hand, if the service S_{m,n} is deployed on the edge computing server MES, the data that the MES needs to store include: the service S_{m,n} sent from the cloud to the edge, of size H_{m,n}; the update data of size G_{m,n}; the task raw data P_{m,n} sent by the user; and the solution result of size O_{m,n}. Based on the above analysis, the storage space to be provided by the edge system is:

D_{MES} = Σ_{m∈M} Σ_{n∈N} a_{m,n}(H_{m,n} + G_{m,n} + P_{m,n} + O_{m,n})    (7)

In addition, since DTS updating, task solving, and result feedback in the edge deployment mode are all completed at the edge, the backbone network overhead only includes the overhead of transferring F_{m,n} to the CC and the consumption of the CC delivering S_{m,n} to the edge, expressed as:

B_{MES} = Σ_{m∈M} Σ_{n∈N} (F_{m,n} + a_{m,n} H_{m,n})    (8)

Combining the above analysis with Eqs. (2), (6), and (8), the backbone network traffic overhead can be expressed as:

B = Σ_{m∈M} Σ_{n∈N} [F_{m,n} + (1 − a_{m,n})(G_{m,n} + P_{m,n} + O_{m,n}) + a_{m,n} H_{m,n}]    (9)
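The per-service contribution to Eq. (9) can be checked numerically. The data sizes below are illustrative assumptions, not values from the patent; the comments state what each deployment mode pays on the backbone.

```python
def per_service_traffic(F, G, P, O, H, a):
    # a = 0 (cloud): update data G, task data P and result O all cross the
    # backbone in addition to the build data F.
    # a = 1 (edge): only F and the one-off service delivery H cross it.
    return F + (1 - a) * (G + P + O) + a * H

cloud_cost = per_service_traffic(F=10, G=2, P=4, O=1, H=6, a=0)  # 10 + 7 = 17
edge_cost = per_service_traffic(F=10, G=2, P=4, O=1, H=6, a=1)   # 10 + 6 = 16
```

With these numbers edge deployment is slightly cheaper, because the recurring update, task, and result traffic outweighs the one-off delivery cost H; larger H or rarely called services would tip the comparison the other way.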
Step 2.3: analyzing the resource allocation of the said digital twin network system model based on edge-cloud collaboration, obtaining a wireless communication resource allocation decision and a collaboration system computational resource allocation decision; and
For wireless communication resource allocation decisions.
R^u_{m,n} and R^d_{m,n} are used to represent the uplink and downlink wireless bandwidth resources between user u_{m,n} and wireless access point WAP m, respectively. Let the total wireless bandwidth resource of the system be R; then R^u_{m,n} = r^u_{m,n} × R and R^d_{m,n} = r^d_{m,n} × R, where r^u_{m,n} ∈ [0,1] and r^d_{m,n} ∈ [0,1] represent the fractions of the total bandwidth that WAP m assigns to terminal user n on the uplink and downlink.

In order to improve spectrum resource utilization, it is assumed that the network works in TDD mode, i.e., the uplink and downlink have exactly the same bandwidth resources, satisfying {r^u_{m,n}}_{m∈M, n∈N} = {r^d_{m,n}}_{m∈M, n∈N}. For ease of analysis, both are denoted r_{m,n}, satisfying Σ_{m∈M} Σ_{n∈N} r_{m,n} ≤ 1, ∀m, n. At the same time, r = {r_{m,n}}_{m∈M, n∈N} is used to represent the spectrum allocation decision document.

According to Shannon's formula, the uplink channel rate can be expressed as:

R^u_{m,n} = r_{m,n} R log₂(1 + p_n h_{m,n} / (σ_n² + η_n²))    (10)

where p_n is the transmit power of user u_{m,n}, h_{m,n} is the channel coefficient, and σ_n² and η_n² represent the interference and noise in the wireless channel, respectively.

Similarly, the downlink rate can be expressed as:

R^d_{m,n} = r_{m,n} R log₂(1 + p_m h′_{m,n} / (σ_m² + η_m²))    (11)

where p_m is the transmit power of MES m, h′_{m,n} is the channel coefficient of the downlink channel, and σ_m² and η_m² represent the noise and interference, respectively. In addition, since the total communication bandwidth of the system is limited, Σ_{m∈M} Σ_{n∈N} R_{m,n} ≤ R holds.
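Eq. (10) can be evaluated directly. The parameter values below are illustrative assumptions chosen only to show the shape of the computation.

```python
import math

def uplink_rate(r, R, p, h, sigma2, eta2):
    """Eq. (10): achievable uplink rate of user u_{m,n}, where r is the
    bandwidth fraction, R the total bandwidth in Hz, p the transmit power,
    h the channel coefficient, and sigma2/eta2 the interference and noise
    powers."""
    return r * R * math.log2(1 + p * h / (sigma2 + eta2))

rate = uplink_rate(r=0.1, R=20e6, p=0.2, h=1e-6, sigma2=1e-9, eta2=1e-9)
# SNR = 0.2e-6 / 2e-9 = 100, so rate = 2 MHz * log2(101), roughly 13.3 Mbit/s
```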
Regarding the computational resource allocation decision of the collaborative system:

Due to equipment performance limitations, an MES cannot provide as much computing resource as the CC. Therefore, it is necessary to formulate an efficient edge computing resource allocation plan based on the current resource conditions of the edge nodes and the resource requirements of the tasks.
Let C_{m,n} denote the computing resources that edge node m assigns to u_n, and let c_{m,n} \in [0,1] denote the ratio of the computing resources that MES_m assigns to the task to its total computing resources, satisfying \sum_{m \in M} \sum_{n \in N} c_{m,n} \le 1. C_{comp} represents the total amount of computational resources that the edge system can provide, so that C_{m,n} = c_{m,n} \times C_{comp} holds. c = \{c_{m,n}\}_{m \in M, n \in N} represents the computational resource allocation decision. In addition, because the computational resources provided by the edge system are limited, \sum_{m \in M} \sum_{n \in N} C_{m,n} \le C_{comp} holds.
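The mapping from allocation fractions c_{m,n} to absolute resources C_{m,n}, together with the feasibility constraints above, can be sketched as follows; the function name and the example figures are assumptions for illustration only:

```python
def allocate_compute(fractions, C_comp):
    """Map allocation fractions c_{m,n} to absolute resources
    C_{m,n} = c_{m,n} * C_comp, enforcing 0 <= c_{m,n} <= 1 and
    a total fraction no greater than 1, as stated above."""
    if any(not 0.0 <= c <= 1.0 for row in fractions for c in row):
        raise ValueError("each fraction must lie in [0, 1]")
    if sum(c for row in fractions for c in row) > 1.0:
        raise ValueError("fractions exceed the edge system's total capacity")
    return [[c * C_comp for c in row] for row in fractions]

# Two MES, two TUs: 20% + 30% + 10% + 10% of a 10-unit edge pool.
plan = allocate_compute([[0.2, 0.3], [0.1, 0.1]], C_comp=10.0)
```

An infeasible plan (fractions summing above 1) is rejected before any resources are committed, mirroring the constraint \sum\sum c_{m,n} \le 1.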
Step 2.4: Establish a model of the time consumed for task solving based on the task-solving decision, the wireless communication resource allocation decision and the collaborative system computational resource allocation decision.
In this example, the said task-solving consumption time model includes the cloud solving consumption time and the edge solving consumption time, as follows.

1) Time consumed by cloud solving
In the cloud solving mode, TU first sends the task to the edge layer through the uplink channel. After receiving the task, MES immediately determines whether the DTS required to solve the task is deployed locally. If
DTS has been deployed and the edge system currently has sufficient computing resources and storage space remaining, this task will be solved at the edge. On the contrary, if the MES does not deploy the DTS, or the currently remaining computing resources cannot support task solving, the MES will send the task to
CC through the backbone network and use CC's powerful computing power to solve the task. After successfully solving the task, CC feeds back the results to
TU.
Therefore, the time required to solve the task in the cloud includes: the time consumed by the TU sending the task to the MES; the time consumed by the MES sending the task to the CC through the backbone network; the computing time consumed by the CC to solve the task; the transmission time consumed by the CC feeding the solution result back to the MES; and the sending time for the MES to deliver the result to the TU. Based on the above analysis, the total time consumed in the cloud solving mode can be expressed as:

T^c_n = \frac{d_n}{R^u_{m,n}} + \frac{d_{m,c}}{V_{fiber}} + \frac{\eta_n}{C^c_n} + \frac{d_{m,c}}{V_{fiber}} + \frac{Q_n}{R^d_{m,n}} \qquad (12)

In the formula, d_{m,c} represents the distance from MES_m to the CC, V_{fiber} represents the transmission speed of information in the optical fiber, which takes the value of 2 \times 10^8 m/s, \eta_n denotes the CPU cycles required to solve the task, and C^c_n is the computing resource provided by the CC for the task.

2) Time consumed by edge solving
In the edge solving mode, TU only needs to send the original data of the task to the edge node and use the DTS deployed at the edge node to solve the task. After the task is successfully solved, the edge node feeds back the results to the TU. Therefore, the expression for the time consumption of the edge solving task is:
T^e_n = \frac{d_n}{R^u_{m,n}} + \frac{\eta_n}{C_{m,n}} + \frac{Q_n}{R^d_{m,n}} \qquad (13)

In the formula, C_{m,n} represents the computing resources allocated to the TU by the edge system.

Combining the task-solving decision b = \{b_{m,n}\}_{m \in M, n \in N}, the time required to solve the task of user u_n is obtained as:

T_n = (1 - b_n) T^c_n + b_n T^e_n \qquad (14)
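Eqs. (12)-(14) can be transcribed directly into code; the reconstructed term structure of the two time models (upload, compute, download, plus the fiber legs in the cloud case) is taken as an assumption here, and all argument names are illustrative:

```python
def cloud_time(d_n, R_up, d_mc, v_fiber, cycles, C_cloud, Q_n, R_down):
    """Eq. (12): upload + fiber forwarding + cloud computing
    + fiber return + result download."""
    return (d_n / R_up + d_mc / v_fiber + cycles / C_cloud
            + d_mc / v_fiber + Q_n / R_down)

def edge_time(d_n, R_up, cycles, C_mn, Q_n, R_down):
    """Eq. (13): upload + edge computing + result download,
    with C_mn the edge allocation C_{m,n}."""
    return d_n / R_up + cycles / C_mn + Q_n / R_down

def task_time(b_n, t_cloud, t_edge):
    """Eq. (14): b_n = 0 selects cloud solving, b_n = 1 edge solving."""
    return (1 - b_n) * t_cloud + b_n * t_edge
```

The binary decision b_n simply switches between the two duration models, which is why the objective below treats it as a 0/1 variable.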
Step 2.5: composing a cost function based on said backbone network traffic overhead model and task solving consumption duration model; and
Building the DTN requires various types of data in the physical network as support. On the one hand, the transmission of large amounts of data will inevitably bring a great operational burden to the backbone network; on the other hand, the purpose of building the DTN is to provide users with a rapid network response. In order to comprehensively measure the impact of backbone network traffic overhead and network delay on system performance, a cost function U_{m,n} including the backbone network traffic overhead and the network delay is constructed, expressed as:
U_{m,n} = \alpha B_{m,n} + (1 - \alpha) T_{m,n} \qquad (15)

where the weighting factor \alpha represents the different levels of importance that the system places on the backbone network traffic overhead and the network response latency, satisfying 0 < \alpha < 1. A higher value of \alpha means that the system is currently more concerned about the traffic overhead in the backbone network, while a lower value of \alpha represents that the system is currently more concerned with the ability of the network to provide the user with a fast network response.
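The weighted cost of Eq. (15) is trivial to compute once B and T are on comparable scales (the simulation later normalizes both); this sketch, with assumed names and values, shows the role of the weighting factor:

```python
def cost(alpha, B_overhead, T_delay):
    """Eq. (15): U = alpha * B + (1 - alpha) * T, with alpha in (0, 1).
    B and T are assumed to be pre-normalized to comparable scales."""
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must lie in (0, 1)")
    return alpha * B_overhead + (1.0 - alpha) * T_delay

# alpha = 0.5 weighs traffic overhead and latency equally
# (the value later used in the simulation).
balanced = cost(0.5, B_overhead=0.6, T_delay=0.4)
```

Sliding alpha toward 1 makes the optimizer favor traffic savings; sliding it toward 0 favors response time, exactly as described above.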
Step 2.6: construct said objective function based on the cost function.
Based on the above analysis, it can be seen that in this embodiment, the ultimate goal is to reduce the cost function jointly represented by backbone network traffic overhead and network response time. The above goals can be achieved through reasonable DTS deployment, optimal allocation of communication resources, and optimal allocation of computing resources.
Therefore, the objective function constructed based on the cost function is expressed as:

\min \sum_{m \in M} \sum_{n \in N} U_{m,n}
s.t. \; C1: B_{m,n} \le B_{\max}
C2: T_{m,n} \le T_{\max}
C3: D_m \le D_{\max}
C4: a_{m,n} \in \{0, 1\}
C5: b_{m,n} \in \{0, 1\} \qquad (16)
C6: \sum_{m \in M} \sum_{n \in N} C_{m,n} \le C_{comp}
C7: 0 \le c_{m,n} \le 1
C8: C_{m,n} \ge C_n^{\min}
C9: 0 \le r_{m,n} \le 1
C10: \sum_{m \in M} \sum_{n \in N} R_{m,n} \le R

where C_n^{\min} denotes the minimum computing resource required to support the solving of the task of u_n.
If the amount of data sent by all network users to the CC exceeds the upper limit of the backbone network throughput, it will inevitably lead to network congestion and data loss. Therefore, C1 is proposed to guarantee that the traffic carried by the backbone stays below its maximum capacity. C2 ensures that the task can be successfully solved within the user's latency tolerance. C3 is a limitation on the MES storage space: since the storage capacity of the MES is limited, the amount of data saved in the MES cannot exceed its storage capacity. C4 and C5 are restrictions on the deployment decision and the task-solving decision; both variables are binary. Since MEC computing resources are limited, C6 is proposed to ensure that the total computing resources allocated to all TUs do not exceed the total computing resources owned by the MEC collaboration system. C7 is a limit on the proportion of computing resources that can be allocated. C8 is used to ensure that the computing resources allocated to a TU by the edge nodes can effectively support task solving. C9 is a restriction on the proportion of communication resources that can be allocated. C10 is the limitation on communication resources, ensuring that the sum of the bandwidth allocated to all users cannot exceed the total system bandwidth.
The above objective function is a typical mixed-integer nonlinear programming problem, and solving this type of model is extremely complex. Further analysis shows that DTS deployment and task solving can be divided into three steps, namely DTS deployment, communication resource allocation and computing resource allocation. Based on the above analysis, this embodiment proposes an algorithm based on decision trees and DDPG to solve the optimization model; please refer to step 3 for details.
Proceeding to step 3: use the intelligent algorithm based on decision trees and DDPG to obtain the classified deployment strategy of the digital twin service that satisfies the objective function, and perform the classified deployment of the digital twin service. Then, upon receiving a task-solving request sent by the user, obtain the optimal allocation strategy of communication resources and computing resources that satisfies the objective function under the current state, solve the task after allocating the communication and computing resources, and feed the solution result back to the user. The specific process is as follows:
In this embodiment, the intelligent algorithm based on decision trees and
DDPG consists of a decision-tree-based digital twin service classification deployment algorithm model, a DDPG-based wireless communication resource allocation algorithm model, and a DDPG-based computing resource allocation algorithm model. Specifically:
Regarding the described decision-tree-based digital twin service classification deployment algorithm model:
The decision tree algorithm can construct a decision tree model based on the characteristics of the data to achieve data classification. The DTS edge-cloud collaborative deployment considered in this embodiment essentially classifies the digital twin service DTS according to characteristics, and then makes deployment decisions based on the classification results.
Therefore, the decision tree algorithm is very suitable for solving the digital twin service DTS deployment problem.
In this embodiment, the experimental data set Q consists of traffic data from a real network, and the decision tree algorithm package encapsulated in the Matlab platform can be called directly during implementation. The process of implementing digital twin service deployment using this package is shown in Table 1:

Table 1 DTS classification deployment algorithm based on decision tree

Input: experimental data set Q, reliability threshold \theta
Output: DTS deployment decision
1. Initialize the decision tree parameters
2. Label the data set and partition the experimental data set Q into a training data set \Lambda and a test data set \Phi
3. Train the decision tree model using the labelled training data set \Lambda
4. Obtain the reliability \omega of the decision tree model using the test data set \Phi
5. If \omega > \theta
6.   Perform step 9
7. Else adjust the parameters of the decision tree model and return to step 3
8. End
9. Enter the attributes of the DTS into the decision tree model
10. The decision tree algorithm outputs the DTS deployment decision
As can be seen from Table 1, the experimental data set Q, after preprocessing, is divided into a training data set \Lambda and a test data set \Phi. The training data set is used to train the decision tree model in step 3, and the test data set is used to test the reliability of the decision tree model in step 4; the model reliability value \omega is obtained based on the output on the test data. In order to ensure the effectiveness of the decision tree algorithm, a reliability threshold \theta is set in the algorithm. If the reliability of the test results is higher than the threshold, the model training is considered successful, the model can be applied to the classified deployment of the digital twin service DTS, and the algorithm enters step 9. If the test reliability is lower than the threshold, the training result of the decision tree model is not ideal, the parameters of the decision tree model need to be adjusted, and the algorithm returns to step 3.
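The train-test-deploy loop of Table 1 can be illustrated with a minimal pure-Python stand-in for the Matlab decision tree package: a one-feature decision stump trained on a hypothetical "popularity" attribute. The feature choice, function names and data are all assumptions for illustration:

```python
def train_stump(samples):
    """Fit a one-feature decision stump as a minimal stand-in for the
    decision tree: pick the popularity threshold that best separates
    edge (1) from cloud (0) labels. Each sample is (popularity, label)."""
    best_thr, best_acc = 0.0, -1.0
    for thr, _ in samples:
        acc = sum((pop >= thr) == bool(lab) for pop, lab in samples) / len(samples)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

def reliability(thr, test_samples):
    """Accuracy on the held-out test set -- the omega checked against
    theta in steps 4-8 of Table 1."""
    return sum((pop >= thr) == bool(lab) for pop, lab in test_samples) / len(test_samples)

def deploy(thr, popularity):
    """Steps 9-10 of Table 1: 1 = deploy the DTS at the edge, 0 = cloud."""
    return 1 if popularity >= thr else 0
```

A real deployment would retrain (adjust parameters and loop back) whenever `reliability` falls below the threshold, exactly as steps 5-8 of Table 1 prescribe.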
Regarding the DDPG-based wireless communication resource allocation algorithm model:
First of all, DDPG (Deep Deterministic Policy Gradient) is a deep deterministic policy gradient algorithm proposed to solve continuous action control problems. DDPG can be regarded as an extended version of DQN; the difference is that DQN finally outputs a vector of action values, while DDPG ultimately outputs a single action. Moreover, DDPG extends DQN to continuous action spaces.
The specific process is shown in Table 2:

Table 2 DDPG algorithm process

Input: DDPG parameters: maximum number of iterations M, soft updating factor \tau, attenuation factor \gamma, target Q-network parameter update frequency, action noise \Delta\mu, number of experience entries extracted per batch I
Output: \pi(s \mid \theta^{\pi})
1. Initialize the experience pool H;
2. Randomly initialize the Actor network parameters (\theta^{\pi}, \theta^{\pi'}) and the Critic network parameters (\theta^{Q}, \theta^{Q'});
3. For episode = 1, 2, ..., M
4.   Initialize the environment s;
5.   Take an action according to the current strategy, with random noise added: a = \pi(s \mid \theta^{\pi}) + \Delta\mu
6.   If the constraints C1-C6 are satisfied Then
7.     Take action a, enter the next state s', and obtain the immediate reward R;
8.     Store the sample e = \{s, a, R, s'\} in the experience pool;
9.     Randomly select I experience entries from the experience pool;
10.    Obtain y = R + \gamma \max Q'(s', a') through the target Critic network and the Bellman equation;
11.    Update the Critic estimation network parameters by minimizing the loss function;
12.    Update the Actor estimation network parameters via the policy gradient;
13.  End If
14.  If the update frequency is reached Then
15.    Update the target network parameters by means of a soft update;
16.  End If
17. End For
18. Return \pi(s \mid \theta^{\pi})
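The soft update of step 15 in Table 2 is a simple convex blend of online and target parameters. The sketch below operates on plain lists of floats as an assumption; a real implementation would apply the same rule tensor-by-tensor to the actor and critic target networks:

```python
def soft_update(target_params, online_params, tau):
    """Step 15 of Table 2: target <- tau * online + (1 - tau) * target.
    With a small tau (0.001 in Table 6) the target network trails the
    online network slowly, which stabilizes training."""
    return [tau * w + (1.0 - tau) * t
            for w, t in zip(online_params, target_params)]
```

With tau near zero the target barely moves per update; with tau = 1 it would copy the online network outright, as in a hard DQN-style update.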
Then, this embodiment names the DDPG-based wireless communication resource allocation algorithm model DDPG Comm. The communication resource allocation process meets all the elements of an MDP. First, the global data used to build the DTN provides massive data for the training of the neural networks. Second, the CC has powerful computing power to ensure rapid convergence of the training. Therefore, the agent executing the DDPG Comm algorithm is the DTN in the CC. The environment for communication resource allocation includes the connection status between each TU and each MES and the CSI of the wireless channel. The state space is set to S_t = \{(l, h) \mid l \in L, h \in H\}. Here, l indicates the connection status between the user and the MES, and L denotes the connection state space, defined as L = \{l_{m,n} \mid m \in M, n \in N, l_{m,n} \in \{0,1\}\}. A value of l_{m,n} = 1 indicates that the user can establish a wireless connection with the MES; conversely, if a connection cannot be created between the MES and the user, l_{m,n} = 0. h represents the channel coefficient of the wireless channel connecting the user and the MES, and H represents the state space of channel coefficients, with h_{m,n} \in [h_{\min}, h_{\max}]. h_{m,n} is a discrete value: h_{\max} indicates the channel coefficient with the largest value, i.e., the link with the most ideal channel conditions, and h_{\min} indicates the channel coefficient with the smallest value, i.e., the communication channel with the worst channel quality.
The CC's behavior is defined as the action of allocating different proportions of communication resources to users, expressed as A_t = \{a_{m,n} \mid m \in M, n \in N\}.
In state s, the system receives an immediate reward for executing an action, and this reward is defined as:

R_t = Y_t - T_t \qquad (17)

In the formula, Y_t takes the value of a constant, and T_t represents the task transmission duration. The larger the value of T_t, the longer the transmission time consumed and the lower the reward obtained by the agent. In order to obtain higher rewards, the agent continuously adjusts the parameters of the neural network to reduce the data transmission delay.
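The reward shaping of Eq. (17) can be made concrete with the transmission times reported later in the simulation (3.91 s for the proposed scheme versus 14.54 s for random allocation); the constant Y_t is an assumed value:

```python
def comm_reward(Y_t, transmit_time):
    """Eq. (17): R = Y_t - T_t, so a shorter transmission earns
    a larger reward for the DDPG Comm agent."""
    return Y_t - transmit_time

# The faster allocation is strictly preferred by the agent.
fast = comm_reward(10.0, 3.91)   # proposed scheme's time (Figure 5)
slow = comm_reward(10.0, 14.54)  # random allocation's time (Figure 5)
```

Because the reward is monotonically decreasing in T_t, maximizing expected reward is equivalent to minimizing expected transmission delay.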
Therefore, the process of the DDPG-based wireless communication resource allocation algorithm model is shown in Table 3. First, DDPG Comm is trained according to the process shown in Table 2. After successful training, the DTN takes the current state s_t as the input of DDPG Comm and obtains the optimal allocation strategy that satisfies the objective function in the current state, x_t = \arg\max_{a_t} Q(s_t, a_t; \theta_t), and then allocates communication resources according to this optimal allocation policy.
Table 3 Communication resource allocation algorithm based on DDPG

Input: state s_t
Output: communication resource allocation scheme x_t
1. Use the algorithm in Table 2 to train the DDPG Comm network model
2. Initialize the collection \Lambda = \emptyset
3. Observe the current state information s_t
4. Store s_t in the collection \Lambda
5. Detect requests for the allocation of communication resources
6. If there is no communication resource allocation request
7.   Return to step 3
8. Else
9.   Take state s_t as the input of DDPG Comm: x_t = \arg\max_{a_t} Q(s_t, a_t; \theta_t)
10. End
11. Allocate communication resources based on x_t
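Step 9 of Table 3 reduces, over a discretized action set, to a greedy argmax against the trained critic. In the sketch below, `q_value` stands in for the trained Q-network and the quadratic toy critic is an assumed shape, not the patent's model:

```python
def best_allocation(q_value, state, candidate_actions):
    """Step 9 of Table 3: x_t = argmax over a of Q(s_t, a).
    q_value is any callable standing in for the trained critic."""
    return max(candidate_actions, key=lambda a: q_value(state, a))

# Toy critic peaking at a 50% bandwidth share (an assumed, made-up shape).
actions = [i / 10 for i in range(11)]
chosen = best_allocation(lambda s, a: -(a - 0.5) ** 2,
                         state=None, candidate_actions=actions)
```

In the actual DDPG setting the actor network outputs the continuous action directly; the argmax form is the discrete-sampling equivalent used here for clarity.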
Regarding the DDPG-based computing resource allocation algorithm model:

The DDPG-based computing resource allocation algorithm model is named DDPG Comp, and its algorithm process is very similar to the communication resource allocation algorithm in Table 3. The agent executing the DDPG Comp algorithm is still the DTN stored in the CC.
The environment for allocating computing resources includes the computing resources required by the tasks and the computing resources provided by the edge computing servers MES. The state space is set to S_c = \{(d, e) \mid d \in D, e \in E\}. Here, d_n represents the CPU cycles required for solving a task, and D denotes the corresponding state space, defined as D = \{d_n \mid n \in N, d_n \in [d_{\min}, d_{\max}]\}, where d_{\min} represents the minimum computational resource requirement and d_{\max} the maximum. e_m represents the computing resources provided by MES_m, and E is the corresponding state space, expressed as E = \{e_m \mid m \in M, e_m \in [e_{\min}, e_{\max}]\}, where e_{\min} represents the minimum amount of computing resources that an MES can provide and e_{\max} the maximum.
The DTN behavior is defined as the action of allocating different proportions of computing resources to users, and the corresponding allocation action can be expressed as A_c = \{a_{m,n} \mid m \in M, n \in N\}.
Upon completion of the computational resource allocation, the agent also receives an immediate reward R_c, defined as the difference between a constant Y_c and the computation duration T_c:

R_c = Y_c - T_c \qquad (18)

It can be seen that the larger the value of T_c, the longer the computing time consumed and the lower the reward the agent obtains. In order to obtain higher rewards, the agent constantly adjusts the parameters of the neural network in DDPG to reduce the computation time.
The DDPG-based computing resource allocation algorithm is shown in Table 4. The DDPG Comp network model is trained using the process shown in Table 2. After training, the current state s_c is taken as the input, the optimal allocation strategy in this state is obtained at the output, x_c = \arg\max_{a_c} Q(s_c, a_c; \theta_c), and computing resources are then allocated according to this strategy.
Table 4 Computing resource allocation algorithm based on DDPG

Input: state s_c
Output: computing resource allocation scheme x_c
1. Train the DDPG Comp network using the algorithm in Table 2
2. Initialize the collection \Phi = \emptyset
3. Observe the current state information s_c
4. Store s_c in the collection \Phi
5. Detect requests for the allocation of computing resources
6. If there is no computing resource allocation request
7.   Return to step 3
8. Else
9.   Take state s_c as the input of DDPG Comp: x_c = \arg\max_{a_c} Q(s_c, a_c; \theta_c)
10. End
11. Allocate computing resources based on x_c
To sum up, the intelligent algorithm based on the decision tree and DDPG described in this embodiment is shown in Figure 3 and Table 5. First, DTN uses the historical operation data and the decision tree algorithm to classify and deploy DTS. After that, when the user requests DTN to assist in task execution, the channel state between the user and MES is taken as the input of DDPG Comm, and the optimal allocation strategy of communication resources is obtained. After that, the system combines the user's demand for computing resources and the computing resources that each MES can provide as the input of DDPG Comp to obtain the optimal allocation strategy of computing resources in the edge system under the current state. Then, the edge node calculates the task and feeds back the result to the user.
Table 5 Intelligent algorithm based on decision tree and DDPG
Input: all inputs in Tables 1, 3 and 4
1. Use the algorithm shown in Table 1 to deploy the DTS by category
2. The user makes a task-solving request
3. Use the DDPG Comm algorithm in Table 3 to allocate communication resources
4. The user offloads the task to the network
5. If the DTS for solving the task is deployed in the cloud
6.   The task is offloaded to the cloud; the cloud solves the task and feeds the result back to the user
7. Else
8.   The DTN uses the DDPG Comp algorithm in Table 4 to allocate computing resources
9.   The edge nodes collaborate to solve the task and feed the result back to the user
10. End
It can be seen that in this embodiment, the specific process of task solving based on the above intelligent algorithm based on decision tree and DDPG is as follows:
Step 3.1: Obtain a classification and deployment strategy for digital twin services using the decision-tree-based digital twin service classification and deployment algorithm model, and classify and deploy the digital twin services according to the classification and deployment strategy, as follows:
S101. Obtain the experimental data set Q and set the reliability threshold \theta;
S102. Initialize decision tree parameters;
S103. Preprocess the experimental data set, and produce the training data set and the test data set;
S104. Training decision tree model with training data set;
S105. Evaluate the reliability \omega of the trained decision tree model using the test data set \Phi;

S106. Judge whether the reliability \omega is not less than the reliability threshold \theta; if so, the trained decision tree model is obtained; otherwise, update the parameters of the decision tree model and return to step S104;
S107. Input the attributes of digital twin service into the trained decision tree model;
S108. The decision tree model outputs the deployment decisions of digital twin services.
Step 3.2: Get the task solving request sent by the user;
Step 3.3: Input the channel status between the user and the edge computing server, use the DDPG based wireless communication resource allocation algorithm model to obtain the optimal allocation strategy of communication resources that meet the objective function in the current state, and allocate communication resources according to the optimal allocation strategy of communication resources. The process is as follows:
S201. Obtain the channel status s_t between the user and the edge computing servers;

S202. Obtain the DDPG Comm network model trained by the DDPG algorithm;

S203. Initialize the collection \Lambda = \emptyset;

S204. Observe the current state information s_t;

S205. Store s_t in the collection \Lambda;

S206. Detect a communication resource allocation request. If there is none, return to step S203. If there is, enter the state s_t into the DDPG Comm network model to obtain the optimal allocation strategy x_t under the current state s_t;

S207. Allocate communication resources according to the optimal allocation strategy x_t.
Step 3.4: the user offloads the task to a digital twin network system model based on edge-cloud collaboration.
Step 3.5: First, the digital twin network system model based on edge-cloud collaboration judges, according to the task, where the digital twin service needed to solve it is deployed. If it is deployed in the cloud, the task is offloaded to the cloud, and the solution result is fed back to the user after the task is solved. If it is deployed at an edge node, then, according to the user's demand for computing resources and the computing resources that each edge computing server in the edge node can provide, the DDPG-based computing resource allocation algorithm model is used to obtain the optimal allocation strategy of the computing resources of the edge nodes that satisfies the objective function in the current state, and computing resources are allocated according to this optimal allocation strategy. The specific process is as follows:
S301. Obtain the user's demand for computing resources and the status s_c of the computing resources that each edge computing server in the current edge layer can provide;

S302. Obtain the DDPG Comp network model trained by the DDPG algorithm;

S303. Initialize the collection \Phi = \emptyset;

S304. Observe the current state information s_c;

S305. Store s_c in the collection \Phi;

S306. Detect a computing resource allocation request. If there is none, return to step S303. If there is, enter the state s_c into the DDPG Comp network model to obtain the optimal allocation strategy x_c under the current state s_c;

S307. Allocate computing resources according to the optimal allocation strategy x_c.
Step 3.6: The edge nodes collaborate to solve the task and feed back the results to the user.
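Steps 3.4-3.6 amount to a deployment-aware dispatch: a cloud-deployed DTS sends the task to the CC, while an edge-deployed DTS first triggers edge compute allocation (DDPG Comp) and then solves locally. The callables below are placeholders, not APIs from the embodiment:

```python
def handle_request(dts_at_edge, solve_on_cloud, solve_on_edge,
                   allocate_edge_compute):
    """Control-flow sketch of steps 3.4-3.6: route the task according to
    where the required DTS is deployed; edge solving is preceded by a
    computing resource allocation step."""
    if not dts_at_edge:
        # Cloud-deployed DTS: offload to the CC and return its result.
        return solve_on_cloud()
    # Edge-deployed DTS: allocate edge compute first, then solve locally.
    allocate_edge_compute()
    return solve_on_edge()
```

Note that the allocation step runs only on the edge path, matching Table 5, where DDPG Comp is invoked inside the Else branch.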
Finally, this embodiment also uses the Moore dataset, which is widely used for network traffic identification and classification, to test the effectiveness of the proposed algorithm. The Moore dataset records many kinds of data generated during network operation, such as the number of bits and the arrival time of data packets in Ethernet and the Internet. In the simulation, the transmission time, task arrival time and bit size are selected as the key characteristics (No. 6, 17, 27). Based on these data, the popularity of each service is further generated. The network models of DDPG Comm and DDPG Comp are completely consistent, and the main parameter settings used in the simulation are shown in Table 6.
Table 6 Simulation parameter settings

Parameter                                    Value
TU transmission power                        0.2 W
Number of TUs                                10
Number of MES                                3
Reliability threshold                        0.99
Total system bandwidth                       100 MHz
CPU cycles of MES                            [1, 9] \times 10^9 Hz
Data compression rate                        0.001
Number of layers in each neural network      4
Number of neurons in the actor network       40
Number of neurons in the critic network      30
Attenuation factor                           0.99
Behavioral noise                             0.0001
Capacity of the experience pool              6000
Number of experience entries per batch       100
Target network update interval               100
Soft update factor                           0.001
Analysis of simulation results.
See Figure 4, which shows the backbone traffic overhead caused by different DTS deployment schemes. The comparison schemes are DTS random deployment, the support vector machine, logistic regression, the feedforward neural network and the nearest neighbor algorithm. As can be seen from Figure 4, compared with the other deployment decisions, the decision tree algorithm proposed in this embodiment can effectively save backbone network traffic overhead. Taking a DTS quantity equal to 60 as an example, the network traffic consumed by the scheme proposed in this embodiment is 45.05MB. In contrast, the traffic overhead values of the nearest neighbor algorithm, the feedforward neural network, logistic regression, the support vector machine and random deployment are 46.22MB, 48.23MB, 49.03MB, 49.52MB, 52.47MB and 91.47MB respectively. Compared with the other schemes, the scheme proposed in this embodiment can significantly reduce the burden of the backbone network.
Then the transmission time consumed by different communication resource allocation schemes is compared, as shown in Figure 5. The comparison schemes are the feedforward neural network, multivariate linear fitting, a fairness-first communication resource allocation scheme and a random communication resource allocation scheme. The abscissa of Figure 5 represents the total bandwidth resources provided by the system. It can be seen from Figure 5 that the DDRAS proposed in this embodiment is superior to the other four schemes in saving transmission time and overhead. Taking a bandwidth of 50 MHz as an example, the transmission time consumed by the scheme proposed in this embodiment is 3.91s, while the times consumed by the other schemes are 4.02s, 4.06s, 4.71s and 14.54s respectively.
The reason for this phenomenon is that the fairness-first resource allocation scheme allocates fewer resources to the TUs requiring more communication resources in order to maintain fairness between nodes, which leads to overall performance degradation; its performance is even lower than that of the random allocation algorithm. Although the random allocation algorithm is better than the fairness-first resource allocation scheme, due to the complexity and time variability of the network environment, the random algorithm shows poor adaptability. Compared with the fairness-first allocation scheme and the random allocation scheme, multivariate linear fitting performs better in reducing transmission time, because it can construct a reliable linear function through repeated training. However, because the relationship between the CSI, interference, noise, etc. and the action is nonlinear, the multivariate linear fitting scheme can only reduce the transmission time to a certain extent. The performance of the feedforward neural network in reducing transmission time is slightly better than that of multivariate linear fitting, mainly because the feedforward neural network uses a loss function to update its parameters, which can better fit the relationship between action and state.
Although the performance of the feedforward neural network is better than that of the other three schemes, it is still not as effective as the scheme proposed in this embodiment in reducing the transmission time. The communication resource allocation scheme based on DDPG combines the advantages of DL and RL: after multiple rounds of training, the agent can make allocation decisions dynamically according to changes in the current environment, so its improvement in transmission time is better than that of the other comparison schemes.
The same conclusion can be drawn from Figure 6. Unlike Figure 5, the abscissa in Figure 6 represents the bandwidth required by the TU. Compared with the other comparison schemes, DDRAS continues to show better performance in saving transmission time. Taking a 50MB task as an example, the communication time required by fairness-first, random allocation, multivariate linear fitting and the feedforward neural network is 33.81s, 11.32s, 10.19s and 10.12s respectively. Compared with the 9.87s consumed by DDRAS, the other communication resource allocation schemes require 23.94s, 1.45s, 0.32s and 0.25s more time to transmit the same number of tasks.
Thereafter, the effectiveness of DDRAS in saving computing time is shown in Figures 7 and 8. The diversity of TU services determines their different requirements for computing resources. In addition, differences in the current busy levels of the MES lead to differences in the computing resources they can provide. Therefore, a fairness-based resource allocation scheme that cannot be dynamically adjusted according to real-time changes in the environment shows poor performance. In addition, it is worth noting that, compared with communication resource allocation, multivariate linear fitting shows a better performance gain in saving computing time. The main reason is that there is a linear relationship between the various factors that affect the computing time, while the relationship between the state and the action in the communication resource allocation scenario is nonlinear. The abscissa in
Figure 7 represents the total computing resources provided by the MEC system. If the computing resources required by the TU remain unchanged, the computing time consumed will decline with the growth of the total computing resources available, regardless of which computing resource allocation method is used. It can also be seen from Figure 7 that even though the computing time consumed by all schemes is decreasing, the scheme proposed in this embodiment still shows better performance.
Figure 8 shows the relationship between the calculation time consumed by the solution task and the computing resources required by the TU. Taking a TU CPU demand of 5 \times 10^9 cycles/s as an example, the calculation time required by the scheme proposed in this embodiment is only 5.47s, which is far lower than the 6.06s, 6.97s, 7.41s and 18.71s of multivariate linear fitting, the random allocation strategy, the feedforward neural network and fairness priority, respectively.
Finally, various joint DTS deployment and resource allocation schemes are compared. Since the values of the backbone network traffic overhead and the time differ greatly in magnitude, the two groups of data are normalized before comparison.
In addition, the weight factor α is an important factor that affects the value of the cost function; in the simulation, the value of α is set to 0.5. Each strategy in Figure 9 includes joint optimization of DTS deployment and resource allocation. For example, the feedforward neural network strategy means that the feedforward neural network is used not only to deploy DTS but also to allocate communication resources and computing resources. As can be seen from Figure 9, DDRAS performs better than the other comparison schemes in reducing the cost function value.
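The normalization-then-weighting step described above can be sketched as follows (the solving times reuse figures quoted earlier; the traffic values and the min-max normalization choice are assumptions, since the embodiment does not state which normalization it uses):

```python
# Sketch: min-max normalize two metrics of different magnitude, then combine
# them with the weight alpha = 0.5 as in the cost function of the embodiment.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def cost(alpha, traffic_norm, time_norm):
    return [alpha * b + (1 - alpha) * t for b, t in zip(traffic_norm, time_norm)]

traffic = [120.0, 300.0, 480.0]   # hypothetical backbone traffic per scheme (MB)
times   = [5.47, 6.97, 18.71]     # solving times per scheme (s)

u = cost(0.5, min_max_normalize(traffic), min_max_normalize(times))
print([round(v, 3) for v in u])
```

Without normalization, the traffic term would dominate the sum purely because of its units; after normalization both terms contribute on the same [0, 1] scale, so α genuinely controls the trade-off.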
In conclusion, in order to effectively reduce the backbone network traffic overhead and network response time caused by deploying and calling DTS, this embodiment proposes a DTS deployment and resource allocation method based on edge-cloud collaboration. Considering the difference between DTS deployed in the cloud and DTS deployed at the edge, a DTN system model based on edge-cloud collaboration is first established, which supports DTS deployment in the edge-cloud collaboration mode. Then, the traffic overhead caused by constructing the DT space, deploying DTS, and updating DTS in the system model is modeled mathematically. Next, combining DTS deployment and invocation with the time cost of solving user tasks at the edge and in the cloud, a cost function characterized by the backbone network traffic overhead and time consumption is constructed to measure system performance, and an objective function is constructed to minimize this cost function. Finally, to solve the objective function, an intelligent algorithm integrating a decision tree and DDPG is proposed: the decision tree algorithm realizes the intelligent deployment of DTS, and DDPG solves the allocation of communication resources and computing resources. This effectively solves the problem of low resource utilization by deploying DTS differentially in the cloud and at the edge, thereby reducing the backbone network traffic overhead and network response time.
It should be understood that although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, the steps are not necessarily performed in the order indicated by the arrows. Unless expressly stated herein, the execution of these steps is not strictly limited in order, and these steps may be executed in other orders.
Moreover, at least some of the steps in each embodiment may include a plurality of sub-steps or phases. These sub-steps or phases are not necessarily executed at the same time, but may be executed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or phases of other steps.
The technical solutions provided by the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention; the above embodiments are only intended to help understand the method of the present invention and its core ideas. It should be noted that those of ordinary skill in the art may make a number of improvements and modifications to the invention without departing from its principles, and these improvements and modifications also fall within the scope of protection of the claims of the invention.

Claims (10)

Claims
1. A DTS deployment and resource allocation method based on edge-cloud collaboration, characterized by comprising the following steps: Step 1: obtaining historical operation data of the physical communication network, constructing a digital twin network system model based on edge-cloud collaboration of the physical communication network in the cloud computing center, and generating various types of digital twin services; Step 2: establishing an objective function for minimizing traffic overhead and network latency based on the service deployment decision and resource allocation decision of the said digital twin network system model based on edge-cloud collaboration; and Step 3: using the intelligent algorithm based on decision trees and DDPG to obtain a classified deployment strategy of the digital twin services that satisfies the objective function and performing classified deployment of the digital twin services; then, upon receiving a task solution request sent by a user, obtaining the optimal allocation strategy of communication resources and computing resources that satisfies the objective function under the current state, solving the task after allocating the communication resources and computing resources, and feeding back the solution result to the user.
2. The DTS deployment and resource allocation method based on edge cloud cooperation according to claim 1, which is characterized in that the physical communication network is composed of a user layer, an edge layer, a backbone network and a cloud layer, wherein the edge layer is composed of a wireless access point and an edge computing server installed at the same location and connected through optical fibers.
3. The DTS deployment and resource allocation method based on edge cloud collaboration according to claim 1, which is characterized in that the service deployment decision includes a digital twin service deployment decision and a task solving decision, and the resource allocation decision includes a wireless communication resource allocation decision and a collaboration system computing resource allocation decision.
4. The DTS deployment and resource allocation method based on edge cloud collaboration according to claim 3, which is characterized in that the construction process of the objective function described in step 2 is as follows: Step 2.1: analyzing the service deployment of the said edge-cloud-collaboration-based digital twin network system model to obtain the digital twin service deployment decision and the task solving decision; Step 2.2: modeling the backbone network traffic overhead based on the digital twin service deployment decision; Step 2.3: analyzing the resource allocation of the said digital twin network system model based on edge-cloud collaboration to obtain the wireless communication resource allocation decision and the collaboration system computational resource allocation decision; Step 2.4: establishing a model of the time consumed for task solving based on the task solving decision, the wireless communication resource allocation decision and the collaboration system computational resource allocation decision; Step 2.5: composing a cost function based on the said backbone network traffic overhead model and the task solving time consumption model; and Step 2.6: constructing the said objective function based on the cost function.
5. The DTS deployment and resource allocation method based on edge cloud cooperation according to claim 4, which is characterized in that the expression of the backbone network traffic overhead model is:

$$B_{m,n}^{t} = F_{m,n} + (1 - a_s)\left(G_s + P_{m,n} + Q_{m,n}\right) + a_s H_s$$

the expression of the task solving time consumption model is:

$$T_{m,n} = (1 - b_{m,n})\,T_{m,n}^{c} + b_{m,n}\,T_{m,n}^{e}$$

the expression of the said cost function is:

$$U_{m,n} = \alpha B_{m,n}^{t} + (1 - \alpha)\,T_{m,n}$$

and the expression of the said objective function is:

$$\min_{a,b,r,c}\ \frac{1}{MN}\sum_{m\in M}\sum_{n\in N} U_{m,n}$$

wherein $B_{m,n}^{t}$ is the backbone network traffic overhead; $F_{m,n}$ is the data sent by user terminal $u_{m,n}$ to the cloud computing center for constructing the digital twin network system model based on edge-cloud collaboration; $a_s$ is a binary number representing the deployment decision of digital twin service $S_s$; $G_s$ is the update data for updating service $S_s$; $P_{m,n}$ is the raw task data sent by user terminal $u_{m,n}$ to the computing center; $Q_{m,n}$ is the data of the solution result fed back to the user terminal after the task is solved; $H_s$ is the raw data required for generating service $S_s$ in the cloud computing center; $b_{m,n}$ is the task solving decision; $T_{m,n}^{c}$ is the total time consumed by the solving task in the cloud solving mode; $T_{m,n}^{e}$ is the total time consumed by the solving task in the edge solving mode; $\alpha$ is the weight factor, satisfying $0 \le \alpha \le 1$; $M$ is the set of wireless access points and edge computing servers in the physical communication network; $N$ is the set of users, with $m \in M$, $n \in N$; $a$ is the service deployment decision configuration document, $b$ is the task solving decision document, $r$ is the communication resource allocation decision document, and $c$ is the computing resource allocation decision document.
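A toy numerical evaluation of the cost and objective functions above can be sketched as follows (every numeric value is hypothetical, and the per-user forms of the traffic and time models are simplified readings of the claim):

```python
# Toy evaluation of the claimed cost function: per user pair,
# U = alpha * B + (1 - alpha) * T, where T is selected by the task-solving
# decision b between cloud time Tc and edge time Te, and the objective
# averages U over all pairs.

def traffic(F, a_s, G, P, Q, H):
    # a_s = 1: the service is generated in the cloud from raw data H;
    # a_s = 0: updates G plus task data P and result Q cross the backbone.
    return F + (1 - a_s) * (G + P + Q) + a_s * H

def solve_time(b, Tc, Te):
    return (1 - b) * Tc + b * Te

def cost(alpha, B, T):
    return alpha * B + (1 - alpha) * T

pairs = [  # (F, a_s, G, P, Q, H, b, Tc, Te) -- hypothetical values
    (10.0, 1, 4.0, 2.0, 1.0, 8.0, 1, 9.0, 3.0),
    (12.0, 0, 4.0, 2.0, 1.0, 8.0, 0, 7.0, 5.0),
]
objective = sum(cost(0.5, traffic(F, a, G, P, Q, H), solve_time(b, Tc, Te))
                for F, a, G, P, Q, H, b, Tc, Te in pairs) / len(pairs)
print(round(objective, 2))
```

The deployment document `a`, solving document `b`, and resource documents `r`, `c` enter only through the binary indicators here; the claimed method searches over them to minimize this average.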
6. The DTS deployment and resource allocation method based on edge cloud cooperation according to claim 1, which is characterized in that the intelligent algorithm based on decision tree and DDPG includes a digital twin service classified deployment algorithm model based on decision tree, a wireless communication resource allocation algorithm model based on DDPG, and a computing resource allocation algorithm model based on DDPG.
7. The DTS deployment and resource allocation method based on edge cloud collaboration according to claim 6, which is characterized in that the process of solving the task based on the intelligent algorithm of decision tree and DDPG in step 3 is as follows: Step 3.1: obtaining the classified deployment strategy of the digital twin services by using the decision-tree-based digital twin service classified deployment algorithm model, and performing classified deployment of the digital twin services according to the classified deployment strategy; Step 3.2: receiving the task solving request sent by the user; Step 3.3: inputting the channel state between the user and the edge computing server, using the DDPG-based wireless communication resource allocation algorithm model to obtain the optimal allocation strategy of communication resources that satisfies the objective function under the current state, and allocating communication resources according to the optimal allocation strategy of communication resources; Step 3.4: the user offloads the task to the digital twin network system model based on edge-cloud collaboration; Step 3.5: the digital twin network system model based on edge-cloud collaboration judges, according to the task, where the digital twin service that solves the task is deployed; if it is deployed in the cloud, the task is offloaded to the cloud and the solution result is fed back to the user after the task is solved; if it is deployed at an edge node, based on the user's demand for computing resources and the computing resources that each edge computing server in the edge node can provide, the DDPG-based computing resource allocation algorithm model is used to obtain the optimal allocation strategy of edge node computing resources that satisfies the objective function under the current state, and computing resources are allocated according to the optimal allocation strategy of computing resources; Step 3.6: the edge nodes collaborate to solve the task and feed back the result to the user.
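The cloud/edge dispatch of steps 3.4 and 3.5 can be sketched as follows (a minimal stand-in: the interfaces, the lambda stubs, and the demand/capacity numbers are all assumptions, and the DDPG allocator is replaced by a placeholder callable):

```python
# Minimal dispatch sketch of the claimed solving flow: tasks whose DTS is
# deployed in the cloud are offloaded there; tasks served at the edge first
# receive a computing-resource allocation, then are solved at the edge.

def solve_task(task, deployment, allocate_edge_resources, solve_cloud, solve_edge):
    """deployment maps a service id to 'cloud' or 'edge'."""
    if deployment[task["service"]] == "cloud":
        return solve_cloud(task)
    resources = allocate_edge_resources(task)   # DDPG-based in the claimed method
    return solve_edge(task, resources)

deployment = {"dts-a": "cloud", "dts-b": "edge"}   # hypothetical deployment map
result = solve_task(
    {"service": "dts-b", "demand": 2e9},
    deployment,
    allocate_edge_resources=lambda t: {"cycles_per_s": 1e9},
    solve_cloud=lambda t: "cloud-result",
    solve_edge=lambda t, r: f"edge-result@{t['demand'] / r['cycles_per_s']:.1f}s",
)
print(result)
```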
8. The DTS deployment and resource allocation method based on edge cloud collaboration according to claim 7, which is characterized in that the specific process of obtaining the classified deployment strategy of the digital twin services by using the decision-tree-based digital twin service classified deployment algorithm model in step 3.1 is as follows: S101: obtaining the experimental data set and setting the reliability threshold ω₀; S102: initializing the decision tree parameters; S103: preprocessing the experimental data set, and producing a training data set and a test data set; S104: training the decision tree model with the training data set; S105: evaluating the reliability ω of the trained decision tree model using the test data set; S106: judging whether the reliability ω is not less than the reliability threshold ω₀; if so, obtaining the trained decision tree model; otherwise, updating the parameters of the decision tree model and returning to step S104; S107: inputting the attributes of the digital twin services into the trained decision tree model; S108: the decision tree model outputs the deployment decisions of the digital twin services.
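The retrain-until-reliable loop S101–S108 can be sketched as follows (a toy stand-in: a one-node stump replaces a real decision tree, and the request-rate feature, threshold values, and test data are all hypothetical):

```python
# Toy sketch of the S101-S108 loop: a parameterized stump stands in for the
# decision tree; it is re-built with an updated parameter until its
# reliability on the test set reaches the threshold omega.

def stump(threshold):
    """Toy one-node 'decision tree': deploy at the edge above the threshold."""
    return lambda x: "edge" if x > threshold else "cloud"

def reliability(model, test_set):
    hits = sum(model(x) == y for x, y in test_set)
    return hits / len(test_set)

omega = 1.0                               # S101: reliability threshold
threshold = 0.95                          # S102: initial model parameter
test = [(0.7, "edge"), (0.3, "cloud")]    # S103: test split (hypothetical data)

model = stump(threshold)                  # S104: build/"train" the model
while reliability(model, test) < omega:   # S105-S106: evaluate, check threshold
    threshold -= 0.1                      # update the parameter and retrain
    model = stump(threshold)

print(model(0.7), model(0.3))             # S107-S108: output deployment decisions
```

A real implementation would update genuine tree hyperparameters (depth, splits) rather than a single scalar, but the control flow, evaluate, compare to ω₀, retrain, is the same.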
9. The DTS deployment and resource allocation method based on edge cloud cooperation according to claim 7, which is characterized in that the process of obtaining the optimal allocation strategy of communication resources under the current state by using the DDPG-based wireless communication resource allocation algorithm model and allocating communication resources in step 3.3 is as follows: S201: obtaining the channel state between the user and the edge computing servers; S202: obtaining the DDPG_comm network model trained by the DDPG algorithm; S203: initializing the collection A = ∅; S204: observing the current state information s_t; S205: storing s_t in the collection A; S206: detecting whether there is a communication resource allocation request; if not, returning to step S203; if so, inputting the state s_t into the DDPG_comm network model to obtain the optimal allocation strategy under the current state s_t; S207: allocating communication resources according to the optimal allocation strategy.
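The inference loop S201–S207 can be sketched as follows (the trained DDPG_comm actor is replaced by a stand-in policy, proportional bandwidth shares, and the channel-gain values are hypothetical):

```python
# Sketch of the S201-S207 loop: observed channel states are buffered until an
# allocation request arrives; the latest state is then fed to the policy.

def ddpg_comm_policy(state):
    """Stand-in for the trained actor: shares proportional to channel gain."""
    total = sum(state)
    return [g / total for g in state]

history = []                                  # S203: A = empty collection
observations = [([0.2, 0.8], False),          # (channel gains, request flag)
                ([0.5, 0.5], False),
                ([0.3, 0.7], True)]
for state, request in observations:
    history.append(state)                     # S204-S205: observe and store s_t
    if request:                               # S206: allocation request received
        shares = ddpg_comm_policy(history[-1])
        print([round(s, 3) for s in shares])  # S207: allocate per the strategy
```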
10. The DTS deployment and resource allocation method based on edge cloud collaboration according to claim 7, which is characterized in that in step 3.5, the process of obtaining the optimal allocation strategy of edge layer computing resources under the current state by using the DDPG-based computing resource allocation algorithm model and allocating computing resources is as follows: S301: obtaining the state s_t consisting of the user's demand for computing resources and the computing resources that each edge computing server in the current edge layer can provide; S302: obtaining the DDPG_comp network model trained by the DDPG algorithm; S303: initializing the collection A = ∅; S304: observing the current state information s_t; S305: storing s_t in the collection A; S306: checking whether there is a computing resource allocation request; if not, returning to step S303; if so, inputting the state s_t into the DDPG_comp network model to obtain the optimal allocation strategy π_t under the current state s_t; S307: allocating computing resources according to the optimal allocation strategy π_t.
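A companion sketch for the computing-resource side (the trained DDPG_comp actor is replaced by a greedy stand-in policy; the demand and per-server capacities are hypothetical):

```python
# Sketch of S301/S306-S307: the state combines the user's computing demand
# with each edge server's free capacity; the stand-in policy fills servers
# greedily until the demand is met.

def ddpg_comp_policy(demand, free_capacity):
    """Stand-in policy: greedy fill; returns cycles/s drawn from each server."""
    allocation = []
    remaining = demand
    for cap in free_capacity:
        take = min(cap, remaining)
        allocation.append(take)
        remaining -= take
    return allocation

state = {"demand": 3e9, "free": [1e9, 2.5e9, 4e9]}       # S301 (hypothetical)
plan = ddpg_comp_policy(state["demand"], state["free"])  # S306: policy inference
print(plan)                                              # S307: allocate accordingly
```

A trained DDPG actor would instead output a continuous allocation vector learned against the claimed objective function; the greedy fill only illustrates the interface.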
LU508342A 2024-09-24 2024-09-24 DTS deployment and resource allocation method based on edge-cloud collaboration LU508342B1 (en)

Publications (1)

Publication Number Publication Date
LU508342B1 true LU508342B1 (en) 2025-03-25

Family

ID=95123857



Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20250325