WO2022021176A1 - Cloud-edge collaborative network resource smooth migration and restructuring method and system - Google Patents


Info

Publication number
WO2022021176A1
WO2022021176A1 (PCT/CN2020/105691)
Authority
WO
WIPO (PCT)
Prior art keywords: cloud, edge, delay, user, path
Application number: PCT/CN2020/105691
Other languages: French (fr), Chinese (zh)
Inventors: 陈伯文, 刘玲, 沈纲祥, 高明义, 向练, 陈虹
Original Assignee: 苏州大学 (Soochow University)
Application filed by 苏州大学
Publication of WO2022021176A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/101 Server selection for load balancing based on network conditions
    • H04L67/1012 Server selection for load balancing based on compliance of requirements or conditions with available server resources
    • H04L67/1029 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer

Definitions

  • the invention relates to the technical field of cloud computing and edge computing networks, in particular to a method and system for smooth migration and reconstruction of cloud-edge collaborative network resources.
  • MEC: Mobile Edge Computing
  • the MEC system allows devices to offload computing tasks to network edge nodes, such as base stations, wireless access points, etc., which not only satisfies the expansion requirements of terminal devices' computing capabilities, but also makes up for the long delay of cloud computing.
  • MEC technology helps to achieve key technical indicators such as ultra-low latency, ultra-high energy efficiency, and ultra-high reliability for 5G services.
  • both of these allocation methods (delay-first and resource-first) are resource allocation methods designed for only a single objective.
  • the delay-first allocation method can effectively reduce the data transmission delay, but it cannot improve the computing capability of nodes for processing services, so the data processing delay still needs to be optimized; the resource-first allocation method can increase the speed at which nodes process services, but the transmission delay is likely to increase significantly.
  • the technical problem to be solved by the present invention is to overcome the inability of the prior art to simultaneously achieve low data transmission delay and high node computing capability for processing services, so as to provide a cloud-edge collaborative network resource smooth migration and reconstruction method and system that ensure low data transmission delay while ensuring high node computing capability for processing services.
  • a method for smooth migration and reconstruction of cloud-edge collaborative network resources of the present invention includes: reading cloud-edge collaborative network topology information and initializing the parameters of the cloud-edge collaborative network; generating a set of connection requests according to user requests; and processing any connection request in the connection request set until all connection requests have been processed. The method for processing any connection request is: first determine whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing; if not, continue to determine whether a server in an adjacent region has enough computing resources to process the user request; if so, migrate the user request to an edge computing server of another region on the same switch as the local region.
  • in the cloud-edge collaborative network, the topology information of the network, the network connection status, the number of user requests, the number of edge computing servers, and the numbers of base stations and switches are configured.
  • the method for calculating the sum of the path transmission delay and data processing delay of multiple candidate paths from the user request to the edge computing server is: calculate the path transmission delay and the data processing delay of the multiple paths respectively, and then calculate the sum of the path transmission delay and the data processing delay.
  • the method for allocating spectrum resources to the working path is: assigning spectrum resources to the working path according to the constraints of spectrum consistency and spectrum continuity.
  • the computing resources of the edge computing server processing user requests are updated in real time, the number of successfully established connection requests is recorded, and the cloud-edge collaboration network status is updated.
  • the spectrum resources occupied by the working path are released; at the same time, the computing resources of the edge computing server processing the user request are released; finally, the information of the working path established by the connection request is cleared.
  • the method for calculating the average path transmission delay and the average data processing delay of user requests is: calculate the average path transmission delay aveTRD of user requests according to the formula aveTRD = ΣTRD / sucConReq, where sucConReq represents the number of successfully established connection requests; calculate the average data processing delay aveCOD of user requests according to the formula aveCOD = ΣCOD / sucConReq.
  • after step S5 is completed, the state of the cloud-edge collaborative network continues to be monitored.
  • the present invention also provides a system for smooth migration and reconstruction of cloud-edge collaborative network resources, comprising: a cloud-edge collaborative network initialization module for reading cloud-edge collaborative network topology information and initializing the parameters of the cloud-edge collaborative network; a connection request generation module for generating a set of connection requests according to user requests; and a connection request processing module for processing any connection request in the connection request set until all connection requests have been processed;
  • the method for processing any connection request is as follows: first determine whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing; if not, continue to determine whether a server in an adjacent region has enough computing resources to process the user request; if so, migrate the user request to an edge computing server of another region on the same switch as the local region; if not, send the user request through the switch to the cloud server for data processing; separately calculate the sum of the path transmission delay and the data processing delay of multiple candidate paths from the user request to the edge computing server, and take the path with the lowest delay as the working path.
  • the working path is set according to the sensitivity of the service to the delay, and the working path with low delay sensitivity is selected for transmission and processing, which is conducive to reducing the delay of the service.
  • Fig. 1 is a flow chart of the cloud-edge collaborative network resource smooth migration and reconstruction method of the present invention;
  • Fig. 2 is a network diagram of cloud-edge collaborative network resource migration and reconstruction of the present invention;
  • FIG. 3 is a schematic diagram of the cloud-edge collaborative network resource smooth migration and reconstruction system of the present invention.
  • this embodiment provides a method for smooth migration and reconstruction of cloud-edge collaborative network resources, including: step S1: read the cloud-edge collaborative network topology information and initialize the parameters of the cloud-edge collaborative network; step S2: generate a set of connection requests according to user requests; step S3: process any connection request in the connection request set until all connection requests have been processed, where the method for processing any connection request is: first determine whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing; if not, continue to determine whether a server in an adjacent region has enough computing resources to process the user request; if so, migrate the user request to an edge computing server of another region on the same switch as the local region; if not, send the user request through the switch to the cloud server for data processing; separately calculate the sum of the path transmission delay and the data processing delay of multiple candidate paths from the user request to the edge computing server, and take the path with the lowest delay as the working path;
  • step S1 the cloud-edge collaborative network topology information is read, and the parameters of the cloud-edge collaborative network are initialized, which is beneficial to the operation of services;
  • step S2 a set of connection requests is generated according to the user request, which is conducive to processing the connection request; in the step S3, for each connection request, first determine whether the local area requested by the user has sufficient computing resources, If yes, the user request is sent to the edge computing server in the local area for processing.
  • the network transmission delay can be ignored, and the end-to-end processing delay only needs to account for the computing delay; if not, continue to determine whether a server in the adjacent region has enough computing resources to process the user request, and if so, migrate the user request to an edge computing server of another region on the same switch as the local region.
  • the end-to-end delay then includes the network transmission delay and the computing resource delay; if not, the user request is sent through the switch to the cloud server for data processing.
  • the end-to-end delay of the user request includes the network transmission delay and computing resource delay in the edge computing area and cloud computing area.
  • the sum of the path transmission delay and the data processing delay of multiple candidate paths from the user request to the edge computing server is calculated separately, and the path with the lowest delay is taken as the working path. Since a working path with low delay sensitivity is selected for transmission and processing, this helps reduce the path transmission delay and the data processing delay of the service;
  • the edge computing server processing the request updates its computing resources in real time, which facilitates recording the number of successfully established connection requests; in step S4, the average path transmission delay and the average data processing delay of user requests are calculated. The present invention thus facilitates the reasonable planning of resource scheduling, migration and reconstruction across the edge computing and cloud computing regions, effectively balances the relationship between network resources and service delay, and reduces the delay as much as possible while allocating resources reasonably, so as to minimize the service delay while ensuring that nodes have a high computing capability for processing services, thereby improving the service quality of the network.
  • in the cloud-edge collaborative network G(U, B, J, S), the topology information of the network, the network connection status, the number of user requests, the number of edge computing servers, and the numbers of base stations and switches are configured.
  • step S2 when a set of connection requests is generated according to the user request, information such as the number of connection requests, the number of spectrum gaps required by different connection requests, and computing resources are configured.
  • in step S3, for any connection request, it is determined whether the local server of the user request has the computing resources required by the user request; if the computing resources of the local server are sufficient, the user request is processed locally. If the computing resources of the local server are insufficient, it is considered whether an edge computing server in another region outside the local region has the computing resources required by the user request; if a server in another region has sufficient computing resources, the user request is migrated through the switch to that region for processing. If the computing resources of neither the local region nor the other regions satisfy the computing resources required by the user request, the user request is migrated through the switch to the cloud server for processing.
  • the above-mentioned hierarchical deployment helps reduce the path transmission delay and the data processing delay of the service.
  • the method for calculating the sum of the path transmission delay and data processing delay of multiple candidate paths from the user request to the edge computing server is as follows: calculate the path transmission delay and the data processing delay of the multiple paths respectively, and then calculate the sum of the path transmission delay and the data processing delay, which helps reduce the path transmission delay and the data processing delay of the service.
  • the K shortest path algorithm is used to calculate the K candidate paths from the user request to the server, so as to find the optimal path as the working path.
  • the K shortest path algorithm is used to calculate the working path from the user request to the edge computing server.
  • the K shortest path algorithm calculates K candidate paths and arranges them in ascending order of distance, that is, the smaller the path distance, the higher the priority.
  • when a path with a higher priority is blocked on a link, paths with lower priorities are selected in turn for spectrum resource allocation, until the allocation succeeds or all paths are blocked.
  • the method for allocating spectrum resources to the working path is: assigning spectrum resources to the working path according to the constraints of spectrum consistency and spectrum continuity.
  • according to the number of spectrum slots required by the connection request, the selected working path is searched for the bandwidth resources required to satisfy the connection request. If the dual constraints of spectrum continuity and spectrum consistency are both satisfied, the connection request is successfully established; if the dual constraints cannot be satisfied at the same time, the establishment of the connection request fails.
  • after the connection request CR(u, f, r) successfully establishes the working path, a first-fit spectrum allocation algorithm is used: a spectrum resource table is generated and numbered according to the spectrum resource status of all links on the path, and available spectrum slots are searched starting from the low-numbered end. If an available spectrum slot is found, the spectrum resources are allocated and the spectrum status is updated; if none is found, the spectrum allocation fails and the service is blocked.
  • the spectrum resources occupied by the working path are released; at the same time, the computing resources of the edge computing server processing user requests are released; finally, the information of the working path established by the connection request is cleared.
  • the method for calculating the average path transmission delay and the average data processing delay of user requests is: calculate the average path transmission delay aveTRD of user requests according to the formula aveTRD = ΣTRD / sucConReq, where sucConReq represents the number of successfully established connection requests; calculate the average data processing delay aveCOD of user requests according to the formula aveCOD = ΣCOD / sucConReq.
  • after each connection request is successfully established, the length of the shortest path used to transmit it is recorded and its path transmission delay is calculated, and aveTRD is used to obtain the average path transmission delay of the group of connection requests; after all connection requests are processed, the accumulated computing resources requested by each connection request and the initial computing resources of the edge computing servers are recorded, and aveCOD is used to calculate the average data processing delay of the group of connection requests.
  • after step S4, the state of the cloud-edge collaborative network continues to be monitored. Specifically, state monitoring mainly covers cloud-edge collaborative network initialization, connection request generation, service priority selection, edge computing server selection, working path establishment, spectrum resource allocation, computing resource updating, resource release, network transmission delay calculation and data processing delay calculation, so as to reduce the service delay as much as possible when computing resources are allocated.
  • the method also includes judgment and early-warning steps. Specifically, coordination among the modules is performed, and whether each module is successfully established is judged and warned about, so as to achieve the goal of reducing the service delay in mobile edge computing.
  • the business with higher delay sensitivity is preferentially processed according to the business delay sensitivity, and the edge computing server processing the business is selected according to the mobile edge computing hierarchical deployment strategy.
  • the K shortest path algorithm is used to calculate the working path between the service and the edge computing server.
  • a first-fit spectrum allocation algorithm is used to allocate spectrum resources to the path, which must satisfy both the spectrum consistency and spectrum continuity constraints; then the status of the network computing resources and spectrum resources is updated in real time; after each connection request is successfully established, the path transmission delay is calculated according to the path length and the path transmission rate, and the data processing delay is calculated jointly from the computing resources required by the service and the computing resource capacity of the edge computing server.
  • through service-delay-sensitive cloud-edge collaborative network resource migration and reconstruction, the present invention effectively balances the relationship between network resources and service delay, and reduces the delay as much as possible while allocating resources reasonably.
  • the present invention adopts the classification processing method of service delay sensitivity according to the difference of service transmission sensitivity to time delay, and preferentially transmits and processes services with high time delay sensitivity.
  • the present invention only needs to consider two different delays, including network transmission delay and computing resource delay.
  • the network transmission delay is determined by the shortest path length between the user's service region and the edge computing server and is calculated by accumulating the link delays; the computing resource delay is related to the computing resource requirement of each user request and the computing capability of the edge computing server.
  • the following describes how to allocate computing resources through the hierarchical deployment of mobile edge computing, how to process user requests according to service delay sensitivity, how to calculate the path transmission delay through accumulated link delays, and how to calculate the data processing delay from the edge computing server resources and the resources required by the user request, so as to minimize the total service delay.
  • layer 1 has three local areas, each area has base stations and corresponding edge computing servers, and layer 2 is a cloud network composed of switches and corresponding edge computing servers. Assume that the computing resources of the edge computing server in the local area of layer 1 are 20, the computing resources of the edge computing server of the cloud network of layer 2 are 1000, and the number of service computing resources and the number of spectrum slots are randomly generated.
  • the connection request is represented by CR(u, f, r), where u represents the user request number, f represents the number of spectrum slots required to establish a working path, and r represents the computing resources required by the user request.
  • three user requests CR(1,3,15), CR(2,8,10) and CR(3,5,30) are generated in the base station areas of node 3 and node 2 in Fig. 2.
  • since the dynamic computing resources of node 2 are 5 at this time, they cannot meet the computing resources required by the user request.
  • the user request is therefore migrated to edge computing server node 8 in another region, the computing resources of node 8 are updated to 10, and the transmission path is path 2.
  • when the computing resources of the servers in the local and adjacent edge computing areas cannot meet the computing resources required by the user request, the user request needs to be transmitted through switch node 4 to cloud computing area server node 5 connected to the switch, and the data transmission route is route 3.
  • the K shortest path routing algorithm needs to be used to calculate the K paths from node 2 to node 1, from node 2 to node 8, and from node 3 to node 5.
  • the first-fit spectrum allocation algorithm is used, and spectrum resources are allocated to the working paths of the user requests CR(1,3,15), CR(2,8,10) and CR(3,5,30) according to the constraints of spectrum consistency and spectrum continuity. After the spectrum resources are allocated, the connection requests are successfully established. At this point, the status of the computing resources and spectrum resources is updated in real time and the number of successful connections is recorded.
  • this embodiment provides a cloud-edge collaborative network resource smooth migration and reconstruction system.
  • the principle by which it solves the problem is similar to that of the cloud-edge collaborative network resource smooth migration and reconstruction method, and the repeated details are not described here.
  • This embodiment provides a cloud-edge collaborative network resource smooth migration and reconstruction system, including:
  • the cloud-edge collaborative network initialization module is used to read the cloud-edge collaborative network topology information, and initialize the parameters of the cloud-edge collaborative network;
  • connection request generation module is used to generate a set of connection request sets according to user requests
  • a connection request processing module is used to process any connection request in the connection request set until all connection requests have been processed; the method for processing any connection request is: first determine whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing; if not, continue to determine whether a server in an adjacent region has enough computing resources to process the user request; if so, migrate the user request to an edge computing server of another region on the same switch as the local region; if not, send the user request through the switch to the cloud server for data processing; separately calculate the sum of the path transmission delay and the data processing delay of multiple candidate paths from the user request to the edge computing server, and take the path with the lowest delay as the working path; after allocating spectrum resources to the working path, update in real time the computing resources of the edge computing server processing the user request;
  • the calculation module is used to calculate the average path transmission delay and average data processing delay requested by the user.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

The present invention relates to a cloud-edge collaborative network resource smooth migration and reconstruction method and system, comprising: reading topology information of a cloud-edge collaborative network, initializing the parameters of the cloud-edge collaborative network; generating a connection request set according to a user request; processing any one connection request from the connection request set until all connection requests have been processed; and calculating the average path transmission delay and average data processing delay of user requests. The present invention decreases service delay as much as possible, and improves network service quality.

Description

Cloud-edge collaborative network resource smooth migration and reconstruction method and system
TECHNICAL FIELD
The invention relates to the technical field of cloud computing and edge computing networks, and in particular to a cloud-edge collaborative network resource smooth migration and reconstruction method and system.
BACKGROUND
In recent years, with the rapid development of the Internet of Things (IoT) and the widespread use of data-intensive applications, users' demand for network computing resources has grown dramatically. In addition, technological advances in smartphones, laptops and tablets have enabled new demanding services and applications to emerge. Although new mobile devices are becoming more powerful in terms of the central processing unit (CPU), they may not be able to handle computation-heavy applications in a short period of time. Cloud computing has powerful computing capabilities: through computing resource offloading, devices can transfer computing tasks to remote cloud servers for execution, which effectively alleviates the problem of large computing resource demand. However, transmitting computing tasks to cloud servers causes unacceptable delays, additional transmission energy consumption, and problems such as data leakage.
To address the high latency faced during cloud computing offloading, the European Telecommunications Standards Institute (ETSI) first proposed the concept of Mobile Edge Computing (MEC) in 2014: a platform that provides IT and cloud computing capabilities in the radio access network close to users, regarded as one of the key technologies of fifth-generation mobile communication. The MEC system allows devices to offload computing tasks to network edge nodes, such as base stations and wireless access points, which not only satisfies the demand for extending the computing capability of terminal devices but also compensates for the long delay of cloud computing. MEC technology helps achieve key technical indicators of 5G services such as ultra-low latency, ultra-high energy efficiency, and ultra-high reliability. By deploying cloud computing and cloud storage at the network edge, it provides a telecom service environment with high performance, low latency and high bandwidth, accelerating the distribution and download of contents, services and applications in the network so that consumers enjoy a higher-quality network experience.
At present, research on cloud computing and edge computing mainly uses two computing resource allocation methods: a delay-first computing resource allocation method and a resource-first computing resource allocation method. However, both methods are designed for only a single objective. The delay-first allocation method can effectively reduce the data transmission delay, but it cannot improve the computing capability of nodes for processing services, so the data processing delay still needs to be optimized; the resource-first allocation method can increase the speed at which nodes process services, but the transmission delay is likely to increase significantly.
SUMMARY OF THE INVENTION
Therefore, the technical problem to be solved by the present invention is to overcome the inability of the prior art to simultaneously achieve low data transmission delay and high node computing capability for processing services, and thus to provide a cloud-edge collaborative network resource smooth migration and reconstruction method and system that ensure low data transmission delay while ensuring high node computing capability for processing services.
To solve the above technical problem, the cloud-edge collaborative network resource smooth migration and reconstruction method of the present invention includes: reading cloud-edge collaborative network topology information and initializing the parameters of the cloud-edge collaborative network; generating a set of connection requests according to user requests; processing any connection request in the connection request set until all connection requests have been processed, where the method for processing any connection request is: first determine whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing; if not, continue to determine whether a server in an adjacent region has enough computing resources to process the user request; if so, migrate the user request to an edge computing server of another region on the same switch as the local region; if not, send the user request through the switch to the cloud server for data processing; separately calculate, for multiple candidate paths from the user request to the edge computing server, the sum of the path transmission delay and the data processing delay, and take the path with the lowest delay as the working path; after allocating spectrum resources to the working path, update in real time the computing resources of the edge computing server processing the user request; and calculating the average path transmission delay and the average data processing delay of user requests.
In an embodiment of the present invention, in the cloud-edge collaborative network, the topology information of the network, the network connection status, the number of user requests, the number of edge computing servers, and the numbers of base stations and switches are configured.
In an embodiment of the present invention, the method for calculating the sum of the path transmission delay and the data processing delay of multiple candidate paths from the user request to the edge computing server is: calculate the path transmission delay and the data processing delay of the multiple paths respectively, and then calculate the sum of the path transmission delay and the data processing delay.
In an embodiment of the present invention, the method for calculating the path transmission delay and the data processing delay of multiple paths is: calculate the path transmission delay of the multiple paths according to the formula TRD = Weight_k / c, where Weight_k represents the path length of the k-th working path and c represents the optical fibre transmission rate; calculate the data processing delay of the multiple paths according to the formula COD = r_u / TR_i, where r_u represents the computing resources required by user request u and TR_i represents the total computing resources of node i.
In an embodiment of the present invention, the method for allocating spectrum resources to the working path is: allocate spectrum resources to the working path according to the constraints of spectrum consistency and spectrum continuity.
In an embodiment of the present invention, after the computing resources of the edge computing server processing the user request are updated in real time, the number of successfully established connection requests is recorded and the cloud-edge collaborative network state is updated.
In an embodiment of the present invention, when the cloud-edge collaborative network state is updated, the spectrum resources occupied by the working path are released; at the same time, the computing resources of the edge computing server processing the user request are released; finally, the information of the working path established by the connection request is cleared.
In an embodiment of the present invention, the method for calculating the average path transmission delay and the average data processing delay of user requests is: calculate the average path transmission delay aveTRD of user requests according to the formula aveTRD = ΣTRD / sucConReq, where sucConReq represents the number of successfully established connection requests and ΣTRD is the sum of the path transmission delays of those requests; calculate the average data processing delay aveCOD of user requests according to the formula aveCOD = ΣCOD / sucConReq, where ΣCOD is the sum of the data processing delays of the successfully established connection requests.
In an embodiment of the present invention, after step S5 is completed, the state of the cloud-edge collaborative network continues to be monitored.
The present invention also provides a cloud-edge collaborative network resource smooth migration and reconstruction system, including: a cloud-edge collaborative network initialization module for reading cloud-edge collaborative network topology information and initializing the parameters of the cloud-edge collaborative network; a connection request generation module for generating a set of connection requests according to user requests; a connection request processing module for processing any connection request in the connection request set until all connection requests have been processed, where the method for processing any connection request is: first determine whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing; if not, continue to determine whether a server in an adjacent region has enough computing resources to process the user request; if so, migrate the user request to an edge computing server of another region on the same switch as the local region; if not, send the user request through the switch to the cloud server for data processing; separately calculate, for multiple candidate paths from the user request to the edge computing server, the sum of the path transmission delay and the data processing delay, and take the path with the lowest delay as the working path; after allocating spectrum resources to the working path, update in real time the computing resources of the edge computing server processing the user request; and a calculation module for calculating the average path transmission delay and the average data processing delay of user requests.
Compared with the prior art, the above technical solution of the present invention has the following advantages:
In the cloud-edge collaborative network resource smooth migration and reconstruction method and system of the present invention, the working path is set according to the delay sensitivity of the service, and a working path with low delay sensitivity is selected for transmission and processing, which helps reduce the path transmission delay and the data processing delay of the service. In addition, a hierarchical mobile edge computing deployment strategy is adopted to allocate computing resources and bandwidth resources: first, it is considered whether the server in the local region where the service is generated has the network resources required to process the service; second, it is considered whether servers in other regions connected through the switch have sufficient network resources; if neither the local region nor the other regions have enough network resources, the service is migrated through the switch to a server in the cloud for processing, so that hierarchical deployment further reduces the path transmission delay and the data processing delay of the service.
DESCRIPTION OF DRAWINGS
In order to make the content of the present invention easier to understand, the present invention is described in further detail below according to specific embodiments of the present invention and with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of the cloud-edge collaborative network resource smooth migration and reconstruction method of the present invention;
Fig. 2 is a network diagram of cloud-edge collaborative network resource migration and reconstruction of the present invention;
Fig. 3 is a schematic diagram of the cloud-edge collaborative network resource smooth migration and reconstruction system of the present invention.
DETAILED DESCRIPTION
Embodiment 1
As shown in Fig. 1, this embodiment provides a cloud-edge collaborative network resource smooth migration and reconstruction method, including: step S1: read the cloud-edge collaborative network topology information and initialize the parameters of the cloud-edge collaborative network; step S2: generate a set of connection requests according to user requests; step S3: process any connection request in the connection request set until all connection requests have been processed, where the method for processing any connection request is: first determine whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing; if not, continue to determine whether a server in an adjacent region has enough computing resources to process the user request; if so, migrate the user request to an edge computing server of another region on the same switch as the local region; if not, send the user request through the switch to the cloud server for data processing; separately calculate, for multiple candidate paths from the user request to the edge computing server, the sum of the path transmission delay and the data processing delay, and take the path with the lowest delay as the working path; after allocating spectrum resources to the working path, update in real time the computing resources of the edge computing server processing the user request; step S4: calculate the average path transmission delay and the average data processing delay of user requests.
In the cloud-edge collaborative network resource smooth migration and reconstruction method of this embodiment, in step S1 the cloud-edge collaborative network topology information is read and the parameters of the cloud-edge collaborative network are initialized, which facilitates the operation of services. In step S2, a set of connection requests is generated according to user requests, which facilitates the processing of connection requests. In step S3, for each connection request, it is first determined whether the local region of the user request has enough computing resources; if so, the user request is sent to the edge computing server in the local region for processing, in which case the network transmission delay is negligible and the end-to-end processing delay only needs to account for the computing delay; if not, it is further determined whether a server in an adjacent region has enough computing resources to process the user request, and if so, the user request is migrated to an edge computing server of another region on the same switch as the local region, in which case the end-to-end delay includes the network transmission delay and the computing resource delay; if not, the user request is sent through the switch to the cloud server for data processing, in which case the end-to-end delay of the user request includes the network transmission delay and the computing resource delay of both the edge computing region and the cloud computing region. This hierarchical deployment helps reduce the path transmission delay and the data processing delay of the service. The sum of the path transmission delay and the data processing delay of multiple candidate paths from the user request to the edge computing server is calculated separately, and the path with the lowest delay is taken as the working path; since a working path with low delay sensitivity is selected for transmission and processing, this helps reduce the path transmission delay and the data processing delay of the service. After spectrum resources are allocated to the working path, the computing resources of the edge computing server processing the user request are updated in real time, which facilitates recording the number of successfully established connection requests. In step S4, the average path transmission delay and the average data processing delay of user requests are calculated. The present invention thus facilitates the reasonable planning of resource scheduling, migration and reconstruction across the edge computing and cloud computing regions, effectively balances the relationship between network resources and service delay, and reduces the delay as much as possible while allocating resources reasonably, so as to minimize the service delay while ensuring that nodes have a high computing capability for processing services, thereby improving the service quality of the network.
In step S1, in the cloud-edge collaborative network, the topology information of the network, the network connection status, the number of user requests, the number of edge computing servers, and the numbers of base stations and switches are configured. Specifically, in the cloud-edge collaborative network G(U, B, J, S), the topology information of the network, the network connection status, the number of user requests, the number of edge computing servers, and the numbers of base stations and switches are configured.
In step S2, when a set of connection requests is generated according to user requests, information such as the number of connection requests and the number of spectrum slots and computing resources required by different connection requests is configured.
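The patent contains no source code; purely as an illustration, the following Python sketch shows one way the initialized network parameters and the connection request set CR(u, f, r) described in steps S1 and S2 could be represented. The class names, field names and random value ranges are assumptions introduced here, not part of the disclosure.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ConnectionRequest:
    u: int  # user request number
    f: int  # number of spectrum slots needed to establish a working path
    r: int  # computing resources required by the user request

@dataclass
class CloudEdgeNetwork:
    topology: dict                # adjacency map: node -> {neighbour: link length}
    connection_status: dict       # link -> up/down state
    edge_server_resources: dict   # edge computing server node -> available computing resources
    cloud_server_resources: dict  # cloud server node -> available computing resources
    spectrum: dict = field(default_factory=dict)  # link (a, b) -> list of slot states

def generate_connection_requests(num_requests, max_slots=10, max_resources=40, seed=None):
    """Generate a set of CR(u, f, r) with randomly drawn slot and resource demands."""
    rng = random.Random(seed)
    return [ConnectionRequest(u=u,
                              f=rng.randint(1, max_slots),
                              r=rng.randint(1, max_resources))
            for u in range(1, num_requests + 1)]
```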
In step S3, for any connection request, it is determined whether the local server of the user request has the computing resources required by the user request; if the computing resources of the local server are sufficient, the user request is processed locally. If the computing resources of the local server are insufficient, it is considered whether an edge computing server in another region outside the local region has the computing resources required by the user request; if a server in another region has sufficient computing resources, the user request is migrated through the switch to that region for processing. If the computing resources of neither the local region nor the other regions satisfy the computing resources required by the user request, the user request is migrated through the switch to the cloud server for processing. This hierarchical deployment helps reduce the path transmission delay and the data processing delay of the service.
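A minimal sketch of this three-tier server selection, assuming a simple dictionary of remaining server capacities; the function name and the bookkeeping are illustrative only and not the patent's exact procedure.

```python
def select_server(request, local_server, neighbor_servers, cloud_server, capacity):
    """Pick the processing server: local edge first, then a neighbouring edge
    server on the same switch, and finally the cloud server."""
    if capacity[local_server] >= request.r:
        return local_server, "local"        # negligible transmission delay
    for server in neighbor_servers:         # other regions on the same switch
        if capacity[server] >= request.r:
            return server, "adjacent"
    if capacity[cloud_server] >= request.r:
        return cloud_server, "cloud"        # forwarded through the switch
    return None, "blocked"                  # no tier has enough resources
```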
The method for calculating the sum of the path transmission delay and the data processing delay of multiple candidate paths from the user request to the edge computing server is: calculate the path transmission delay and the data processing delay of the multiple paths respectively, and then calculate the sum of the path transmission delay and the data processing delay, which helps reduce the path transmission delay and the data processing delay of the service.
Specifically, according to the user request of the connection request CR(u, f, r) and the edge computing server that processes the request, the K shortest path algorithm is used to calculate K candidate paths from the user request to the server, so as to find the optimal path as the working path.
The K shortest path algorithm is used to calculate the working path between the user request and the edge computing server. The K shortest path algorithm calculates K candidate paths and arranges them in ascending order of distance, that is, the shorter the path, the higher its priority. When a path with a higher priority is blocked on a link, paths with lower priorities are selected in turn for spectrum resource allocation, until the allocation succeeds or all paths are blocked.
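A compact sketch of the K-candidate-path computation, assuming the topology is stored in a networkx graph whose edges carry a "length" attribute; networkx's shortest_simple_paths yields loopless paths in order of increasing length, which matches the ascending priority order described above.

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(graph, source, target, k):
    """Return up to k loopless paths ordered by total length (highest priority first)."""
    return list(islice(nx.shortest_simple_paths(graph, source, target, weight="length"), k))
```

Candidates earlier in the returned list would be tried first; if spectrum allocation is blocked on one of them, the next candidate is attempted, mirroring the fallback behaviour described above.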
The method for calculating the path transmission delay and the data processing delay of multiple paths is: calculate the path transmission delay of the multiple paths according to the formula TRD = Weight_k / c, where Weight_k represents the path length of the k-th working path and c represents the optical fibre transmission rate; calculate the data processing delay of the multiple paths according to the formula COD = r_u / TR_i, where r_u represents the computing resources required by user request u and TR_i represents the total computing resources of node i.
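The two formulas translate directly into code; the variable names follow the text (Weight_k, c, r_u, TR_i), and the combined helper used to rank candidate paths is an assumed convenience rather than part of the claimed method.

```python
def path_transmission_delay(weight_k, c):
    """TRD = Weight_k / c: path length of the k-th working path over the fibre transmission rate."""
    return weight_k / c

def data_processing_delay(r_u, tr_i):
    """COD = r_u / TR_i: resources required by request u over node i's total computing resources."""
    return r_u / tr_i

def total_delay(weight_k, c, r_u, tr_i):
    """Sum used to rank candidate paths; the lowest total becomes the working path."""
    return path_transmission_delay(weight_k, c) + data_processing_delay(r_u, tr_i)
```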
The method for allocating spectrum resources to the working path is: allocate spectrum resources to the working path according to the constraints of spectrum consistency and spectrum continuity.
Specifically, according to the number of spectrum slots f required by the connection request CR(u, f, r), the selected working path is searched for the bandwidth resources required to satisfy the connection request. If the dual constraints of spectrum continuity and spectrum consistency are both satisfied, the connection request is successfully established; if the dual constraints of spectrum continuity and spectrum consistency cannot be satisfied at the same time, the establishment of the connection request fails.
After the connection request CR(u, f, r) successfully establishes the working path, when spectrum resources are allocated to the working path according to the constraints of spectrum consistency and spectrum continuity, a first-fit spectrum allocation algorithm is used: a spectrum resource table is generated and numbered according to the spectrum resource status of all links on the path, and available spectrum slots are searched starting from the low-numbered end. If an available spectrum slot is found, the spectrum resources are allocated and the spectrum status is updated; if none is found, the spectrum allocation fails and the service is blocked.
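The first-fit search under the two constraints might be sketched as follows; the per-link boolean slot table is an assumed representation. Spectrum consistency is enforced by requiring the same slot indices on every link of the path, and spectrum continuity by requiring the f slots to be contiguous.

```python
def first_fit_allocate(spectrum, path_links, f):
    """Try to allocate f contiguous slots at the same indices on every link of the path.

    spectrum: dict mapping link -> list of bools (True = slot free).
    Returns the starting slot index, or None if the request is blocked.
    """
    num_slots = len(next(iter(spectrum.values())))
    for start in range(num_slots - f + 1):          # search from the low-numbered end
        window = range(start, start + f)
        if all(spectrum[link][s] for link in path_links for s in window):
            for link in path_links:                  # allocate and update the spectrum state
                for s in window:
                    spectrum[link][s] = False
            return start
    return None                                      # allocation failed, service blocked
```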
After the spectrum resources are successfully allocated, the computing resources of the edge computing server processing the user request are updated in real time; after this real-time update, the number of successfully established connection requests is recorded and the cloud-edge collaborative network state is updated.
When the cloud-edge collaborative network state is updated, the spectrum resources occupied by the working path are released; at the same time, the computing resources of the edge computing server processing the user request are released; finally, the information of the working path established by the connection request is cleared.
The remaining user requests are all handled in the same way as above until all connection requests have been processed.
In step S4, the method for calculating the average path transmission delay and the average data processing delay of user requests is: calculate the average path transmission delay aveTRD of user requests according to the formula aveTRD = ΣTRD / sucConReq, where sucConReq represents the number of successfully established connection requests and ΣTRD is the sum of the path transmission delays of those requests; calculate the average data processing delay aveCOD of user requests according to the formula aveCOD = ΣCOD / sucConReq, where ΣCOD is the sum of the data processing delays of the successfully established connection requests.
Specifically, after each connection request is successfully established, the shortest path length of the request is recorded and its path transmission delay is calculated; the aveTRD formula then gives the average path transmission delay of the group of connection requests. After all connection requests have been processed, the accumulated computing resources requested by each connection request and the initial computing resources of the edge computing servers are recorded, and the aveCOD formula gives the average data processing delay of the group of connection requests.
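The bookkeeping described above can be sketched as follows, assuming each successfully established request is recorded as a (TRD, COD) pair; the list-based record is an illustrative assumption.

def average_delays(successful_requests):
    """successful_requests: list of (trd, cod) pairs, one per successfully established request."""
    suc_con_req = len(successful_requests)
    if suc_con_req == 0:
        return 0.0, 0.0
    ave_trd = sum(trd for trd, _ in successful_requests) / suc_con_req
    ave_cod = sum(cod for _, cod in successful_requests) / suc_con_req
    return ave_trd, ave_cod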
After step S4 is completed, the state of the cloud-edge collaborative network continues to be monitored. Specifically, the monitoring covers cloud-edge collaborative network initialization, connection request generation, service priority selection, edge computing server selection, working path establishment, spectrum resource allocation, computing resource update, resource release, network transmission delay calculation and data processing delay calculation, so as to reduce the service delay as much as possible when computing resources are allocated.
In addition, the method includes decision and early-warning steps. Specifically, coordination among the modules is performed, and a decision and early-warning function checks whether each module has been established successfully, so as to achieve the goal of reducing service delay in mobile edge computing.
In the present invention, for each connection request, services with higher delay sensitivity are processed first according to the service delay sensitivity, and the edge computing server that processes the service is selected according to the hierarchical deployment strategy of mobile edge computing. The K-shortest-path algorithm is used to compute the working paths between the service and the edge computing server. After a working path is selected, a first-fit spectrum allocation algorithm assigns spectrum resources to the path, subject to both the spectrum-consistency and the spectrum-continuity constraints; the network computing resources and spectrum resources are then updated in real time. After each connection request is successfully established, the path transmission delay is calculated from the path length and the transmission rate, and the data processing delay is calculated from the computing resources required by the service and the computing capacity of the edge computing server. Through service-delay-sensitive migration and reconstruction of cloud-edge collaborative network resources, the present invention effectively balances network resources against service delay and reduces the delay as much as possible while allocating resources rationally.
In addition, according to how sensitive different services are to transmission delay, the present invention classifies services by delay sensitivity and transmits and processes highly delay-sensitive services first. Only two kinds of delay need to be considered: network transmission delay and computing-resource delay. The network transmission delay is determined by the shortest path between the user's service area and the edge computing server and is computed as the accumulated delay of the links on that path; the computing-resource delay depends on each user's computing-resource demand and on the computing capacity of the edge computing server.
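The priority handling can be sketched as sorting the pending requests by a delay-sensitivity attribute before they are served. The sensitivity field below is an assumed extension of the CR(u, f, r) tuple, introduced purely for illustration.

from dataclasses import dataclass

@dataclass
class ConnectionRequest:
    u: int            # user request number
    f: int            # spectrum slots needed
    r: float          # computing resources needed
    sensitivity: int  # assumed delay-sensitivity class (higher = more sensitive)

def order_by_sensitivity(requests):
    """Serve highly delay-sensitive services first."""
    return sorted(requests, key=lambda cr: cr.sensitivity, reverse=True)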
The following describes in detail how computing resources are allocated through the hierarchical deployment of mobile edge computing, how user requests are processed according to service delay sensitivity, how the path transmission delay is computed from the accumulated link delay, and how the data processing delay is computed from the edge computing server resources and the resources requested by the user, so as to reduce the total service delay as much as possible.
As shown in Fig. 2, layer 1 contains three local areas, each with a base station and a corresponding edge computing server, and layer 2 is a cloud network composed of switches and corresponding edge computing servers. Assume that each edge computing server in a layer-1 local area has 20 units of computing resources, that the layer-2 cloud edge computing server has 1000 units, and that the computing-resource demand and the number of spectrum slots of each service are generated randomly.
First, the cloud-edge collaborative network G(U, B, J, S) is initialized, including the positions that can be selected for user requests, base stations, switches and edge computing servers, and the computing resources of the edge computing servers are initialized. A connection request is denoted CR(u, f, r), where u is the user request number, f is the number of spectrum slots required to establish the working path, and r is the computing resource required by the request. Three user requests, CR(1,3,15), CR(2,8,10) and CR(3,5,30), are generated in the base-station areas of node 3 and node 2 in Fig. 2.
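The initialization of this example can be sketched as follows, using the capacities assumed in this embodiment (20 units per local edge server, 1000 units for the cloud server). The node ids loosely follow the Fig. 2 example and, together with the demand ranges, are assumptions made only for illustration.

import random

def init_example_network():
    """Layer-1 local edge servers with 20 units each; layer-2 cloud server with 1000 units."""
    server_capacity = {1: 20, 2: 20, 8: 20, 5: 1000}   # assumed node ids for the Fig. 2 example
    return server_capacity

def generate_requests(n, max_slots=10, max_resources=40):
    """Generate CR(u, f, r) tuples with random slot and resource demands."""
    return [(u, random.randint(1, max_slots), random.randint(1, max_resources))
            for u in range(1, n + 1)]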
Second, for the user request CR(1,3,15) generated at node 2, it is first determined whether the request can be processed locally. The request needs 15 units of computing resources and the local edge computing server at node 2 has 20 units, so the request is sent to the local edge computing server for processing; the data transmission route is ①.
Third, for the user request CR(2,8,10) generated at node 2, the remaining computing resources at node 2 are now only 5, which cannot satisfy the request, so the requested computation is migrated through switch node 7 to the edge computing server of another area, node 8; the computing resources of node 8 are updated to 10 and the transmission route is ②.
Fourth, for the user request CR(3,5,30) generated at node 3, neither the local edge computing server nor the servers of the adjacent edge computing areas can provide the 30 units of computing resources required, so the request is forwarded through switch node 4 to the cloud computing server node 5 connected to the switch; the data transmission route is ③.
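The three cases above follow one decision rule: try the local edge server, then an edge server of a neighbouring area attached to the same switch, then the cloud. A minimal sketch of that rule, with assumed dictionary-based state, is:

def select_server(request_resources, local_server, neighbour_servers, cloud_server, capacity):
    """Return the node chosen to process the request, in local -> neighbour -> cloud order."""
    if capacity[local_server] >= request_resources:
        return local_server                             # case 1: processed locally (route ①)
    for server in neighbour_servers:                    # case 2: migrate within the same switch (route ②)
        if capacity[server] >= request_resources:
            return server
    return cloud_server                                 # case 3: forward to the cloud (route ③)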
Fifth, for the user requests CR(1,3,15), CR(2,8,10) and CR(3,5,30), the K-shortest-path routing algorithm is used to compute the K candidate paths from node 2 to node 1, from node 2 to node 8 and from node 3 to node 5, respectively. For each request, the sum of the path transmission delay TRD = Weight_k / c and the data processing delay COD = r_u / TR_i is computed for each of the K paths, and the path with the lowest total delay is selected as the working path.
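Selecting the working path among the K candidates can be sketched as below. The candidate paths are assumed to be given as (links, length_km, server_node) records; the enumeration of the K shortest paths itself (for example with Yen's algorithm) is not reproduced here.

def choose_working_path(candidates, request_resources, capacity, fiber_rate):
    """candidates: list of (path_links, length_km, server_node) for the K shortest paths.

    Returns the candidate with the lowest total delay TRD + COD.
    """
    def total_delay(cand):
        _, length_km, server = cand
        trd = length_km / fiber_rate                    # TRD = Weight_k / c
        cod = request_resources / capacity[server]      # COD = r_u / TR_i
        return trd + cod
    return min(candidates, key=total_delay)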
Sixth, after the working paths are selected, the first-fit spectrum allocation algorithm assigns spectrum resources to the working paths of CR(1,3,15), CR(2,8,10) and CR(3,5,30) under the spectrum-consistency and spectrum-continuity constraints. Once the spectrum resources have been allocated, the connection requests are successfully established; the computing-resource and spectrum-resource states are updated in real time and the number of successful connections is recorded.
Finally, after each connection request is successfully established, the link length of its working path and the computing resources of the request are recorded, and the formulas aveTRD = ( Σ TRD_u ) / sucConReq and aveCOD = ( Σ COD_u ) / sucConReq are used to compute the average path transmission delay and the average data processing delay, respectively.
Embodiment 2
As shown in Fig. 3, based on the same inventive concept, this embodiment provides a cloud-edge collaborative network resource smooth migration and reconstruction system. Its problem-solving principle is similar to that of the cloud-edge collaborative network resource smooth migration and reconstruction method described above, and repeated details are not described again.
This embodiment provides a cloud-edge collaborative network resource smooth migration and reconstruction system, comprising:

a cloud-edge collaborative network initialization module, configured to read the topology information of the cloud-edge collaborative network and to initialize the parameters of the cloud-edge collaborative network;

a connection request generation module, configured to generate a set of connection requests according to user requests;

a connection request processing module, configured to process any connection request in the connection request set until all connection requests have been processed, wherein a connection request is processed as follows: first, it is determined whether the local area of the user request has sufficient computing resources; if so, the user request is sent to the edge computing server in the local area for processing; if not, it is determined whether a server in an adjacent area has sufficient computing resources to process the user request; if so, the user request is migrated to an edge computing server of another area attached to the same switch as the local area; if not, the user request is sent through the switch to the cloud server for data processing; the sum of the path transmission delay and the data processing delay is calculated for each of the candidate paths from the user request to the edge computing server, and the path with the lowest total delay is taken as the working path; after spectrum resources are allocated to the working path, the computing resources of the edge computing server that processes the user request are updated in real time; and

a calculation module, configured to calculate the average path transmission delay and the average data processing delay of the user requests.
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical storage) containing computer-usable program code.

The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Obviously, the above embodiments are merely examples given for clarity of description and are not intended to limit the implementation. Those of ordinary skill in the art can make changes or modifications in other forms on the basis of the above description. It is neither necessary nor possible to exhaust all implementations here, and obvious changes or modifications derived therefrom remain within the protection scope of the present invention.

Claims (10)

  1. A cloud-edge collaborative network resource smooth migration and reconstruction method, characterized by comprising the following steps:
    Step S1: reading topology information of the cloud-edge collaborative network, and initializing parameters of the cloud-edge collaborative network;
    Step S2: generating a set of connection requests according to user requests;
    Step S3: processing any connection request in the connection request set until all connection requests have been processed, wherein a connection request is processed as follows: first determining whether the local area of the user request has sufficient computing resources; if so, sending the user request to the edge computing server in the local area for processing; if not, further determining whether a server in an adjacent area has sufficient computing resources to process the user request; if so, migrating the user request to an edge computing server of another area attached to the same switch as the local area; if not, sending the user request through the switch to the cloud server for data processing; calculating, for each of a plurality of candidate paths from the user request to the edge computing server, the sum of the path transmission delay and the data processing delay, and taking the path with the lowest total delay as the working path; and, after allocating spectrum resources to the working path, updating in real time the computing resources of the edge computing server that processes the user request;
    Step S4: calculating the average path transmission delay and the average data processing delay of the user requests.
  2. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 1, characterized in that: in the cloud-edge collaborative network, the topology information of the network, the network connection state, the number of user requests, the number of edge computing servers, and the numbers of base stations and switches are configured.
  3. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 1, characterized in that: the sum of the path transmission delay and the data processing delay of the plurality of candidate paths from the user request to the edge computing server is calculated by first calculating the path transmission delay and the data processing delay of each path separately, and then calculating the sum of the path transmission delay and the data processing delay.
  4. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 3, characterized in that: the path transmission delay of the paths is calculated according to the formula TRD = Weight_k / c, where Weight_k denotes the path length of the k-th working path and c denotes the optical-fiber transmission rate; and the data processing delay of the paths is calculated according to the formula COD = r_u / TR_i, where r_u denotes the computing resource required by user request u and TR_i denotes the total computing resource of node i.
  5. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 1, characterized in that: spectrum resources are allocated to the working path according to the constraints of spectrum consistency and spectrum continuity.
  6. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 1, characterized in that: after the computing resources of the edge computing server that processes the user request are updated in real time, the number of successfully established connection requests is recorded and the state of the cloud-edge collaborative network is updated.
  7. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 6, characterized in that: when the state of the cloud-edge collaborative network is updated, the spectrum resources occupied by the working path are released; the computing resources of the edge computing server that processed the user request are released; and finally the information of the working path established for the connection request is cleared.
  8. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 1, characterized in that: the average path transmission delay and the average data processing delay of the user requests are calculated according to the formulas aveTRD = ( Σ TRD_u ) / sucConReq and aveCOD = ( Σ COD_u ) / sucConReq, where the sums run over all successfully established connection requests, TRD_u and COD_u denote the path transmission delay and the data processing delay of request u, and sucConReq denotes the number of successfully established connection requests.
  9. The cloud-edge collaborative network resource smooth migration and reconstruction method according to claim 1, characterized in that: after step S4 is completed, the state of the cloud-edge collaborative network continues to be monitored.
  10. A cloud-edge collaborative network resource smooth migration and reconstruction system, characterized by comprising:
    a cloud-edge collaborative network initialization module, configured to read topology information of the cloud-edge collaborative network and to initialize parameters of the cloud-edge collaborative network;
    a connection request generation module, configured to generate a set of connection requests according to user requests;
    a connection request processing module, configured to process any connection request in the connection request set until all connection requests have been processed, wherein a connection request is processed as follows: first determining whether the local area of the user request has sufficient computing resources; if so, sending the user request to the edge computing server in the local area for processing; if not, further determining whether a server in an adjacent area has sufficient computing resources to process the user request; if so, migrating the user request to an edge computing server of another area attached to the same switch as the local area; if not, sending the user request through the switch to the cloud server for data processing; calculating, for each of a plurality of candidate paths from the user request to the edge computing server, the sum of the path transmission delay and the data processing delay, and taking the path with the lowest total delay as the working path; and, after allocating spectrum resources to the working path, updating in real time the computing resources of the edge computing server that processes the user request; and
    a calculation module, configured to calculate the average path transmission delay and the average data processing delay of the user requests.
PCT/CN2020/105691 2020-07-28 2020-07-30 Cloud-edge collaborative network resource smooth migration and restructuring method and system WO2022021176A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010739986.X 2020-07-28
CN202010739986.XA CN111901424B (en) 2020-07-28 2020-07-28 Cloud edge cooperative network resource smooth migration and reconstruction method and system

Publications (1)

Publication Number Publication Date
WO2022021176A1 true WO2022021176A1 (en) 2022-02-03

Family

ID=73183704

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105691 WO2022021176A1 (en) 2020-07-28 2020-07-30 Cloud-edge collaborative network resource smooth migration and restructuring method and system

Country Status (2)

Country Link
CN (1) CN111901424B (en)
WO (1) WO2022021176A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827152A (en) * 2022-07-01 2022-07-29 之江实验室 Low-delay cloud edge-side collaborative computing method and device for satellite-ground collaborative network
CN115277452A (en) * 2022-07-01 2022-11-01 中铁第四勘察设计院集团有限公司 ResNet self-adaptive acceleration calculation method based on edge-end cooperation and application
CN115696403A (en) * 2022-11-04 2023-02-03 东南大学 Multilayer edge computing task unloading method assisted by edge computing node
CN115801811A (en) * 2023-01-09 2023-03-14 江苏云工场信息技术有限公司 Cloud edge coordination method and device
CN116016522A (en) * 2023-02-13 2023-04-25 广东电网有限责任公司中山供电局 Cloud edge end collaborative new energy terminal monitoring architecture
CN116755867A (en) * 2023-08-18 2023-09-15 中国电子科技集团公司第十五研究所 Satellite cloud-oriented computing resource scheduling system, method and storage medium
CN116894469A (en) * 2023-09-11 2023-10-17 西南林业大学 DNN collaborative reasoning acceleration method, device and medium in end-edge cloud computing environment

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113015217B (en) * 2021-02-07 2022-05-20 重庆邮电大学 Edge cloud cooperation low-cost online multifunctional business computing unloading method
CN113344152A (en) * 2021-04-30 2021-09-03 华中农业大学 System and method for intelligently detecting and uploading full-chain production information of dairy products
CN113344728A (en) * 2021-04-30 2021-09-03 华中农业大学 Intelligent monitoring system and method for food production full-chain information
CN113364850B (en) * 2021-06-01 2023-02-14 苏州路之遥科技股份有限公司 Software-defined cloud-edge collaborative network energy consumption optimization method and system
CN113489643A (en) * 2021-07-13 2021-10-08 腾讯科技(深圳)有限公司 Cloud-edge cooperative data transmission method and device, server and storage medium
CN113784373B (en) 2021-08-24 2022-11-25 苏州大学 Combined optimization method and system for time delay and frequency spectrum occupation in cloud edge cooperative network
CN113742046A (en) * 2021-09-17 2021-12-03 苏州大学 Flow grooming cloud-side computing network computing resource balanced scheduling method and system
CN114363984B (en) * 2021-12-16 2022-11-25 苏州大学 Cloud edge collaborative optical carrier network spectrum resource allocation method and system
CN115396358B (en) * 2022-08-23 2023-06-06 中国联合网络通信集团有限公司 Route setting method, device and storage medium of computing power perception network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121512A (en) * 2017-12-22 2018-06-05 苏州大学 A kind of edge calculations services cache method, system, device and readable storage medium storing program for executing
CN108541027A (en) * 2018-04-24 2018-09-14 南京邮电大学 A kind of communication computing resource method of replacing based on edge cloud network
WO2019101056A1 (en) * 2017-11-21 2019-05-31 华为技术有限公司 Configuration method and apparatus
CN110891093A (en) * 2019-12-09 2020-03-17 中国科学院计算机网络信息中心 Method and system for selecting edge computing node in delay sensitive network

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975330B (en) * 2016-06-27 2019-06-18 华为技术有限公司 A kind of virtual network function dispositions method that network edge calculates, device and system
CN106844051A (en) * 2017-01-19 2017-06-13 河海大学 The loading commissions migration algorithm of optimised power consumption in a kind of edge calculations environment
US10887198B2 (en) * 2017-09-29 2021-01-05 Nec Corporation System and method to support network slicing in an MEC system providing automatic conflict resolution arising from multiple tenancy in the MEC environment
US10206094B1 (en) * 2017-12-15 2019-02-12 Industrial Technology Research Institute Mobile edge platform servers and UE context migration management methods thereof
WO2020023115A1 (en) * 2018-07-27 2020-01-30 Futurewei Technologies, Inc. Task offloading and routing in mobile edge cloud networks
CN111010295B (en) * 2019-11-28 2022-09-16 国网甘肃省电力公司电力科学研究院 SDN-MEC-based power distribution and utilization communication network task migration method
CN111131421B (en) * 2019-12-13 2022-07-29 中国科学院计算机网络信息中心 Method for interconnection and intercommunication of industrial internet field big data and cloud information
CN111240821B (en) * 2020-01-14 2022-04-22 华南理工大学 Collaborative cloud computing migration method based on Internet of vehicles application security grading

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019101056A1 (en) * 2017-11-21 2019-05-31 华为技术有限公司 Configuration method and apparatus
CN108121512A (en) * 2017-12-22 2018-06-05 苏州大学 A kind of edge calculations services cache method, system, device and readable storage medium storing program for executing
CN108541027A (en) * 2018-04-24 2018-09-14 南京邮电大学 A kind of communication computing resource method of replacing based on edge cloud network
CN110891093A (en) * 2019-12-09 2020-03-17 中国科学院计算机网络信息中心 Method and system for selecting edge computing node in delay sensitive network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MAN JUNFENG; ZHAO LONGGAN; PENG CHENG; LI QIANQIAN: "Task Scheduling Method for Large-scale Factory Access in Cloud and Edge Collaborative Computing Architecture", COMPUTER INTEGRATED MANUFACTURING SYSTEMS, vol. 27, no. 8, 1 August 2020 (2020-08-01), CN , pages 2282 - 2294, XP009534058, ISSN: 1006-5911, DOI: 10.13196/j.cims.2021.08.011 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827152A (en) * 2022-07-01 2022-07-29 之江实验室 Low-delay cloud edge-side collaborative computing method and device for satellite-ground collaborative network
CN114827152B (en) * 2022-07-01 2022-09-27 之江实验室 Low-delay cloud edge-side cooperative computing method and device for satellite-ground cooperative network
CN115277452A (en) * 2022-07-01 2022-11-01 中铁第四勘察设计院集团有限公司 ResNet self-adaptive acceleration calculation method based on edge-end cooperation and application
CN115277452B (en) * 2022-07-01 2023-11-28 中铁第四勘察设计院集团有限公司 ResNet self-adaptive acceleration calculation method based on edge-side coordination and application
CN115696403A (en) * 2022-11-04 2023-02-03 东南大学 Multilayer edge computing task unloading method assisted by edge computing node
CN115696403B (en) * 2022-11-04 2023-05-16 东南大学 Multi-layer edge computing task unloading method assisted by edge computing nodes
CN115801811B (en) * 2023-01-09 2023-04-28 江苏云工场信息技术有限公司 Cloud edge cooperation method and device
CN115801811A (en) * 2023-01-09 2023-03-14 江苏云工场信息技术有限公司 Cloud edge coordination method and device
CN116016522A (en) * 2023-02-13 2023-04-25 广东电网有限责任公司中山供电局 Cloud edge end collaborative new energy terminal monitoring architecture
CN116016522B (en) * 2023-02-13 2023-06-02 广东电网有限责任公司中山供电局 Cloud edge end collaborative new energy terminal monitoring system
CN116755867A (en) * 2023-08-18 2023-09-15 中国电子科技集团公司第十五研究所 Satellite cloud-oriented computing resource scheduling system, method and storage medium
CN116755867B (en) * 2023-08-18 2024-03-08 中国电子科技集团公司第十五研究所 Satellite cloud-oriented computing resource scheduling system, method and storage medium
CN116894469A (en) * 2023-09-11 2023-10-17 西南林业大学 DNN collaborative reasoning acceleration method, device and medium in end-edge cloud computing environment
CN116894469B (en) * 2023-09-11 2023-12-15 西南林业大学 DNN collaborative reasoning acceleration method, device and medium in end-edge cloud computing environment

Also Published As

Publication number Publication date
CN111901424A (en) 2020-11-06
CN111901424B (en) 2021-09-07

Similar Documents

Publication Publication Date Title
WO2022021176A1 (en) Cloud-edge collaborative network resource smooth migration and restructuring method and system
CN113364850B (en) Software-defined cloud-edge collaborative network energy consumption optimization method and system
WO2023039965A1 (en) Cloud-edge computing network computational resource balancing and scheduling method for traffic grooming, and system
WO2023024219A1 (en) Joint optimization method and system for delay and spectrum occupancy in cloud-edge collaborative network
Hu et al. Dynamic slave controller assignment for enhancing control plane robustness in software-defined networks
Tziritas et al. Data replication and virtual machine migrations to mitigate network overhead in edge computing systems
CN110087250B (en) Network slice arranging scheme and method based on multi-objective joint optimization model
CN112822050A (en) Method and apparatus for deploying network slices
CN111538570A (en) VNF deployment method and device for energy conservation and QoS guarantee
CN113076177B (en) Dynamic migration method of virtual machine in edge computing environment
CN105391651B (en) Virtual optical network multi-layer resource convergence method and system
WO2023108718A1 (en) Spectrum resource allocation method and system for cloud-edge collaborative optical carrier network
CN104539744B (en) A kind of the media edge cloud dispatching method and device of two benches cooperation
CN109617811B (en) Rapid migration method for mobile application in cloud network
CN106681815A (en) Concurrent migration method of virtual machines
Liu et al. Mobility-aware dynamic service placement for edge computing
Haitao et al. Multipath transmission workload balancing optimization scheme based on mobile edge computing in vehicular heterogeneous network
Chang et al. Adaptive replication for mobile edge computing
Mahmoudi et al. SDN-DVFS: an enhanced QoS-aware load-balancing method in software defined networks
CN109327340B (en) Mobile wireless network virtual network mapping method based on dynamic migration
Zhao et al. Neighboring-aware caching in heterogeneous edge networks by actor-attention-critic learning
Yi et al. Energy‐aware disaster backup among cloud datacenters using multiobjective reinforcement learning in software defined network
Li et al. An efficient algorithm for service function chains reconfiguration in mobile edge cloud networks
Wu Deep reinforcement learning based multi-layered traffic scheduling scheme in data center networks
Ambalavanan et al. DICer: Distributed coordination for in-network computations

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20947096

Country of ref document: EP

Kind code of ref document: A1