CN114827028B - Multi-layer computation network integrated routing system and method - Google Patents

Multi-layer computation network integrated routing system and method

Info

Publication number
CN114827028B
CN114827028B (application CN202210229576.XA)
Authority
CN
China
Prior art keywords
calculation
network
data
router
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210229576.XA
Other languages
Chinese (zh)
Other versions
CN114827028A (en)
Inventor
许方敏
杨帆
赵成林
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202210229576.XA
Publication of CN114827028A
Application granted
Publication of CN114827028B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/74: Address processing for routing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/54: Organization of routing tables
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/74: Address processing for routing
    • H04L 45/745: Address table lookup; Address filtering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/04: Protocols for data compression, e.g. ROHC

Abstract

The invention provides a multi-layer calculation network integrated routing system and method. End equipment generates a calculation task and sends a calculation request to a network entry router; the network entry router acquires a routing path and an exit computing node according to the calculation network integrated routing table and responds to the calculation request of the end equipment; after receiving the response of the network entry router, the end equipment transmits the calculation task data to the next-hop router; after receiving the task data, the router submits the data to the corresponding preprocessing calculation server, where the data are preprocessed and then returned to the router; the router forwards the preprocessed calculation task data according to the calculation network integrated routing table until they reach the optimal calculation server, which processes the calculation task, and the calculation result is finally returned to the end equipment. In this way, route calculation jointly considers service requirements, computing power matching and network state, so that the total delay of task transmission and calculation is minimized.

Description

Multi-layer computation network integrated routing system and method
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a multi-layer computing network integrated routing system and method.
Background
With the large-scale deployment of edge computing equipment and intelligent terminal equipment, the problems of bandwidth shortage, network congestion and excessive delay caused by uploading massive data to a cloud computing center are alleviated, and computing resources now show a trend of ubiquitous deployment. In order to efficiently and cooperatively use the heterogeneous computing power resources of the whole network, the computing power network has been proposed as a technical scheme for the convergence of computing power and the network based on distributed systems. Computing-power-aware routing and computing resource allocation are key problems in a computing power network. However, in a computing power network, existing route calculation can only perform network planning and neglects the matching and planning of computing power resources.
Disclosure of Invention
In order to solve the above problems, the invention provides a multi-layer calculation network integrated routing system and method, which deeply integrate computing power with the network, consider service requirements, computing power matching and network state during route calculation, and thereby solve the problem that existing route calculation can only perform network planning while neglecting the matching and planning of computing power resources.
The invention provides a multi-layer calculation network integrated routing system, which comprises end equipment, a network router, a preprocessing calculation server and a calculation server, wherein:
the end equipment is used for generating a calculation task, issuing a computing power request to the network, and transmitting the task data to the network router after acquiring the calculation network integrated routing table;
the network router is used for forwarding data after reading the address of the data packet, performing routing transmission of the preprocessed calculation task data according to the calculation network integrated routing table until the data are transmitted to the optimal calculation server, and returning the final calculation result of the calculation server to the end equipment;
the preprocessing calculation server serves as a network node for data preprocessing and is responsible for receiving the data, preprocessing the data and returning the preprocessed data to the network router; the preprocessing comprises data cleaning, data integration, data transformation, data reduction and data compression;
the calculation server is used for receiving the preprocessed calculation task data and performing further processing; the further processing includes performing various types of intelligent calculation tasks.
Optionally, the calculation network integrated routing table is formed by the network entry router, which maintains a computing power routing table and a network routing table according to computing power advertisements and network state detection and performs route calculation according to service requirements.
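For illustration only, the sketch below shows one possible in-memory layout of such an integrated routing table; the class and field names are assumptions of this rewrite, not structures defined by the invention.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not the patent's concrete routing-table format.
from dataclasses import dataclass
from typing import List


@dataclass
class ComputePowerEntry:
    node_id: str               # advertised computing node, e.g. "M1"
    free_cycles_per_s: float   # advertised available computing power


@dataclass
class NetworkEntry:
    next_hop: str              # next-hop router on the path
    link_rate_bps: float       # detected link transmission rate


@dataclass
class IntegratedRouteEntry:
    """One row of the calculation network integrated routing table: a service
    class is mapped to an exit computing node plus the router-level path."""
    service_class: int
    preprocess_node: str
    exit_compute_node: str
    path: List[str]                # ordered router IDs, ingress to exit
    estimated_total_delay: float   # seconds, from route calculation

# The network entry router would merge ComputePowerEntry and NetworkEntry
# information into IntegratedRouteEntry rows during route calculation.
```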
Optionally, the network router comprises a network entry router and a next hop router; the network entry router acquires a proper routing path and an exit computing node according to the computing network integrated routing table, and responds to a computing request of end equipment; the next hop router is used for receiving the calculation task data of the end equipment and submitting the calculation task data to the preprocessing calculation server.
Optionally, a model is established for the system, the model comprising a network model, a task model, a communication model, a calculation model, a calculation network integrated resource characterization model and an optimization target model.
Optionally, the network model comprises U = {1,2,...,U}, P = {1,2,...,P}, M = {1,2,...,M} and R = {1,2,...,R}, where U denotes the end equipment in the computing power network, P the preprocessing calculation servers, M the calculation servers and R the network routers;
the task model comprises I = {1,2,...,I}, where I denotes the task classes; the i-th task of user u is $u_i = \langle n_{u,i}, o_{u,i}, t_{u,i} \rangle$, where $n_{u,i}$ denotes the size of the data packet to be transmitted for task i, $o_{u,i}$ denotes the quantized value of the computing resources required by task i, and $t_{u,i}$ denotes the maximum processing delay that task i can tolerate;
the communication model comprises the data transmission rate from user u to its network entry router r, $r_{u,r} = B_{r,u}\log_2(1+\gamma_{u,r})$, where $B_{r,u}$ denotes the bandwidth allocated by the network entry router r to user u and $\gamma_{u,r}$ denotes the transmission signal-to-noise ratio from user u to the network entry router r; the communication delay from user u to its network entry router r is $D_{u,r} = n_{u,i}/r_{u,r}$; the communication delay of the task data from router $r_i$ to $r_j$ can be expressed as $D_{r_i,r_j} = k\,n_{u,i}/r_{r_i,r_j}$, where $k \in (0,1)$ denotes the compression ratio and $r_{r_i,r_j}$ denotes the data transmission rate from router $r_i$ to $r_j$; the communication delay over the entire path is $D_{path} = D_{u,r} + \sum_{t_{i,j} \in path} D_{r_i,r_j}$, where $t_{i,j} \in path$ means that the link from router $r_i$ to $r_j$ belongs to the path;
the total delay in the calculation model is $D_{comp} = D_p + D_m$, where $D_p = \alpha o_{u,i}/f_p$ and $D_m = o_{u,i}/f_m$; $D_p$ denotes the processing delay of the data on the preprocessing calculation server p, $D_m$ denotes the processing delay of the data on the calculation server m, $\alpha o_{u,i}$ with $\alpha \in (0,1)$ denotes the amount of computation required for task preprocessing, $f_p$ denotes the computing power of the preprocessing calculation server p, and $f_m$ denotes the computing power of the calculation server m;
in the calculation network integrated resource characterization model, the task processing delay comprises the transmission delay of the data on the path, the preprocessing calculation delay and the calculation delay, and minimizing the task processing delay is treated as a shortest-path problem; if a node only participates in data forwarding, only its transmission delay is considered and its calculation delay is not considered; if a node participates in task calculation, both its transmission delay and its calculation delay need to be considered;
in the optimization target model, the preprocessing calculation server, the calculation server and the data transmission path are selected to minimize the total calculation and communication delay, and the optimization problem is expressed as
$$\min_{r,m,p} D_{total} = D_{path} + D_{comp} = D_{u,r} + \sum_{t_{i,j} \in path} \frac{k\,n_{u,i}}{r_{r_i,r_j}} + \frac{\alpha o_{u,i}}{f_p} + \frac{o_{u,i}}{f_m}$$
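The delay model above can be exercised with a short script. The following sketch is a minimal illustration under assumed parameter values; the brute-force search over candidate (preprocessing server, calculation server, path) triples is only one possible solver and is not the route calculation method claimed by the invention.

```python
# Minimal sketch of the delay model above (assumed parameter values; the
# brute-force candidate search is an illustration, not the claimed algorithm).
import math

def access_delay(n_bits, bandwidth_hz, snr):
    # D_{u,r} = n_{u,i} / (B_{r,u} * log2(1 + gamma_{u,r}))
    rate = bandwidth_hz * math.log2(1.0 + snr)
    return n_bits / rate

def path_delay(n_bits, k, link_rates):
    # sum over links of k * n_{u,i} / r_{ri,rj}, with compression ratio k
    return sum(k * n_bits / r for r in link_rates)

def compute_delay(o_cycles, alpha, f_p, f_m):
    # D_comp = alpha * o / f_p + o / f_m
    return alpha * o_cycles / f_p + o_cycles / f_m

def best_route(task, access, candidates):
    """Pick the (f_p, f_m, links) candidate with minimum total delay D_total."""
    d_access = access_delay(task["n"], access["B"], access["snr"])
    best = None
    for f_p, f_m, links in candidates:
        d = (d_access
             + path_delay(task["n"], task["k"], links)
             + compute_delay(task["o"], task["alpha"], f_p, f_m))
        if best is None or d < best[0]:
            best = (d, f_p, f_m, links)
    return best

task = {"n": 8e6, "o": 1e9, "alpha": 0.1, "k": 0.5}   # bits, CPU cycles
access = {"B": 20e6, "snr": 31.0}                      # 20 MHz, log2(32) = 5
candidates = [
    (2e9, 5e9, [100e6, 200e6]),        # (f_p, f_m, link rates in bit/s)
    (4e9, 8e9, [50e6, 50e6, 100e6]),
]
print(best_route(task, access, candidates))            # lowest D_total wins
```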
The invention also provides a multi-layer calculation network integrated routing method, which comprises the following steps:
S1: the end equipment generates a calculation task and sends a calculation request to the network entry router;
S2: the network entry router acquires a suitable routing path and an exit computing node according to the calculation network integrated routing table, and responds to the calculation request of the end equipment;
S3: after receiving the response of the network entry router, the end equipment transmits the calculation task data to the next-hop router;
S4: the next-hop router first submits the received task data to the corresponding preprocessing calculation server, which preprocesses the data and returns the preprocessed data to the router;
S5: the router forwards the preprocessed calculation task data according to the calculation network integrated routing table until the data reach the optimal calculation server, which processes the calculation task; the final calculation result is returned to the end equipment through the router.
Optionally, the network entry router maintains the computing power routing table and the network routing table according to computing power advertisements and network state detection, performs route calculation according to service requirements, and finally forms the calculation network integrated routing table.
Optionally, the network router comprises a network entry router and a next hop router; the network entry router acquires a proper routing path and an exit computing node according to the computing network integrated routing table, and responds to a computing request of end equipment; the next hop router is used for receiving the calculation task data of the end equipment and submitting the calculation task data to the preprocessing calculation server.
Optionally, the system is modeled, and the model comprises a network model, a task model, a communication model, a calculation model, a computational-network-integrated resource characterization model and an optimization objective model.
Optionally, the network model comprises U = {1,2,...,U}, P = {1,2,...,P}, M = {1,2,...,M} and R = {1,2,...,R}, where U denotes the end equipment in the computing power network, P the preprocessing calculation servers, M the calculation servers and R the network routers;
the task model comprises I = {1,2,...,I}, where I denotes the task classes; the i-th task of user u is $u_i = \langle n_{u,i}, o_{u,i}, t_{u,i} \rangle$, where $n_{u,i}$ denotes the size of the data packet to be transmitted for task i, $o_{u,i}$ denotes the quantized value of the computing resources required by task i, and $t_{u,i}$ denotes the maximum processing delay that task i can tolerate;
the communication model comprises the data transmission rate from user u to its network entry router r, $r_{u,r} = B_{r,u}\log_2(1+\gamma_{u,r})$, where $B_{r,u}$ denotes the bandwidth allocated by the network entry router r to user u and $\gamma_{u,r}$ denotes the transmission signal-to-noise ratio from user u to the network entry router r; the communication delay from user u to its network entry router r is $D_{u,r} = n_{u,i}/r_{u,r}$; the communication delay of the task data from router $r_i$ to $r_j$ can be expressed as $D_{r_i,r_j} = k\,n_{u,i}/r_{r_i,r_j}$, where $k \in (0,1)$ denotes the compression ratio and $r_{r_i,r_j}$ denotes the data transmission rate from router $r_i$ to $r_j$; the communication delay over the entire path is $D_{path} = D_{u,r} + \sum_{t_{i,j} \in path} D_{r_i,r_j}$, where $t_{i,j} \in path$ means that the link from router $r_i$ to $r_j$ belongs to the path;
the total delay in the calculation model is $D_{comp} = D_p + D_m$, where $D_p = \alpha o_{u,i}/f_p$ and $D_m = o_{u,i}/f_m$; $D_p$ denotes the processing delay of the data on the preprocessing calculation server p, $D_m$ denotes the processing delay of the data on the calculation server m, $\alpha o_{u,i}$ with $\alpha \in (0,1)$ denotes the amount of computation required for task preprocessing, $f_p$ denotes the computing power of the preprocessing calculation server p, and $f_m$ denotes the computing power of the calculation server m;
in the calculation network integrated resource characterization model, the task processing delay comprises the transmission delay of the data on the path, the preprocessing calculation delay and the calculation delay, and minimizing the task processing delay is treated as a shortest-path problem; if a node only participates in data forwarding, only its transmission delay is considered and its calculation delay is not considered; if a node participates in task calculation, both its transmission delay and its calculation delay need to be considered;
in the optimization target model, the preprocessing calculation server, the calculation server and the data transmission path are selected to minimize the total calculation and communication delay, and the optimization problem is expressed as
$$\min_{r,m,p} D_{total} = D_{path} + D_{comp} = D_{u,r} + \sum_{t_{i,j} \in path} \frac{k\,n_{u,i}}{r_{r_i,r_j}} + \frac{\alpha o_{u,i}}{f_p} + \frac{o_{u,i}}{f_m}$$
The invention has the following beneficial effects:
(1) The invention provides a multi-layer calculation network integrated routing system that deeply integrates computing power with the network, considers service requirements, computing power matching and network state during route calculation, minimizes the total delay of task transmission and calculation, and realizes efficient processing of diverse tasks in the network;
(2) The invention creatively provides a multi-layer calculation network integrated resource characterization model that characterizes multidimensional calculation resources as a whole, and provides a calculation network integrated routing algorithm, thereby realizing the computing power routing function in the computing power network and greatly improving the efficiency of calculation network integrated resource scheduling.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of a multi-tiered computing network architecture of the present invention;
FIG. 2 is a flow chart of the routing of the multi-layer computing network integrated network of the present invention;
FIG. 3 is a schematic diagram of a multi-tiered computing network integrated network resource of the present invention;
FIG. 4 is a schematic diagram of multi-layer computing network integrated network resource reconfiguration according to the present invention;
FIG. 5 is a cumulative reward curve of the reinforcement learning of the present invention;
FIG. 6 is a graph comparing the calculated delay of 6 paths with the optimal delay of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
In order to facilitate understanding of the technical solutions of the present application, some concepts related to the embodiments of the present application are briefly described below.
1. Computing power
Computing power has different units of measurement in different application scenarios: hash operations per second (H/s) for Bitcoin, and floating-point operations per second (FLOPS) for AI and graphics processing; the demand of the intelligent society for computing power is mainly floating-point computing capability. The Bitcoin network must perform intensive mathematical and cryptography-related operations for security purposes. For example, when the network reaches a hash rate of 10 Th/s, it can perform 10 trillion hash calculations per second.
2. Computing Network (CN)
The computing power network is a new type of information infrastructure that allocates and flexibly schedules computing resources, storage resources and network resources among the cloud, the network and the edge according to service requirements. It enables the network infrastructure to sense, schedule and orchestrate computing resources, provides novel converged ICT services of networking, computing and storage at the network layer, and constitutes a new generation of technical architecture and network-based service operation system.
3. Network node
A network node is a computer or other device connected to a network that has an independent address and can transmit or receive data. Nodes may be workstations, clients, network users or personal computers, as well as servers, printers and other network-connected devices. Every workstation, server, terminal device or network device that has its own unique network address is a network node.
Fig. 1 is a schematic diagram of a multi-layer computing Network Integrated Network architecture of the present invention, and as shown in fig. 1, an embodiment of the present invention provides a multi-layer Computing Network Integrated Routing (CNIR) system, which specifically includes four parts: namely end devices, network routers, preprocessing computation servers and computation servers.
The end devices may include conventional computing devices, for example the edge computing devices and intelligent terminal devices (smart phones, car navigation systems, smart bands, tablets, computers, etc.) deployed in large numbers in the multi-layer computing network. The end equipment is mainly used for generating and uploading calculation task data and issuing computing power requests to the network. Specifically, the end device may directly initiate a computing power request as the starting point; or it may act after receiving information returned from a network router, for example transmitting the calculation task data to the next-hop network router after receiving the response of the entry router, or transmitting the task data to the network router after acquiring the calculation network integrated routing table.
The network routers can be classified into a network entry router and a next hop router, and the network routers can be understood in a conventional sense and are mainly responsible for reading addresses of data packets sent by data transmission nodes in the multi-layer computational power network and then determining how to transmit data.
The preprocessing calculation server serves as a network node for data preprocessing and is responsible for storing and processing the user data forwarded by the network entry router, in particular for preprocessing the user data; the data preprocessing includes data cleaning, data integration, data transformation, data reduction, data compression and the like. The user data arriving at the network entry router have diverse sources, formats and characteristics; through preprocessing in the preprocessing calculation server, standard, clean and continuous data are obtained for further processing.
The calculation server serves as a network node for data processing and further processes the calculation task data that have been cleaned, integrated, transformed, reduced and compressed by the preprocessing calculation server; the further processing includes operations such as classification, merging, logical correction, insertion, updating, sorted retrieval, summarization, analysis, recognition and judgment, and yields the final calculation result. In particular, the calculation server can handle various types of intelligent calculation tasks, such as image recognition and video recognition; the final recognition result is sent to a router and returned by the router to the end equipment, where the router may be the network entry router, the next-hop router or another network router.
Optionally, as shown in fig. 1, the multi-layer calculation network integrated routing system may include 1 end device denoted U1, 2 preprocessing calculation servers denoted P1 and P2, 3 calculation servers denoted M1, M2 and M3, and 5 network routers denoted R1, R2, R3, R4 and R5, where the preprocessing calculation servers P1 and P2 and the calculation servers M1, M2 and M3 are all separate from the network routers R1 to R5.
Specifically, when the end device generates a calculation task, it first, as the starting point, sends a computing power request to the network entry router and, after obtaining a reply, transmits the calculation task data to the network entry router. After receiving the task request, the network entry router acquires a suitable routing path and an exit computing node according to the calculation network integrated routing table and responds to the calculation request of the end equipment; following the routing path calculated by the network entry router, the data are submitted to the corresponding preprocessing calculation server, where they are preprocessed. The preprocessed data are transmitted to the next-hop router, which forwards the preprocessed calculation task data according to the calculation network integrated routing table and selects a calculation server as the network node for data processing, until the data are transmitted to the optimal calculation server. The calculation server processes the calculation task, and the final calculation result is returned to the end equipment through the router.
Because the data volume of the result data packet is small and calculation processing is not needed, the transmission delay is extremely low and no calculation delay exists, so that the downlink delay of the result can be ignored, and only the uplink process of the data needs to be considered.
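A highly simplified sketch of this request/response flow is given below; the class and method names (PreprocessServer, ComputeServer, IngressRouter, run_task) are illustrative assumptions of this rewrite and omit all real routing and transport details.

```python
# Hedged sketch of the uplink flow of Fig. 1; names are assumptions,
# not interfaces defined by the invention.
class PreprocessServer:
    def __init__(self, compression_ratio=0.5):
        self.k = compression_ratio
    def preprocess(self, data_bits):
        # cleaning / integration / transformation / reduction / compression
        return int(data_bits * self.k)

class ComputeServer:
    def process(self, data_bits):
        return f"result({data_bits} bits processed)"

class IngressRouter:
    def __init__(self, route_table):
        self.route_table = route_table      # service class -> (p, m, path)
    def handle_request(self, service_class):
        # look up path and exit computing node, answer the end device
        return self.route_table[service_class]

def run_task(ingress, servers, service_class, data_bits):
    p_id, m_id, path = ingress.handle_request(service_class)  # request/response
    compressed = servers[p_id].preprocess(data_bits)           # preprocessing
    result = servers[m_id].process(compressed)                 # optimal server
    return path, result          # downlink delay of the result is ignored

servers = {"P1": PreprocessServer(), "M2": ComputeServer()}
ingress = IngressRouter({1: ("P1", "M2", ["R1", "R3", "R5"])})
print(run_task(ingress, servers, 1, 8_000_000))
```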
As shown in fig. 2, an embodiment of the present invention provides a calculation network integrated routing algorithm based on reinforcement learning (RL), which includes:
S1: the network entry router dynamically maintains the computing power routing table and the network routing table according to computing power advertisements and network state detection, and determines routes through route calculation for the diverse service requirements, forming the calculation network integrated routing table.
S2: the end equipment generates a calculation task, issues a calculation request to the computing network, and sends the calculation request to the network entry router after acquiring the calculation network integrated routing table;
S3: based on its awareness of the diverse service requirements and combining real-time network and computing state information, the network entry router acquires a suitable routing path and an exit computing node according to the calculation network integrated routing table, and responds to the calculation request generated by the end equipment;
S4: after receiving the response of the network entry router, the end equipment transmits the calculation task data to the next-hop router;
S5: after receiving the task data, the next-hop router first submits the data to the corresponding preprocessing calculation server, where the data are preprocessed, and the preprocessed data are returned to the router.
S6: the router forwards the preprocessed calculation task data according to the calculation network integrated routing table until the data are transmitted to the optimal calculation server; the optimal calculation server processes the calculation task, and the final calculation result is returned to the end equipment through the router.
Optionally, Dijkstra, Bellman-Ford, heuristic algorithms, reinforcement learning (RL) algorithms and the like may be selected for the route calculation in step S1. Although only the above algorithms are described in this embodiment, a person skilled in the art may adopt other algorithms without departing from the principle of the present invention.
Optionally, the end devices in step S2 include conventional computing devices, for example, edge computing devices and intelligent terminal devices (smart phones, car navigation systems, smart bands, tablets, computers, and the like) deployed in a multi-tier computing network in large numbers.
Optionally, the preprocessing performed by the preprocessing calculation server in step S5 includes data cleaning, data integration, data transformation, data reduction, data compression and the like.
Optionally, the calculation server in step S6 may process various intelligent calculation tasks, such as image recognition and video recognition; the processing of the calculation tasks includes operations such as classification, merging, logical correction, insertion, updating, sorted retrieval, summarization, analysis, recognition and judgment.
In addition, the embodiment of the invention also comprises modeling the system, wherein the model specifically comprises a network model, a task model, a communication model, a calculation and network integration resource characterization model and an optimization target model. Wherein:
1) Network model
Suppose that there are U end devices, P preprocessing calculation servers, M calculation servers and R network routers in the multi-layer calculation network integrated system. The end devices, preprocessing calculation servers, calculation servers and network routers in the computing power network may thus be denoted as U = {1,2,...,U}, P = {1,2,...,P}, M = {1,2,...,M} and R = {1,2,...,R}, respectively.
As shown in fig. 1, the network router is logically separate from the compute server. But on a physical level, the network router is deployed as a communication module of the compute server. Therefore, the communication delay between the router and the corresponding computation server is negligible.
2) Task model
The task classes in the system may be denoted as I = {1,2,...,I}. In particular, the i-th task of user u may be represented as $u_i = \langle n_{u,i}, o_{u,i}, t_{u,i} \rangle$, where $n_{u,i}$ denotes the size of the data packet to be transmitted for task i, $o_{u,i}$ denotes the quantized value of the computing resources required by the task, i.e. the number of CPU cycles required to process one bit of data per unit time, and $t_{u,i}$ denotes the maximum processing delay that task i can tolerate.
3) Communication model
The requested task of the end device is transmitted over a wireless link to a network entry router in the computing power network. According to the Shannon formula, the data transmission rate from user u to its network entry router r can be expressed as:
$$r_{u,r} = B_{r,u}\log_2(1+\gamma_{u,r})$$
where $B_{r,u}$ denotes the bandwidth allocated by the network entry router r to the user, and $\gamma_{u,r}$ denotes the transmission signal-to-noise ratio (SNR) from user u to the network entry router r.
Therefore, the communication delay from user u to its network entry router r is:
$$D_{u,r} = n_{u,i}/r_{u,r}$$
The communication delay of the task data from router $r_i$ to $r_j$ may be expressed as:
$$D_{r_i,r_j} = k\,n_{u,i}/r_{r_i,r_j}$$
where $k \in (0,1)$ denotes the compression ratio (after the data are transmitted to the preprocessing calculation server they are cleaned, integrated, transformed, reduced and compressed, which decreases the amount of data to be transmitted), and $r_{r_i,r_j}$ denotes the data transmission rate from router $r_i$ to $r_j$.
The communication delay over the entire path can be expressed as:
$$D_{path} = D_{u,r} + \sum_{t_{i,j}\in path} D_{r_i,r_j}$$
where $t_{i,j} \in path$ indicates that the link from router $r_i$ to $r_j$ belongs to the path.
4) Calculation model
Taking the preprocessing into account, the computing power of the preprocessing calculation server p may be expressed as $f_p$, and the computing power of the calculation server m as $f_m$. The task under consideration first undergoes data cleaning, data integration, data transformation, data reduction and data compression in the preprocessing calculation server. The amount of computation required for task preprocessing is $\alpha o_{u,i}$ with $\alpha \in (0,1)$. Thus, the processing delay of the data on the preprocessing calculation server p can be expressed as:
$$D_p = \alpha o_{u,i}/f_p$$
The calculation delay of the task on the calculation server m can be expressed as:
$$D_m = o_{u,i}/f_m$$
Specifically, one preprocessing calculation server provides the data preprocessing service for the task of user u, and one calculation server provides the calculation service for the task of user u.
Thus, the total calculation delay can be expressed as:
$$D_{comp} = D_p + D_m$$
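As an illustrative numerical instance (the values are assumed here and are not taken from the embodiment): with $o_{u,i} = 10^9$ CPU cycles, $\alpha = 0.1$, $f_p = 2\times 10^9$ cycles/s and $f_m = 5\times 10^9$ cycles/s, one obtains $D_p = (0.1\cdot 10^9)/(2\cdot 10^9) = 0.05$ s, $D_m = 10^9/(5\cdot 10^9) = 0.2$ s and hence $D_{comp} = 0.25$ s.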
5) Calculation network integrated resource characterization model
As shown in fig. 3, the network contains the end device U, the preprocessing calculation servers P1 and P2, and the calculation servers M1, M2 and M3; the network routers act as the communication modules of the preprocessing calculation servers and calculation servers and are deployed together with them, so they are not marked separately in the figure.
The task processing delay comprises the transmission delay of the data on the path, the preprocessing calculation delay and the calculation delay. The problem of minimizing the task processing delay can therefore be viewed as the shortest-path problem in fig. 3. Unlike the conventional shortest-path problem, in which only the path cost, i.e. the data transmission delay on the path, is considered, in the present problem the calculation delay of the nodes must also be considered.
If a node only participates in data forwarding, its calculation delay is not considered; if a node participates in task calculation, its calculation delay needs to be considered.
The present invention reconstructs the shortest-path problem in fig. 3 into the shortest-path problem shown in fig. 4. Specifically, a node is reconstructed into a segment of the path, and its calculation delay is taken as the cost on that segment.
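A minimal sketch of this reconstruction follows, assuming a simple adjacency-list representation: each computing node is split into an "in" and an "out" vertex joined by an internal edge whose weight is that node's calculation delay, so that node costs become edge costs and a standard shortest-path search applies.

```python
# Sketch of the reconstruction of Fig. 4 (names and representation assumed):
# computing nodes are split so their calculation delay becomes an edge cost.
def build_reconstructed_graph(links, compute_nodes):
    """links: {(a, b): transmission_delay}; compute_nodes: {name: calc_delay}."""
    graph = {}                                  # vertex -> list of (neighbor, cost)
    def add_edge(a, b, w):
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, [])                 # make sure every vertex exists
    def port(n, side):
        # forwarding-only routers stay a single vertex; compute nodes are split
        return f"{n}.{side}" if n in compute_nodes else n
    for name, delay in compute_nodes.items():
        add_edge(f"{name}.in", f"{name}.out", delay)    # internal edge = calc delay
    for (a, b), delay in links.items():
        add_edge(port(a, "out"), port(b, "in"), delay)  # link edge = transmission delay
    return graph
```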
6) Optimizing an objective model
The present invention minimizes the total calculation and communication delay by selecting the preprocessing calculation server, the calculation server and the data transmission path, so the optimization problem can be expressed as:
$$\min_{r,m,p} D_{total} = D_{path} + D_{comp} = D_{u,r} + \sum_{t_{i,j}\in path} \frac{k\,n_{u,i}}{r_{r_i,r_j}} + \frac{\alpha o_{u,i}}{f_p} + \frac{o_{u,i}}{f_m}$$
optionally, through modeling, reconstruction and establishment of an optimization problem of resources, the network-computing integrated scheduling problem can be optimized and calculated by adopting a traditional shortest-path algorithm such as Diikstra, bellman Ford and a heuristic algorithm, and can also be optimized and calculated by adopting a novel intelligent algorithm such as reinforcement learning RL.
Optionally, taking a reinforcement-learning-based routing algorithm, specifically Q-learning, as an example of solving the above problem: the initial state of the system is the node where the task data packet is located; an action is the selection of the next-hop node; performing the selection transfers the state to the chosen next-hop node and simultaneously yields a reward value for the current selection.
The reward value is defined as the negative of the delay on the corresponding path segment. Reinforcement learning therefore maximizes the total reward over the whole path, thereby minimizing the total delay of task transmission and calculation.
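A minimal Q-learning sketch of this idea follows; the hyperparameters, the episode cap and the reuse of the reconstructed graph from the Dijkstra sketch above are assumptions for illustration, not the settings used in the simulation described below.

```python
# Hedged Q-learning sketch: states are vertices of the reconstructed graph,
# actions are next-hop choices, reward = negative delay of the traversed segment.
import random

def q_route(graph, src, dst, episodes=3000, lr=0.1, gamma=1.0, eps=0.2, seed=0):
    random.seed(seed)
    q = {}                                    # (state, next_hop) -> value
    def actions(s):
        return [n for n, _ in graph.get(s, [])]
    for _ in range(episodes):
        s = src
        for _ in range(50):                   # cap episode length
            acts = actions(s)
            if not acts:
                break
            if random.random() < eps:
                a = random.choice(acts)       # explore
            else:
                a = max(acts, key=lambda x: q.get((s, x), 0.0))   # exploit
            delay = dict(graph[s])[a]         # segment cost
            reward = -delay                   # reward = negative segment delay
            future = 0.0 if a == dst else max(
                (q.get((a, b), 0.0) for b in actions(a)), default=0.0)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + lr * (reward + gamma * future - old)
            s = a
            if s == dst:
                break
    path, s = [src], src                      # greedy rollout of learned policy
    while s != dst and actions(s) and len(path) <= 50:
        s = max(actions(s), key=lambda x: q.get((s, x), float("-inf")))
        path.append(s)
    return path

# e.g. q_route(graph, "U", "M2.out") with the reconstructed graph above.
```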
The invention simulates a scenario in which the system contains 1 end device, 2 preprocessing servers, 3 edge calculation servers and a router. The implementation effect of the present invention is shown in fig. 5 and fig. 6: fig. 5 shows that the reinforcement learning model has good convergence; in fig. 6 the abscissa represents the 6 fixed routes selected in 6 experiments and the ordinate represents the transmission delay on the route, and the figure compares the total transmission delay of the conventional method with the delay of the optimal route selected by the proposed reinforcement-learning-based calculation network integrated method. It can be seen that the present invention selects the optimal scheduling path. The method can be applied to computing power routing in the computing power network.
In summary, the multi-layer calculation network integrated architecture of the embodiments of the present invention constructs a network architecture in which computing power, algorithms, data and application resources cooperate, with good hierarchy and structure. By deeply integrating computing power with the network, it solves the problem that existing route calculation can only perform network planning while neglecting the matching and planning of computing power resources. At the same time, based on this architecture, a multi-layer calculation network integrated resource characterization model is creatively provided for the problem of minimizing the task processing delay, and a calculation network integrated routing algorithm is provided to minimize the preprocessing delay, the calculation delay on the calculation server and the communication delay over the entire path, which effectively reduces the communication delay and greatly improves the user experience.
While the foregoing is directed to the preferred embodiment of the present invention, it will be appreciated by those skilled in the art that various changes and modifications may be made therein without departing from the principles of the invention as set forth in the appended claims.

Claims (4)

1. A multi-layer computing network integrated routing system is characterized in that: the system comprises end equipment, a network router, a preprocessing calculation server and a calculation server, wherein:
the end equipment is used for generating a calculation task, issuing a computing power request to the network, and transmitting the task data to the network router after acquiring the calculation network integrated routing table;
the network router is used for transmitting data after reading the address of the data packet; carrying out route transmission on the preprocessed calculation task data according to the calculation network integrated route table until the calculation task data are transmitted to the optimal calculation server; returning the final calculation result of the calculation server to the end equipment; the network computing integrated routing table is a network computing integrated routing table which is formed by a network entry router according to the computing power announcement and the network state detection, maintains the computing power routing table and the network routing table, and performs routing calculation according to service requirements;
the preprocessing calculation server is used as a network node for preprocessing data, is responsible for receiving the data, preprocesses the data and returns the preprocessed data to the network router; the preprocessing comprises data cleaning, data integration, data transformation, data reduction and data compression;
the calculation server is used for receiving the preprocessed calculation task data and performing further processing; the further processing includes performing various intelligent calculation tasks;
establishing a model for the system, wherein the model comprises a network model, a task model, a communication model, a calculation network integrated resource representation model and an optimization target model;
the network model includes U = {1,2,...,U}, P = {1,2,...,P}, M = {1,2,...,M} and R = {1,2,...,R}; wherein U represents the end equipment in the computing power network, P represents the preprocessing calculation servers, M represents the calculation servers, and R represents the network routers;
the task model includes I = {1,2,...,I}, wherein I represents the task classes; the i-th task of user u is $u_i = \langle n_{u,i}, o_{u,i}, t_{u,i} \rangle$, wherein $n_{u,i}$ represents the size of the data packet to be transmitted for task i, $o_{u,i}$ represents the quantized value of the computing resources required by task i, and $t_{u,i}$ represents the maximum processing delay that task i can tolerate;
the communication model includes the data transmission rate from user u to its network entry router r: $r_{u,r} = B_{r,u}\log_2(1+\gamma_{u,r})$; wherein $B_{r,u}$ represents the bandwidth allocated by the network entry router r to user u, and $\gamma_{u,r}$ represents the transmission signal-to-noise ratio from user u to the network entry router r; the communication delay from user u to its network entry router r is $D_{u,r} = n_{u,i}/r_{u,r}$; the communication delay of the task data from router $r_i$ to $r_j$ can be expressed as $D_{r_i,r_j} = k\,n_{u,i}/r_{r_i,r_j}$, wherein $k \in (0,1)$ represents the compression ratio and $r_{r_i,r_j}$ represents the data transmission rate from router $r_i$ to $r_j$; the communication delay over the entire path is $D_{path} = D_{u,r} + \sum_{t_{i,j} \in path} D_{r_i,r_j}$, wherein $t_{i,j} \in path$ represents that the link from router $r_i$ to $r_j$ belongs to the path;
the total delay in the calculation model is $D_{comp} = D_p + D_m$; wherein $D_p = \alpha o_{u,i}/f_p$ and $D_m = o_{u,i}/f_m$; $D_p$ represents the processing delay of the data on the preprocessing calculation server p, $D_m$ represents the processing delay of the data on the calculation server m, $\alpha o_{u,i}$ with $\alpha \in (0,1)$ represents the amount of computation required for task preprocessing, $f_p$ represents the computing power of the preprocessing calculation server p, and $f_m$ represents the computing power of the calculation server m;
in the calculation network integrated resource characterization model, the task processing delay comprises the transmission delay of the data on the path, the preprocessing calculation delay and the calculation delay, and the shortest-path approach is adopted to minimize the task processing delay; if a node only participates in data forwarding, only its transmission delay is considered and its calculation delay is not considered; if a node participates in task calculation, both its transmission delay and its calculation delay need to be considered;
in the optimization target model, the preprocessing calculation server, the calculation server and the data transmission path are selected to minimize the total calculation and communication delay, and the optimization problem is expressed as
$$\min_{r,m,p} D_{total} = D_{path} + D_{comp} = D_{u,r} + \sum_{t_{i,j} \in path} \frac{k\,n_{u,i}}{r_{r_i,r_j}} + \frac{\alpha o_{u,i}}{f_p} + \frac{o_{u,i}}{f_m}.$$
2. The integrated routing system of claim 1, wherein: the network router comprises a network entry router and a next hop router; the network entry router acquires a proper routing path and an exit computing node according to the routing table of the computing network integration, and responds to a computing request of end equipment; and the next hop router is used for receiving the calculation task data of the end equipment and submitting the calculation task data to the preprocessing calculation server.
3. A multi-layer network-computing integrated routing method is characterized by comprising the following steps:
s1: the end equipment generates a calculation task and sends a calculation request to the network entry router;
s2: the network entry router acquires a proper routing path and an exit computing node according to the computing network integrated routing table, and responds to a computing request of end equipment; the network computing integrated routing table is a network computing integrated routing table which is formed by a network entry router according to the computing power announcement and the network state detection, maintains the computing power routing table and the network routing table, and performs routing calculation according to service requirements;
s3: after receiving the response of the network entry router, the end equipment transmits the calculation task data to the network entry router;
s4: the network entry router submits the received task data to a corresponding preprocessing calculation server firstly, the preprocessing calculation server carries out data preprocessing, and the preprocessed data are returned to the network entry router;
s5: the network entry router performs routing transmission on the preprocessed calculation task data according to the calculation network integrated routing table until the preprocessed calculation task data are transmitted to the optimal calculation server, performs processing on the calculation task, and finally returns a calculation result to the end device through the network entry router;
establishing a model for the system, wherein the model comprises a network model, a task model, a communication model, a calculation network integrated resource representation model and an optimization target model;
the network model includes U = {1,2,...,U}, P = {1,2,...,P}, M = {1,2,...,M} and R = {1,2,...,R}; wherein U represents the end equipment in the computing power network, P represents the preprocessing calculation servers, M represents the calculation servers, and R represents the network routers;
the task model includes I = {1,2,...,I}, wherein I represents the task classes; the i-th task of user u is $u_i = \langle n_{u,i}, o_{u,i}, t_{u,i} \rangle$, wherein $n_{u,i}$ represents the size of the data packet to be transmitted for task i, $o_{u,i}$ represents the quantized value of the computing resources required by task i, and $t_{u,i}$ represents the maximum processing delay that task i can tolerate;
the communication model includes the data transmission rate from user u to its network entry router r: $r_{u,r} = B_{r,u}\log_2(1+\gamma_{u,r})$; wherein $B_{r,u}$ represents the bandwidth allocated by the network entry router r to user u, and $\gamma_{u,r}$ represents the transmission signal-to-noise ratio from user u to the network entry router r; the communication delay from user u to its network entry router r is $D_{u,r} = n_{u,i}/r_{u,r}$; the communication delay of the task data from router $r_i$ to $r_j$ can be expressed as $D_{r_i,r_j} = k\,n_{u,i}/r_{r_i,r_j}$, wherein $k \in (0,1)$ represents the compression ratio and $r_{r_i,r_j}$ represents the data transmission rate from router $r_i$ to $r_j$; the communication delay over the entire path is $D_{path} = D_{u,r} + \sum_{t_{i,j} \in path} D_{r_i,r_j}$, wherein $t_{i,j} \in path$ represents that the link from router $r_i$ to $r_j$ belongs to the path;
the total delay in the calculation model is $D_{comp} = D_p + D_m$; wherein $D_p = \alpha o_{u,i}/f_p$ and $D_m = o_{u,i}/f_m$; $D_p$ represents the processing delay of the data on the preprocessing calculation server p, $D_m$ represents the processing delay of the data on the calculation server m, $\alpha o_{u,i}$ with $\alpha \in (0,1)$ represents the amount of computation required for task preprocessing, $f_p$ represents the computing power of the preprocessing calculation server p, and $f_m$ represents the computing power of the calculation server m;
in the calculation network integrated resource characterization model, the task processing delay comprises the transmission delay of the data on the path, the preprocessing calculation delay and the calculation delay, and the shortest-path approach is adopted to minimize the task processing delay; if a node only participates in data forwarding, only its transmission delay is considered and its calculation delay is not considered; if a node participates in task calculation, both its transmission delay and its calculation delay need to be considered;
in the optimization target model, the preprocessing calculation server, the calculation server and the data transmission path are selected to minimize the total calculation and communication delay, and the optimization problem is expressed as
$$\min_{r,m,p} D_{total} = D_{path} + D_{comp} = D_{u,r} + \sum_{t_{i,j} \in path} \frac{k\,n_{u,i}}{r_{r_i,r_j}} + \frac{\alpha o_{u,i}}{f_p} + \frac{o_{u,i}}{f_m}.$$
4. The method of claim 3, wherein the routing method comprises: the network router comprises a network entry router and a next hop router; the network entry router acquires a proper routing path and an exit computing node according to the routing table of the computing network integration, and responds to a computing request of end equipment; and the next hop router is used for receiving the calculation task data of the end equipment and submitting the calculation task data to the preprocessing calculation server.
CN202210229576.XA 2022-03-09 2022-03-09 Multi-layer computation network integrated routing system and method Active CN114827028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210229576.XA CN114827028B (en) 2022-03-09 2022-03-09 Multi-layer computation network integrated routing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210229576.XA CN114827028B (en) 2022-03-09 2022-03-09 Multi-layer computation network integrated routing system and method

Publications (2)

Publication Number Publication Date
CN114827028A CN114827028A (en) 2022-07-29
CN114827028B true CN114827028B (en) 2023-03-28

Family

ID=82529373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210229576.XA Active CN114827028B (en) 2022-03-09 2022-03-09 Multi-layer computation network integrated routing system and method

Country Status (1)

Country Link
CN (1) CN114827028B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114884861B (en) * 2022-07-11 2022-09-30 军事科学院系统工程研究院网络信息研究所 Information transmission method and system based on intra-network computation
CN115292007B (en) * 2022-09-28 2022-12-30 广东河海工程咨询有限公司 Water conservancy model simulation computing system and computing method based on cloud service
CN115955383B (en) * 2023-03-14 2023-05-16 中国电子科技集团公司第五十四研究所 Broadband low-time-delay high-precision mixed computing power signal cooperative processing system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109819030A (en) * 2019-01-22 2019-05-28 西北大学 A kind of preparatory dispatching method of data resource based on edge calculations
CN112867092A (en) * 2021-03-04 2021-05-28 嘉兴学院 Intelligent data routing method for mobile edge computing network
CN113079218A (en) * 2021-04-09 2021-07-06 网络通信与安全紫金山实验室 Service-oriented computing power network system, working method and storage medium
CN113315700A (en) * 2020-02-26 2021-08-27 中国电信股份有限公司 Computing resource scheduling method, device and storage medium
CN113709048A (en) * 2020-05-21 2021-11-26 中国移动通信有限公司研究院 Routing information sending and receiving method, network element and node equipment
WO2022028418A1 (en) * 2020-08-04 2022-02-10 中国移动通信有限公司研究院 Computing power processing network system, and service processing method and device
CN114138373A (en) * 2021-12-07 2022-03-04 吉林大学 Edge calculation task unloading method based on reinforcement learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109819030A (en) * 2019-01-22 2019-05-28 西北大学 A kind of preparatory dispatching method of data resource based on edge calculations
CN113315700A (en) * 2020-02-26 2021-08-27 中国电信股份有限公司 Computing resource scheduling method, device and storage medium
CN113709048A (en) * 2020-05-21 2021-11-26 中国移动通信有限公司研究院 Routing information sending and receiving method, network element and node equipment
WO2022028418A1 (en) * 2020-08-04 2022-02-10 中国移动通信有限公司研究院 Computing power processing network system, and service processing method and device
CN114095579A (en) * 2020-08-04 2022-02-25 中国移动通信有限公司研究院 Computing power processing network system, service processing method and equipment
CN112867092A (en) * 2021-03-04 2021-05-28 嘉兴学院 Intelligent data routing method for mobile edge computing network
CN113079218A (en) * 2021-04-09 2021-07-06 网络通信与安全紫金山实验室 Service-oriented computing power network system, working method and storage medium
CN114138373A (en) * 2021-12-07 2022-03-04 吉林大学 Edge calculation task unloading method based on reinforcement learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Edge Computing in VANETs - An Efficient and Privacy-Preserving Cooperative Downloading Scheme; Jie Cui; IEEE; 2020-06-30; Vol. 38, No. 6; full text *
Computing power network orchestration technology based on coordination of the communication cloud and the bearer network (基于通信云和承载网协同的算力网络编排技术); Cao Chang et al.; Telecommunications Science (电信科学); 2020-07-31 (No. 7); full text *
Survey of research progress on computing power networks (算力网络研究进展综述); Jia Qingmin et al.; Chinese Journal of Network and Information Security (网络与信息安全学报); 2021-10-31; Vol. 7, No. 5; full text *
Research on intelligent computing-power-aware routing and allocation strategies in edge computing power networks (边缘算力网络中智能算力感知路由分配策略研究); Sun Yukun et al.; Radio Communications Technology (无线电通信技术); 2022-01-31 (No. 1); full text *

Also Published As

Publication number Publication date
CN114827028A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN114827028B (en) Multi-layer computation network integrated routing system and method
US6535504B1 (en) Link aggregation path selection method
US5940372A (en) Method and system for selecting path according to reserved and not reserved connections in a high speed packet switching network
JP2685069B2 (en) Network access node of communication network and method for selecting connection route in the network
JP2737828B2 (en) Communication network and route selection method in the network
EP2171605B1 (en) Isp-aware peer-to-peer content exchange
US7719997B2 (en) System and method for global traffic optimization in a network
US20100027442A1 (en) Constructing scalable overlays for pub-sub with many topics: the greedy join-leave algorithm
JP2023159363A (en) Method for transmitting data packet in network of node
CN110113140B (en) Calculation unloading method in fog calculation wireless network
CN113162970A (en) Message routing method, device, equipment and medium based on publish/subscribe model
Li et al. Service home identification of multiple-source IoT applications in edge computing
Liao et al. Live: learning and inference for virtual network embedding
Nguyen et al. Adaptive caching for beneficial content distribution in information-centric networking
CN111866438B (en) User experience driven transcoding and multicast routing method and device for multi-party video conference
JP5884919B2 (en) Network device and transmission program
Xuan et al. Distributed admission control for anycast flows with QoS requirements
CN116806043A (en) Routing method, device, electronic equipment and mobile edge network
US11483899B2 (en) Network system that processes data generated by data generation device, communication control device, and communication control method
KR101310769B1 (en) Smart router and controlling method thereof, and network service system and method using thereof
He et al. Towards smart routing: Exploiting user context for video delivery in mobile networks
Nguyen et al. Joint N ode-Link Embedding Algorithm based on Genetic Algorithm in Virtualization Environment
CN100442758C (en) Multicast transfer route setting method, and multicast label switching method for implementing former method
CN115604275B (en) Virtual special server distribution method in information interaction network
JP2012175153A (en) Information circulation control device and communication system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant