CN115695561A - Message forwarding method, device and system and computer readable storage medium - Google Patents


Info

Publication number
CN115695561A
CN115695561A (application CN202110846041.2A)
Authority
CN
China
Prior art keywords
service node
service
layer
node layer
nodes
Prior art date
Legal status
Pending
Application number
CN202110846041.2A
Other languages
Chinese (zh)
Inventor
徐玲
彭书萍
陈霞
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority application: CN202110846041.2A
PCT application: PCT/CN2022/106519 (WO2023005745A1)
Publication: CN115695561A

Classifications

    • H04L 12/16 — Data switching networks; arrangements for providing special services to substations
    • H04L 67/1008 — Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1023 — Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L 67/63 — Routing a service request depending on the request content or context

Abstract

Disclosed are a packet forwarding method, device, and system, and a computer-readable storage medium, belonging to the field of communications technologies. After receiving a packet sent by a second device, a first device determines a first service node layer from multiple service node layers according to an application service identifier in the packet. Service nodes in the first service node layer have a first layer identifier, and a target application service corresponding to the application service identifier is deployed on them. The first device then selects a target service node from the first service node layer and sends the packet to the target service node. By layering the service nodes and assigning corresponding layer identifiers to them, a device that receives a packet can select the service node that processes the packet according to the layer identifiers of the service nodes, which enriches packet scheduling approaches.

Description

Message forwarding method, device and system and computer readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method, an apparatus, and a system for forwarding a packet, and a computer-readable storage medium.
Background
Wireless communication networks are built around data centers (DCs). The data centers are managed using distributed cloud technology to form a hierarchical network system of edge clouds and a central cloud. Application servers that a data center provides for user equipment (UE) are deployed on the edge clouds, so that application services are deployed closer to the UE, reducing service delay and meeting the requirements of delay-sensitive services. An infrastructure platform for implementing edge cloud deployment may be referred to as a mobile edge computing (MEC) site.
The same application service is typically deployed on multiple MEC sites, and different MEC sites work independently of each other. At present, a packet from a user equipment is generally scheduled to the MEC site closest to the user equipment, or Compute First Networking (CFN) technology is used to schedule the packet based on a load balancing principle according to the computing power of the MEC sites. These scheduling approaches are relatively limited.
Disclosure of Invention
The application provides a message forwarding method, a message forwarding device, a message forwarding system and a computer readable storage medium.
In a first aspect, a packet forwarding method is provided. The method includes: a first device receives a packet sent by a second device, where the packet includes an application service identifier. The first device determines a first service node layer from multiple service node layers according to the application service identifier, where service nodes in the first service node layer have a first layer identifier, and a target application service corresponding to the application service identifier is deployed on them. The first device selects a target service node from the first service node layer and sends the packet to the target service node.
In this application, the service nodes are layered and assigned corresponding layer identifiers, so that after receiving a packet a device can select the service node that processes the packet according to the layer identifiers of the service nodes, enriching packet scheduling approaches.
Optionally, an implementation in which the first device determines the first service node layer from the multiple service node layers according to the application service identifier includes: the first device obtains the first service node layer after determining that one or more service nodes in a second service node layer are overloaded, where service nodes in the second service node layer have a second layer identifier, and the priority of the second service node layer is higher than that of the first service node layer.
In this application, the first device may check the service node layers in descending order of priority according to the application service identifier and the layer identifiers, and determine the lower-priority first service node layer when one or more service nodes in the higher-priority second service node layer are overloaded. This helps ensure that the finally determined target service node is capable of processing the packet, improving the reliability of providing the application service.
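The layer selection with overload fallback described above can be sketched as follows. This is a minimal illustration only: all names (ServiceNode, select_layer) are assumptions for the sketch rather than definitions from this application, and it uses one possible fallback policy (moving to the next layer only when every candidate node in the current layer is overloaded).

```python
from dataclasses import dataclass

@dataclass
class ServiceNode:
    node_id: str
    layer_id: int          # layer identifier assigned to the node's layer
    app_services: set      # application service identifiers deployed on the node
    overloaded: bool = False

def select_layer(nodes, app_service_id, layer_priority):
    """Return the layer identifier of the highest-priority layer whose nodes
    deploy app_service_id and are not all overloaded, or None."""
    for layer_id in layer_priority:                      # highest priority first
        candidates = [n for n in nodes
                      if n.layer_id == layer_id and app_service_id in n.app_services]
        if candidates and not all(n.overloaded for n in candidates):
            return layer_id
    return None                                          # no layer can serve the packet

nodes = [
    ServiceNode("edge-1", layer_id=1, app_services={"app-A"}, overloaded=True),
    ServiceNode("edge-2", layer_id=1, app_services={"app-A"}, overloaded=True),
    ServiceNode("core-1", layer_id=2, app_services={"app-A"}),
]
# Layer 1 (e.g. the access layer) is fully overloaded, so layer 2 is chosen.
print(select_layer(nodes, "app-A", layer_priority=[1, 2]))  # → 2
```

The priority list models the descending-priority check: the first device only falls through to a lower-priority layer when the higher-priority one cannot serve the packet.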
Alternatively, the first service node layer is the service node layer with the highest priority among the multiple service node layers.
Optionally, that the first service node layer is the service node layer with the highest priority among the multiple service node layers includes: the first service node layer is the layer closest to the first device among the multiple service node layers, or the layer with the shortest delay to the first device.
Optionally, the multiple service node layers include an access service node layer, a convergence service node layer, and a core service node layer. Service nodes in the access service node layer are connected to access network devices, service nodes in the convergence service node layer are connected to convergence network devices, and service nodes in the core service node layer are connected to core network devices. The priority of the access service node layer is higher than that of the convergence service node layer, and the priority of the convergence service node layer is higher than that of the core service node layer.
Or, the multiple service node layers include a level 1 service node layer and a level 2 service node layer. Service nodes in the level 1 service node layer are connected to gateways in a level 1 area of the Intermediate System to Intermediate System (ISIS) protocol, service nodes in the level 2 service node layer are connected to gateways in a level 2 area of the ISIS protocol, and the priority of the level 1 service node layer is higher than that of the level 2 service node layer.
Or, the multiple service node layers include a non-backbone service node layer and a backbone service node layer. Service nodes in the non-backbone service node layer are connected to gateways (GWs) in a non-backbone area of the Open Shortest Path First (OSPF) protocol, service nodes in the backbone service node layer are connected to gateways in a backbone area of the OSPF protocol, and the priority of the non-backbone service node layer is higher than that of the backbone service node layer.
Or, each of the multiple service node layers corresponds to a delay threshold, where the threshold bounds the delay from that service node layer to the first device, and a service node layer with a smaller delay threshold has a higher priority than a service node layer with a larger delay threshold.
Or, the multiple service node layers include a main service node layer and a standby service node layer, where service nodes in the main service node layer are connected to a main gateway, service nodes in the standby service node layer are connected to a standby gateway, and the priority of the main service node layer is higher than that of the standby service node layer.
Optionally, an implementation in which the first device selects the target service node from the first service node layer includes: the first device selects, from the first service node layer, a target service node with the smallest Interior Gateway Protocol (IGP) overhead of the link between the target service node and the first device. Or, the first device selects, from the first service node layer, a target service node with the shortest delay to the first device.
In this implementation, the first device selects, from the determined service node layer, the service node with the smallest IGP link overhead to the first device or the shortest delay to the first device as the target service node, that is, the service node with the best network performance. This reduces the transmission delay of the packet as much as possible and thereby reduces the overall end-to-end delay, providing better application service for the user.
Optionally, in another implementation, the target service node is a service node in the first service node layer that has the smallest IGP overhead of the link to the first device and is not overloaded. Or, the target service node is a service node in the first service node layer that has the shortest delay to the first device and is not overloaded.
In this implementation, the first device takes as the target service node a service node that has the smallest IGP link overhead to the first device, or the shortest delay to the first device, and is not overloaded. This implementation considers both the computing power and the network performance of the service nodes: during packet scheduling, the service node with the best network performance is preferred, and when that node is overloaded, a service node with sub-optimal network performance but more spare computing resources is selected instead. The service node can therefore provide the application service effectively while the transmission delay of the packet is kept as low as possible, reducing the overall end-to-end delay and providing better application service for users.
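The target-node selection just described, combining IGP link overhead with the overload check, can be sketched as below. The function name and tuple layout are illustrative assumptions, not part of this application.

```python
def select_target_node(candidates):
    """candidates: list of (node_id, igp_cost, overloaded) tuples for the
    chosen service node layer; returns the node id or None."""
    usable = [c for c in candidates if not c[2]]   # drop overloaded nodes first
    if not usable:
        return None
    return min(usable, key=lambda c: c[1])[0]      # smallest IGP link cost wins

layer_nodes = [
    ("mec-1", 10, True),    # best link cost, but overloaded
    ("mec-2", 20, False),   # sub-optimal cost, spare computing resources
    ("mec-3", 30, False),
]
print(select_target_node(layer_nodes))  # → mec-2
```

The same shape works for delay-based selection: replace the IGP cost in each tuple with the measured delay to the first device.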
Optionally, the first device further receives an advertisement message, where the advertisement message includes computing power information corresponding to the application service deployed on a service node, an application service identifier corresponding to that application service, and the layer identifier of the service node.
Optionally, the advertisement message further includes next hop information. The advertisement message comes from a gateway connected to the service node, and the next hop information is the address of that gateway. Or, the advertisement message comes from the service node, and the next hop information is the address of the service node.
Optionally, the advertisement message is a Border Gateway Protocol (BGP) update message.
Optionally, the computing power information and the layer identifier are carried in a path attribute field of the BGP update message, and the application service identifier is carried in a network layer reachability information (NLRI) field of the BGP update message.
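A hypothetical byte layout for such a BGP update attribute is sketched below: the layer identifier and a piece of computing power information ride in a made-up optional transitive attribute. The type code (0xF0), field sizes, and the choice of CPU utilization as the metric are all illustrative assumptions; BGP itself defines no such attribute.

```python
import struct

ATTR_COMPUTE_POWER = 0xF0      # assumed private-use attribute type code

def encode_attr(layer_id, cpu_util_pct):
    # flags (optional, transitive) | type | length | layer id | utilization
    return struct.pack("!BBBBB", 0xC0, ATTR_COMPUTE_POWER, 2, layer_id, cpu_util_pct)

def decode_attr(data):
    flags, attr_type, length, layer_id, util = struct.unpack("!BBBBB", data)
    assert attr_type == ATTR_COMPUTE_POWER and length == 2
    return {"layer_id": layer_id, "cpu_util_pct": util}

attr = encode_attr(layer_id=1, cpu_util_pct=35)
print(decode_attr(attr))  # → {'layer_id': 1, 'cpu_util_pct': 35}
```

Real BGP path attributes follow the flags/type/length/value pattern shown here; the application service identifier would travel separately in the NLRI field, which this sketch omits.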
Optionally, the first device stores, according to the advertisement message, the computing power information corresponding to the application service deployed on each service node, and establishes a correspondence between the application service identifier and the layer identifiers of the service node layers.
An implementation in which the first device determines the first service node layer from the multiple service node layers according to the application service identifier includes: the first device selects, based on the correspondence between the application service identifier and the layer identifiers, the first service node layer containing service nodes on which the target application service is deployed.
Optionally, another implementation in which the first device selects the target service node from the first service node layer includes: the first device obtains a target load sharing group from the first service node layer, and then obtains the target service node from the target load sharing group.
This implementation is based on the consideration that, because user equipments access the network at different positions, the distances between the corresponding scheduling node and different service nodes may differ greatly. The concept of the load sharing group is therefore introduced: while better application service is provided for users, load balancing is achieved among multiple service nodes in the same load sharing group, improving the resource utilization of the service nodes.
Optionally, an implementation in which the first device obtains the target load sharing group from the first service node layer includes:
the first device takes, as the target load sharing group, the service nodes in the first service node layer whose delay to the first device is smaller than the sharing delay threshold corresponding to the first service node layer. Or, the first device takes the first m service nodes in the first service node layer with the shortest delay to the first device as the target load sharing group, where m is an integer greater than 1. Or, the first device takes the service nodes in the IGP domain closest to the first device in the first service node layer as the target load sharing group.
Optionally, an implementation in which the first device obtains the target service node from the target load sharing group includes: the first device takes a service node that is not overloaded in the target load sharing group as the target service node, according to the computing power information corresponding to the application service deployed on the service nodes in the group.
Optionally, the computing power information includes one or more of a heavy-load status, a quantity of computing power resources, computing power resource utilization, a quantity of device connections, a converged computing power value, or a task processing delay.
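The load-sharing-group steps above can be sketched as follows: narrow the layer to a group of nodes whose delay to the first device is below the layer's sharing delay threshold, then pick a node in the group that the computing power information marks as not overloaded. The threshold value, function names, and tuple layout are illustrative assumptions.

```python
def build_sharing_group(layer_nodes, delay_threshold_ms):
    """layer_nodes: list of (node_id, delay_ms, overloaded) tuples; keep only
    nodes whose delay is under the layer's sharing delay threshold."""
    return [n for n in layer_nodes if n[1] < delay_threshold_ms]

def pick_from_group(group):
    """Return the first node in the group that is not overloaded, or None."""
    for node_id, _delay, overloaded in group:
        if not overloaded:
            return node_id
    return None

layer = [("mec-1", 3, True), ("mec-2", 4, False), ("mec-3", 25, False)]
group = build_sharing_group(layer, delay_threshold_ms=10)   # mec-1 and mec-2
print(pick_from_group(group))  # → mec-2
```

A round-robin or weighted pick over the non-overloaded group members would realize the load balancing the text describes; the first-fit pick here is the simplest stand-in.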
In a second aspect, a packet forwarding method is provided. The method includes: a service node generates an advertisement message, where the advertisement message includes computing power information corresponding to the application service deployed on the service node, an application service identifier corresponding to that application service, and the layer identifier of the service node. The service node sends the advertisement message to the gateway to which it is connected.
Optionally, the advertisement message further includes next hop information, and the next hop information is an address of the service node.
Optionally, the advertisement message is a BGP update message.
Optionally, the computing power information and the layer identifier are carried in a path attribute field of the BGP update message, and the application service identifier is carried in a network layer reachability information field of the BGP update message.
Optionally, the advertisement message further includes a group identifier of the service node, where the group identifier is used to indicate a load sharing group to which the service node belongs.
In a third aspect, a packet forwarding method is provided. The method includes: a control device layers multiple service nodes that it manages, where service nodes belonging to the same service node layer have the same layer identifier. The control device then sends the corresponding layer identifiers to the multiple service nodes respectively.
In this application, the control device layers the service nodes and assigns corresponding layer identifiers to them, so that after receiving a packet a scheduling node can select the service node that processes the packet according to the layer identifiers of the service nodes, enriching packet scheduling approaches.
Optionally, an implementation in which the control device layers the multiple managed service nodes includes: the control device divides the multiple service nodes into an access service node layer, a convergence service node layer, and a core service node layer, where service nodes in the access service node layer are connected to access network devices, service nodes in the convergence service node layer are connected to convergence network devices, and service nodes in the core service node layer are connected to core network devices. The priority of the access service node layer is higher than that of the convergence service node layer, and the priority of the convergence service node layer is higher than that of the core service node layer.
Optionally, another implementation in which the control device layers the multiple managed service nodes includes: the control device divides the service nodes into a level 1 service node layer and a level 2 service node layer, where service nodes in the level 1 service node layer are connected to gateways in a level 1 area of the ISIS protocol, service nodes in the level 2 service node layer are connected to gateways in a level 2 area of the ISIS protocol, and the priority of the level 1 service node layer is higher than that of the level 2 service node layer.
Optionally, another implementation in which the control device layers the multiple managed service nodes includes: the control device divides the multiple service nodes into a non-backbone service node layer and a backbone service node layer, where service nodes in the non-backbone service node layer are connected to gateways in a non-backbone area of the OSPF protocol, service nodes in the backbone service node layer are connected to gateways in a backbone area of the OSPF protocol, and the priority of the non-backbone service node layer is higher than that of the backbone service node layer.
Optionally, another implementation in which the control device layers the multiple managed service nodes includes: the control device divides the service nodes into multiple service node layers according to their delay to the scheduling node, where each service node layer corresponds to a delay threshold bounding the delay from that layer to the scheduling node, and a service node layer with a smaller delay threshold has a higher priority than one with a larger delay threshold.
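The delay-based layering can be sketched as below. All names and the specific thresholds are assumptions for illustration: the control device assigns each managed node a layer identifier from a list of ascending delay thresholds, so a smaller threshold corresponds to a higher-priority layer.

```python
def assign_layers(node_delays_ms, thresholds_ms):
    """node_delays_ms: {node_id: measured delay to the scheduling node};
    thresholds_ms: ascending per-layer delay thresholds; layer ids start at 1."""
    layers = {}
    for node_id, delay in node_delays_ms.items():
        for layer_id, limit in enumerate(thresholds_ms, start=1):
            if delay <= limit:
                layers[node_id] = layer_id
                break
        else:
            layers[node_id] = len(thresholds_ms)   # beyond all thresholds: last layer
    return layers

print(assign_layers({"a": 2, "b": 8, "c": 40}, thresholds_ms=[5, 20, 50]))
# → {'a': 1, 'b': 2, 'c': 3}
```

After computing this mapping, the control device would send each node its layer identifier, matching the third aspect above.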
Optionally, a further implementation in which the control device layers the multiple managed service nodes includes: the control device divides the multiple service nodes into a main service node layer and a standby service node layer, where the priority of the main service node layer is higher than that of the standby service node layer.
In a fourth aspect, a packet forwarding apparatus is provided, where the packet forwarding apparatus is applied to a first device, and the first device is configured to execute the method in the first aspect or any one of the possible designs of the first aspect. In particular, the apparatus comprises means for performing the method in the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, a packet forwarding apparatus is provided, which is applied to a service node, where the service node is configured to execute the method in the second aspect or any one of the possible designs of the second aspect. In particular, the service node comprises means for performing the method of the second aspect or any one of the possible designs of the second aspect.
In a sixth aspect, a packet forwarding apparatus is provided, which is applied to a control device, where the control device is configured to execute the method in the third aspect or any one of the possible designs of the third aspect. In particular, the control device comprises means for performing the method of the third aspect or any one of the possible designs of the third aspect.
In a seventh aspect, an apparatus is provided, comprising: a processor and a memory;
the memory is configured to store a computer program, where the computer program comprises program instructions;
the processor is configured to invoke the computer program to implement the method in the first aspect and its embodiments.
In an eighth aspect, a service node is provided, comprising: a processor and a memory;
the memory is configured to store a computer program, where the computer program comprises program instructions;
the processor is configured to invoke the computer program to implement the method in the second aspect and its embodiments.
In a ninth aspect, a control device is provided, comprising: a processor and a memory;
the memory is configured to store a computer program, where the computer program comprises program instructions;
the processor is configured to invoke the computer program to implement the method in the third aspect and its embodiments.
In a tenth aspect, a packet forwarding system is provided. The system includes: a first device comprising the apparatus of the fourth aspect, and a plurality of service nodes each comprising the apparatus of the fifth aspect.
Optionally, the system further includes a control device for managing the plurality of service nodes, where the control device comprises the apparatus of the sixth aspect.
In an eleventh aspect, there is provided a computer readable storage medium having stored thereon instructions which, when executed by a processor of a first device, implement the method of the first aspect and its embodiments above; or, when executed by a processor of a service node, to implement the method of the second aspect and its embodiments; alternatively, the instructions, when executed by a processor of a control device, implement the methods in the third aspect and its embodiments described above.
In a twelfth aspect, a chip is provided, which comprises programmable logic circuits and/or program instructions, and when the chip runs, the method in the first aspect and its embodiments is implemented, or the method in the second aspect and its embodiments is implemented, or the method in the third aspect and its embodiments is implemented.
Drawings
Fig. 1 is a schematic diagram of an application scenario related to a message forwarding method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a hierarchical network system of an edge cloud plus a center cloud according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario related to another packet forwarding method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a hierarchical deployment of a service node according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hierarchical deployment of another service node according to an embodiment of the present application;
fig. 6 is a schematic diagram of a hierarchical deployment of another service node provided in an embodiment of the present application;
fig. 7 is a schematic diagram of a hierarchical deployment of another service node according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hierarchical deployment of a further service node according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an advertisement message obtained based on a BGP update message extension according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another advertisement message obtained based on a BGP update message extension according to an embodiment of the present application;
fig. 11 is a schematic flowchart of a message forwarding method according to an embodiment of the present application;
fig. 12 is a schematic diagram of a message scheduling scenario provided in an embodiment of the present application;
fig. 13 is a schematic diagram of another packet scheduling scenario provided in the embodiment of the present application;
fig. 14 is a schematic structural diagram of a first device provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of another first device according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a further first device provided in an embodiment of the present application;
fig. 17 is a schematic structural diagram of yet another first device provided in an embodiment of the present application;
fig. 18 is a schematic structural diagram of a service node according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of another service node provided in an embodiment of the present application;
fig. 20 is a schematic structural diagram of another service node provided in an embodiment of the present application;
fig. 21 is a schematic structural diagram of another service node provided in an embodiment of the present application;
fig. 22 is a schematic structural diagram of a control device according to an embodiment of the present application;
fig. 23 is a schematic structural diagram of another control device provided in an embodiment of the present application;
fig. 24 is a schematic structural diagram of another control device provided in the embodiment of the present application;
fig. 25 is a schematic structural diagram of another control device provided in an embodiment of the present application;
fig. 26 is a schematic structural diagram of a message forwarding system according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an application scenario related to a packet forwarding method provided in an embodiment of the present application. As shown in fig. 1, the application scenario includes: user equipment 101, gateways 102A-102C (collectively, gateways 102), and service nodes 103A-103C (collectively, service nodes 103). Service node 103A is connected to gateway 102A, service node 103B is connected to gateway 102B, and service node 103C is connected to gateway 102C. The user equipment 101 is a device deployed on the user side. Optionally, referring to fig. 1, the gateways 102 and the service nodes 103 are deployed on the application service provider side, and the user equipment 101 and the gateways 102 communicate through an operator network. The number and arrangement of the devices in fig. 1 are only an exemplary illustration and do not limit the application scenarios of the method provided in the embodiments of the present application. For example, the gateways 102 may instead be deployed on the operator side, that is, the gateways 102 are located in the operator network while the service nodes 103 are deployed on the application service provider side. As another example, both the gateways 102 and the service nodes 103 may be deployed on the operator side.
The user equipment 101 may be a mobile phone, a computer, a smart wearable device, or the like. A client is installed on the user equipment 101, and the user equipment 101 can initiate a request based on the installed client to obtain a corresponding application service. In the embodiments of the present application, an application service refers to a service provided for user equipment by any of various types of applications (APPs), such as a computing processing service (especially an intensive computing service), an application online service, or a content storage service.
In the application scenario shown in fig. 1, according to the deployment location relative to the user equipment 101, the gateways 102 may be divided into the gateway 102A close to the user equipment 101 and the gateways 102B and 102C far from the user equipment 101. In this embodiment, the gateway 102A close to the user equipment 101 may serve as a scheduling node configured to schedule traffic; specifically, after receiving a message from the user equipment 101, it determines a target service node to process the message and sends the message to that target service node. Optionally, when the gateway 102 is deployed on the application service provider side, the gateway 102A close to the user equipment 101 may also be referred to as an ingress node through which the user equipment 101 accesses the application service provider network. Of course, another gateway 102 may also serve as the scheduling node, which is not limited in the embodiment of the present application.
The service node 103 may be deployed in one server or in a server cluster composed of a plurality of servers. The service node 103 is used to provide a service platform for applications. At least one instance is deployed in each service node 103 to provide application services for the user equipment 101. An instance refers to a specific deployment of an application service on a particular service node; one application service may therefore correspond to multiple instances. In the embodiment of the present application, an application service may be distributed across multiple service nodes 103. For example, in the application scenario shown in fig. 1, a first instance is deployed in the service node 103A, a second instance is deployed in the service node 103B, and a third instance is deployed in the service node 103C, where the first instance, the second instance, and the third instance are instances of the same application service (the target application service). In the embodiment of the present application, saying that an application service is deployed in a service node means that an instance of the application service is deployed in that service node.
Different service nodes 103 may be configured with computing and storage resources (which may be collectively referred to as computing resources) of the same or different scales. The computing resources include Central Processing Unit (CPU) resources, Graphics Processing Unit (GPU) resources, Tensor Processing Unit (TPU) resources, and the like. The storage resources include memory resources and/or disk resources, and the like.
The service node 103 may store an application service identifier for each application service deployed in the service node 103. The application service identifier uniquely identifies the corresponding application service, so as to distinguish different application services. In the embodiment of the present application, each application service may be uniformly assigned an application service identifier that uniquely identifies it. The application service identifier may be in the form of an Internet Protocol (IP) address, or may take another form, which is not limited in this embodiment of the present application.
Optionally, the gateway 102 has computing power awareness capability and is able to perceive the computing power of the service node 103. In this case, the application service provider network is a computing-first network (CFN), and the gateway 102 is a CFN node. Optionally, the service node 103 publishes its computing power information to the CFN node in real time, or the CFN node periodically acquires real-time computing power information from the service node 103. Besides a gateway, the CFN node may also be a router, a switch, or the like.
Optionally, in an edge cloud deployment scenario, the gateway 102 may be a data center gateway, and the service node 103 may be deployed on an edge cloud; for example, the service node 103 may be a multi-access edge computing (MEC) site. For example, fig. 2 is a schematic structural diagram of a hierarchical network system of an edge cloud plus a center cloud provided in an embodiment of the present application. The cloud deployment scenario shown in fig. 2 corresponds to the application scenario shown in fig. 1: the gateway 102 in fig. 1 is mapped to a Data Center Gateway (DCGW) in fig. 2, and the service node 103 in fig. 1 is mapped to an edge cloud or a center cloud in fig. 2.
Optionally, fig. 3 is a schematic view of an application scenario related to another packet forwarding method provided in the embodiment of the present application. Unlike the application scenario shown in fig. 1, the application scenario shown in fig. 3 also includes a control device. As shown in fig. 3, the application scenario includes: user equipment 301, gateways 302A-302C (collectively referred to as gateways 302), serving nodes 303A-303C (collectively referred to as serving nodes 303), and control equipment 304. The user equipment 301, the gateway 302, and the service node 303 in fig. 3 are similar to the user equipment 101, the gateway 102, and the service node 103 in fig. 1, respectively, please refer to the above description of the user equipment 101, the gateway 102, and the service node 103 in fig. 1, and are not repeated here.
The control device 304 may be a cloud-hosted platform or a Software Defined Network (SDN) controller, or the like. The control device 304 is connected to the service node 303. The control device 304 is used to manage and control the service node 303.
In this scheme, the control device may layer a plurality of managed service nodes to obtain a plurality of service node layers, and assigns a layer identifier to each service node layer, where service nodes belonging to the same service node layer have the same layer identifier and service nodes belonging to different service node layers have different layer identifiers. The control device then sends the corresponding layer identifiers to the plurality of service nodes. A service node layer may be understood as a set of one or more service nodes.
Optionally, the control device may layer the plurality of service nodes in multiple modes. The embodiment of the present application takes the following five layering modes as examples, without excluding other layering modes.
In the first layering mode, the control device layers the service nodes according to their locations. The implementation process in which the control device layers the plurality of managed service nodes includes: the control device divides the plurality of service nodes into an access service node layer, a convergence service node layer, and a core service node layer. A service node in the access service node layer is connected to an access network device, a service node in the convergence service node layer is connected to a convergence network device, and a service node in the core service node layer is connected to a core network device. The priority of the access service node layer is higher than that of the convergence service node layer, and the priority of the convergence service node layer is higher than that of the core service node layer.
Alternatively, in a Radio Access Network (RAN) IP (IP RAN) network, the access network device may be a base station site gateway (CSG), the aggregation network device may be an Access Service Gateway (ASG), and the core network device may be a radio network controller site gateway (RSG). For example, fig. 4 is a schematic diagram of hierarchical deployment of a service node according to an embodiment of the present application. As shown in fig. 4, the scheduling node is CSG, the serving node connected to CSG belongs to the first layer (layer 1), the serving node connected to ASG belongs to the second layer (layer 2), and the serving node connected to RSG belongs to the third layer (layer 3).
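The location-based layering mode above can be sketched as a simple mapping from a service node's attachment point to a layer identifier. This is an illustrative sketch only; the node names, the data shapes, and the numeric layer identifiers are assumptions, and the patent does not prescribe any data structure.

```python
# Priority order from the text: access > convergence (aggregation) > core.
ATTACHMENT_TO_LAYER = {
    "CSG": 1,  # access network device      -> access service node layer
    "ASG": 2,  # convergence network device -> convergence service node layer
    "RSG": 3,  # core network device        -> core service node layer
}

def assign_layer_ids(nodes):
    """Map each service node to a layer ID based on the type of
    network device it attaches to."""
    return {name: ATTACHMENT_TO_LAYER[attachment]
            for name, attachment in nodes.items()}

layers = assign_layer_ids({"node-A": "CSG", "node-B": "ASG", "node-C": "RSG"})
```

A lower layer number here stands for a higher scheduling priority, matching the ordering stated in the text.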
In the second layering mode, the control device layers the service nodes based on Interior Gateway Protocol (IGP) deployment, where the IGP used by the network is the Intermediate System to Intermediate System (ISIS) protocol. The implementation process in which the control device layers the plurality of managed service nodes includes: the control device divides the service nodes into a level1 service node layer and a level2 service node layer. A service node in the level1 service node layer is connected to a gateway in the level1 area of the ISIS protocol, and a service node in the level2 service node layer is connected to a gateway in the level2 area of the ISIS protocol. The priority of the level1 service node layer is higher than that of the level2 service node layer.
Optionally, the level1 area of the ISIS protocol includes access network equipment, and the level2 area of the ISIS protocol includes convergence network equipment and core network equipment. Taking an IP RAN network as an example, for example, fig. 5 is a schematic diagram of a hierarchical deployment of another service node provided in the embodiment of the present application. As shown in fig. 5, the scheduling node is CSG, the serving node connected to CSG belongs to a first layer (layer 1), and the serving node connected to ASG and the serving node connected to RSG belong to a second layer (layer 2).
In the third layering mode, the control device layers the service nodes based on IGP deployment, where the IGP used by the network is the Open Shortest Path First (OSPF) protocol. The implementation process in which the control device layers the plurality of managed service nodes includes: the control device divides the plurality of service nodes into a non-backbone service node layer and a backbone service node layer. A service node in the non-backbone service node layer is connected to a gateway in a non-backbone area (non-area 0) of the OSPF protocol, and a service node in the backbone service node layer is connected to a gateway in the backbone area (area 0) of the OSPF protocol. The priority of the non-backbone service node layer is higher than that of the backbone service node layer.
Optionally, non-area 0 of the OSPF protocol includes an access network device, and area 0 of the OSPF protocol includes an aggregation network device and a core network device. Taking an IP RAN network as an example, for example, fig. 6 is a schematic diagram of hierarchical deployment of another service node provided in the embodiment of the present application. As shown in fig. 6, the scheduling node is CSG, the serving node connected to CSG belongs to the first layer (layer 1), and the serving node connected to ASG and the serving node connected to RSG belong to the second layer (layer 2).
In the fourth layering mode, the control device layers the service nodes based on the time delay between each service node and the scheduling node. The implementation process in which the control device layers the plurality of managed service nodes includes: the control device divides the plurality of service nodes into a plurality of service node layers according to the time delays from the plurality of service nodes to the scheduling node. Each service node layer corresponds to a delay threshold, which bounds the time delay from the service nodes in that layer to the scheduling node. A service node layer with a smaller delay threshold has a higher priority than a service node layer with a larger delay threshold.
The delay threshold corresponding to a service node layer may be understood as an upper limit on the delay from a service node in that layer to the scheduling node. For example, the control device divides the plurality of service nodes into 3 service node layers based on 3 delay thresholds. The 3 delay thresholds are T1, T2, and T3, respectively, where T1 < T2 < T3. The 3 service node layers are service node layer 1, service node layer 2, and service node layer 3, respectively. Service node layer 1 corresponds to T1, service node layer 2 corresponds to T2, and service node layer 3 corresponds to T3, which means that the delay from a service node in service node layer 1 to the scheduling node is not greater than T1, the delay from a service node in service node layer 2 to the scheduling node is greater than T1 and not greater than T2, and the delay from a service node in service node layer 3 to the scheduling node is greater than T2 and not greater than T3.
Optionally, the control device may layer the service nodes according to a Network Cloud Engine (NCE) delay map, for example, taking the scheduling node as an axis, measure the delay from each service node to the scheduling node, and then set a plurality of delay thresholds according to the delay from the plurality of service nodes to the scheduling node, so as to divide the plurality of service nodes into a plurality of service node layers.
Taking an IP RAN network as an example, for example, fig. 7 is a schematic diagram of a hierarchical deployment of another service node provided in the embodiment of the present application. As shown in fig. 7, the scheduling node is CSG, the serving node whose delay to the scheduling node is not greater than T1 belongs to the first layer (layer 1), the serving node whose delay to the scheduling node is greater than T1 and not greater than T2 belongs to the second layer (layer 2), and the serving node whose delay to the scheduling node is greater than T2 and not greater than T3 belongs to the third layer (layer 3).
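The delay-based layering described above can be sketched as follows. The threshold values, node names, and millisecond units are assumptions for illustration; the patent only specifies the ordering T1 < T2 < T3 and the per-layer upper bounds.

```python
def layer_by_delay(delays_ms, thresholds_ms):
    """Assign each service node to the first layer whose delay threshold
    (upper bound on the delay to the scheduling node) it does not exceed."""
    layers = {}
    for node, delay in delays_ms.items():
        for layer_id, threshold in enumerate(sorted(thresholds_ms), start=1):
            if delay <= threshold:
                layers[node] = layer_id
                break
        else:
            # A node farther than every threshold falls outside all layers.
            raise ValueError(f"{node}: delay {delay} ms exceeds all thresholds")
    return layers

# T1 < T2 < T3, as in the example above (values are hypothetical).
T1, T2, T3 = 5, 20, 50
assignment = layer_by_delay({"n1": 3, "n2": 12, "n3": 40}, [T1, T2, T3])
```

Each node lands in exactly one layer, mirroring the "greater than T1 and not greater than T2" intervals in the text.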
In the fifth layering mode, the implementation process in which the control device layers the plurality of managed service nodes includes: the control device divides the plurality of service nodes into a main service node layer and a standby service node layer. A service node in the main service node layer is connected to the main gateway, and a service node in the standby service node layer is connected to the standby gateway. The priority of the main service node layer is higher than that of the standby service node layer.
The standby gateway may be regarded as a redundant gateway, and the standby service node layer as a redundant service node layer. When the computing resources of the main service node layer are sufficient, the control device may put the service nodes in the standby service node layer into a dormant state; correspondingly, if no service node communicates with the standby gateway for a long time, the standby gateway also enters the dormant state. When the computing resources of the main service node layer are strained, the control device may wake up the service nodes in the standby service node layer to ensure that the application service can still be provided to the user equipment.
For example, fig. 8 is a schematic diagram of a hierarchical deployment of a further service node provided in the embodiment of the present application. As shown in fig. 8, the service node connected to the active gateway belongs to the first layer (layer 1), and the service node connected to the standby gateway belongs to the second layer (layer 2).
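The sleep/wake policy of the fifth mode can be sketched as a small reconciliation step run by the control device. The utilization threshold, the notion of a single aggregate utilization figure for the main layer, and the state names are all assumptions for illustration; the patent does not define how "sufficient" or "strained" resources are measured.

```python
UTILIZATION_LIMIT = 0.9  # assumed threshold for "resources are strained"

def standby_should_be_awake(main_layer_utilization):
    """True when the main service node layer's resources are strained."""
    return main_layer_utilization >= UTILIZATION_LIMIT

def reconcile(main_layer_utilization, standby_nodes):
    """Return the desired state ('awake' or 'dormant') for each node in
    the standby service node layer."""
    desired = ("awake" if standby_should_be_awake(main_layer_utilization)
               else "dormant")
    return {node: desired for node in standby_nodes}
```

For example, `reconcile(0.95, ["s1"])` would wake the standby node, while `reconcile(0.3, ["s1"])` would leave it dormant.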
Alternatively, the above process of layering a plurality of service nodes may also be performed by the scheduling node. For example, in an application scenario as shown in fig. 1, multiple service nodes may be layered by a scheduling node.
In the embodiment of the present application, the control device layers the service nodes and assigns the corresponding layer identifiers to them, so that after receiving a message, the scheduling node can select the service node that processes the message according to the layer identifiers of the service nodes, which enriches message scheduling modes.
Further, after receiving the layer identifier sent by the control device, the service node generates an advertisement message, where the advertisement message includes computing power information corresponding to the application service deployed by the service node, an application service identifier corresponding to the application service deployed by the service node, and the layer identifier of the service node. The service node sends the advertisement message to a gateway to which the service node is connected.
Alternatively, the application service identifier corresponding to the application service may be a specific value or a specific identifier for the application service, so as to distinguish different application services. Optionally, the application service identifier corresponding to the application service is an anycast IP (anycast IP) address corresponding to the application service. The corresponding anycast IP addresses of different application services are different, and the corresponding anycast IP addresses of the same application service deployed in different service nodes are the same.
Optionally, the computing power information includes one or more of: a heavy/light load state, an amount of computing resources, a utilization of computing resources, a number of device connections, a fused computing power value, or a task processing delay.
The heavy/light load state of an application service indicates whether the instance of the application service in the service node is in a heavy load state or a light load state. The heavy load state reflects that the load of the instance is large, that is, the computing resources corresponding to the application service are exhausted or about to be exhausted; the light load state reflects that the load of the instance is small, that is, the available computing resources corresponding to the application service are sufficient.
The amount of computing resources corresponding to an application service may include the amount of CPU, GPU, or TPU resources allocated to the application service in the service node. The utilization of computing resources corresponding to an application service may include the utilization of the CPU, GPU, or TPU resources allocated to the application service by the service node. The number of device connections corresponding to an application service refers to the number of user devices that access the service node and request the application service. The amount of computing resources, the utilization of computing resources, and the number of device connections may be collectively referred to as detailed computing resource indicators.
The fused computing power value corresponding to an application service is a measure of its computing power. For example, the fused computing power value may be computed from detailed computing resource indicators such as the amount of computing resources, the utilization of computing resources, or the number of device connections corresponding to the application service. The fused computing power value is negatively correlated with the amount of computing resources corresponding to the application service, positively correlated with the utilization of those computing resources, and positively correlated with the number of device connections corresponding to the application service.
The task processing delay corresponding to an application service may be the average, maximum, or minimum processing delay of the service node for messages requesting the application service within a period of time.
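One way to realize the stated correlations of the fused computing power value is a simple weighted combination. The formula and weights below are illustrative assumptions only; the patent specifies just the monotonic relationships (decreasing in resource amount, increasing in utilization and connection count), not a concrete formula.

```python
def fused_power_value(resource_amount, utilization, connections,
                      w_res=1.0, w_util=1.0, w_conn=0.01):
    """Illustrative fused computing power value: grows with utilization
    and with the number of device connections, shrinks as the amount of
    computing resources grows. A larger value indicates a more loaded,
    less attractive service node."""
    if resource_amount <= 0:
        raise ValueError("resource_amount must be positive")
    return (w_util * utilization          # positively correlated
            + w_conn * connections        # positively correlated
            + w_res / resource_amount)    # negatively correlated
```

Any formula with the same monotonicity would satisfy the description; this one is chosen only for simplicity.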
The layer identification of the service node is used for indicating the service node layer to which the service node belongs. For example, in this embodiment of the present application, the service node has a first layer identifier, which indicates that the service node belongs to a first service node layer. The service node has a second tier identification indicating that the service node belongs to a second service node tier.
Optionally, the advertisement message sent by the service node to the gateway to which it is connected further includes next hop information. The next hop information is the address of the service node, and is used by the gateway connected to the service node to generate routing table entries. For example, suppose the service node is deployed with application service A, application service B, and application service C. According to the computing power information corresponding to each application service, the gateway determines that application service A deployed on the service node is not overloaded, that application service B is overloaded, and that application service C is overloaded. The anycast IP address corresponding to application service A is IP1, that of application service B is IP2, and that of application service C is IP3. The layer identifier of the service node is layer ID1, and the IP address of the service node is 1.1.1.1. After receiving the advertisement message sent by the service node, the gateway connected to the service node may generate routing table entries as shown in table 1.
TABLE 1
(Table 1 is rendered as an image in the original publication. It lists one routing table entry per advertised application service, associating each anycast IP address (IP1, IP2, IP3) with the corresponding load state, the layer identifier layer ID1, the next hop 1.1.1.1, and an egress interface.)
The next hop refers to the next device to which a message whose destination address is the corresponding anycast IP address needs to be forwarded. The egress interface refers to the interface through which the device sends the message to the next hop.
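A hedged sketch of how a gateway might turn a received advertisement message into routing table entries like those of Table 1 follows. The dictionary field names and the `egress_interface` value are assumptions for illustration; the patent does not prescribe a data structure.

```python
def build_route_entries(advertisement, egress_interface):
    """Create one routing entry per advertised application service:
    destination anycast IP -> load state, layer ID, next hop, egress
    interface."""
    table = {}
    for service in advertisement["services"]:
        table[service["anycast_ip"]] = {
            "overloaded": service["overloaded"],
            "layer_id": advertisement["layer_id"],
            "next_hop": advertisement["next_hop"],
            "egress_interface": egress_interface,
        }
    return table

# Values taken from the worked example in the text.
adv = {
    "next_hop": "1.1.1.1",   # address of the service node
    "layer_id": "layer ID1",
    "services": [
        {"anycast_ip": "IP1", "overloaded": False},  # application service A
        {"anycast_ip": "IP2", "overloaded": True},   # application service B
        {"anycast_ip": "IP3", "overloaded": True},   # application service C
    ],
}
routes = build_route_entries(adv, egress_interface="if0")
```

Each anycast IP maps to the same next hop and layer identifier because all three services live on the same service node.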
Optionally, the advertisement message is a BGP update message. The computing power information corresponding to the application service deployed by the service node and the layer identifier of the service node are carried in the route attribute (path attributes) field of the BGP update message. The application service identifier corresponding to the application service is carried in the network layer reachability information (NLRI) field of the BGP update message.
For example, fig. 9 and fig. 10 are schematic structural diagrams of advertisement messages obtained by extending a BGP update message according to an embodiment of the present application. As shown in fig. 9 and fig. 10, the BGP update message includes an Ethernet header, an IP header, a Transmission Control Protocol (TCP) header, a BGP packet, and a frame check sequence (FCS). The BGP packet includes a BGP header and a BGP message field. The BGP header includes a marker field, a length field, and a type field (not shown in the figure).
Fig. 9 shows the format of the BGP message field for publishing routes defined in the request for comments (RFC) document number 4271 (RFC 4271). As shown in fig. 9, the BGP message field includes a withdrawn routes length field, a withdrawn routes field, a total path attribute length field, a route attribute field, and an NLRI field. Fig. 10 shows the format of the BGP message field for publishing routes defined in the RFC 4760 document. As shown in fig. 10, the BGP message field includes an address family identifier field, a subsequent address family identifier field, a length of next hop network address field, a next hop network address field, a reserved field, an NLRI field, and a route attribute field. If the communication scenario in the embodiment of the present application is not a virtual private network (VPN) scenario and the routes and next hop information exchanged between different devices are both of the Internet Protocol version 4 (IPv4) type, the BGP message format shown in fig. 9 or fig. 10 may be used. If the communication scenario in the embodiment of the present application is a VPN scenario, or the routes and next hop information exchanged between different devices are of the IPv6 type, the BGP message format shown in fig. 10 may be used.
Optionally, the computing power information corresponding to the application service deployed by the service node may be encoded as a type-length-value (TLV) or type-value (TV) structure and carried in the route attribute field. For example, the route attribute field may be extended with: a flag bit (flag) field with a length of 1 byte, a type field with a length of 1 byte, a length field with a length of 1 byte, and a payload (value) field with a length of 4 bytes. The flag bit field is used to mark the route attribute. The payload field is used to carry the computing power information. The type field is used to indicate that the content carried by the payload field is computing power information.
Optionally, the layer identifier of the service node may be encoded as a TLV or TV structure and carried in the route attribute field. For example, the route attribute field may be extended with: a flag bit field with a length of 1 byte, a type field with a length of 1 byte, a length field with a length of 1 byte, and a payload field with a length of 1 byte. The flag bit field is used to mark the route attribute. The payload field is used to carry the layer identifier. The type field is used to indicate that the content carried by the payload field is a layer identifier.
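The flag/type/length/value layout described above can be sketched with a small encoder and decoder. The type code and flag value below are invented for illustration; the patent does not assign concrete code points.

```python
import struct

TYPE_LAYER_ID = 0x01  # assumed type code for "payload is a layer identifier"

def encode_tlv(flags, tlv_type, value):
    """Pack a 1-byte flags field, 1-byte type, 1-byte length, then the
    value bytes, matching the extended route-attribute layout above."""
    return struct.pack("!BBB", flags, tlv_type, len(value)) + value

def decode_tlv(data):
    """Unpack the 3-byte header and return (flags, type, value)."""
    flags, tlv_type, length = struct.unpack("!BBB", data[:3])
    return flags, tlv_type, data[3:3 + length]

# Encode layer ID 1 in a 1-byte payload, then round-trip it.
encoded = encode_tlv(0x80, TYPE_LAYER_ID, bytes([1]))
flags, tlv_type, value = decode_tlv(encoded)
```

The same encoder would serve for the computing power information TLV by using a different type code and a 4-byte payload.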
Alternatively, when the BGP message format shown in fig. 9 is used, the next hop information is carried in the route attribute field. Alternatively, when the BGP message format shown in fig. 10 is used, the next hop information is carried in the next hop network address field.
Optionally, the service node periodically sends an advertisement message to the gateway to which it is connected, so as to provide that gateway with the computing power information corresponding to the deployed application services. Alternatively, whenever the computing power information corresponding to an application service deployed by the service node, the application service identifier corresponding to that application service, or the layer identifier of the service node is updated, the service node sends an advertisement message containing the latest information to the gateway to which it is connected.
In the scheme of the present application, the control device may also group a plurality of service nodes of the same service node layer to obtain a plurality of load sharing groups. For example, the control device regards service nodes in the same service node layer that are connected to the same IGP domain as one load sharing group. The control device then sends the corresponding group identifiers to the plurality of service nodes. The advertisement message sent by a service node to the gateway to which it is connected may further include the group identifier of the service node, where the group identifier indicates the load sharing group to which the service node belongs.
Optionally, the group identifier of the service node may be encoded as a TLV or TV structure and carried in the route attribute field. For example, the route attribute field may be extended with: a flag bit field with a length of 1 byte, a type field with a length of 1 byte, a length field with a length of 1 byte, and a payload field with a length of 1 byte. The flag bit field is used to mark the route attribute. The payload field is used to carry the group identifier. The type field is used to indicate that the content carried by the payload field is a group identifier.
The embodiment of the application provides a message forwarding method based on a scheme of layering a plurality of service nodes. Fig. 11 is a flowchart illustrating a message forwarding method according to an embodiment of the present application. The method may be applied in an application scenario as shown in any of fig. 1 to 3. As shown in fig. 11, the method includes:
step 1101, the second device sends a message to the first device, where the message includes an application service identifier.
The application service identifier in the message is used to indicate the application service that the message requests to access. The first device is a scheduling node, and may be, for example, the gateway 102A shown in fig. 1, the DCGW1 shown in fig. 2, or the gateway 302A shown in fig. 3. Optionally, the second device is a user device, for example, the user device 101 shown in fig. 1, the user device shown in fig. 2, or the user device 301 shown in fig. 3. Alternatively, the second device may be a device located between the user equipment and the first device, configured to forward messages sent by the user equipment to the first device. The second device may send the message to the first device directly, or indirectly through other devices.
Optionally, the message further includes the content whose processing is requested. For example, the message may be a computing request message that includes content to be computed and requests computing processing of that content. As another example, the message may be an online request message that includes authentication information and requests that the application go online. As another example, the message may be a storage request message that includes content to be stored and requests that this content be stored in a service node. The message may also be another type of service message; the type of the message is not limited in the embodiment of the present application.
Step 1102, the first device determines a first service node layer from the plurality of service node layers according to the application service identifier.
A service node in the first service node layer has a first layer identification. And the service node in the first service node layer is deployed with a target application service corresponding to the application service identifier. For example, the service node layer corresponding to the first layer identifier includes a service node a, a service node B, and a service node C, that is, the service node a, the service node B, and the service node C have the first layer identifier, where the service node a and the service node B have deployed the target application service, and the service node C has not deployed the target application service, and then the first service node layer determined by the first device includes the service node a and the service node B.
Optionally, after receiving advertisement messages, the first device stores the computing power information corresponding to the application services deployed by the service node in each advertisement message, and establishes a correspondence between application service identifiers and the layer identifiers of service node layers, for example, as shown in table 1. The first device then selects, based on this correspondence, a first service node layer that includes service nodes deployed with the target application service.
Optionally, the first device stores in advance a correspondence between the layer identifiers of the service node layers and the priorities of the service node layers. The first device may examine the service node layers in descending order of priority according to the application service identifier and the layer identifiers until a service node capable of processing the packet is obtained from a determined service node layer. That is, the first device first determines the service node layer with the highest priority from the plurality of service node layers according to the application service identifier and the layer identifiers; if that layer contains no service node capable of processing the packet, the first device determines the service node layer with the next-highest priority, and so on, until a service node capable of processing the packet is obtained. Accordingly, there are the following two possible situations in which the first device determines the first service node layer in step 1102.
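The priority-ordered layer lookup described above can be sketched as follows. This is an illustrative sketch only: the function name and the dictionary layout (the priority, layer_id, nodes, services, and overloaded fields) are assumptions made for the example, not structures defined in this application.

```python
def select_layer(layers, app_service_id):
    """Return the highest-priority layer that contains a service node
    deployed with the target application service and able to process
    the packet, together with those candidate nodes."""
    # Walk layers from highest to lowest priority.
    for layer in sorted(layers, key=lambda l: l["priority"], reverse=True):
        # Keep only nodes that deploy the target application service
        # and are still able to process the packet.
        candidates = [n for n in layer["nodes"]
                      if app_service_id in n["services"] and not n["overloaded"]]
        if candidates:
            return layer["layer_id"], candidates
    return None, []
```

In this sketch the fallback to the next-highest-priority layer happens simply by continuing the loop when a layer yields no usable node.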
In a first possible scenario, the first service node layer is the service node layer with the highest priority among the plurality of service node layers. Optionally, that the first service node layer has the highest priority may mean that the first service node layer is the service node layer closest to the first device among the multiple service node layers, or that the first service node layer is the service node layer with the shortest delay to the first device among the multiple service node layers.
That the first service node layer is closest to the first device means that the service nodes in the first service node layer are closer to the first device than the service nodes in the other service node layers. For example, the first service node layer may be an access service node layer, the level 1 service node layer corresponding to the IS-IS protocol, or a non-backbone service node layer corresponding to OSPF.
That the first service node layer has the shortest delay to the first device may be understood as meaning that the delay from the service nodes in the first service node layer to the first device is shorter than the delay from the service nodes in the other service node layers to the first device. For example, the service nodes in the first service node layer belong to the first layer shown in fig. 7.
Optionally, the delay from a service node to the first device may be measured by using ping, traceroute, the two-way active measurement protocol (TWAMP), in-situ operations, administration and maintenance (IOAM), or similar techniques.
Optionally, the priority of the service node layer may also be unrelated to both the distance from the service node layer to the first device and the time delay from the service node layer to the first device, for example, the service node layer with the highest priority may be the active service node layer.
In a second possible case, the implementation in which the first device determines the first service node layer from the multiple service node layers according to the application service identifier includes: the first device acquires the first service node layer after determining that one or more service nodes in a second service node layer are overloaded. The service nodes in the second service node layer have a second layer identifier, and the priority of the second service node layer is higher than that of the first service node layer. Here, service node overload may mean that the total computing power resources on the service node are exhausted or about to be exhausted, that is, the first device acquires the first service node layer after determining that the total computing power resources on one or more service nodes in the second service node layer are exhausted or about to be exhausted. Alternatively, service node overload may mean that the computing power resources corresponding to the target application service deployed by the service node are exhausted or about to be exhausted, that is, the first device acquires the first service node layer after determining that the computing power resources corresponding to the target application service deployed by one or more service nodes in the second service node layer are exhausted or about to be exhausted. The embodiments of the present application are described by taking the latter meaning, namely that the computing power resources corresponding to the target application service deployed by the service node are exhausted or about to be exhausted, as an example.
Optionally, the first device determines whether a service node is overloaded according to the computing power information corresponding to the target application service deployed by the service node in the second service node layer. Optionally, the computing power information includes one or more of a heavy/light load state, an amount of computing power resources, a utilization rate of computing power resources, a number of device connections, a fused computing power value, or a task processing delay.
Optionally, the computing power information includes a heavy/light load state, and the first device may determine whether a service node is overloaded according to the heavy/light load state of the target application service deployed by that service node in the second service node layer. If the target application service deployed by the service node is in the heavy-load state, the first device determines that the service node is overloaded; if the target application service deployed by the service node is in the light-load state, the first device determines that the service node is not overloaded.
Optionally, the computing power information includes detailed computing power resource indicators such as the amount of computing power resources, the utilization rate of the computing power resources, and the number of device connections, and the first device may determine whether a service node is overloaded according to the detailed computing power resource indicators corresponding to the target application service deployed by that service node in the second service node layer.
Optionally, the computing power information includes a fused computing power value, and the first device may determine whether a service node is overloaded according to the fused computing power value corresponding to the target application service deployed by that service node in the second service node layer. If the fused computing power value corresponding to the target application service deployed by the service node is greater than a computing power threshold, the first device determines that the service node is overloaded; if it is not greater than the computing power threshold, the first device determines that the service node is not overloaded.
Optionally, the computing power information includes a task processing delay, and the first device may determine whether a service node is overloaded according to the task processing delay corresponding to the target application service deployed by that service node in the second service node layer. If the task processing delay corresponding to the target application service deployed by the service node is greater than a processing delay threshold, the first device determines that the service node is overloaded; if it is not greater than the processing delay threshold, the first device determines that the service node is not overloaded.
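The overload checks described in the preceding paragraphs can be summarized in a small helper. The field names (load_state, fused_power, task_delay) and the idea of checking whichever indicator happens to be advertised are assumptions made for illustration; the application itself only specifies the comparisons (heavy-load state, fused computing power value versus computing power threshold, task processing delay versus processing delay threshold).

```python
def is_overloaded(info, power_threshold=None, delay_threshold=None):
    """Decide overload from whichever computing power indicator the
    service node advertised for the target application service."""
    # Heavy/light load state reported directly by the service node.
    if "load_state" in info:
        return info["load_state"] == "heavy"
    # Fused computing power value compared against a computing power threshold.
    if "fused_power" in info and power_threshold is not None:
        return info["fused_power"] > power_threshold
    # Task processing delay compared against a processing delay threshold.
    if "task_delay" in info and delay_threshold is not None:
        return info["task_delay"] > delay_threshold
    return False
```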
Optionally, the first device acquires the first service node layer after determining that one or more service nodes in the second service node layer are overloaded. Specifically, this may be done in the following two implementation manners.
In a first implementation manner, the first device acquires the first service node layer after determining that one service node in the second service node layer is overloaded. This service node may be the service node in the second service node layer with the smallest IGP overhead of the link to the first device, or the service node in the second service node layer with the shortest delay to the first device.
Optionally, after determining the second service node layer from the multiple service node layers according to the application service identifier, the first device obtains, from the second service node layer, the service node with the smallest IGP overhead of the link to the first device or the service node with the shortest delay to the first device; if the obtained service node is overloaded, the first device acquires the first service node layer. The manner in which the first device obtains the second service node layer may be the same as the manner in which it obtains the first service node layer, and details are not repeated here.
On the premise that the computing power resources allocated to the application service deployed on each service node are reasonable, if one of several service nodes belonging to the same service node layer is overloaded, the other service nodes may also be overloaded or about to be overloaded. Scheduling a packet within the same service node layer may therefore be pointless and may even cause overload oscillation among the service nodes. If the computing power resources of the service nodes in the second service node layer are insufficient under a traffic burst, this implementation allows the packet to be scheduled directly to the next layer (the first service node layer) once one service node in the second service node layer is determined to be overloaded, thereby improving network stability.
For example, fig. 12 is a schematic diagram of a message scheduling scenario provided in the embodiment of the present application. As shown in fig. 12, after receiving the packet, the scheduling node determines whether a service node closest to the scheduling node in the service nodes in the first layer is overloaded; if the service node closest to the scheduling node in the service nodes of the first layer is overloaded, the scheduling node judges whether the service node closest to the scheduling node in the service nodes of the second layer is overloaded or not; if the service node closest to the scheduling node in the service nodes on the second layer is overloaded, the scheduling node judges whether the service node closest to the scheduling node in the service nodes on the third layer is overloaded or not; and if the service node closest to the scheduling node in the service nodes on the third layer is not overloaded, the scheduling node sends a message to the service node closest to the scheduling node in the service nodes on the third layer. In fig. 12, dotted lines between devices indicate links between devices, and solid lines with arrows indicate transmission paths of messages.
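The layer-by-layer probing shown in fig. 12, in which only the service node closest to the scheduling node in each layer is examined before falling back to the next layer, can be sketched as follows; the function and variable names are illustrative assumptions, not from the application.

```python
def schedule(layers, delay_to, overloaded):
    """layers: node lists ordered from first layer to last layer.
    In each layer, probe only the node closest to the scheduling node;
    if it is overloaded, fall through to the next layer (fig. 12)."""
    for layer in layers:
        nearest = min(layer, key=lambda node: delay_to[node])
        if not overloaded[nearest]:
            return nearest
    return None
```

Note that in this first implementation a layer is skipped as soon as its closest node is overloaded, even if another node in that layer still has capacity.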
In a second implementation manner, after determining that a plurality of service nodes in the second service node layer are overloaded, the first device acquires the first service node layer.
Optionally, the plurality of service nodes belong to the same load sharing group. Within one service node layer, because user equipments access the network at different positions, the distances between the corresponding scheduling node and different service nodes may differ greatly. For example, multiple access rings belonging to the same service layer may be connected to the same aggregation ring, and the distance from one access ring to another may be very long. If the scheduling node is located in one of the access rings, it is close to the service nodes attached to that access ring but far from the service nodes attached to the other access rings, possibly even farther than from the service nodes attached to the aggregation ring. In this case, the response delay of the packet is larger when the scheduling node schedules the packet to a service node attached to another access ring than when it schedules the packet to a service node attached to the aggregation ring. Based on this, the embodiments of the present application provide schemes for dividing load sharing groups within the same service node layer.
In a first scheme, each service node layer corresponds to a sharing delay threshold, and the higher the priority of a service node layer, the smaller its sharing delay threshold; for example, the sharing delay threshold corresponding to the second service node layer is smaller than that corresponding to the first service node layer. The first device takes the service nodes in the second service node layer whose delay to the first device is smaller than the sharing delay threshold corresponding to the second service node layer as a load sharing group, and acquires the first service node layer after determining that all the service nodes in the load sharing group are overloaded.
In a second scheme, the first device takes the first n service nodes in the second service node layer with the shortest delay to the first device as a load sharing group, where n is an integer greater than 1, and acquires the first service node layer after determining that all the service nodes in the load sharing group are overloaded.
In a third scheme, the first device takes the service nodes in the second service node layer that are attached to the IGP domain closest to the first device as a load sharing group, and acquires the first service node layer after determining that all the service nodes in the load sharing group are overloaded. A service node attached to an IGP domain refers to a service node attached to a gateway located in that IGP domain.
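The three grouping schemes can be sketched together in one helper; the scheme names, parameters, and the in_nearest_igp_domain membership map are illustrative assumptions made for this example.

```python
def load_sharing_group(nodes, delay_to, scheme, threshold=None, n=None,
                       in_nearest_igp_domain=None):
    """Divide a service node layer into a load sharing group using one of
    the three schemes described above."""
    if scheme == "delay_threshold":
        # Scheme 1: nodes whose delay is below the layer's sharing threshold.
        return {x for x in nodes if delay_to[x] < threshold}
    if scheme == "top_n":
        # Scheme 2: the n nodes with the shortest delay to the first device.
        return set(sorted(nodes, key=lambda x: delay_to[x])[:n])
    if scheme == "igp_domain":
        # Scheme 3: nodes attached to the IGP domain closest to the first device.
        return {x for x in nodes if in_nearest_igp_domain[x]}
    raise ValueError(scheme)
```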
Optionally, the first device may first obtain a service node closest to the first device in the second service node layer, and then determine, according to the group identifier of the service node and the group identifiers of other service nodes in the second service node layer, a plurality of service nodes belonging to the same load sharing group as the service node.
For example, fig. 13 is a schematic diagram of another packet scheduling scenario provided in the embodiment of the present application. As shown in fig. 13, after receiving the packet, the scheduling node determines whether there is a non-overloaded service node in the load sharing group closest to the scheduling node in the first layer; if all service nodes in the load sharing group closest to the scheduling node in the first layer are overloaded, the scheduling node judges whether the load sharing group closest to the scheduling node in the second layer has service nodes which are not overloaded or not; and if the service node exists in the load sharing group closest to the scheduling node in the second layer and is not overloaded, the scheduling node sends a message to the service node which is not overloaded in the load sharing group closest to the scheduling node in the second layer. In fig. 13, solid lines with arrows indicate transmission paths of messages.
Or, the first device may also acquire the first service node layer after determining that all service nodes in the second service node layer are overloaded.
It is worth noting that if a service node in the second service node layer subsequently ceases to be overloaded, the first device may reschedule subsequent traffic to the service nodes in the second service node layer.
Step 1103, the first device selects a target service node from the first service node layer.
In a first implementation manner, the process of the first device selecting the target service node from the first service node layer includes: the first device selects, from the first service node layer, the target service node with the smallest IGP overhead of the link to the first device, or the first device selects, from the first service node layer, the target service node with the shortest delay to the first device.
In this implementation manner, the first device selects, from the determined service node layer, the service node with the smallest IGP overhead of the link to the first device or the shortest delay to the first device as the target service node, that is, the service node with better network performance. This reduces the transmission delay of the packet, and thus the overall end-to-end delay, as much as possible, providing better application service for the user.
In a second implementation manner, the target service node is the service node in the first service node layer that has the smallest IGP overhead of the link to the first device and is not overloaded. The first device may select, from the first service node layer, the service node with the smallest IGP overhead of the link to the first device, determine whether that service node is overloaded according to the computing power information corresponding to the target application service it deploys, and take it as the target service node when it is not overloaded. Alternatively, the target service node is the service node in the first service node layer that has the shortest delay to the first device and is not overloaded. The first device may select, from the first service node layer, the service node with the shortest delay to the first device, determine whether that service node is overloaded according to the computing power information corresponding to the target application service it deploys, and take it as the target service node when it is not overloaded.
In one current packet scheduling method, the CFN technology performs packet scheduling based on the load balancing principle, that is, service nodes are selected only according to the computing power information corresponding to the application services they deploy. In another packet scheduling method, only network performance is considered, and all packets from the user equipment are scheduled to the service node closest to the user equipment. However, if all packets from the user equipment are scheduled to the closest service node, that node may become overloaded and unable to provide the application service normally. If the CFN technology is used to schedule packets based on load balancing alone, a service node with abundant computing power but a long delay to the user equipment may be selected, resulting in a long overall end-to-end delay.
In this implementation manner, the first device takes as the target service node the service node in the service node layer that has the smallest IGP overhead of the link to the first device, or the shortest delay to the first device, and that is not overloaded. This implementation considers both the computing power and the network performance of the service nodes: during packet scheduling, the service node with better network performance is preferred, and when that node is overloaded, a service node with suboptimal network performance but more sufficient computing resources is selected. The service node can thus effectively provide the application service while the transmission delay of the packet, and hence the overall end-to-end delay, is reduced as much as possible, providing better application service for the user.
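A minimal sketch of this second implementation, assuming per-node IGP link costs and overload flags are already known (names are illustrative): selecting the lowest-cost node that is not overloaded is equivalent to probing nodes in ascending cost order.

```python
def pick_target(nodes, igp_cost, overloaded):
    """Prefer the node with the smallest IGP overhead of the link to the
    first device; fall back to the next-cheapest node whose computing
    power is not exhausted."""
    for node in sorted(nodes, key=lambda x: igp_cost[x]):
        if not overloaded[node]:
            return node
    return None  # every node in the layer is overloaded
```

The same loop works for the shortest-delay variant by substituting a delay map for igp_cost.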
In a third implementation manner, an implementation process of selecting, by a first device, a target service node from a first service node layer includes: the first device obtains a target load sharing group from the first service node layer. The first device acquires a target service node from the target load sharing group.
Optionally, the first device takes the service nodes in the first service node layer whose delay to the first device is smaller than the sharing delay threshold corresponding to the first service node layer as the target load sharing group. Alternatively, the first device takes the first m service nodes in the first service node layer with the shortest delay to the first device as the target load sharing group, where m is an integer greater than 1. Alternatively, the first device takes the service nodes in the first service node layer that are attached to the IGP domain closest to the first device as the target load sharing group. For the definition and explanation of the load sharing group, refer to the related description in step 1102; details are not repeated here.
Optionally, the first device takes a service node that is not overloaded in the target load sharing group as the target service node according to the computing power information corresponding to the application service deployed by the service nodes in the target load sharing group.
This implementation is based on the consideration that, because user equipments access the network at different positions, the distances between the corresponding scheduling node and different service nodes may differ greatly. It introduces the concept of the load sharing group so that, while better application service is provided for users, the load is balanced among the service nodes in the same load sharing group, thereby improving the resource utilization of the service nodes.
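The third implementation can be sketched end to end, here assuming the delay-threshold grouping scheme and an illustrative preference for the lowest-delay non-overloaded member of the group (the application leaves the choice among non-overloaded group members open).

```python
def pick_from_group(layer_nodes, delay_to, threshold, overloaded):
    """Form the target load sharing group by the delay-threshold scheme,
    then take a member of the group that is not overloaded."""
    group = [x for x in layer_nodes if delay_to[x] < threshold]
    for node in sorted(group, key=lambda x: delay_to[x]):
        if not overloaded[node]:
            return node
    return None  # all members of the target load sharing group are overloaded
```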
Step 1104, the first device sends the message to the target service node.
The first device sends the message to the target service node either directly or indirectly through another device.
After receiving an advertisement message, the first device establishes a routing table entry based on the advertisement message, and then sends the message to the target service node based on the routing table entry. Optionally, the advertisement message includes next hop information.
In a first implementation scenario, the advertisement message received by the first device is from a gateway to which the service node is connected; in this case, the next hop information in the advertisement message is the address of that gateway.
Optionally, a BGP neighbor relationship is established between each gateway connected to a service node and the scheduling node. For example, in the application scenario shown in fig. 1, a BGP neighbor relationship is established between gateway 102A and gateway 102B, and a BGP neighbor relationship is established between gateway 102A and gateway 102C. Assuming that the first device is the gateway 102A, the gateway 102A receives an advertisement message 1 from the gateway 102B, where the advertisement message 1 includes the computing power information corresponding to the application services deployed by the service node 103B, the application service identifiers corresponding to those application services, the layer identifier of the service node 103B, and next hop information (the address of the gateway 102B). Assume that the service node 103B is deployed with application service A, application service B, and application service C. The gateway 102A determines, according to the computing power information corresponding to application service A, that the load state corresponding to application service A deployed by the service node 103B is not overloaded; determines, according to the computing power information corresponding to application service B, that the load state corresponding to application service B deployed by the service node 103B is overloaded; and determines, according to the computing power information corresponding to application service C, that the load state corresponding to application service C deployed by the service node 103B is not overloaded. The anycast IP address corresponding to application service A is IP1, the anycast IP address corresponding to application service B is IP2, and the anycast IP address corresponding to application service C is IP3. The layer identifier of the service node 103B is layer ID2. The IP address of the gateway 102B is IP 2.1.2.1.
Gateway 102A, upon receiving advertisement message 1 from gateway 102B, may generate a routing table entry as shown in table 2.
TABLE 2
Destination address | Load state | Layer identifier | Next hop | Egress interface
IP1 (application service A) | Not overloaded | Layer ID2 | IP 2.1.2.1 | Egress interface 2
IP2 (application service B) | Overloaded | Layer ID2 | IP 2.1.2.1 | Egress interface 2
IP3 (application service C) | Not overloaded | Layer ID2 | IP 2.1.2.1 | Egress interface 2
Similarly, gateway 102A may also receive an advertisement message 2 from the gateway 102C, where the advertisement message 2 includes the computing power information corresponding to the application services deployed by the service node 103C, the application service identifiers corresponding to those application services, the layer identifier of the service node 103C, and next hop information (the address of the gateway 102C). Assume that the service node 103C is deployed with application service A and application service B. The gateway 102A determines, according to the computing power information corresponding to application service A, that the load state corresponding to application service A deployed by the service node 103C is not overloaded, and determines, according to the computing power information corresponding to application service B, that the load state corresponding to application service B deployed by the service node 103C is not overloaded. The anycast IP address corresponding to application service A is IP1, and the anycast IP address corresponding to application service B is IP2. The layer identifier of the service node 103C is layer ID3. The IP address of the gateway 102C is IP 2.1.3.1. Gateway 102A, upon receiving the advertisement message 2 from the gateway 102C, may generate a routing table entry as shown in table 3.
TABLE 3
Destination address | Load state | Layer identifier | Next hop | Egress interface
IP1 (application service A) | Not overloaded | Layer ID3 | IP 2.1.3.1 | Egress interface 3
IP2 (application service B) | Not overloaded | Layer ID3 | IP 2.1.3.1 | Egress interface 3
If the target service node is the service node 103B, the gateway 102A sends the message to IP 2.1.2.1 (the gateway 102B) through egress interface 2, and the gateway 102B sends the message to the service node 103B based on its own routing table entry. If the target service node is the service node 103C, the gateway 102A sends the message to IP 2.1.3.1 (the gateway 102C) through egress interface 3, and the gateway 102C sends the message to the service node 103C based on its own routing table entry.
In an actual scenario, multiple service nodes may be attached to one gateway. After receiving the advertisement messages sent by these service nodes, the gateway may aggregate the computing power information corresponding to the application services they deploy and send the aggregated computing power information to the scheduling node. The aggregated computing power information substantially reflects the total computing power of the service nodes attached to the gateway. If the gateway receives a packet sent by the scheduling node, the gateway may act as a new scheduling node and further determine, among its attached service nodes, the service node that processes the packet.
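The aggregation performed by such a gateway can be sketched as follows. Summing the fused computing power values of the attached service nodes is one plausible aggregation chosen purely for illustration; the application does not fix the aggregation formula, and the field name fused_power is an assumption.

```python
def aggregate_power(node_infos):
    """Combine the computing power advertised by a gateway's attached
    service nodes into one value to re-advertise upstream (one plausible
    aggregation: a simple sum of the fused computing power values)."""
    return {"fused_power": sum(info["fused_power"] for info in node_infos)}
```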
In a second implementation scenario, the advertisement message received by the first device is from a service node; in this case, the next hop information in the advertisement message is the address of that service node.
For example, in the application scenario shown in fig. 1, the first device is the gateway 102A, and the gateway 102A receives an advertisement message 3 from the service node 103A, where the advertisement message 3 includes the computing power information corresponding to the application services deployed by the service node 103A, the application service identifiers corresponding to those application services, the layer identifier of the service node 103A, and next hop information (the address of the service node 103A). Assume that the service node 103A is deployed with application service A, application service B, and application service C. The gateway 102A determines, according to the computing power information corresponding to application service A, that the load state corresponding to application service A deployed by the service node 103A is not overloaded; determines, according to the computing power information corresponding to application service B, that the load state corresponding to application service B deployed by the service node 103A is overloaded; and determines, according to the computing power information corresponding to application service C, that the load state corresponding to application service C deployed by the service node 103A is overloaded. The anycast IP address corresponding to application service A is IP1, the anycast IP address corresponding to application service B is IP2, and the anycast IP address corresponding to application service C is IP3. The layer identifier of the service node 103A is layer ID1. The IP address of the service node 103A is IP 1.1.1.1. Gateway 102A, upon receiving the advertisement message 3 from the service node 103A, may generate a routing table entry as shown in table 4.
TABLE 4
Destination address | Load state | Layer identifier | Next hop | Egress interface
IP1 (application service A) | Not overloaded | Layer ID1 | IP 1.1.1.1 | Egress interface 1
IP2 (application service B) | Overloaded | Layer ID1 | IP 1.1.1.1 | Egress interface 1
IP3 (application service C) | Overloaded | Layer ID1 | IP 1.1.1.1 | Egress interface 1
If the target service node is the service node 103A, the gateway 102A sends the message to IP 1.1.1.1 (the service node 103A) through egress interface 1.
Optionally, the message includes content to be processed. After receiving the message, the target service node may further perform the following steps 1105 and 1106.
Step 1105, the target service node processes the content to be processed contained in the message.
For example, the message is a computation request message, and the target service node processes the content in the message, which may be to compute the content to be computed in the computation request message. For another example, the message is an online request message, and the target service node processes the content in the message, which may be performing online application authentication based on authentication information in the online request message. For another example, the message is a storage request message, and the target service node processes the content in the message, which may be to store the content to be stored in the storage request message.
Step 1106, the target service node sends the processing result for the content to the first device.
For example, the message sent by the first device to the target service node is a computation request message, and the processing result may be a computation result. For another example, the message sent by the first device to the target service node is an online request message, and the processing result may be an indication for indicating whether the application is allowed to be online. For another example, the message sent by the first device to the target service node is a storage request message, and the processing result may be a storage success indication or a storage failure indication.
Step 1107, the first device sends the processing result to the second device.
In summary, in the packet forwarding method provided in the embodiments of the present application, the service nodes are layered and each service node is allocated a corresponding layer identifier, so that after receiving a packet, the scheduling node can select the service node for processing the packet according to the layer identifiers of the service nodes, which enriches the packet scheduling manners. Because the priority of a service node layer is positively correlated with the network performance of the service nodes in that layer, the scheduling node preferentially selects service nodes in a high-priority service node layer, that is, service nodes with better network performance, which reduces the transmission delay of the packet, and hence the overall end-to-end delay, as much as possible. In addition, by considering both the computing power and the network performance of the service nodes, the service node with better network performance is preferred during packet scheduling, and when that service node is overloaded, a service node with suboptimal network performance but more sufficient computing resources is selected, so that the service node can effectively provide the application service while the transmission delay of the packet, and hence the overall end-to-end delay, is reduced as much as possible, providing better application service for users.
The order of the steps of the message forwarding method provided by the embodiment of the application can be appropriately adjusted, and the steps can be correspondingly increased or decreased according to the situation. Any method that can be easily conceived by a person skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure.
Fig. 14 is a schematic structural diagram of a first device provided in an embodiment of the present application, which may implement the functions of the first device in the embodiment shown in fig. 11. As shown in fig. 14, the first device 1400 includes: a receiving unit 1401, a processing unit 1402, and a sending unit 1403. These units may perform the respective functions of the first device in the above method embodiments. The receiving unit 1401 is configured to support the first device in performing step 1101 (receiving the message sent by the second device) and step 1106 (receiving the processing result sent by the target service node for the content requested to be processed in the message) in fig. 11; the processing unit 1402 is configured to support the first device in performing step 1102 and step 1103 in fig. 11, and other processes performed by the first device in the techniques described herein; the sending unit 1403 is configured to support the first device in performing step 1104 and step 1107 in fig. 11. For example, the receiving unit 1401 is configured to perform the various information receptions performed by the first device in the above method embodiments; the processing unit 1402 is configured to perform the processing other than information transceiving performed by the first device in the above method embodiments; the sending unit 1403 is configured to perform the various information transmissions performed by the first device in the above method embodiments. For example, the receiving unit 1401 is configured to receive the message sent by the second device, where the message includes an application service identifier.
A processing unit 1402, configured to determine a first service node layer from the multiple service node layers according to the application service identifier, and select a target service node from the first service node layer, where a service node in the first service node layer has the first layer identifier, and a service node in the first service node layer deploys a target application service corresponding to the application service identifier. A sending unit 1403, configured to send a packet to the target serving node. For a specific execution process, please refer to the detailed description of the corresponding steps in the embodiment shown in fig. 11, which is not repeated here.
The division of units in the embodiments of the present application is illustrative and is merely a division by logical function; in actual implementation, another division manner may be used. The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. For example, in the above embodiment, the receiving unit and the sending unit may be the same unit or different units. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
In the case of integrated units, fig. 15 shows another possible schematic structural diagram of the first device involved in the above embodiment. The first device 1500 may also implement the functions of the first device in the embodiment shown in fig. 11. The first device 1500 includes: a storage unit 1501, a processing unit 1502, and a communication unit 1503. The communication unit 1503 is configured to support communication between the first device 1500 and other network entities, for example, the second device or a service node shown in fig. 11; for example, the communication unit 1503 is configured to support the first device 1500 in performing step 1101, step 1104, step 1106, and step 1107 in fig. 11. The processing unit 1502 is configured to control and manage the actions of the first device 1500; for example, the processing unit 1502 is configured to support the first device 1500 in performing step 1102 and step 1103 in fig. 11, and/or other processes performed by the first device in the techniques described herein. The storage unit 1501 is configured to store program code and data of the first device 1500. For the specific execution process, refer to the detailed description of the corresponding steps in the embodiment shown in fig. 11, which is not repeated here.
The processing unit 1502 may be a processor, such as a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the present application. The processor may also be a combination that implements a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 1503 may be a transceiver, and the storage unit 1501 may be a memory.
When the processing unit 1502 is a processor, the communication unit 1503 is a transceiver, and the storage unit 1501 is a memory, the first device according to the embodiment of the present application may be the first device 1600 illustrated in fig. 16.
Referring to fig. 16, the first apparatus 1600 includes: a processor 1602, a transceiver 1603, a memory 1601, and a bus 1604. The processor 1602, the transceiver 1603 and the memory 1601 are connected to each other via a bus 1604; the bus 1604 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 16, but this is not intended to represent only one bus or type of bus. This first device 1600 may implement the functionality of the first device in the embodiment shown in fig. 11. The processor 1602 and the transceiver 1603 may perform the respective functions of the first device in the above method example. The transceiver 1603 is used to enable the first device 1600 to perform steps 1101, 1104, 1106 and 1107 in fig. 11. Processor 1602 is configured to enable first device 1600 to perform steps 1102 and 1103 in fig. 11, and/or other processes performed by first device in the techniques described herein. A memory 1601 for storing program codes and data of the first device 1600. For a specific execution process, please refer to the detailed description of the corresponding steps in the embodiment shown in fig. 11, which is not repeated here.
Fig. 17 is a schematic structural diagram of still another first device provided in an embodiment of the present application. As shown in fig. 17, the first device 1700 may be a router, a switch, a gateway, or another network device with a forwarding function, and the first device 1700 is capable of implementing the functions of the first device in the foregoing method embodiments. The first device 1700 includes: a main control board 1701 and an interface board 1702. The main control board 1701 includes: a processor 1703 and a memory 1704. The interface board 1702 includes: a processor 1705, a memory 1706, and an interface card 1707. The main control board 1701 is coupled with the interface board 1702.
These hardware may perform the corresponding functions in the above method examples, for example, the memory 1706 may be used to store the program codes of the interface board 1702, and the processor 1705 is used to call the program codes in the memory 1706 to trigger the interface card 1707 to perform various information receiving and sending performed by the first device in the above method embodiments, for example, the processor 1705 calls the program codes in the memory 1706 to trigger the interface card 1707 to support the first device 1700 to perform the steps 1101, 1104, 1106 and 1107 in fig. 11. The memory 1704 may be configured to store the program code of the main control board 1701, and the processor 1703 is configured to call the program code in the memory 1704 to perform other processing of the first device except for information transmission and reception in the above-described method embodiment. For example, the processor 1703 is configured to enable the first device 1700 to perform steps 1102 and 1103 in fig. 11, and/or other processes performed by the first device in the techniques described herein. The memory 1704 is used to store program codes and data of the main control board 1701. For a specific execution process, please refer to the detailed description of the corresponding steps in the embodiment shown in fig. 11, which is not repeated here.
In a possible implementation, an inter-process communication (IPC) control channel is established between the main control board 1701 and the interface board 1702, and the main control board 1701 and the interface board 1702 communicate with each other through the IPC control channel.
Fig. 18 is a schematic structural diagram of a service node according to an embodiment of the present application, which may implement the function of the service node in the embodiment shown in fig. 11. As shown in fig. 18, the service node 1800 includes: a receiving unit 1801, a processing unit 1802, and a transmitting unit 1803. These units may perform the respective functions of the service node in the above-described method embodiments. A receiving unit 1801, configured to support the service node to perform step 1104 in fig. 11 (receive a packet sent by the first device); a processing unit 1802 for enabling a serving node to perform step 1105 in fig. 11, and other processes performed by the serving node in the techniques described herein; a sending unit 1803, configured to support the serving node to perform step 1106 in fig. 11. For example, the receiving unit 1801 is configured to perform various information receptions performed by the service node in the foregoing method embodiment; a processing unit 1802, configured to perform other processing except for an information transceiving action by the service node in the above method embodiments; a sending unit 1803, configured to perform various information sending performed by the serving node in the foregoing method embodiment. For example, the receiving unit 1801 is configured to receive a message sent by a first device. A processing unit 1802, configured to generate an advertisement message, where the advertisement message includes computing power information corresponding to an application service deployed by a service node, an application service identifier corresponding to the application service deployed by the service node, and a layer identifier of the service node. A sending unit 1803, configured to send an advertisement message to a gateway connected to the service node. 
For a specific execution process, reference is made to the detailed description in the above embodiments, which is not repeated here.
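The advertisement message described above carries three pieces of information: the computing power information of the deployed application service, the corresponding application service identifier, and the node's layer identifier. A minimal sketch of assembling such a payload follows; the field names and the JSON encoding are assumptions for illustration only — this application does not prescribe a wire format, and a real deployment would more likely carry this information in a routing-protocol extension.

```python
import json

def build_advertisement(app_service_id: str, computing_power: dict, layer_id: int) -> bytes:
    """Assemble the advertisement a service node sends to its connected gateway.

    All field names are hypothetical; only the three kinds of content come
    from the method description.
    """
    message = {
        "app_service_id": app_service_id,    # identifies the deployed application service
        "computing_power": computing_power,  # e.g. free CPU cores / memory of the node
        "layer_id": layer_id,                # layer identifier assigned to this node
    }
    return json.dumps(message).encode("utf-8")
```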
The division of units in the embodiments of the present application is illustrative and is merely a division by logical function; in actual implementation, another division manner may be used. The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. For example, in the above embodiment, the receiving unit and the sending unit may be the same unit or different units. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
In case of integrated units, fig. 19 shows another possible structural diagram of the service node involved in the above embodiment. The service node 1900 may also implement the functions of the service node in the embodiment shown in FIG. 11. Service node 1900 includes: a storage unit 1901, a processing unit 1902, and a communication unit 1903. The communication unit 1903 is configured to support communication between the serving node 1900 and other network entities, for example, a gateway shown in fig. 11, and for example, the communication unit 1903 is configured to support the serving node 1900 to perform step 1104 and step 1106 in fig. 11. Processing unit 1902 is configured to control and manage actions of service node 1900, e.g., processing unit 1902 is configured to enable service node 1900 to perform step 1105 in FIG. 11, and/or other processes performed by service node in the techniques described herein. A storage unit 1901 is used for storing program codes and data of the service node 1900. For a specific execution process, reference is made to the detailed description in the above embodiments, which is not repeated here.
The processing unit 1902 may be a processor, such as a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the present application. The processor may also be a combination that implements a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 1903 may be a transceiver, and the storage unit 1901 may be a memory.
When the processing unit 1902 is a processor, the communication unit 1903 is a transceiver, and the storage unit 1901 is a memory, the service node according to the embodiment of the present application may be the service node 2000 shown in fig. 20.
Referring to fig. 20, the service node 2000 includes: a processor 2002, a transceiver 2003, a memory 2001, and a bus 2004. Wherein the processor 2002, the transceiver 2003, and the memory 2001 are connected to each other by a bus 2004; the bus 2004 may be a PCI bus or EISA bus, etc. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 20, but this is not intended to represent only one bus or type of bus. The service node 2000 may implement the functions of the service node in the embodiment shown in fig. 11. The processor 2002 and the transceiver 2003 may perform the respective functions of the service node in the above method example. Transceiver 2003 is used to support serving node 2000 in performing steps 1104 and 1106 of fig. 11. Processor 2002 is configured to enable serving node 2000 to perform step 1105 in fig. 11, and/or other processes performed by a serving node in the techniques described herein. A memory 2001 for storing program codes and data for serving node 2000. For the specific implementation process, reference is made to the detailed description in the foregoing embodiments, and details are not repeated here.
Fig. 21 is a schematic structural diagram of another service node according to an embodiment of the present application. As shown in fig. 21, the service node 2100 may be a router, a switch, a gateway, or another network device with a forwarding function, and the service node 2100 is capable of implementing the functions of the service node in the foregoing method embodiments. The service node 2100 includes: a main control board 2101 and an interface board 2102. The main control board 2101 includes: a processor 2103 and a memory 2104. The interface board 2102 includes: a processor 2105, a memory 2106, and an interface card 2107. The main control board 2101 is coupled with the interface board 2102.
These hardware components may perform the corresponding functions in the above method examples. For example, the memory 2106 may be configured to store the program code of the interface board 2102, and the processor 2105 is configured to call the program code in the memory 2106 to trigger the interface card 2107 to perform the various information receiving and sending performed by the service node in the above method embodiments; for example, the processor 2105 calls the program code in the memory 2106 to trigger the interface card 2107 to support the service node 2100 in performing step 1104 and step 1106 in fig. 11. The memory 2104 may be configured to store the program code of the main control board 2101, and the processor 2103 is configured to call the program code in the memory 2104 to perform the processing of the service node other than information transceiving in the above method embodiments. For example, the processor 2103 is configured to support the service node 2100 in performing step 1105 in fig. 11, and/or other processes performed by the service node in the techniques described herein. The memory 2104 is used to store program code and data of the main control board 2101. For the specific execution process, refer to the detailed description in the above embodiments, which is not repeated here.
In a possible implementation manner, an IPC control channel is established between the main control board 2101 and the interface board 2102, and the main control board 2101 and the interface board 2102 communicate with each other by using the IPC control channel.
Fig. 22 is a schematic structural diagram of a control device according to an embodiment of the present application, which may implement the functions of the control device in the foregoing embodiments. As shown in fig. 22, the control device 2200 includes: a processing unit 2201 and a sending unit 2202. These units may perform the respective functions of the control device in the above method embodiments. The processing unit 2201 is configured to support the control device in performing the processing performed by the control device in the techniques described herein; the sending unit 2202 is configured to support the control device in performing the sending performed by the control device in the techniques described herein. For example, the processing unit 2201 is configured to perform the processing other than information transceiving performed by the control device in the above method embodiments, and the sending unit 2202 is configured to perform the various information transmissions performed by the control device in the above method embodiments. For example, the processing unit 2201 is configured to layer a plurality of managed service nodes, where service nodes belonging to the same service node layer have the same layer identifier. The sending unit 2202 is configured to send the corresponding layer identifier to each of the plurality of service nodes. For the specific execution process, refer to the detailed description in the foregoing embodiments, which is not repeated here.
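One layering criterion this application describes assigns each service node a layer identifier based on its delay to the scheduling node, with smaller delays mapping to smaller (higher-priority) layer identifiers. A minimal sketch of how the control device might compute such an assignment follows; the threshold values and data shapes are assumptions for illustration.

```python
def assign_layer_ids(node_delays: dict, thresholds: list) -> dict:
    """Map each node to a layer identifier from its measured delay (ms).

    A node whose delay is within the i-th (sorted) threshold gets layer
    identifier i + 1; delays beyond all thresholds fall into the last layer.
    """
    layers = {}
    for node, delay in node_delays.items():
        layer_id = 1
        for t in sorted(thresholds):
            if delay <= t:
                break
            layer_id += 1
        layers[node] = layer_id
    return layers
```

The control device would then send each node its computed layer identifier, as the sending unit 2202 does above.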
The division of units in the embodiments of the present application is illustrative and is merely a division by logical function; in actual implementation, another division manner may be used. The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. For example, in the above embodiment, the receiving unit and the sending unit may be the same unit or different units. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
Fig. 23 shows another possible structural diagram of the control device according to the exemplary embodiment described above, in the case of an integrated unit. The control device 2300 can also realize the functions of the control device in the above-described embodiment. The control device 2300 includes: a storage unit 2301, a processing unit 2302, and a communication unit 2303. The communication unit 2303 is used to support communication between the control device 2300 and other network entities, such as communication with a service node, for example, the communication unit 2303 is used to support the control device 2300 sending information to the service node. The processing unit 2302 is used for controlling and managing the operation of the control device 2300. A storage unit 2301 for storing program codes and data of the control device 2300. For a specific execution process, reference is made to the detailed description in the above embodiments, which is not repeated here.
The processing unit 2302 may be a processor, for example, a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the present application. The processor may also be a combination that implements a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 2303 may be a transceiver, and the storage unit 2301 may be a memory.
When the processing unit 2302 is a processor, the communication unit 2303 is a transceiver, and the storage unit 2301 is a memory, the control device according to the embodiment of the present application may be the control device 2400 shown in fig. 24.
Referring to fig. 24, the control device 2400 includes: a processor 2402, a transceiver 2403, a memory 2401, and a bus 2404. Wherein, the processor 2402, the transceiver 2403 and the memory 2401 are connected to each other through a bus 2404; the bus 2404 may be a PCI bus or an EISA bus, etc. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 24, but this does not mean only one bus or one type of bus. The control device 2400 may implement the functions of the control device in the above-described embodiments. The processor 2402 and the transceiver 2403 may perform the respective functions of the control device in the above method example. Transceiver 2403 is used to support control device 2400 in sending information to a serving node. Processor 2402 is configured to enable control device 2400 to perform the processes performed by the control device in the techniques described herein. A memory 2401 for storing program codes and data for controlling the device 2400. For the specific implementation process, reference is made to the detailed description in the foregoing embodiments, and details are not repeated here.
Fig. 25 is a schematic structural diagram of still another control device provided in an embodiment of the present application. As shown in fig. 25, the control device 2500 may be a router, a switch, a gateway, or another network device with a forwarding function, and the control device 2500 is capable of implementing the functions of the control device in the foregoing method embodiments. The control device 2500 includes: a main control board 2501 and an interface board 2502. The main control board 2501 includes: a processor 2503 and a memory 2504. The interface board 2502 includes: a processor 2505, a memory 2506, and an interface card 2507. The main control board 2501 and the interface board 2502 are coupled.
These pieces of hardware may perform the corresponding functions in the above method examples, for example, the memory 2506 may be used to store the program codes of the interface board 2502, and the processor 2505 may be used to call the program codes in the memory 2506 to trigger the interface card 2507 to perform various information receiving and transmitting performed by the control device in the above method embodiments, for example, the processor 2505 calls the program codes in the memory 2506 to trigger the interface card 2507 to support the control device 2500 to transmit information to the service node. The memory 2504 may be used to store the program code of the main control board 2501, and the processor 2503 is used to call the program code in the memory 2504 to perform other processing of the control device except information transceiving in the above method embodiments. For example, the processor 2503 is used to support the control device 2500 in performing processes performed by the control device in the techniques described herein. The memory 2504 is used for storing program codes and data of the main control board 2501. For a specific execution process, reference is made to the detailed description in the above embodiments, which is not repeated here.
In a possible implementation manner, an IPC control channel is established between the main control board 2501 and the interface board 2502, and the main control board 2501 and the interface board 2502 communicate with each other by using the IPC control channel.
Fig. 26 is a schematic structural diagram of a message forwarding system according to an embodiment of the present application. The system is used for realizing the message forwarding method in the embodiment of the method. As shown in fig. 26, the system includes: a first device 2601 and a plurality of service nodes 2602. The first device 2601 and the service node 2602 may implement the functionality of the first device and the service node, respectively, in the embodiment illustrated in fig. 11. For example, the first device performs steps 1102, 1103, 1104, and 1107 in fig. 11, and/or other processes for the first device to perform in the techniques described herein. Serving node 2602 performs steps 1105 and 1106 of fig. 11, and/or other processes performed by the serving node for the techniques described herein.
Optionally, with continuing reference to fig. 26, the system further comprises: the control device 2603, the control device 2603 is configured to manage a plurality of service nodes 2602, and the control device 2603 is configured to implement the process executed by the control device in the above-described embodiment.
An embodiment of the present application further provides a computer-readable storage medium, where instructions are stored on the computer-readable storage medium, and when the instructions are executed by a processor of a first device (scheduling node), the instructions implement the process executed by the first device in the foregoing embodiment; or, when the instruction is executed by a processor of the service node, implementing the process executed by the service node in the above embodiment; alternatively, when the instructions are executed by a processor of the control device, the processes performed by the control device in the above embodiments are implemented.
It should be noted that each of the device embodiments described above is merely illustrative. The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. In addition, in the drawings of the first device or service node embodiments provided in this application, the connection relationship between modules indicates that they have a communication connection, which may be specifically implemented as one or more communication buses or signal lines. A person of ordinary skill in the art can understand and implement the embodiments without creative effort.
The steps of the methods or algorithms described in the disclosure of the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a hard disk, a removable hard disk, an optical disc, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may also reside as discrete components in a core network interface device.
In the embodiments of the present application, the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The term "and/or" in this application is only one kind of association relationship describing the associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The above description is intended only to illustrate the alternative embodiments of the present application, and not to limit the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (50)

1. A message forwarding method, wherein the method comprises:
the method comprises the steps that a first device receives a message sent by a second device, wherein the message comprises an application service identifier;
the first device determines a first service node layer from a plurality of service node layers according to the application service identifier, wherein service nodes in the first service node layer have first layer identifiers, and target application services corresponding to the application service identifiers are deployed in service nodes in the first service node layer;
the first device selects a target service node from the first service node layer;
and the first equipment sends the message to the target service node.
2. The method of claim 1, wherein determining, by the first device, a first service node layer from a plurality of service node layers based on the application service identifier comprises:
the first device acquires the first service node layer after determining that one or more service nodes in a second service node layer are overloaded, wherein the service nodes in the second service node layer have second layer identifiers, and the priority of the second service node layer is higher than that of the first service node layer.
3. The method of claim 1, wherein the first service node layer is a highest priority service node layer of the plurality of service node layers.
4. The method of claim 3, wherein the first service node layer being the highest-priority service node layer of the plurality of service node layers comprises:
the first service node layer is a service node layer closest to the first device in the plurality of service node layers, or the first service node layer is a service node layer with the shortest time delay to the first device in the plurality of service node layers.
5. The method according to any one of claims 1 to 4,
the service nodes in the access service node layer are connected with access network equipment, the service nodes in the convergence service node layer are connected with convergence network equipment, the service nodes in the core service node layer are connected with core network equipment, the priority of the access service node layer is higher than that of the convergence service node layer, and the priority of the convergence service node layer is higher than that of the core service node layer; alternatively, the first and second electrodes may be,
the plurality of service node layers comprise a level1 service node layer and a level2 service node layer, wherein service nodes in the level1 service node layer are connected with gateways in a level1 area of the Intermediate System to Intermediate System (ISIS) protocol, service nodes in the level2 service node layer are connected with gateways in a level2 area of the ISIS protocol, and the priority of the level1 service node layer is higher than that of the level2 service node layer; or,
the plurality of service node layers comprise a non-backbone service node layer and a backbone service node layer, wherein a service node in the non-backbone service node layer is connected with a gateway in a non-backbone area of an Open Shortest Path First (OSPF) protocol, a service node in the backbone service node layer is connected with a gateway in a backbone area of the OSPF protocol, and the priority of the non-backbone service node layer is higher than that of the backbone service node layer; or,
each service node layer in the plurality of service node layers corresponds to a time delay threshold, the time delay corresponding to the time delay threshold is the time delay from the corresponding service node layer to the first device, and the priority of a service node layer with a smaller time delay threshold is higher than the priority of a service node layer with a larger time delay threshold; or,
the plurality of service node layers comprise an active service node layer and a standby service node layer, and the priority of the active service node layer is higher than that of the standby service node layer.
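Each layering scheme enumerated in claim 5 reduces to an ordering in which a smaller number means a higher priority. A compact sketch of those orderings (the table names are shorthand, not claim language; the delay-threshold scheme is ordered numerically by threshold rather than by a fixed table):

```python
# Illustrative priority tables for the layering schemes of claim 5
# (smaller value = higher priority).
LAYERING_SCHEMES = {
    "network_position": {"access": 1, "convergence": 2, "core": 3},
    "isis_level":       {"level1": 1, "level2": 2},
    "ospf_area":        {"non_backbone": 1, "backbone": 2},
    "active_standby":   {"active": 1, "standby": 2},
}

def higher_priority_layer(scheme, a, b):
    """Return whichever of layers a, b has the higher priority under scheme."""
    table = LAYERING_SCHEMES[scheme]
    return a if table[a] < table[b] else b
```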
6. The method of any of claims 1 to 5, wherein the selecting, by the first device, the target service node from the first service node layer comprises:
the first device selects, from the first service node layer, the target service node with the smallest Interior Gateway Protocol (IGP) overhead of a link between the target service node and the first device; or,
and the first device selects, from the first service node layer, the target service node with the shortest time delay to the first device.
7. The method of any of claims 1 to 5, wherein the target service node is a service node in the first service node layer whose link to the first device has the smallest IGP overhead and which is not overloaded; or the target service node is a service node in the first service node layer which has the shortest time delay to the first device and is not overloaded.
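Claims 6 and 7 combine a link metric (IGP overhead of the link to the first device, or time delay to the first device) with an overload check. A sketch under assumed per-node state:

```python
# Hypothetical per-node state within the selected service node layer.
class ServiceNode:
    def __init__(self, name, igp_cost, delay_ms, overloaded=False):
        self.name = name
        self.igp_cost = igp_cost      # IGP overhead of the link to the first device
        self.delay_ms = delay_ms      # time delay to the first device
        self.overloaded = overloaded

def select_target_node(nodes, metric="igp"):
    """Claims 6-7 sketch: smallest IGP overhead (or shortest delay)
    among nodes that are not overloaded; None if all are overloaded."""
    usable = [n for n in nodes if not n.overloaded]
    if not usable:
        return None
    key = (lambda n: n.igp_cost) if metric == "igp" else (lambda n: n.delay_ms)
    return min(usable, key=key)
```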
8. The method of any of claims 1 to 7, further comprising:
the first device receives an advertisement message, wherein the advertisement message comprises computing power information corresponding to an application service deployed by a service node, an application service identifier corresponding to the application service deployed by the service node, and a layer identifier of the service node.
9. The method of claim 8, wherein the advertisement message further comprises next hop information, wherein the advertisement message is from a gateway to which the service node is connected, and the next hop information is an address of the gateway to which the service node is connected; or, the advertisement message is from the service node, and the next hop information is an address of the service node.
10. The method according to claim 8 or 9, wherein the advertisement message is a Border Gateway Protocol (BGP) update message.
11. The method of claim 10, wherein the computing power information and the layer identifier are carried in a routing attribute field of the BGP update message, and the application service identifier is carried in a network layer reachability information field of the BGP update message.
12. The method of any of claims 8 to 11, further comprising:
the first device stores, according to the advertisement message, the computing power information corresponding to the application service deployed by each service node, and establishes a correspondence between the application service identifier and the layer identifier of the service node layer;
wherein the determining, by the first device, a first service node layer from a plurality of service node layers according to the application service identifier comprises:
the first device selects, based on the correspondence between the application service identifier and the layer identifier of the service node layer, the first service node layer comprising the service node on which the target application service is deployed.
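Claims 8 to 12 describe the first device learning, from advertisement messages, both the computing power state of each service node and a mapping from application service identifier to layer identifier. A sketch of that table-building step, with the message modeled as a plain dict (the dict layout is an assumption; the claims only fix which fields are present, with the computing power information and layer identifier in a BGP UPDATE route attribute and the application service identifier in the network layer reachability information field):

```python
def process_advertisement(msg, power_table, service_to_layers):
    """Record one advertisement in the first device's local tables."""
    node = msg["node_id"]
    # Per-node computing power information (claim 12: stored per advertisement).
    power_table[node] = msg["computing_power"]
    # Application service identifier -> layer identifiers that deploy it.
    service_to_layers.setdefault(msg["app_service_id"], set()).add(msg["layer_id"])
```

Looking up the highest-priority layer in `service_to_layers` then gives the claim-12 layer selection.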
13. The method of any of claims 1 to 5, wherein the selecting, by the first device, the target service node from the first service node layer comprises:
the first device acquires a target load sharing group from the first service node layer;
and the first device acquires the target service node from the target load sharing group.
14. The method of claim 13, wherein the acquiring, by the first device, the target service node from the target load sharing group comprises:
and the first device takes, as the target service node, a service node that is not overloaded in the target load sharing group, according to the computing power information corresponding to the application services deployed by the service nodes in the target load sharing group.
15. The method of any one of claims 8 to 12, 14, wherein the computing power information comprises one or more of a heavy load status, a number of computing resources, a utilization of computing resources, a number of device connections, a converged computing power value, or a task processing latency.
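Claims 13 to 15 narrow the choice first to a load sharing group and then to a node whose computing power information does not indicate overload. A sketch with the claim-15 fields modeled as an optional-field dict (field names are illustrative shorthand):

```python
def pick_from_load_sharing_group(group, power_table):
    """Claim 14 sketch: first node in the group whose computing power
    information does not report heavy load; None if all are overloaded."""
    for node in group:
        info = power_table.get(node, {})
        # Claim 15: info may carry heavy load status, a number/utilization of
        # computing resources, device connections, a converged computing power
        # value, or task processing latency.
        if not info.get("heavy_load", False):
            return node
    return None
```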
16. A message forwarding method, wherein the method comprises:
a service node generates an advertisement message, wherein the advertisement message comprises computing power information corresponding to an application service deployed by the service node, an application service identifier corresponding to the application service deployed by the service node, and a layer identifier of the service node;
and the service node sends the advertisement message to a gateway connected with the service node.
17. The method of claim 16, wherein the advertisement message further comprises next hop information, and wherein the next hop information is an address of the serving node.
18. The method according to claim 16 or 17, wherein the advertisement message is a Border Gateway Protocol (BGP) update message.
19. The method of claim 18, wherein the computing power information and the layer identifier are carried in a routing attribute field of the BGP update message, and the application service identifier is carried in a network layer reachability information field of the BGP update message.
20. The method according to any of claims 16 to 19, wherein the advertisement message further comprises a group identifier of the service node, the group identifier indicating a load sharing group to which the service node belongs.
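On the service node side (claims 16 to 20), the advertisement bundles the node's computing power information, the application service identifier, the node's layer identifier, and optionally its own address as next hop plus a load sharing group identifier. A sketch of assembling such a message before sending it to the connected gateway (the dict layout is an assumption; the claims fix only which fields are carried, per claim 19 inside a BGP update message):

```python
def build_advertisement(node_id, layer_id, group_id, app_service_id,
                        computing_power, next_hop):
    """Assemble the advertisement fields named in claims 16-20."""
    return {
        "node_id": node_id,
        "layer_id": layer_id,              # layer identifier of the service node
        "group_id": group_id,              # load sharing group (claim 20)
        "app_service_id": app_service_id,
        "computing_power": computing_power,
        "next_hop": next_hop,              # claim 17: address of the service node
    }
```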
21. A message forwarding method, wherein the method comprises:
a control device layers a plurality of managed service nodes, wherein service nodes belonging to the same service node layer have the same layer identifier;
and the control device sends the corresponding layer identifiers to the plurality of service nodes respectively.
22. The method of claim 21, wherein the layering, by the control device, of the plurality of managed service nodes comprises:
the control device divides the plurality of service nodes into an access service node layer, a convergence service node layer and a core service node layer, wherein service nodes in the access service node layer are connected with access network equipment, service nodes in the convergence service node layer are connected with convergence network equipment, service nodes in the core service node layer are connected with core network equipment, the priority of the access service node layer is higher than that of the convergence service node layer, and the priority of the convergence service node layer is higher than that of the core service node layer.
23. The method of claim 21, wherein the layering, by the control device, of the plurality of managed service nodes comprises:
the control device divides the plurality of service nodes into a level1 service node layer and a level2 service node layer, wherein service nodes in the level1 service node layer are connected with gateways in a level1 area of the Intermediate System to Intermediate System (ISIS) protocol, service nodes in the level2 service node layer are connected with gateways in a level2 area of the ISIS protocol, and the priority of the level1 service node layer is higher than that of the level2 service node layer.
24. The method of claim 21, wherein the layering, by the control device, of the plurality of managed service nodes comprises:
the control device divides the plurality of service nodes into a non-backbone service node layer and a backbone service node layer, wherein service nodes in the non-backbone service node layer are connected with gateways in a non-backbone area of an Open Shortest Path First (OSPF) protocol, service nodes in the backbone service node layer are connected with gateways in a backbone area of the OSPF protocol, and the priority of the non-backbone service node layer is higher than that of the backbone service node layer.
25. The method of claim 21, wherein the layering, by the control device, of the plurality of managed service nodes comprises:
the control device divides the plurality of service nodes into a plurality of service node layers according to the time delay from the service nodes to a scheduling node, wherein each service node layer corresponds to a time delay threshold, the time delay corresponding to the time delay threshold is the time delay from the corresponding service node layer to the scheduling node, and the priority of a service node layer with a smaller time delay threshold is higher than the priority of a service node layer with a larger time delay threshold.
26. The method of claim 21, wherein the layering, by the control device, of the plurality of managed service nodes comprises:
the control device divides the plurality of service nodes into an active service node layer and a standby service node layer, wherein the priority of the active service node layer is higher than that of the standby service node layer.
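Of the layering rules in claims 21 to 26, the delay-threshold rule of claim 25 is the only one computed from measurements rather than from topology roles. A sketch of that assignment (threshold values and units are illustrative):

```python
def layer_by_delay(node_delays, thresholds):
    """Claim 25 sketch: assign each node to the layer with the smallest
    threshold its measured delay satisfies; smaller threshold = higher
    priority. Nodes exceeding every threshold are left unassigned."""
    layers = {t: [] for t in thresholds}
    for node, delay in node_delays.items():
        for t in sorted(thresholds):
            if delay <= t:
                layers[t].append(node)
                break
    return layers
```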
27. A message forwarding apparatus, applied to a first device, wherein the apparatus comprises:
a receiving unit, configured to receive a packet sent by a second device, where the packet includes an application service identifier;
a processing unit, configured to determine a first service node layer from multiple service node layers according to the application service identifier, where a service node in the first service node layer has a first layer identifier, and a service node in the first service node layer deploys a target application service corresponding to the application service identifier;
the processing unit is further configured to select a target service node from the first service node layer;
and a sending unit, configured to send the packet to the target service node.
28. The apparatus of claim 27, wherein the processing unit is configured to:
acquire the first service node layer after determining that one or more service nodes in a second service node layer are overloaded, wherein the service nodes in the second service node layer have second layer identifiers, and the priority of the second service node layer is higher than that of the first service node layer.
29. The apparatus of claim 27, wherein the first service node layer is a highest priority service node layer of the plurality of service node layers.
30. The apparatus of claim 29, wherein the first service node layer being the highest priority service node layer of the plurality of service node layers comprises:
the first service node layer is a service node layer closest to the first device in the plurality of service node layers, or the first service node layer is a service node layer with the shortest time delay to the first device in the plurality of service node layers.
31. The apparatus of any one of claims 27 to 30, wherein:
the plurality of service node layers comprise an access service node layer, a convergence service node layer and a core service node layer, wherein service nodes in the access service node layer are connected with access network equipment, service nodes in the convergence service node layer are connected with convergence network equipment, service nodes in the core service node layer are connected with core network equipment, the priority of the access service node layer is higher than that of the convergence service node layer, and the priority of the convergence service node layer is higher than that of the core service node layer; or,
the plurality of service node layers comprise a level1 service node layer and a level2 service node layer, wherein service nodes in the level1 service node layer are connected with gateways in a level1 area of the Intermediate System to Intermediate System (ISIS) protocol, service nodes in the level2 service node layer are connected with gateways in a level2 area of the ISIS protocol, and the priority of the level1 service node layer is higher than that of the level2 service node layer; or,
the plurality of service node layers comprise a non-backbone service node layer and a backbone service node layer, wherein a service node in the non-backbone service node layer is connected with a gateway in a non-backbone area of an Open Shortest Path First (OSPF) protocol, a service node in the backbone service node layer is connected with a gateway in a backbone area of the OSPF protocol, and the priority of the non-backbone service node layer is higher than that of the backbone service node layer; or,
each service node layer in the plurality of service node layers corresponds to a time delay threshold, the time delay corresponding to the time delay threshold is the time delay from the corresponding service node layer to the first device, and the priority of a service node layer with a smaller time delay threshold is higher than the priority of a service node layer with a larger time delay threshold; or,
the plurality of service node layers comprise an active service node layer and a standby service node layer, and the priority of the active service node layer is higher than that of the standby service node layer.
32. The apparatus according to any one of claims 27 to 31, wherein the processing unit is configured to:
selecting, from the first service node layer, the target service node with the smallest Interior Gateway Protocol (IGP) overhead of a link between the target service node and the first device; or,
and selecting, from the first service node layer, the target service node with the shortest time delay to the first device.
33. The apparatus of any of claims 27 to 31, wherein the target service node is a service node in the first service node layer whose link to the first device has the smallest IGP overhead and which is not overloaded; or the target service node is a service node in the first service node layer which has the shortest time delay to the first device and is not overloaded.
34. The apparatus of any one of claims 27 to 33,
the receiving unit is further configured to receive an advertisement message, wherein the advertisement message comprises computing power information corresponding to an application service deployed by a service node, an application service identifier corresponding to the application service deployed by the service node, and a layer identifier of the service node.
35. The apparatus of claim 34, wherein the advertisement message further comprises next hop information, wherein the advertisement message is from a gateway to which the service node is connected, and the next hop information is an address of the gateway to which the service node is connected; or, the advertisement message is from the service node, and the next hop information is an address of the service node.
36. A packet forwarding apparatus, applied to a service node, the apparatus comprising:
a processing unit, configured to generate an advertisement message, wherein the advertisement message comprises computing power information corresponding to an application service deployed by the service node, an application service identifier corresponding to the application service deployed by the service node, and a layer identifier of the service node;
and a sending unit, configured to send the advertisement message to a gateway connected with the service node.
37. The apparatus of claim 36, wherein the advertisement message further comprises next hop information, and the next hop information is an address of the service node.
38. The apparatus according to claim 36 or 37, wherein the advertisement message is a Border Gateway Protocol (BGP) update message.
39. The apparatus of claim 38, wherein the computing power information and the layer identifier are carried in a routing attribute field of the BGP update message, and the application service identifier is carried in a network layer reachability information field of the BGP update message.
40. The apparatus according to any of claims 36 to 39, wherein the advertisement message further comprises a group identifier of the service node, and the group identifier is used to indicate a load sharing group to which the service node belongs.
41. A message forwarding apparatus, applied to a control device, wherein the apparatus comprises:
a processing unit, configured to layer a plurality of managed service nodes, wherein service nodes belonging to the same service node layer have the same layer identifier;
and a sending unit, configured to send the corresponding layer identifiers to the plurality of service nodes respectively.
42. The apparatus according to claim 41, wherein the processing unit is configured to:
divide the plurality of service nodes into an access service node layer, a convergence service node layer and a core service node layer, wherein service nodes in the access service node layer are connected with access network equipment, service nodes in the convergence service node layer are connected with convergence network equipment, service nodes in the core service node layer are connected with core network equipment, the priority of the access service node layer is higher than that of the convergence service node layer, and the priority of the convergence service node layer is higher than that of the core service node layer.
43. The apparatus according to claim 41, wherein the processing unit is configured to:
divide the plurality of service nodes into a level1 service node layer and a level2 service node layer, wherein service nodes in the level1 service node layer are connected with gateways in a level1 area of the Intermediate System to Intermediate System (ISIS) protocol, service nodes in the level2 service node layer are connected with gateways in a level2 area of the ISIS protocol, and the priority of the level1 service node layer is higher than that of the level2 service node layer.
44. The apparatus according to claim 41, wherein the processing unit is configured to:
divide the plurality of service nodes into a non-backbone service node layer and a backbone service node layer, wherein service nodes in the non-backbone service node layer are connected with gateways in a non-backbone area of an Open Shortest Path First (OSPF) protocol, service nodes in the backbone service node layer are connected with gateways in a backbone area of the OSPF protocol, and the priority of the non-backbone service node layer is higher than that of the backbone service node layer.
45. The apparatus according to claim 41, wherein the processing unit is configured to:
divide the plurality of service nodes into a plurality of service node layers according to the time delay from the service nodes to a scheduling node, wherein each service node layer corresponds to a time delay threshold, the time delay corresponding to the time delay threshold is the time delay from the corresponding service node layer to the scheduling node, and the priority of a service node layer with a smaller time delay threshold is higher than the priority of a service node layer with a larger time delay threshold.
46. The apparatus according to claim 41, wherein the processing unit is configured to:
divide the plurality of service nodes into an active service node layer and a standby service node layer, wherein the priority of the active service node layer is higher than that of the standby service node layer.
47. A message forwarding apparatus, comprising: a processor and a memory;
the memory is configured to store a computer program, and the computer program comprises program instructions;
the processor is configured to invoke the computer program to implement the message forwarding method according to any one of claims 1 to 15, or to implement the message forwarding method according to any one of claims 16 to 20, or to implement the message forwarding method according to any one of claims 21 to 26.
48. A message forwarding system, wherein the system comprises a first device and a plurality of service nodes, the first device comprising the apparatus of any one of claims 27 to 35, and the plurality of service nodes comprising the apparatus of any one of claims 36 to 40.
49. The system of claim 48, further comprising: a control device for managing the plurality of service nodes, the control device comprising the apparatus of any of claims 41 to 46.
50. A computer-readable storage medium having stored thereon instructions which, when executed by a processor of a first device, implement the message forwarding method of any of claims 1 to 15; or, when executed by a processor of a service node, implement the message forwarding method of any of claims 16 to 20; or, when executed by a processor of a control device, implement the message forwarding method of any of claims 21 to 26.
CN202110846041.2A 2021-07-26 2021-07-26 Message forwarding method, device and system and computer readable storage medium Pending CN115695561A (en)

Publications (1)

Publication Number Publication Date
CN115695561A true CN115695561A (en) 2023-02-03

Also Published As

Publication number Publication date
WO2023005745A1 (en) 2023-02-02
