CN117914820A - Computing power routing method, device and system - Google Patents

Computing power routing method, device and system

Info

Publication number
CN117914820A
CN117914820A (application CN202211700619.4A)
Authority
CN
China
Prior art keywords
service
service instance
message
routing node
network address
Prior art date
Legal status
Pending
Application number
CN202211700619.4A
Other languages
Chinese (zh)
Inventor
李振斌
陈霞
钱国锋
毛健炜
李呈
胡志波
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN117914820A


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application provides a computing power routing method, device, and system, belonging to the field of communications technologies. In the provided scheme, the computing power packet received by the first routing node through a VPN tunnel includes the network address of the first service instance. Therefore, even if the VPN identifier allocated in the first routing node points to at least two service instances, the first routing node can accurately forward the original packet, obtained after tunnel decapsulation of the computing power packet, to the first service instance based on that network address.

Description

Computing power routing method, device and system
The present application claims priority to Chinese patent application No. 202211268934.4, entitled "A Method of Achieving Dyncast", filed on October 17, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of communications technologies, and in particular, to a computing power routing method, device, and system.
Background
A computing first network (CFN), referred to simply as a computing power network, is a network that provides computing power services for client devices. A computing power network generally includes a plurality of routing nodes and a plurality of service nodes, where each service node is deployed with at least one service instance for providing a computing power service.
In a virtual private network (virtual private network, VPN), an outgoing interface of a routing node may be directly connected to a service node, and a VPN Identifier (ID) may be assigned to that outgoing interface; that is, the VPN identifier may point to the service instance. A routing node directly connected to a service node (i.e., an egress node) may advertise, via VPN control plane protocols, a service route for each service instance deployed in the service node to the other routing nodes, the service route including: the identifier of the computing power service provided by the service instance (i.e., the service identifier), the address of the next-hop routing node, the VPN ID, and the computing power load of the service instance. Each routing node in the computing power network may then generate a computing power routing table based on the received service routes. After receiving a computing power packet sent by a client device, the routing node connected to the client device (i.e., the ingress node) can select a matching service instance based on the computing power load of each service instance recorded in the computing power routing table, and can send the computing power packet to the egress node corresponding to the VPN ID in the routing table.
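The advertise-and-select flow above can be sketched as follows; the field names, addresses, and the least-load selection policy are illustrative assumptions rather than part of the claimed scheme.

```python
from dataclasses import dataclass

@dataclass
class ServiceRoute:
    service_id: str      # identifier of the computing power service
    next_hop: str        # address of the next-hop (egress) routing node
    vpn_id: int          # VPN identifier assigned on the egress node
    instance_addr: str   # network address of the service instance
    load: float          # current computing power load of the instance

class ComputingRoutingTable:
    """Computing power routing table built from received service routes."""

    def __init__(self):
        self.routes = {}  # service_id -> list of ServiceRoute

    def install(self, route: ServiceRoute) -> None:
        self.routes.setdefault(route.service_id, []).append(route)

    def select(self, service_id: str):
        # Ingress behaviour sketched here: pick the least-loaded instance.
        candidates = self.routes.get(service_id, [])
        return min(candidates, key=lambda r: r.load) if candidates else None

table = ComputingRoutingTable()
table.install(ServiceRoute("svc-1", "2001:db8::e1", 100, "2001:db8:a::1", 0.7))
table.install(ServiceRoute("svc-1", "2001:db8::e1", 100, "2001:db8:a::2", 0.3))
best = table.select("svc-1")
# 'best' carries both the VPN ID used for tunnel encapsulation and the
# instance address that must travel inside the computing power packet.
```

Note that the selected route carries both pieces of state the scheme needs: the VPN ID locates the egress node, while the instance address disambiguates among instances behind that VPN ID.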
However, if an egress interface of the egress node is connected to a plurality of service nodes through a forwarding device, the VPN ID of that egress interface may point to a plurality of service instances. The egress node then cannot distinguish the service instances based on the VPN ID alone, and consequently cannot forward the computing power packet to the service instance selected by the ingress node.
Disclosure of Invention
The present application provides a computing power routing method, device, and system, which can solve the technical problem that a routing node cannot distinguish a plurality of service instances based on a VPN ID and therefore cannot forward a computing power packet to a designated service instance.
In a first aspect, a computing power routing method is provided, applied to a first routing node. The method includes: receiving a first computing power packet sent by a second routing node through a VPN tunnel, where the first computing power packet is obtained by the second routing node performing VPN tunnel encapsulation on a first original packet sent by a client device, and the first computing power packet includes a network address of a first service instance; and, after tunnel decapsulation of the first computing power packet, forwarding the first original packet to the first service instance based on the network address of the first service instance.
Because the first computing power packet sent by the second routing node through the VPN tunnel includes the network address of the first service instance, even if the VPN identifier allocated in the first routing node points to at least two service instances, the first routing node can accurately forward the first original packet, obtained by tunnel decapsulating the first computing power packet, to the first service instance based on that network address.
Optionally, the first computing power packet may further include a VPN identifier assigned to the first routing node. Thus, after receiving the first computing power packet, the first routing node can determine the corresponding address space based on the VPN identifier and look up the corresponding table entry to guide the subsequent packet forwarding flow.
Alternatively, the VPN identification may point to at least two service instances, each for providing the target computing power service, the first service instance belonging to the at least two service instances.
Wherein, an outgoing interface of the first routing node may be connected with the service node to which the at least two service instances belong, and the VPN identifier may be a VPN identifier allocated to the outgoing interface.
Optionally, the method may further include: receiving a second computing power packet sent by the second routing node through the VPN tunnel, where the second computing power packet is obtained by the second routing node performing VPN tunnel encapsulation on a second original packet, and includes the VPN identifier and a network address of a second service instance, the second service instance belonging to the at least two service instances; and, after tunnel decapsulation of the second computing power packet, forwarding the second original packet to the second service instance based on the network address of the second service instance.
Because the second computing power packet includes the network address of the second service instance, even if the second computing power packet and the first computing power packet carry the same VPN identifier, the first routing node can accurately forward each original packet to a different service instance based on the network addresses of the different service instances.
Optionally, before receiving the first computing power packet sent by the second routing node through the VPN tunnel, the method may further include: advertising a first service route of the first service instance to the second routing node, the first service route including an identifier of the target computing power service provided by the first service instance, the VPN identifier, the network address of the first service instance, and the computing power load of the first service instance.
The first service route may further include a network address of the first routing node, and may be used by the second routing node to generate a routing table. Accordingly, after receiving an original packet sent by the client device, the second routing node can select a service instance for providing the target computing power service based on the routing table, and encapsulate the network address of that service instance to obtain the computing power packet.
Optionally, the computing power network may further include a metric proxy node corresponding to the first service instance, which may be deployed as an independent module in the service node to which the first service instance belongs. Before advertising the first service route of the first service instance to the second routing node, the method may further include: receiving a second service route for the first service instance advertised by the metric proxy node. The second service route may include the identifier of the target computing power service, the network address of the first service instance, and the computing power load of the first service instance.
The metric proxy node may advertise the second service route via an interior gateway protocol (interior gateway protocol, IGP) or the border gateway protocol (border gateway protocol, BGP).
Optionally, the next-hop attribute in the second service route carries the network address of the first service instance; alternatively, an extended attribute or an extended type-length-value (TLV) field in the second service route carries the network address of the first service instance. In the latter case, the next-hop attribute in the second service route may instead carry the network address of the service node to which the first service instance belongs.
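A minimal sketch of carrying the instance address in a TLV is shown below; the application does not fix a code point, so the type value 0xF1 is a placeholder assumption, not a standardized one.

```python
import socket
import struct

# Hypothetical TLV type code for "service instance address";
# 0xF1 is a placeholder, not a registered code point.
INSTANCE_ADDR_TLV = 0xF1

def encode_instance_tlv(ipv6_addr: str) -> bytes:
    """Encode the instance address as a type-length-value triple."""
    value = socket.inet_pton(socket.AF_INET6, ipv6_addr)
    return struct.pack("!BB", INSTANCE_ADDR_TLV, len(value)) + value

def decode_instance_tlv(tlv: bytes) -> str:
    """Recover the instance address from an encoded TLV."""
    tlv_type, length = struct.unpack("!BB", tlv[:2])
    assert tlv_type == INSTANCE_ADDR_TLV and length == 16
    return socket.inet_ntop(socket.AF_INET6, tlv[2:2 + length])

tlv = encode_instance_tlv("2001:db8:a::1")
recovered = decode_instance_tlv(tlv)
```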
Optionally, the outer layer header of the first computation packet may include the network address of the first service instance. That is, when the second routing node encapsulates the first original packet, the network address of the first service instance may be encapsulated in the outer layer packet header, without changing the content of the first original packet.
Optionally, before forwarding the first original packet to the first service instance based on the network address of the first service instance, the method may further include: based on the indication of VPN identification in the first calculation message, the network address of the first service instance is obtained from the outer layer message header.
In the present application, the VPN identifier in the first computing power packet may define a new forwarding behavior, which may also be referred to as a behavior flag. The new forwarding behavior may include: acquiring the network address of the service instance from the outer packet header. Accordingly, after the first routing node detects the VPN identifier in the first computing power packet, it may execute the new forwarding behavior indicated by that identifier.
Optionally, the VPN tunnel is a segment routing over Internet protocol version 6 (SRv6) tunnel, and the VPN segment identifier or the path segment identifier in the outer packet header includes the network address of the first service instance. Alternatively, the VPN tunnel is a network virtualization over layer 3 (NVO3) tunnel, and the option field in the outer header includes the network address of the first service instance. Alternatively, the VPN tunnel is an IPv6 tunnel, and the destination options header (DOH) in the outer header includes the network address of the first service instance. Alternatively, the VPN tunnel is a multiprotocol label switching (MPLS) tunnel, and the MPLS extension header in the outer packet header includes the network address of the first service instance.
Optionally, the first original packet includes a network address of the first service instance. That is, when the second routing node encapsulates the first original packet, the network address of the first service instance may be inserted into the first original packet.
Optionally, the first original packet is an IPv6 packet, and the network address of the first service instance may be encapsulated in an option field of a Hop By Hop (HBH) option header of the IPv6 packet.
That is, the second routing node inserts a new option field in the HBH options header of the IPv6 packet and carries the network address of the first service instance in that new option field. The new option field can indicate a new forwarding behavior, which includes: ignoring the destination address field of the IPv6 packet and performing table lookup and forwarding based on the network address in the option field. In addition, because the HBH options header carries the network address of the first service instance, every forwarding device between the first routing node and the first service instance can execute the new forwarding behavior under the instruction of the option field of the HBH options header.
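The HBH encapsulation described above can be sketched using the RFC 8200 framing (Next Header, Hdr Ext Len in 8-octet units minus one, then options padded to a multiple of 8 octets). The option type 0x3E is an assumed placeholder whose high-order "00" bits mean "skip if unrecognised"; the application does not fix a code point.

```python
import socket
import struct

OPT_INSTANCE_ADDR = 0x3E  # hypothetical option type, "00" action bits
NEXT_HEADER_TCP = 6

def build_hbh_header(instance_addr: str) -> bytes:
    """Build an IPv6 Hop-by-Hop options header (RFC 8200 framing)
    whose option carries the 128-bit service instance address."""
    addr = socket.inet_pton(socket.AF_INET6, instance_addr)
    option = struct.pack("!BB", OPT_INSTANCE_ADDR, len(addr)) + addr
    body = struct.pack("!BB", NEXT_HEADER_TCP, 0) + option
    # Pad with a PadN option to a multiple of 8 octets (here always 4).
    pad = (8 - len(body) % 8) % 8
    if pad:
        body += struct.pack("!BB", 1, pad - 2) + b"\x00" * (pad - 2)
    # Hdr Ext Len counts 8-octet units, not including the first one.
    return body[:1] + struct.pack("!B", len(body) // 8 - 1) + body[2:]

hdr = build_hbh_header("2001:db8:a::1")
```

A transit device that recognises option 0x3E would extract bytes 4..20 of this header and forward on that address, ignoring the IPv6 destination address field.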
Optionally, the process of forwarding the first original packet to the first service instance based on its network address may include: performing IP tunnel encapsulation on the first original packet and then forwarding it to the first service instance through the IP tunnel, where the destination address of the IP tunnel may be the network address of the first service instance.
Alternatively, the source address of the IP tunnel may belong to an address space corresponding to the VPN identifier. For example, the source address of the IP tunnel may be an interface IP address of a routing egress interface in the first routing node to the first service instance.
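The IP tunnel step can be sketched as prepending a minimal outer IPv6 header whose destination is the instance address and whose source is taken from the VPN's address space; the concrete addresses are illustrative assumptions.

```python
import socket
import struct

IPPROTO_IPV6 = 41  # next-header value for IPv6-in-IPv6 encapsulation

def encapsulate_ip_tunnel(inner_packet: bytes, src: str, dst: str) -> bytes:
    """Prepend a minimal outer IPv6 header (version 6, hop limit 64);
    dst is the service instance address, src an address from the VPN's
    address space, e.g. the egress interface address (illustrative)."""
    outer = struct.pack("!IHBB",
                        0x60000000,         # version 6, tc 0, flow label 0
                        len(inner_packet),  # payload length
                        IPPROTO_IPV6, 64)
    outer += socket.inet_pton(socket.AF_INET6, src)
    outer += socket.inet_pton(socket.AF_INET6, dst)
    return outer + inner_packet

pkt = encapsulate_ip_tunnel(b"\x00" * 8,
                            "2001:db8:ff::1",   # egress-interface address
                            "2001:db8:a::1")    # first service instance
```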
Optionally, the method may further include: and receiving a return message sent by the first service instance through the IP tunnel, and sending the return message to a second routing node through the VPN tunnel based on the VPN to which the access interface receiving the return message belongs.
The tunnel information in the backhaul packet may be generated by the first service instance exchanging the head-node and tail-node addresses in the tunnel information of the packet sent by the first routing node.
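The head/tail exchange described above amounts to reversing the tunnel endpoints; a trivial sketch, with the endpoint-pair representation assumed:

```python
def reverse_tunnel(tunnel_info: tuple) -> tuple:
    """Swap head-end and tail-end addresses so the backhaul packet
    travels the tunnel in the opposite direction."""
    head, tail = tunnel_info
    return (tail, head)

forward = ("2001:db8:ff::1", "2001:db8:a::1")  # routing node -> instance
backhaul = reverse_tunnel(forward)             # instance -> routing node
```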
Optionally, the first original message may include a destination address field and a network address of the first service instance. Based on the network address of the first service instance, the process of forwarding the first original message to the first service instance may include: ignoring the destination address field and forwarding the first original message to the first service instance based on the network address of the first service instance.
In the prior art, the destination address field of the first original packet carries an identifier of the target computing power service, which may correspond to a plurality of service instances. By carrying the network address of the first service instance at a position other than the destination address field, the scheme remains compatible with the prior art as far as possible: after receiving the packet, the first routing node can ignore the destination address field and perform table lookup and forwarding based on the network address of the first service instance. This ensures that the first original packet is accurately forwarded to the first service instance.
Optionally, the first original packet may be an IPv6 packet, and the network address of the first service instance may be encapsulated in an option field of an HBH option header of the IPv6 packet.
Optionally, before forwarding the first original packet to the first service instance based on the network address of the first service instance, the method may further include: and encapsulating the network address of the first service instance into a first original message.
After the first routing node decapsulates the first computing power packet to obtain the first original packet, if it detects that the network address of the first service instance is not encapsulated in the first original packet, it may encapsulate the network address of the first service instance obtained from the outer packet header into the first original packet.
Optionally, the power computing network may further include a forwarding device, through which the first routing node is connected to a service node to which at least two service instances belong. Accordingly, based on the network address of the first service instance, the process of forwarding the first original packet to the first service instance may include: and forwarding the first original message to a service node to which the first service instance belongs through the forwarding equipment based on the network address of the first service instance.
It will be appreciated that the first routing node may be connected to the service node to which at least two service instances belong via one or more forwarding devices. Wherein each forwarding device is capable of forwarding the first original message based on the network address of the first service instance.
Alternatively, the network address of the first service instance may be an IP address. Such as an IPv4 address, or an IPv6 address.
In a second aspect, a computing power routing method is provided, applied to a second routing node in a computing power network. The method includes: receiving a first original packet sent by a client device, the first original packet including an identifier of a target computing power service requested by the client device; performing VPN tunnel encapsulation on the first original packet to obtain a first computing power packet, the first computing power packet including a network address of a first service instance for providing the target computing power service; and then sending the first computing power packet to the first routing node through the VPN tunnel, so that the first routing node, after tunnel decapsulation of the first computing power packet, can forward the first original packet to the first service instance based on the network address of the first service instance.
Optionally, the first computing force message may further include: a VPN identification assigned to the first routing node, the VPN identification pointing to at least two service instances. The at least two business instances are each for providing the target computing power service, and the first business instance belongs to the at least two business instances.
Optionally, the method may further include: receiving a second original packet, the second original packet including the identifier of the target computing power service; performing VPN tunnel encapsulation on the second original packet to obtain a second computing power packet, the second computing power packet including the VPN identifier and a network address of a second service instance, the second service instance belonging to the at least two service instances; and then sending the second computing power packet to the first routing node through the VPN tunnel, so that the first routing node, after tunnel decapsulation of the second computing power packet, can forward the second original packet to the second service instance based on the network address of the second service instance.
Optionally, before performing VPN tunnel encapsulation on the first original packet to obtain the first power packet, the method may further include: receiving at least one service route advertised by a first routing node, wherein the at least one service route comprises an identifier of a target computing power service, a VPN identifier, network addresses of at least two service instances, and computing power loads of the at least two service instances; thereafter, a first service instance is determined from the at least two service instances based on the computational load of the at least two service instances.
Wherein the second routing node may select a first service instance for providing the target computing power service for the client device based on a pre-configured computing network scheduling policy. For example, the first service instance may be the least computationally loaded service instance of the at least two service instances.
Optionally, before performing VPN tunnel encapsulation on the first original packet to obtain the first power packet, the method may further include: and determining the network address of the first service instance from a flow table of the service flow based on the identification information of the service flow to which the first original message belongs.
If the second routing node stores a flow table (also called a session table or binding table) for the service flow to which the first original packet belongs, it may directly determine from the flow table the first service instance that provides the target computing power service for the client device.
It may be understood that, if the first original packet is the first packet of the target computing service requested by the client device, the second routing node may select an appropriate first service instance from the routing table based on the computing network scheduling policy, and generate a flow table of the service flow to which the first original packet belongs based on the network address of the first service instance. And the second routing node can directly look up table and forward based on the flow table when receiving the subsequent message of the service flow.
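The first-packet/subsequent-packet behaviour above can be sketched as a small cache; the flow key and the selection callback are illustrative assumptions.

```python
class IngressNode:
    """Sketch of the flow-table fast path on the second routing node:
    the first packet of a flow triggers instance selection and installs
    a flow entry; subsequent packets hit the cached entry directly."""

    def __init__(self, select_instance):
        self.flow_table = {}                  # flow key -> instance address
        self.select_instance = select_instance

    def resolve(self, flow_key, service_id):
        if flow_key in self.flow_table:
            # Subsequent packet: forward directly from the flow table.
            return self.flow_table[flow_key]
        # First packet: consult the computing power routing table, then
        # pin the flow to the chosen instance.
        addr = self.select_instance(service_id)
        self.flow_table[flow_key] = addr
        return addr

node = IngressNode(lambda sid: "2001:db8:a::2")
key = ("10.0.0.1", 40000, "svc-1")            # illustrative flow key
first = node.resolve(key, "svc-1")
again = node.resolve(key, "svc-1")
# both packets of the flow resolve to the same instance address
```

Pinning the flow this way keeps all packets of one session on one service instance, which matters when the instance holds per-session state.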
Optionally, the outer layer header of the first computation message includes the network address of the first service instance.
Optionally, the VPN tunnel is an SRv6 tunnel, and the VPN segment identifier or the path segment identifier in the outer packet header includes the network address of the first service instance. Or the VPN tunnel is an NVO3 tunnel, and the option field in the outer header includes the network address of the first service instance. Or the VPN tunnel is an IPv6 tunnel, and the destination options header (DOH) in the outer packet header includes the network address of the first service instance. Or the VPN tunnel is an MPLS tunnel, and the MPLS extension header in the outer packet header includes the network address of the first service instance.
Alternatively, the first original message may be an IPv6 message. Correspondingly, the process of performing VPN tunnel encapsulation on the first original message to obtain the first power calculation message may include: encapsulating the network address of the first service instance in an option field of the HBH option header of the IPv6 message; and carrying out VPN tunnel encapsulation on the IPv6 message encapsulated with the network address of the first service instance to obtain a first calculation message.
In a third aspect, a computing power routing method is provided, applied to a forwarding device, where the forwarding device is connected respectively to a first routing node and to the service nodes to which at least two service instances belong, the at least two service instances being used to provide a target computing power service. The method includes: receiving an original packet sent by the first routing node, the original packet including a destination address field and a network address of a first service instance, the first service instance belonging to the at least two service instances; and then ignoring the destination address field and forwarding the original packet to the first service instance based on the network address of the first service instance.
Optionally, the network address of the first service instance may be carried in an option field of the HBH option header of the original packet.
In a fourth aspect, a computing power routing method is provided, applied to a first routing node, where a VPN identifier allocated in the first routing node points to a first service instance. The method includes: advertising a service route of the first service instance to a second routing node, the service route including an identifier of a target computing power service provided by the first service instance, the VPN identifier, the network address of the first service instance, and the computing power load of the first service instance.
Optionally, the VPN identifier also points to a second service instance, which is also used to provide the target computing power service. The method may further include: advertising a service route of the second service instance to the second routing node, the service route including the identifier of the target computing power service, the VPN identifier, the network address of the second service instance, and the computing power load of the second service instance.
In a fifth aspect, a power routing method is provided for a power network comprising a first routing node and a second routing node. The method comprises the following steps: the second routing node receives a first original message sent by the client device, wherein the first original message comprises an identifier of a target computing power service requested by the client device; the second routing node performs VPN tunnel encapsulation on the first original message to obtain a first calculation message, wherein the first calculation message comprises a network address of a first service instance; and then, the second routing node sends the first calculation message to the first routing node through the VPN tunnel. After the first routing node performs tunnel decapsulation on the received first computation message, forwarding the first original message to the first service instance based on the network address of the first service instance.
Optionally, the first computing force message further includes: and VPN identification allocated for the first routing node, wherein the VPN identification points to at least two service instances, the at least two service instances are used for providing target computing power service, and the first service instance belongs to the at least two service instances.
Optionally, the method may further include: the second routing node receives a second original message, wherein the second original message comprises the identification of the target computing power service; the second routing node performs VPN tunnel encapsulation on the second original message to obtain a second calculation message, wherein the second calculation message comprises the VPN identifier and a network address of a second service instance, and the second service instance belongs to the at least two service instances; and then, the second routing node sends the second calculation message to the first routing node through the VPN tunnel. After the first routing node performs tunnel decapsulation on the second computation message, forwarding the second original message to the second service instance based on the network address of the second service instance.
Optionally, before the second routing node performs VPN tunnel encapsulation on the first original packet to obtain the first computing power packet, the method may further include: the first routing node advertising at least one first service route to the second routing node, the at least one first service route including the identifier of the target computing power service, the VPN identifier, the network addresses of the at least two service instances, and the computing power loads of the at least two service instances; and the second routing node determining the first service instance from the at least two service instances based on their computing power loads.
Optionally, the computing power network may further include a metric proxy node. Before the first routing node advertises the at least one first service route to the second routing node, the method may further include: the metric proxy node advertising at least one second service route to the first routing node, the at least one second service route including the identifier of the target computing power service, the network addresses of the at least two service instances, and the computing power loads of the at least two service instances.
Optionally, the outer layer header of the first computation message includes the network address of the first service instance. Before the first routing node forwards the first original message to the first service instance based on the network address of the first service instance, the method may further include: the first routing node obtains a network address of the first service instance from the outer layer header based on the indication of the VPN identification in the first computation message.
Optionally, the first original message includes a network address of the first service instance.
Optionally, the process of forwarding the first original packet to the first service instance by the first routing node based on the network address of the first service instance may include: and after the first routing node encapsulates the first original message through the IP tunnel, forwarding the first original message to the first service instance through the IP tunnel. Wherein the destination address of the IP tunnel is the network address of the first service instance.
Optionally, the power computing network further includes a forwarding device, and the first routing node is connected with the service node to which at least two service instances belong through the forwarding device. The first original message includes a destination address field and a network address of the first service instance. The process of the first routing node forwarding the first original message to the first service instance based on the network address of the first service instance may include: the first routing node ignores the destination address field and forwards the first original message to the forwarding device based on the network address of the first service instance; the forwarding device ignores the destination address field and forwards the first original message to the first service instance based on the network address of the first service instance.
Optionally, before the first routing node forwards the first original packet to the first service instance based on the network address of the first service instance, the method may further include: the first routing node encapsulates the network address of the first service instance into a first original message.
Optionally, the process of performing VPN tunnel encapsulation on the first original packet by the second routing node to obtain the first computation packet may include: the second routing node encapsulates the network address of the first service instance in the first original message; and the second routing node performs VPN tunnel encapsulation on the first original message encapsulated with the network address of the first service instance to obtain a first calculation message.
In a sixth aspect, a network device is provided, which may be a first routing node, a second routing node or a forwarding device. The network device comprises at least one module which may be used to implement the computing power routing method provided in any of the above first to fourth aspects.
In a seventh aspect, a network device is provided, which may be a first routing node, a second routing node or a forwarding device, the network device comprising: the system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the computational power routing method provided in any one of the first aspect to the fourth aspect when the computer program is executed.
In an eighth aspect, there is provided a computer-readable storage medium having instructions stored therein which, when executed on a processor, cause the processor to perform the computing power routing method provided in any one of the first to fourth aspects above.
In a ninth aspect, there is provided a computer program product comprising instructions for execution by a processor to implement the method of computing power routing as provided in any one of the first to fourth aspects above.
In a tenth aspect, a chip is provided, which may be used to implement the computing power routing method provided in any one of the first to fourth aspects.
In an eleventh aspect, a computing power routing system is provided, comprising: a first routing node, a second routing node, and at least one service node connected with the first routing node. The first routing node is configured to implement the method provided in the first or fourth aspect above; the second routing node is configured to implement the method provided in the second aspect above.
Optionally, the computing power routing system may further include: a forwarding device connecting the first routing node with the at least one service node, wherein the forwarding device is configured to implement the method provided in the third aspect above.
Alternatively, the first routing node and the second routing node may each be a routing device supporting VPN forwarding, for example, each may be a Provider Edge (PE) device. The forwarding device may be a Customer Edge (CE) device.
In summary, the present application provides a computing power routing method, device, and system. In the provided scheme, the computing power message received by the first routing node through the VPN tunnel includes the network address of the first service instance that provides the target computing power service. Therefore, even if a VPN identifier allocated in the first routing node points to at least two service instances, the first routing node can accurately forward the original message, obtained after tunnel decapsulation of the computing power message, to the first service instance based on the network address of the first service instance.
Drawings
Fig. 1 is a schematic diagram of a computing power network according to an embodiment of the present application;
Fig. 2 is a schematic diagram of another computing power network according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a further computing power network according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a further computing power network according to an embodiment of the present application;
Fig. 5 is a flow chart of a computing power routing method provided by an embodiment of the present application;
Fig. 6 is a flow chart of another computing power routing method provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of a further computing power network according to an embodiment of the present application;
Fig. 8 is a flow chart of yet another computing power routing method provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a first routing node according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a second routing node according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a forwarding device according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a network device according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of another network device according to an embodiment of the present application.
Detailed Description
The following describes in detail the computing power routing method, device, and system provided by the embodiments of the present application with reference to the accompanying drawings.
To meet the demands of delay-sensitive services, service sites of different scales may be deployed at the network edges close to different client devices (also referred to as users), so as to satisfy the resource deployment requirements of different services. Multiple service instances of the same service may be deployed in the network. A service site may also be referred to as a service node, an edge site, or a mobile edge computing (MEC) site, among other names. A service is also referred to as a computing power service, and a service instance is also referred to as a computing power service instance. In the traditional scheme, services and the network are completely decoupled and mutually unaware, service sites are independent of one another, and resources cannot be reused. As a result, service sites at some hotspots may be overloaded while the resources of other service sites sit idle, i.e., resource utilization is low.
In order to improve the resource utilization of service sites and achieve optimal utilization of resources across the whole network, computing power networks based on dynamic anycast (Dyncast) have emerged. A computing power network can perceive, in real time, the computing power load of the service instances deployed at each service site, and can make unified scheduling decisions for client-device requests to access a computing power service based on the computing power load and network resources, so as to determine the service instance that will provide the computing power service. The computing power load of a service instance may include processor usage, memory usage, the number of online connections, and the like. Processor usage may include central processing unit (CPU) usage and/or graphics processing unit (GPU) usage, etc. The number of online connections may also be referred to as the number of sessions, i.e., the total number of sessions established between the service instance and the respective client devices.
Fig. 1 is a schematic structural diagram of a computing power network according to an embodiment of the present application. As shown in Fig. 1, the computing power network may include a plurality of routing nodes 01 and a plurality of service nodes 02. Communication connections are established among the plurality of routing nodes 01, and each service node 02 establishes a communication connection with one routing node 01. Each routing node 01 may be a device with a packet forwarding function, such as a router or a switch, and may also be referred to as a CFN node or a scheduling node. As can be seen with reference to Fig. 1, the plurality of routing nodes 01 may be located in a CFN layer, which may be carried by an Internet protocol (IP) layer.
At least one service instance for providing computing power services, i.e., for providing computing resources and functions or services, is deployed in each service node 02. Multiple service instances providing the same computing power service may run at different network locations, i.e., be deployed at different service nodes 02. Each service node 02 may be a single physical server (i.e., a service host), a server cluster, or a cloud server, or alternatively a virtual machine, a container, or a load balancer in a cluster. A load balancer is connected with multiple service hosts in its cluster; the multiple service instances deployed on those service hosts can appear externally as a single service instance, and the load balancer forwards the computing power messages (also called application requests) of client devices to the service hosts. A service instance may refer to an application service, running in a host, server, or cluster, that is capable of responding to the application requests of client devices.
With continued reference to Fig. 1, a routing node 01 may also be connected to a client device 03 and configured to route the computing power messages sent by the client device 03 to a matching service instance, so that the matching service instance provides the computing power service for the client device 03. The client device 03, which may also be referred to as user equipment, may be a terminal device such as a residential gateway (RGW), a mobile terminal (e.g., a mobile phone or a notebook computer), a desktop computer, or a personal computer (PC), or may be a service client installed on a terminal or a host. The computing power messages sent by the client device 03 may include a computing power request message and subsequent messages. The computing power request message may be the first message, i.e., the first packet, sent by the client device 03 in the process of establishing a session with the service instance, and the subsequent messages refer to the other messages the client device 03 sends, including the subsequent protocol messages (e.g., handshake messages) for establishing the session with the service instance and the computing task messages for obtaining the computing power service.
It will be appreciated that the routing node 01 directly connected to the client device 03 may also be referred to as an ingress (ingress) node, and the routing node 01 directly connected to the service node 02 to which the matching service instance belongs may also be referred to as an egress (egress) node. That is, the ingress node is a node where the computing power message sent by the client device 03 enters the computing power network, and the egress node is a node where the computing power message sent by the client device 03 leaves the computing power network.
Fig. 2 is a distributed architecture diagram of a computing power network according to an embodiment of the present application. As shown in Fig. 2, the computing power network may further include Dyncast metric agent (D-MA) nodes. A D-MA node may be configured to collect and publish the computing power load metric update messages of service instances, but does not make forwarding decisions. A D-MA node may run in a routing node 01, or may be deployed in a service node 02 together with the service instances as a separate module.
A routing node 01 in a Dyncast computing power network, also referred to as a Dyncast router (D-Router) node, is capable of perceiving metrics related to the computing power load of service instances and metrics related to network performance, and is capable of making forwarding decisions on a per-service-instance basis and maintaining instance stickiness. Instance stickiness refers to forwarding the data packets belonging to the same service requirement to the same service instance.
For example, referring to fig. 2, routing node D-R1 may receive the computational load of a service instance deployed in edge site 1 advertised by the D-MA node and may make forwarding decisions for requests of its connected client devices based on the computational load. The routing node D-R3 may receive the computational load of the service instance deployed in the edge site 2 advertised by the D-MA node and may make forwarding decisions for requests of its connected client devices based on the computational load. Wherein the routing nodes D-R1, D-R2 and D-R3 can be connected through an infrastructure (infra) network.
It is understood that in a Dyncast computing power network, the identity (ID) of a computing power service, i.e., the service identity, is also known as the Dyncast service ID (D-SID). The service identity may be a name, an ID, or an address, for example an anycast address. The service identity is used by the client device 03 to access the computing power service and can identify all service instances of that computing power service. The identity of a service instance may also be referred to as a Dyncast binding ID (D-BID), which identifies a particular service instance of the computing power service indicated by a given D-SID. That is, the D-BID is used to bind the service requests of a certain client device to the service instance identified by that D-BID.
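The D-SID/D-BID relationship above can be sketched as a simple lookup structure. This is an illustrative model only; all identifiers and the `bind_flow` helper below are assumptions, not from the patent.

```python
# Illustrative model of the D-SID / D-BID relationship: one Dyncast
# service ID (e.g. an anycast address) maps to the binding IDs of all
# service instances that can provide that computing power service, and
# a client's service flow is then bound to exactly one D-BID.
# All identifiers here are hypothetical.

d_sid_to_d_bids = {
    "S1_anycast": ["D-BID-site1", "D-BID-site2", "D-BID-siteN"],
}

def bind_flow(d_sid, chooser=lambda bids: bids[0]):
    """Bind a service flow to one instance binding (D-BID) of a service."""
    return chooser(d_sid_to_d_bids[d_sid])

chosen_bid = bind_flow("S1_anycast")
```

The `chooser` stands in for the scheduling decision discussed later; here it simply takes the first binding.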
It can also be appreciated that Dyncast-based computing power routing technology mainly includes the following five core functions:
1. Service computing power awareness: acquire and calculate the metric values of the computing power load indicators of service instances, i.e., computing power metric values.
2. Service computing power advertisement: advertise the metric values of the computing power load indicators of service instances to the scheduling node (i.e., routing node).
3. Service network probing: the scheduling node probes network performance and obtains network metric values, i.e., network performance metric values.
4. Service request scheduling: the scheduling node performs joint scheduling, based on computing power and network metrics, for anycast-based service requests and selects a service site.
5. Service flow keeping: session keeping, flow stickiness, or instance stickiness, so that a service flow can be kept at the selected service site.
In a VPN-based Dyncast computing power network, a PE device of the VPN may act as a routing node 01, i.e., a D-Router node. A service node 02 may then be a CE device of the VPN, directly connected to a PE device of the VPN, and the VPN egress interface and next hop on the PE device will point to the service instance deployed in the service node 02 to which it is connected. The basic principle of a VPN is to encapsulate a VPN tunnel header onto a message and establish a dedicated data transmission channel through the VPN backbone network, so as to transparently transmit the message. The tunnel over which the VPN recurses may be an IPv6 tunnel, an MPLS tunnel, a virtual extensible local area network (VXLAN) tunnel, or the like. In the existing Layer 3 VPN (L3VPN) technology, the VPN may be an IPv4 or IPv6 VPN, or may be a global IPv4 or IPv6 VPN.
In the process of service computing power advertisement, a D-MA node on the CE side or the user network interface (UNI) side first advertises the computing power load information of a service instance to the PE device, i.e., the D-Router node. For example, the D-MA node may advertise the computing power load information of the service instance through IGP or BGP.
For example, as shown in Fig. 3, if a service instance of the computing power service S1 is deployed in edge site n, the D-MA node in edge site n may advertise the computing power load information of that service instance to the PE device Rn. The computing power load information may include network layer reachability information (NLRI) and a computing power path attribute. The NLRI includes the service identifier of the computing power service S1, namely S1 IP, and a next hop (NHP) attribute including the IP address of edge site n. The computing power path attribute includes the computing power metric value 10 of the service instance.
The PE devices can then advertise to one another, through the VPN control plane protocol, the service routes carrying the computing power load information. The prefix of a service route is the service identifier of the computing power service provided by the service instance, and this service identifier is an address of the VPN, such as an IPv4 or IPv6 address of the VPN. The VPN address may be the anycast IP address of the service, corresponding to the D-SID in Dyncast, which serves as the destination address used by a service flow to access the service instance. The service route also carries the address of the next-hop PE device, a VPN ID (corresponding to the D-BID in Dyncast), and a computing power community attribute. The computing power community attribute, which may also be referred to as a computing power path attribute, is used to carry the computing power metric value of the service instance.
By way of example, as shown in Fig. 3, the service route advertised by PE device Rn to the other PE devices may include NLRI and a computing power path attribute. The NLRI includes the service identifier of the computing power service S1, namely S1 IP; the address family, namely Ethernet VPN (EVPN); the next hop attribute, namely the IP address of PE device Rn; and the label, namely VPN SIDn. The computing power path attribute includes the computing power metric value 10 of the service instance.
It will be appreciated that a VPN may assign a VPN ID to each egress interface + next hop of a PE device, and this VPN ID may serve as a D-BID. After the egress PE device connected with an edge site advertises the service route to the other PE devices, the other PE devices can generate, based on the received service route, the mapping relationship between the D-SID and the D-BID, together with the site-granularity computing power information identified by the D-BID.
By way of example, PE device R1 in Fig. 3 may generate a routing table as shown in Table 1 based on the received service routes. The service identifier in the routing table may be the anycast IP address of the computing power service S1, namely S1 IP, and its corresponding next-hop IPs include R1 IP, R2 IP, and Rn IP. The VPN IDs corresponding to these three next-hop IPs are VPN SID1, VPN SID2, and VPN SIDn, respectively. The computing power metric value of the service instance pointed to by VPN SID1 is 60, that of the service instance pointed to by VPN SID2 is 20, and that of the service instance pointed to by VPN SIDn is 10. The routing table also records the network metric value corresponding to each VPN ID, where a network metric value refers to the network performance metric value between the ingress PE device (e.g., R1) and the egress PE device indicated by the next-hop IP. A network metric value may be measured by the ingress PE device or may be configured in the ingress PE device.
TABLE 1

Service ID | Next hop IP | VPN ID | Computing power metric | Network metric
S1 IP | R1 IP | VPN SID1 | 60 | (measured or configured)
S1 IP | R2 IP | VPN SID2 | 20 | (measured or configured)
S1 IP | Rn IP | VPN SIDn | 10 | (measured or configured)
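The structure of a per-service routing table like Table 1 can be sketched as follows. This is a hypothetical illustration: the computing power values (60/20/10) follow the text, while the network metric values shown are placeholders, since the text does not specify them.

```python
# Hypothetical sketch of the routing table of Table 1: each load-sharing
# entry pairs a next-hop egress PE with the VPN ID and the computing
# power metric of the service instance that VPN ID points to, plus a
# network metric toward that egress PE. The "net" values are placeholders.

routing_table = {
    "S1 IP": [  # anycast service identifier -> load-sharing entries
        {"next_hop": "R1 IP", "vpn_id": "VPN SID1", "compute": 60, "net": 5},
        {"next_hop": "R2 IP", "vpn_id": "VPN SID2", "compute": 20, "net": 8},
        {"next_hop": "Rn IP", "vpn_id": "VPN SIDn", "compute": 10, "net": 12},
    ],
}

def entries_for(service_id):
    """Return all load-sharing entries advertised for one service."""
    return routing_table.get(service_id, [])
```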
In the process of service request scheduling and traffic forwarding, the client device first obtains the service identifier of the computing power service, namely its anycast IP address, from the domain name system (DNS) according to the domain name information of the computing power service it requests. The client device can then send an original message with this anycast IP address as the destination address.
The original message first reaches the ingress PE device that the client accesses, and the ingress PE device queries the load-sharing forwarding entries in the routing table according to the anycast IP address and makes a scheduling decision. Referring to Table 1, each load-sharing entry in the routing table includes the computing power metric value of the service instance at the corresponding edge site, received by the ingress PE device via the control plane, and the network performance data (i.e., the network metric value), measured or configured, from the ingress PE device to the egress PE device connected with the corresponding edge site. When making its scheduling decision, the ingress PE device can select the edge site to which the original message is sent according to the service policy corresponding to the service identifier, such as a specified or default scheduling algorithm.
After the ingress PE device obtains the next-hop IP and VPN ID from the scheduling result, it can encapsulate the message into a tunnel destined for the next-hop IP, and send the computing power message obtained after tunnel encapsulation to the egress PE device connected with the selected edge site. The tunnel header of this computing power message includes the next-hop IP and the VPN ID. After receiving the message, the egress PE device can determine, according to the VPN ID and the destination address of the original message (i.e., the anycast IP of the computing power service), the egress interface and next hop toward the service instance at the connected edge site, and forward the original message to the selected service instance.
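The scheduling-plus-encapsulation steps above can be sketched as follows. The combined-metric policy (pick the entry with the lowest computing power + network metric sum) is only one possible default algorithm, and all field names are illustrative assumptions.

```python
# Sketch of the ingress PE's scheduling decision and tunnel encapsulation.
# The policy below (lowest compute+net metric sum) is an assumed default;
# the text leaves the scheduling algorithm open to the service policy.

def schedule(entries):
    """Pick one load-sharing entry by a simple combined-metric policy."""
    return min(entries, key=lambda e: e["compute"] + e["net"])

def tunnel_encapsulate(original_message, entry):
    """Wrap the original message in a tunnel header with next-hop IP + VPN ID."""
    return {
        "tunnel_dst": entry["next_hop"],  # outer destination: egress PE
        "vpn_id": entry["vpn_id"],        # maps to egress interface/next hop
        "payload": original_message,      # inner (original) message unchanged
    }

entries = [
    {"next_hop": "R2 IP", "vpn_id": "VPN SID2", "compute": 20, "net": 8},
    {"next_hop": "Rn IP", "vpn_id": "VPN SIDn", "compute": 10, "net": 3},
]
chosen = schedule(entries)
power_message = tunnel_encapsulate({"dst": "S1 IP", "src": "SIP1"}, chosen)
```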
In the service flow keeping stage, if the service policy corresponding to the service identifier requires support for flow keeping (also referred to as session keeping), the ingress PE device may generate a binding table after selecting an edge site for a client device accessing the computing power service; this binding table may also be referred to as a session table or a flow table. The binding table contains the binding relationship among the identification information of the service flow, the next-hop address, and the VPN ID. Flow keeping means that a fixed service instance continuously provides the computing power service to the same service flow. The identification information of a service flow, which may also be referred to as key information, may be five-tuple information (source address, destination address, source port number, destination port number, and protocol number) or two-tuple information (source address and destination address). Subsequent messages received by the ingress PE device can be forwarded directly by looking up the binding table, so that the client device's access to the computing power service remains continuous.
For example, the ingress PE device R1 may generate the binding table shown in Table 2, where the identification information of a service flow is two-tuple information consisting of a source address and a destination address. Assume that a certain service flow has source address SIP1 and destination address S1 IP, with corresponding next-hop IP Rn IP and corresponding VPN ID VPN SIDn. Accordingly, as shown in Fig. 4, the ingress PE device R1 may encapsulate VPN SIDn into an original message of that service flow and, after tunnel encapsulation, forward it to the egress PE device Rn. The egress PE device Rn may then, based on this VPN SIDn, forward the original message to the service instance in edge site n that provides the computing power service S1.
TABLE 2

Source address | Destination address | Next hop IP | VPN ID
SIP1 | S1 IP | Rn IP | VPN SIDn
SIP2 | S1 IP | R2 IP | VPN SID2
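The flow-keeping behaviour of the binding table in Table 2 can be sketched as follows. The two-tuple key mirrors the description above, while the scheduling step is stubbed out; all names are illustrative.

```python
# Sketch of the ingress PE's binding (session/flow) table: keyed by the
# two-tuple (source address, destination address), so every message of a
# service flow keeps going to the same next hop and VPN ID, i.e. to the
# same service instance. The scheduling of the first packet is stubbed.

binding_table = {}

def forward(src, dst, schedule_once):
    """Return (next_hop, vpn_id) for a flow, creating a binding on first use."""
    key = (src, dst)
    if key not in binding_table:          # first packet of the flow
        binding_table[key] = schedule_once()
    entry = binding_table[key]
    return (entry["next_hop"], entry["vpn_id"])

first = forward("SIP1", "S1 IP", lambda: {"next_hop": "Rn IP", "vpn_id": "VPN SIDn"})
# A later, different scheduling result is ignored: the flow sticks to its binding.
again = forward("SIP1", "S1 IP", lambda: {"next_hop": "R2 IP", "vpn_id": "VPN SID2"})
```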
This computing power routing scheme reuses the VPN technology of existing network devices and implements the computing power routing function by extending it. Because the forwarding protocol does not need to be extended, network value-add is achieved. In addition, the scheme can use the VPN ID of VPN technology to indirectly identify the location of a service instance, supporting network fault localization for computing power routing. However, the PE device does not obtain the actual address of the service instance, so this technical solution cannot reliably support some scenarios. For example, in a scenario where the PE device is not directly connected to the service instance but reaches it through at least one Layer 3 forwarding device, and the PE device is connected to multiple service instances, forwarding from the PE device to those multiple service instances cannot be distinguished by the VPN ID. That is, the egress PE device cannot accurately forward the computing power message to the selected service instance based on the VPN ID in the computing power message.
It will also be appreciated that it is also possible for a VPN in the related art to assign a VPN ID to each VPN instance, or to assign a VPN ID to each traffic route. However, since one VPN instance or one service route may point to multiple service instances, there are scenarios in which the PE device cannot distinguish the multiple service instances by one VPN ID.
The embodiments of the present application provide a computing power routing method, which ensures that, in a VPN scenario, even if one VPN ID in a PE device points to multiple service instances, the PE device can still forward computing power messages accurately. The computing power routing method can be applied to a computing power network that includes a first routing node and a second routing node. The first routing node refers to a routing node directly connected with a service node, i.e., an egress node. The second routing node refers to a routing node directly connected with the client device, i.e., an ingress node. Both the first routing node and the second routing node may be routing devices supporting VPN forwarding. As shown in Fig. 5, the method includes:
step 101, the second routing node receives a first original message sent by the client device.
In the embodiment of the present application, if the client device needs to request a target computing power service, it can send a first original message to the second routing node directly connected with it. The first original message includes an identifier of the target computing power service. The identifier of the target computing power service (which may also be referred to as a service identifier) may be an IP address, for example an IPv6 address, and may be an anycast address. Illustratively, the destination address field of the first original message carries the identifier of the target computing power service.
Step 102, the second routing node performs VPN tunnel encapsulation on the first original message to obtain a first computing power message, where the first computing power message includes the network address of the first service instance.
After receiving the first original message sent by the client device, the second routing node may perform VPN tunnel encapsulation on it to obtain a first computing power message, where the first computing power message includes the network address of the first service instance for providing the target computing power service. The network address of the first service instance may be an IP address. Optionally, the network address of the first service instance may be encapsulated in the outer header of the first computing power message, or may be encapsulated in an option field of the first original message.
It will be appreciated that multiple service instances in the computing power network may each provide the target computing power service, and these service instances may be deployed at different network locations, i.e., at different service nodes. Each of these service instances has its own network address, which identifies the unique location of the service instance in the network. The second routing node may determine, based on the computing power load (or the computing power load and network performance) of each service instance it perceives, a first service instance from among them to provide the target computing power service to the client device, and encapsulate the network address of the first service instance into the first computing power message.
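The key point of step 102 — carrying the selected instance's network address in the computing power message — can be sketched as follows. The flat-dict layout and field names are purely illustrative; the patent allows the address to sit in the outer header or in an option field of the original message.

```python
# Illustrative sketch of step 102: besides the normal VPN tunnel header,
# the first computing power message also carries the network address of
# the selected first service instance. Field names and values here are
# assumptions for illustration only.

def vpn_encapsulate_with_instance(original_message, next_hop, vpn_id, instance_addr):
    return {
        "tunnel_dst": next_hop,          # outer header: egress PE (first routing node)
        "vpn_id": vpn_id,                # VPN identifier (label / SID / VNI)
        "instance_addr": instance_addr,  # network address of the first service instance
        "payload": original_message,     # first original message
    }

power_message = vpn_encapsulate_with_instance(
    {"dst": "S1_anycast", "src": "SIP1"},
    next_hop="Rn IP", vpn_id="VPN SIDn", instance_addr="10.0.0.7")
```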
Step 103, the second routing node sends the first computing power message to the first routing node through the VPN tunnel.
In the embodiment of the present application, because the first routing node and the second routing node are both routing devices supporting VPN forwarding, for example PE devices of the VPN, the second routing node can send the first computing power message to the first routing node through a VPN tunnel. It may be appreciated that the first computing power message may further be encapsulated with the VPN identifier allocated by the VPN for the first routing node.
Optionally, the VPN tunnel (i.e., the tunnel over which the VPN recurses) may be an IPv6 tunnel, an SRv6 tunnel, an MPLS tunnel, an NVO3 tunnel, or the like. For an SRv6 tunnel, the VPN identifier may be a VPN segment identifier (VPN SID), which may also be referred to as an SRv6 service SID. For an MPLS tunnel, the VPN identifier may be a VPN label. For an NVO3 tunnel, the VPN identifier may be a virtual network identifier (VNI), which may also be referred to as a tenant identifier.
Step 104, after performing tunnel decapsulation on the first computing power message, the first routing node forwards the first original message to the first service instance based on the network address of the first service instance.
After the first routing node receives the first computing power message through the VPN tunnel between itself and the second routing node, it can perform tunnel decapsulation on the first computing power message to obtain the first original message and the network address of the first service instance. The first routing node can then forward the first original message to the first service instance based on the network address of the first service instance. Therefore, even if the egress interface and next hop in the first routing node that lead to the first service instance also point to other service instances, i.e., one VPN identifier in the first routing node points to multiple service instances, the first routing node can still forward the first original message accurately based on the network address of the first service instance.
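The egress-side behaviour of step 104 can be sketched as follows: after decapsulation, the forwarding decision keys on the carried instance address rather than on the VPN identifier alone. All names and addresses are illustrative assumptions.

```python
# Sketch of step 104 at the first (egress) routing node: tunnel
# decapsulation recovers the original message and the network address of
# the first service instance; forwarding then keys on that address, so
# one VPN ID pointing at several instances is no longer ambiguous.

def decapsulate_and_forward(power_message):
    instance_addr = power_message["instance_addr"]  # carried instance address
    original = power_message["payload"]             # first original message
    return (instance_addr, original)                # forward original to this address

target, original = decapsulate_and_forward({
    "tunnel_dst": "Rn IP", "vpn_id": "VPN SIDn",
    "instance_addr": "10.0.0.7",
    "payload": {"dst": "S1_anycast", "src": "SIP1"},
})
```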
In summary, the embodiment of the present application provides a computing power routing method in which the computing power message received by the first routing node through the VPN tunnel includes the network address of the first service instance. Therefore, even if one VPN identifier allocated in the first routing node points to multiple service instances, the first routing node can accurately forward the original message, obtained after tunnel decapsulation of the computing power message, to the first service instance based on the network address of the first service instance, effectively ensuring the reliability of computing power routing.
Fig. 6 is a flowchart of another computing power routing method according to an embodiment of the present application, which may be applied to a computing power network including a first routing node and a second routing node. The first routing node refers to a routing node directly connected with a service node, i.e., an egress node. The second routing node refers to a routing node directly connected with the client device, i.e., an ingress node; the ingress node is a traffic scheduling node and has network-level load balancing and session keeping functions.
The first routing node and the second routing node may be both routing devices supporting VPN forwarding, for example, may be PE devices. Correspondingly, the second routing node is also an ingress PE device of the VPN, and the first routing node is also an egress PE device of the VPN. As shown in fig. 6, the method includes:
step 201, the first routing node receives at least one second traffic route advertised by the metric proxy node.
In an embodiment of the application, a metric proxy node, which may also be referred to as a D-MA node, may be deployed in each service node in the computing network. The metric proxy node can collect the computational load of each service instance deployed in the service node, and can advertise a second service route carrying the computational load of the service instance to the routing node directly connected to the service node. For example, the metric-agent node may advertise the second traffic route to the routing node via BGP or IGP.
In the embodiment of the application, the first routing node can be connected with at least one service node through the forwarding device, and at least two service instances for providing the target computing power service are deployed in the at least one service node. Accordingly, the first routing node may receive at least one second traffic route advertised by a metric proxy node deployed in the at least one serving node. The at least one second traffic route may include an identification of the target computing power service, network addresses of the at least two traffic instances, and computing power loads of the at least two traffic instances. It will be appreciated that each second traffic route may include an identification of the target computing service, a network address of at least one traffic instance, and a computing load of the at least one traffic instance.
The identifier of the target computing power service may be an anycast address of the target computing power service, the network address of the service instance may be an IP address (i.e., the actual location of the service instance in the computing power network), and the computing power load of the service instance may be represented by a computing power magnitude. The computing power magnitude may be calculated by the metric proxy node based on the processor usage, memory usage, and/or number of online connections of the service instance.
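As a concrete illustration of the last point, the following sketch shows one way a metric proxy (D-MA) node might fold processor usage, memory usage, and connection count into a single computing power magnitude. The equal weighting, the 0..100 scale, and the connection capacity are illustrative assumptions, not specified by the text.

```python
def compute_power_load(cpu_usage: float, mem_usage: float,
                       online_connections: int, max_connections: int = 1000) -> int:
    """Combine processor usage, memory usage, and online connection count
    into one computing power magnitude in 0..100 (higher = more loaded).
    Weights and normalization are assumptions for illustration."""
    conn_ratio = min(online_connections / max_connections, 1.0)
    score = (cpu_usage + mem_usage + conn_ratio) / 3  # equal weighting assumed
    return round(score * 100)

# Example: a lightly loaded instance (5% CPU, 15% memory, 100 connections).
load = compute_power_load(cpu_usage=0.05, mem_usage=0.15, online_connections=100)
```

A real D-MA node would advertise the resulting magnitude in the computing power path attribute of the second service route.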
Optionally, the next hop attribute in the second service route may carry the network address of the service instance. Alternatively, an extended attribute or an extended TLV in the second service route carries the network address of the service instance. In addition, the next hop attribute in the second service route may carry the network address of the service node to which the service instance belongs. Since the next-hop attribute in a service route is typically used to carry the network address of the service node, the network address of the service instance carried by the next-hop attribute may also be referred to as an implicit network address. Accordingly, the network address of the service instance carried by the extended attribute or the extended TLV may also be referred to as the explicit network address.
By way of example, as shown in fig. 7, it is assumed that the first routing node PE3 is connected to two service nodes (i.e., the edge site 3 and the edge site 4) through the forwarding device CE3, and that service instances for providing the target computing power service S1 are deployed in both service nodes. Accordingly, the D-MA node in edge site 3 may advertise a second service route to the first routing node PE3, where the NLRI in the second service route includes the identifier of the target computing power service S1: S1 IP, the NHP carries the network address of a service instance for providing the target computing power service S1: E3IP, and the computing power path attribute carries the computing power magnitude 10 of the service instance.
Similarly, the D-MA node in edge site 4 may advertise a second service route to the first routing node PE3, where the NLRI in the second service route includes the identifier of the target computing power service S1: S1 IP, the NHP carries the network address of a service instance for providing the target computing power service S1: E4IP, and the computing power path attribute carries the computing power magnitude 10 of the service instance.
It will be appreciated that the metric proxy node may also be deployed in the first routing node. Accordingly, the metric proxy node may obtain the network addresses and computing power loads of the at least two service instances, and the identifier of the target computing power service S1, from the service node. For example, the metric proxy node may obtain the information via a hypertext transfer protocol (HTTP) request.
Step 202, the first routing node advertises at least one first traffic route to the second routing node.
After the first routing node receives the at least one second service route advertised by the metric proxy node, it can extract the network address of each service instance from the at least one second service route according to a configured policy. For example, the first routing node may extract the implicit network address or the explicit network address from each second service route according to the configured policy. The first routing node may then advertise at least one first service route to other routing nodes including the second routing node, e.g., via BGP.
Wherein the at least one first traffic route may include an identification of the target computing power service, an identification of the VPN assigned in the first routing node, network addresses of the at least two traffic instances, and computing power load of the at least two traffic instances. Optionally, the at least one first traffic route may further include a network address of the first routing node. Each first traffic route may include an identification of the target computing power service, the VPN identification, a network address of at least one traffic instance, and a computing power load of the at least one traffic instance.
For example, each first service route may carry the network addresses of a plurality of service instances and the computing power loads of those service instances. Alternatively, each first service route may carry the network address and computing power load of a single service instance; that is, the first routing node may advertise one first service route per service instance. In this case, the plurality of first service routes advertised by the first routing node may include an NLRI and an additional path (add-path) attribute. The NLRI includes the identifier of the target computing power service (i.e., the prefix of the service route) and a path identifier (path ID), and the additional path attribute carries the network address and computing power load of a service instance. The prefixes of the plurality of first service routes are the same while their path IDs differ, so the path IDs can distinguish different service instances.
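The per-instance variant can be sketched as follows (field names are assumed for illustration): all first service routes share the same NLRI prefix (the service identifier) but carry distinct path IDs, so a BGP add-path receiver keeps them as separate paths instead of letting one replace another.

```python
def build_first_routes(service_id, vpn_sid, instances):
    """Build one first service route per service instance.
    instances: list of (instance_ip, power_load) tuples.
    Structure and field names are assumptions for illustration."""
    routes = []
    for path_id, (instance_ip, load) in enumerate(instances, start=1):
        routes.append({
            # Same prefix for every route; the path ID tells them apart.
            "nlri": {"prefix": service_id, "path_id": path_id},
            "vpn_id": vpn_sid,  # shared VPN identifier of the egress interface
            "add_path_attr": {"instance_ip": instance_ip, "power_load": load},
        })
    return routes

# Fig. 7 example: PE3 advertises routes for the two instances behind CE3.
routes = build_first_routes("S1_IP", "VPN_SID3", [("E3_IP", 10), ("E4_IP", 10)])
```

Note how both routes carry the same VPN identifier (VPN SID3), matching the case where one outgoing interface + next hop points to multiple service instances.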
Assuming that the at least two service instances for providing the target computing power service include a first service instance and a second service instance, the first routing node may advertise a first service route of the first service instance and a first service route of the second service instance, respectively, to the second routing node. The first service route of the first service instance comprises an identifier of the target computing power service, a VPN identifier, a network address of the first service instance and a computing power load. The first service route of the second service instance comprises an identifier of the target computing power service, a VPN identifier, a network address of the second service instance and a computing power load.
In the embodiment of the present application, the network address of the service instance may be carried in an extended attribute or TLV of the first service route. The VPN identification in the first traffic route may be a VPN identification pointing to at least one traffic instance in the first traffic route. For example, assuming that each outgoing interface+next hop in the first routing node is assigned a VPN identifier, the first traffic route of each traffic instance carries the VPN identifier assigned to the outgoing interface+next hop of the traffic instance. That is, if a certain outgoing interface+next hop points to a plurality of service instances, the same VPN identifier is carried in the first service route of the plurality of service instances.
For example, as shown in fig. 7, assuming that the VPN identifier allocated by the outgoing interface connected to the forwarding device CE3 in the first routing node PE3 is VPN SID3, the VPN SID3 may point to two service instances of the target computing power service S1, where the network addresses of the two service instances are E3IP and E4IP, respectively. Accordingly, the first routing node PE3 may advertise the first service routes of the two service instances to other routing nodes including the second routing node PE1, where VPN identifiers carried in the first service routes of the two service instances are VPN SID3.
Step 203, the second routing node generates a routing table based on the received at least one first traffic route.
In the embodiment of the application, after the second routing node receives the first service routes advertised by the first routing node and other routing nodes, it can generate a routing table based on these first service routes. The routing table may also be referred to as a load-sharing forwarding table, and may record the identifier of the target computing power service, the network address of each service instance for providing the target computing power service, the direct next hop from the second routing node to each service instance, the VPN identifier corresponding to each service instance, and the computing power load of each service instance. The VPN identifier corresponding to each service instance refers to the VPN identifier in the first service route of that service instance. The computing power load of each service instance may be characterized by a computing power magnitude.
For example, referring to fig. 7, a service instance for providing the target computing power service S1 is deployed in each of the edge sites connected to the first routing node PE3 and the routing node PE2, and a service instance for providing the target computing power service S1 is also deployed in the edge site 1 connected to the second routing node PE 1. Thus, the second routing node PE1 can generate a routing table such as that shown in Table 3 based on the first traffic routes advertised by the first routing node PE3 and the routing node PE2, and the second traffic routes advertised by the metric proxy node D-MA in the edge site 1.
Referring to Table 3, the identifier of the target computing power service in the routing table (which may also be referred to as a service identifier or service IP) is S1 IP, and the network addresses of the corresponding service instances are: E1IP, E2IP, E3IP, and E4IP. The next hop IP corresponding to the service instance E1IP is E1IP, the corresponding VPN identifier is VPN SID1, and the computing power magnitude is 60. The next hop IP corresponding to the service instance E2IP is the network address of the routing node PE2: PE2 IP, the corresponding VPN identifier is VPN SID2, and the computing power magnitude is 20. The next hop IPs corresponding to the service instances E3IP and E4IP are both the network address of the first routing node PE3: PE3 IP, the corresponding VPN identifiers are both VPN SID3, and the computing power magnitudes are 10. The VPN SID1 is the VPN identifier allocated to the outgoing interface connected to the edge site 1 in the second routing node PE1, the VPN SID2 is the VPN identifier allocated to the outgoing interface connected to the edge site 2 in the routing node PE2, and the VPN SID3 is the VPN identifier allocated to the outgoing interface connected to the forwarding device CE3 in the first routing node PE3.
TABLE 3

| Service identifier | Service instance IP | Next hop IP | VPN identifier | Computing power magnitude |
| S1 IP | E1IP | E1IP | VPN SID1 | 60 |
| S1 IP | E2IP | PE2 IP | VPN SID2 | 20 |
| S1 IP | E3IP | PE3 IP | VPN SID3 | 10 |
| S1 IP | E4IP | PE3 IP | VPN SID3 | 10 |
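The load-sharing routing table can be modeled minimally as follows (field names assumed): one entry per service instance, keyed by the service identifier, recording next hop, VPN identifier, and computing power load.

```python
# In-memory model of the load-sharing forwarding table described above.
# Values mirror the fig. 7 / Table 3 example; field names are assumptions.
routing_table = {
    "S1_IP": [
        {"instance_ip": "E1_IP", "next_hop": "E1_IP",  "vpn_id": "VPN_SID1", "load": 60},
        {"instance_ip": "E2_IP", "next_hop": "PE2_IP", "vpn_id": "VPN_SID2", "load": 20},
        {"instance_ip": "E3_IP", "next_hop": "PE3_IP", "vpn_id": "VPN_SID3", "load": 10},
        {"instance_ip": "E4_IP", "next_hop": "PE3_IP", "vpn_id": "VPN_SID3", "load": 10},
    ],
}

def instances_for(service_id):
    """All candidate instances providing the given computing power service."""
    return routing_table.get(service_id, [])
```

Note that two entries share next hop PE3 IP and VPN SID3, which is exactly the ambiguity the per-instance network address later resolves at the egress node.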
Optionally, as shown in table 1, a network metric value of each service instance is also recorded in the routing table, where the network metric value may be calculated based on network performance between the second routing node and each service instance. The network performance may be characterized by at least one of the following parameters: delay, packet loss rate, jitter, etc. And, the network metric value may be calculated by the second routing node based on the detected network performance, or may be calculated and advertised by a metric proxy node in the service node to which each service instance belongs.
It will be appreciated that the first routing node may also receive traffic routes advertised by other routing nodes and may generate a routing table based on the received traffic routes. The routing table may also record an identifier of the target computing power service, a network address of each service instance for providing the target computing power service, a direct-connection next hop from the first routing node to each service instance, and a VPN identifier corresponding to each service instance.
For example, for the computing power network shown in fig. 7, the first routing node PE3 may generate a routing table such as that shown in Table 4. The identifier of the target computing power service in the routing table is S1 IP, and the network addresses of the corresponding service instances are: E1IP, E2IP, E3IP, and E4IP. The next hop IP corresponding to the service instance E1IP is the network address of the second routing node PE1: PE1 IP, and the corresponding VPN identifier is VPN SID1. The next hop IP corresponding to the service instance E2IP is the network address of the routing node PE2: PE2 IP, and the corresponding VPN identifier is VPN SID2. The next hop IPs corresponding to the service instances E3IP and E4IP are both the network address of the forwarding device CE3: IP3, and the corresponding VPN identifiers are both VPN SID3.
TABLE 4

| Service identifier | Service instance IP | Next hop IP | VPN identifier |
| S1 IP | E1IP | PE1 IP | VPN SID1 |
| S1 IP | E2IP | PE2 IP | VPN SID2 |
| S1 IP | E3IP | IP3 | VPN SID3 |
| S1 IP | E4IP | IP3 | VPN SID3 |
It will also be appreciated that, in the above step 201, if the metric proxy node advertises the second service route of a service instance to the first routing node through the IGP, each forwarding device between the metric proxy node and the first routing node may also generate an entry recording the network address of the service instance. For example, each forwarding device may record the network address of the service instance in an extension field of its routing table, which may also include information such as the identifier of the target computing power service, the outgoing interface, and the next hop. Alternatively, the forwarding device may generate a new entry to record the network address of the service instance, and the network address of each service instance in the new entry may correspond to a key field in the routing table. The key field may include information such as the next hop IP and/or the outgoing interface.
Step 204, the second routing node receives the first original message sent by the client device.
In the embodiment of the present application, if the client device needs to request the target computing power service, the first original message may be sent to the second routing node directly connected to the client device. The first original message may include an identification of a target computing service, which may be an IP address, for example, an IPv6 address. And, the identity of the computing service may be an anycast address. For example, the destination address field of the first original message may carry an identification of the target computing service.
Step 205, the second routing node determines a first service instance from the routing table for providing the target computing power service.
After receiving the first original message sent by the client device, the second routing node may determine, from the routing table, a first service instance for providing the target computing service based on the identifier of the target computing service carried in the first original message. For example, the second routing node may compare the computational load of at least two service instances recorded in the routing table for providing the target computational service based on a pre-configured computational network scheduling algorithm, and determine a first service instance satisfying a service condition from the at least two service instances.
The service condition may include: the computational load of the service instance is less than a load threshold. The load threshold may be preconfigured in the second routing node, or may be determined based on the computational loads of the respective service instances. For example, the load threshold may be the average, the lower quartile, or the upper quartile of the computational loads of the service instances. Alternatively, the load threshold may be the second-smallest computational load after sorting the loads of the service instances in ascending order; correspondingly, the second routing node may determine the service instance with the smallest computational load as the first service instance satisfying the service condition.
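Under the last option above (threshold equal to the second-smallest load), the scheduling step reduces to picking the least-loaded instance. A minimal sketch, with tie-breaking by list order as an extra assumption:

```python
def select_instance(candidates):
    """Pick the service instance with the smallest computational load.
    candidates: non-empty list of dicts with an integer 'load' field."""
    return min(candidates, key=lambda inst: inst["load"])

# Fig. 7 example: E4IP (load 10) wins over E1IP (60) and E2IP (20).
chosen = select_instance([
    {"instance_ip": "E1_IP", "load": 60},
    {"instance_ip": "E2_IP", "load": 20},
    {"instance_ip": "E4_IP", "load": 10},
])
```

A fuller computing network scheduling algorithm could additionally filter on the network metric value, as the next paragraph describes.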
Optionally, the second routing node may determine the first service instance based on the network performance of the respective service instance in addition to the computational load. Accordingly, the service condition may further include: network performance between the service instance and the second routing node is better than the performance threshold. The performance threshold may be preconfigured in the second routing node or may be determined based on network metric values for the respective service instances. For example, it may be the mean, smaller quartile, or larger quartile of the network metric values for the respective service instances, etc. For example, the second routing node may determine the service instance with the least computational load and the optimal network performance as the first service instance satisfying the service condition.
It will be appreciated that the above description takes as an example the case where the first original message is the first request message (i.e., the first packet) sent by the client device for the target computing power service. That is, for the first packet sent by the client device for the target computing power service, the second routing node may determine, based on a preconfigured computing network scheduling algorithm, a first service instance that meets the service condition from the at least two service instances.
The first packet sent by the client device may be used to request creation of a session with a service instance that provides the target computing power service, and may also be referred to as a computing power request message. After the client device creates a session with a service instance that provides the target computing power service, it can obtain the computing power service through the session. The computing power request message may be a request message for a transmission control protocol (TCP) connection, or a request message for a QUIC (quick UDP Internet connections) connection. Of course, the computing power request message may also be any other first packet for establishing a session above the network layer (such as at the application layer or transport layer), which is not limited by the embodiment of the present application.
After the second routing node determines the first service instance, it may also generate a flow table (also referred to as a session table or a binding table) for the service flow to which the computing power request message belongs. The flow table records the correspondence between the identification information of the service flow and the network address of the first service instance. Accordingly, when the second routing node receives a subsequent message for the target computing power service sent by the client device, it can directly determine the network address of the first service instance from the flow table of the service flow. That is, if the first original message is a subsequent message sent by the client device for the target computing power service, the second routing node may directly determine the network address of the first service instance from the flow table based on the identification information of the service flow to which the first original message belongs.
The identification information of the service flow may also be called key information, and may be 5-tuple information, 2-tuple information, or the like. The subsequent messages sent by the client device may include: subsequent protocol messages (e.g., subsequent handshake messages) other than the first packet for creating a session with a service instance providing the target computing power service, and computing power task messages for acquiring the target computing power service.
For example, assume that the computing power request message sent by the client device belongs to service flow 2, and its destination address is the identifier of the target computing power service S1: S1 IP. If the second routing node determines, based on the routing table shown in Table 3, that the network address of the first service instance for providing the target computing power service S1 to the client device is E4IP, the second routing node may generate a flow table such as that shown in Table 5. The identification information of the service flow recorded in the flow table includes the source address and destination address of service flow 2, and the flow table also records the corresponding next hop IP: PE3 IP, the VPN identifier: VPN SID3, and the network address E4IP of the corresponding first service instance.
TABLE 5

| Source address | Destination address | Next hop IP | VPN identifier | Service instance IP |
| Source IP of traffic flow 1 | S1 IP | PE2 IP | VPN SID2 | E2IP |
| Source IP of traffic flow 2 | S1 IP | PE3 IP | VPN SID3 | E4IP |
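The session-maintenance behavior built on this flow table can be sketched as follows: the first packet of a flow triggers instance selection and pins the flow; subsequent packets of the same flow reuse the recorded instance regardless of current loads. The 2-tuple key and field names are assumptions for brevity.

```python
flow_table = {}  # (source, destination) -> pinned service instance IP

def forward(src_ip, dst_service_ip, select):
    """Return the instance for this packet, pinning the flow on first use.
    select: callable implementing the scheduling algorithm (assumed)."""
    key = (src_ip, dst_service_ip)
    if key not in flow_table:          # first packet of the flow
        flow_table[key] = select()      # run scheduling once, pin the result
    return flow_table[key]              # subsequent packets: same instance

first = forward("client2", "S1_IP", select=lambda: "E4_IP")
later = forward("client2", "S1_IP", select=lambda: "E3_IP")  # selector ignored
```

This is exactly the "stickiness" discussed next: when the service policy requires session maintenance, the pinned entry wins over any fresh scheduling decision.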
It will be appreciated that the second routing node may also be configured with a service policy corresponding to the target computing power service, where the service policy indicates whether session maintenance needs to be supported. If the service policy indicates that session maintenance is supported, the second routing node may, after determining the first service instance, generate a flow table for the service flow to which the computing power request message belongs, so as to forward subsequent messages of the service flow based on the flow table, thereby maintaining the stickiness of the service flow.
If the service policy indicates that session maintenance is not required to be supported, the second routing node may not need to generate a flow table of the service flow to which the computing power request message belongs. Correspondingly, after receiving the subsequent message sent by the client device, the second routing node can re-determine the service instance for providing the target computing power service from at least two service instances based on a pre-configured computing network scheduling algorithm. It will be appreciated that, as the computational load and network performance of each service instance vary, the service instances determined by the second routing node for providing the targeted computational service may or may not be the same for different messages of the same traffic flow.
Step 206, the second routing node performs VPN tunnel encapsulation on the first original message to obtain a first computing power message.
After the second routing node determines the first service instance for providing the target computing power service, it can perform VPN tunnel encapsulation on the first original message based on the network address of the first service instance to obtain the first computing power message. The first computing power message obtained after tunnel encapsulation includes the network address of the first service instance. Performing VPN tunnel encapsulation on the first original message may refer to: encapsulating the first original message into the VPN tunnel towards the next hop IP corresponding to the network address of the first service instance. The next hop IP corresponding to the network address of the first service instance is the IP of the first routing node.
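Conceptually, the encapsulation wraps the unchanged original message with an outer header carrying the tunnel destination, the VPN identifier, and the instance address. The dict structure and field names below are assumptions for illustration, not a wire format:

```python
def vpn_encapsulate(original_packet, tunnel_dst, vpn_id, instance_ip):
    """Model step 206: wrap the original packet in a VPN tunnel whose
    outer header carries the selected instance's network address."""
    return {
        "outer": {
            "tunnel_dst": tunnel_dst,    # next hop IP, i.e. the egress PE
            "vpn_id": vpn_id,            # VPN identifier
            "instance_ip": instance_ip,  # network address of the instance
        },
        "inner": original_packet,        # original message left unchanged
    }

# Fig. 7 example: PE1 tunnels the request towards PE3 for instance E4IP.
pkt = vpn_encapsulate({"dst": "S1_IP", "payload": b"req"},
                      tunnel_dst="PE3_IP", vpn_id="VPN_SID3", instance_ip="E4_IP")
```

The per-tunnel examples that follow (SRv6, NVO3, IPv6 DOH, MPLS) are different concrete placements of the `instance_ip` field within a real outer header.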
It will be appreciated that during VPN tunnel encapsulation, the second routing node encapsulates the VPN identifier and the tunnel identifier (which may also be referred to as tunnel information) onto the first original message. The tunnel identifier is used to forward the first computing power message between the second routing node and the first routing node. The VPN identifier is used by the first routing node to determine the address space corresponding to the destination address of the first original message and to look up the corresponding table entry to guide the subsequent message forwarding flow.
Optionally, the VPN tunnel towards the next hop IP may be an IPv6 tunnel, an SRv6 tunnel, an MPLS tunnel, an NVO3 tunnel, or the like. NVO3 uses an IP-based layer-3 bearer network (L3 underlay) over which an overlay network is tunneled; it can support large-scale tenant networks, and offers various VPN tunnel encapsulation manners, such as VXLAN encapsulation and generic network virtualization encapsulation (Geneve).
In one possible implementation, the second routing node may encapsulate the network address of the first service instance into the outer layer header of the first computing power message. The outer layer header may include a basic header or may also include an extension header, and the network address of the first service instance may be encapsulated in either the basic header or the extension header.
As a first example of the above implementation, the VPN tunnel may be an SRv6 tunnel, and the VPN segment identifier (VPN SID) or the path segment identifier (path SID) in the outer layer header carries the network address of the first service instance. The arguments field of the VPN SID may carry the network address of the first service instance, and the portion of the VPN SID other than the network address of the first service instance may be used as the VPN identifier.
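The split of a VPN SID into a VPN-identifier part and an arguments part can be illustrated with integer bit operations. The 64-bit arguments width below is an assumption for illustration; real SRv6 deployments choose their own locator/function/arguments lengths.

```python
ARG_BITS = 64                     # assumed width of the arguments field
ARG_MASK = (1 << ARG_BITS) - 1

def embed_instance(vpn_sid_base: int, instance_arg: int) -> int:
    """Place the instance address (as an integer) in the low ARG_BITS
    of the 128-bit SID; the upper bits remain the VPN identifier part."""
    return (vpn_sid_base & ~ARG_MASK) | (instance_arg & ARG_MASK)

def split_sid(sid: int):
    """Egress view: upper bits act as the VPN identifier, lower bits
    as the service instance's network address."""
    return sid >> ARG_BITS, sid & ARG_MASK

# Hypothetical SID base and instance argument, for illustration only.
sid = embed_instance(0x20010DB8000000030000000000000000, 0xE4)
vpn_part, instance_part = split_sid(sid)
```

At the egress PE, recognizing the upper (VPN identifier) part triggers the new forwarding behavior of reading the instance address back out of the arguments field.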
In this first example, the VPN SID may also define a new forwarding behavior, including: reading the network address of the service instance from the arguments field of the VPN SID. Likewise, the path segment identifier may also define a new forwarding behavior, including: reading the network address of the service instance from the path segment identifier.
It will be appreciated that the outer layer header of the first computing power message may include a segment routing header (SRH), and both the VPN SID and the path segment identifier may be located in the SRH. For example, the VPN SID and the path segment identifier may each be one SID in the segment list of the SRH.
As a second example of the above implementation, the VPN tunnel may be an NVO3 tunnel, and an option field in the outer layer header carries the network address of the first service instance. If the NVO3 tunnel is a Geneve tunnel, the outer layer header may include a Geneve header, and a newly defined option field in the Geneve header can carry the network address of the first service instance. If the NVO3 tunnel is another NVO3 tunnel such as VXLAN, a newly defined option field in the basic header or the extension header in the outer layer header can carry the network address of the first service instance.
In this second example, the outer layer header may further include a VNI (i.e., the VPN identifier), where the VNI may further define a new forwarding behavior, including: continuing to obtain the network address of the service instance from the outer layer header (e.g., the Geneve header, basic header, or extension header).
As a third example of the above implementation, the VPN tunnel is an IPv6 tunnel, and the destination options header (DOH) in the outer layer header (i.e., the outer layer IPv6 header) includes the network address of the first service instance. The DOH is an extension header of the outer layer IPv6 header. The DOH also carries a VPN ID; for example, the DOH sequentially encapsulates the VPN ID and the network address of the first service instance. The VPN ID may define a new forwarding behavior, including: continuing to obtain the network address of the service instance from the DOH.
As a fourth example of the above implementation, the VPN tunnel is an MPLS tunnel, and the MPLS extension header in the outer layer header includes the network address of the first service instance. The outer layer header also includes an MPLS label stack, where the MPLS label stack includes a VPN label (i.e., the VPN identifier).
In this fourth example, the VPN identifier in the MPLS label stack, or a special indication label carried in the MPLS extension header, may define a new forwarding behavior, which may include: obtaining the network address of the service instance from the MPLS extension header.
The above implementation is described by taking as an example the case where the second routing node encapsulates the network address of the first service instance into the outer layer header of the first computing power message; that is, this implementation does not change the content of the first original message. In another possible implementation of the embodiment of the present application, the second routing node may encapsulate the network address of the first service instance into the first original message itself, and then perform VPN tunnel encapsulation on the first original message carrying the network address of the first service instance, to obtain the first computing power message. That is, this other implementation changes the content of the first original message.
The first original message may be an IPv6 message, and the second routing node may encapsulate the network address of the first service instance in an option field of the hop-by-hop (HBH) options header of the IPv6 message. The option field of the HBH options header may be used to instruct forwarding of the IPv6 message based on the network address of the first service instance. That is, the option field of the HBH options header defines a new forwarding behavior, including: ignoring the destination address field of the IPv6 message and forwarding based on a table lookup using the network address in the option field. It can be appreciated that if the IPv6 message does not include this option field, the device that receives the IPv6 message may look up the table entry based on the destination address field of the IPv6 message and forward accordingly.
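The lookup-key rule just described can be sketched as follows: when the (assumed) HBH option carrying the instance address is present, it overrides the destination address as the forwarding key; otherwise, the destination address is used as usual.

```python
def lookup_key(ipv6_packet):
    """Return the address used for the forwarding table lookup.
    Field names model the packet abstractly and are assumptions."""
    hbh = ipv6_packet.get("hbh_options", {})
    # New forwarding behavior: if the option is present, ignore the
    # destination address field and forward on the instance address.
    if "instance_ip" in hbh:
        return hbh["instance_ip"]
    return ipv6_packet["dst"]        # default IPv6 behavior

plain = {"dst": "S1_IP"}                                          # no option
tagged = {"dst": "S1_IP", "hbh_options": {"instance_ip": "E4_IP"}}  # with option
```

The destination address stays the anycast service identifier S1 IP in both cases; only the presence of the option changes which service instance the packet reaches.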
For example, assuming that the network address of the first service instance determined by the second routing node PE1 for providing the target computing power service S1 to the client device is E4IP, the second routing node PE1 may encapsulate the network address E4IP in an option field of the HBH options header or in the outer layer header of the first original message, to obtain the first computing power message.
Step 207, the second routing node sends the first computing power message to the first routing node through the VPN tunnel.
In the embodiment of the present application, since the first routing node and the second routing node are both routing devices supporting VPN forwarding, for example, PE devices of the VPN, the second routing node may send the first computing power message to the first routing node through a VPN tunnel. Referring to step 206, the first computing power message also encapsulates the VPN identifier allocated in the first routing node. The VPN tunnel may be an IPv6 tunnel, an SRv6 tunnel, an MPLS tunnel, an NVO3 tunnel, or the like.
Step 208, the first routing node performs tunnel decapsulation on the first computing power message to obtain the first original message.
After the first routing node receives the first computing power message through the VPN tunnel between itself and the second routing node, it can perform tunnel decapsulation on the first computing power message to obtain the first original message and the network address of the first service instance. Performing tunnel decapsulation on the first computing power message may refer to: stripping the VPN identifier and the tunnel identifier encapsulated in the first computing power message.
It may be understood that, if the first original packet is an IPv6 packet, and the option field of the HBH option header in the IPv6 packet encapsulates the network address of the first service instance, the first routing node may obtain the network address of the first service instance from the option field.
If the outer layer header of the first computing power message is encapsulated with the network address of the first service instance, the first routing node may obtain the network address of the first service instance from the outer layer header based on the indication of the VPN identifier in the first computing power message. That is, a new forwarding behavior of the VPN identifier is defined in the first routing node, where the new forwarding behavior includes: acquiring the network address of the service instance from the outer layer header. After the first routing node receives the first computing power message and recognizes the VPN identifier, it may execute the new forwarding behavior indicated by the VPN identifier to obtain the network address of the first service instance.
In conjunction with the above step 206, if the VPN tunnel between the first routing node and the second routing node is an SRv6 tunnel, after the first routing node decapsulates the first computing power message, the network address of the service instance may be read from the parameter field of the VPN SID based on the indication of the VPN SID in the outer layer header. Alternatively, the first routing node may read the network address of the first service instance from the path segment identifier based on the indication of the path segment identifier in the outer layer header.
If the VPN tunnel is an NVO3 tunnel, after the first routing node decapsulates the first computing power message, the VNI may first be obtained from the outer layer header (for example, a Geneve header or its extension header), and then, based on the indication of the VNI, the network address of the first service instance may be obtained from the outer layer header.
If the VPN tunnel is an IPv6 tunnel, after the first routing node decapsulates the first computing power message, the VPN ID may first be obtained from the DOH (Destination Options Header) of the first original message, and then, based on the indication of the VPN ID, the network address of the first service instance may be obtained from the DOH.
If the VPN tunnel is an MPLS tunnel, after the first routing node decapsulates the first computing power message, the network address of the first service instance may be obtained from the MPLS extension header according to the indication of a VPN label in the MPLS label stack, or according to the indication of a special-purpose indication label in the MPLS extension header.
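The four tunnel-type cases above can be summarized in one dispatch routine; the field names are illustrative stand-ins for the actual header fields:

```python
def extract_instance_addr(tunnel_type: str, headers: dict) -> str:
    """Read the first service instance's network address after tunnel
    decapsulation, depending on the VPN tunnel type."""
    if tunnel_type == "srv6":
        # carried in the parameter field of the VPN SID (or a path segment)
        return headers["vpn_sid_param"]
    if tunnel_type == "nvo3":
        # the VNI identifies the VPN and indicates that the address
        # follows in the outer header (e.g. a Geneve option)
        _ = headers["vni"]
        return headers["outer_options"]["svc_instance_addr"]
    if tunnel_type == "ipv6":
        # VPN ID and address are both carried in the DOH
        return headers["doh"]["svc_instance_addr"]
    if tunnel_type == "mpls":
        # a VPN label or special-purpose label points at the MPLS extension header
        return headers["mpls_ext"]["svc_instance_addr"]
    raise ValueError(f"unsupported tunnel type: {tunnel_type}")
```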
Step 209, the first routing node forwards the first original packet based on the network address of the first service instance.
After the first routing node obtains the network address of the first service instance from the decapsulated first computing power message, it can forward the first original message to the first service instance based on that network address. Therefore, even if the outgoing interface and next hop connected to the first service instance in the first routing node also point to other service instances, that is, even if one VPN identifier in the first routing node can point to a plurality of service instances, the first routing node can still accurately forward the first original message based on the network address of the first service instance.
As a first possible implementation, the first routing node may forward the first original packet in tunnel mode. In the tunnel mode, the first routing node may perform IP tunnel encapsulation on the first original packet, and forward the first original packet to the first service instance through an IP tunnel. Wherein the destination address of the IP tunnel is the network address of the first service instance. The source address of the IP tunnel belongs to the address space corresponding to the VPN identifier. For example, the source address of the IP tunnel may be an interface IP address of a routing egress interface in the first routing node to the first service instance.
In the first implementation manner, after the first routing node sends the first original message to the service node to which the first service instance belongs through the IP tunnel, the network module in the service node can remove the IP tunnel header in the received message, so as to obtain the first original message with the destination address being the identifier of the target computing service. The network module can then provide the first original message to the first service instance for processing.
When the first service instance feeds back a backhaul message to the client device, it may no longer encapsulate a tunnel header. Alternatively, the first service instance may swap the head-node and tail-node addresses of the IP tunnel encapsulated by the first routing node to generate new originating tunnel information, encapsulate the backhaul message into the IP tunnel, and send it to the first routing node. After receiving the backhaul message sent by the first service instance through the IP tunnel, the first routing node may send the backhaul message to the second routing node through the corresponding VPN tunnel based on the VPN to which the ingress interface receiving the backhaul message belongs. That is, the first routing node may perform VPN forwarding on the backhaul message based on the VPN ID of the VPN to which the ingress interface of the backhaul message belongs. After receiving the backhaul message through the VPN tunnel, the second routing node may decapsulate it and forward it to the client device.
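The tunnel-mode round trip described above might look as follows. This is a sketch only: the dataclass fields stand in for real IP tunnel headers, and the addresses are examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IPTunnelMsg:
    src: str       # tunnel head: an egress-interface address in the VPN's address space
    dst: str       # tunnel tail: the network address of the selected service instance
    payload: bytes # the encapsulated original message

def encap(original: bytes, egress_if_addr: str, instance_addr: str) -> IPTunnelMsg:
    # first routing node: IP-tunnel-encapsulate the first original message
    return IPTunnelMsg(src=egress_if_addr, dst=instance_addr, payload=original)

def decap(msg: IPTunnelMsg) -> bytes:
    # network module of the service node: strip the IP tunnel header
    return msg.payload

def backhaul(received: IPTunnelMsg, reply: bytes) -> IPTunnelMsg:
    # service instance: swap the head and tail addresses for the return path
    return IPTunnelMsg(src=received.dst, dst=received.src, payload=reply)
```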
In a second possible implementation manner, the first original message obtained by decapsulating the first routing node includes a destination address field and a network address of the first service instance, where the destination address field carries an identifier of the target computing power service. The first routing node may forward the first original message through a secondary table look-up mode. In the secondary table look-up mode, the first routing node may ignore the destination address field and forward the first original message to the first service instance based on the network address of the first service instance.
Because the destination address field of the first original message carries the identifier of the target computing power service, and the target computing power service may correspond to a plurality of service instances, the first routing node can ignore the destination address field and perform table-lookup forwarding based on the network address of the first service instance. This ensures that the first original message is accurately forwarded to the first service instance.
Optionally, the first original message may be an IPv6 message, and the option field of the HBH option header of the IPv6 message encapsulates the network address of the first service instance. The option field of the HBH option header defines a new forwarding behavior: ignore the destination address field of the IPv6 message and perform table-lookup forwarding based on the network address in the option field. Correspondingly, the first routing node can execute the new forwarding behavior when detecting that the option field is included in the HBH option header of the IPv6 message. It may be understood that if the first routing node detects that the HBH option header of the IPv6 message does not include the option field, table-lookup forwarding may be performed based on the destination address field of the IPv6 message.
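The secondary table-lookup decision reduces to choosing the lookup key; a sketch with illustrative field names:

```python
def secondary_lookup(message: dict, fib: dict) -> str:
    """If the HBH option carries a service-instance address, ignore the
    destination address field and look that address up; otherwise fall
    back to a normal lookup on the destination address field."""
    key = message.get("hbh_svc_instance_addr") or message["dst"]
    return fib[key]
```

With the option present, the message is steered to the specific instance even though the destination address only names the service.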
In this second implementation, the network address of the first service instance in the first original message may be encapsulated by the second routing node. Alternatively, if the network address of the first service instance is not encapsulated in the first original message, that is, it is encapsulated in the outer layer header of the first computing power message, then after the first routing node decapsulates the first computing power message to obtain the first original message, it may encapsulate the network address of the first service instance into the first original message. For example, if the first original message is an IPv6 message, the first routing node may encapsulate the network address of the first service instance into an option field of the HBH option header of the IPv6 message.
Step 210, the forwarding device forwards the first original packet to the first service instance based on the network address of the first service instance.
In the embodiment of the present application, the computing power network may further include a forwarding device, and the first routing node may be connected, through the forwarding device, to the service node to which the at least two service instances belong. The forwarding device may be a layer-3 network forwarding device, for example, a CE device. The first routing node may be connected to the service node to which the at least two service instances belong through one or more such forwarding devices.
If the first routing node forwards the first original message in the secondary table look-up mode, each forwarding device between the first routing node and the service node may ignore the destination address field of the first original message after receiving the first original message, and forward the first original message based on the network address of the first service instance in the first original message.
Optionally, the first original message may be an IPv6 message, and the option field of the HBH option header of the IPv6 message encapsulates the network address of the first service instance. Each forwarding device between the first routing node and the service node may execute a new forwarding action corresponding to an option field in the HBH option header based on an indication of the option field: ignoring the destination address field of the IPv6 message, and performing table lookup forwarding based on the network address in the option field.
For example, referring to fig. 7, assuming that the network address of the first service instance encapsulated in the option field of the HBH option header of the IPv6 message is E4IP, the first routing node PE3 may forward the IPv6 message to the forwarding device CE3 based on the network address E4IP. The forwarding device CE3 may further perform a table lookup based on the network address E4IP and forward the IPv6 message to the first service instance in edge site 4 according to the lookup result.
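The hop-by-hop behavior in this example can be sketched as follows; PE3, CE3, and E4IP come from the fig. 7 example, while the table contents themselves are illustrative:

```python
# Per-device FIBs keyed by the service instance's network address.
fibs = {
    "PE3": {"E4IP": "CE3"},
    "CE3": {"E4IP": "EdgeSite4"},
}

def route(start: str, instance_addr: str) -> list:
    """Walk hop by hop, each device looking up the instance address
    (ignoring the message's destination address field)."""
    node, path = start, [start]
    while node in fibs:
        node = fibs[node][instance_addr]
        path.append(node)
    return path
```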
It may be appreciated that carrying the network address of the first service instance in the HBH option makes it convenient for each forwarding device between the first routing node and the first service instance to perform the new forwarding action under the indication of the option field of the HBH option. It may be further understood that, in the secondary table look-up scheme, the backhaul message fed back by the first service instance does not need special processing, and the network address of the first service instance does not need to be encapsulated in the backhaul message.
It can be further understood that, in the tunnel mode, the forwarding devices between the first routing node and the service node to which the service instance belongs do not need to perform any special forwarding processing on the first original message; only the first routing node and the service instance need to perform message tunnel encapsulation and decapsulation. In the secondary table look-up mode, the service node to which the service instance belongs does not need to support message tunnel encapsulation and decapsulation at all, which makes the scheme easier to promote across the ecosystem.
Step 211, the second routing node receives the second original message.
In the embodiment of the present application, the second routing node may further receive a second original packet sent by the client device, where the second original packet includes an identifier of the target computing power service requested by the client device. It will be appreciated that the client device that sends the second original message may be the same client device as the client device that sends the first original message, or may be a different client device. That is, the second original message and the first original message may belong to the same service flow or may belong to different service flows.
Step 212, the second routing node performs VPN tunnel encapsulation on the second original message to obtain a second computing power message.
After the second routing node receives the second original message, it can determine, from the routing table, a second service instance for providing the target computing power service based on the identifier of the target computing power service in the second original message. The second routing node can then perform VPN tunnel encapsulation on the second original message based on the network address of the second service instance to obtain the second computing power message. The second computing power message includes the VPN identifier and the network address of the second service instance. For example, the outer layer header of the second computing power message includes the network address of the second service instance, or the second original message includes the network address of the second service instance.
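A sketch of this selection-and-encapsulation step at the ingress. The routing table contents, field names, the second instance address E5IP, and the least-loaded selection policy are all illustrative assumptions:

```python
# Hypothetical service routing table at the second routing node:
# service identifier -> candidate instances with their computing power load.
service_routes = {
    "S1": [
        {"addr": "E4IP", "load": 0.3},
        {"addr": "E5IP", "load": 0.7},
    ],
}

def encapsulate(original: dict, vpn_id: str) -> dict:
    """Pick an instance for the requested service (least-loaded here, as
    one plausible policy) and wrap the original message in the VPN
    tunnel, carrying the instance's network address."""
    instance = min(service_routes[original["dst_service"]],
                   key=lambda e: e["load"])
    return {"vpn_id": vpn_id,
            "svc_instance_addr": instance["addr"],
            "inner": original}
```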
In the embodiment of the present application, the first service instance and the second service instance may be connected to the same outbound interface of the first routing node. Correspondingly, the VPN identifier in the second computing power message is the same as the VPN identifier in the first computing power message: both are the VPN identifier allocated for that outbound interface. That is, the VPN identifier of the outgoing interface may point to at least two service instances, including the first service instance and the second service instance.
The process of determining the second service instance by the second routing node may refer to the description related to step 205, and the process of performing VPN tunnel encapsulation on the second original packet by the second routing node may refer to the description related to step 206, which is not repeated herein.
Step 213, the second routing node sends the second computing power message to the first routing node through the VPN tunnel.
In the embodiment of the application, since the first routing node and the second routing node are both routing devices of the VPN (for example, they may be PE devices of the VPN), the second routing node may send the second computing power message to the first routing node through the VPN tunnel. The VPN tunnel may be an IPv6 tunnel, an SRv6 tunnel, an MPLS tunnel, an NVO3 tunnel, or the like.
Step 214, after the first routing node performs tunnel decapsulation on the second computing power message, forwarding the second original message to the second service instance based on the network address of the second service instance.
After the first routing node receives the second computing power message through the VPN tunnel between the first routing node and the second routing node, it may tunnel-decapsulate the second computing power message to obtain the second original message and the network address of the second service instance. The first routing node can then forward the second original message to the second service instance based on the network address of the second service instance, for example, in the tunnel mode or in the secondary table look-up mode.
The process of tunnel decapsulating the second computing power message by the first routing node may refer to the description of step 208, and the process of forwarding the second original message by the first routing node may refer to the descriptions of step 209 and step 210, which are not repeated here.
Based on the above steps, the VPN-based Dyncast computing power routing scheme provided in the embodiment of the present application introduces the network location (i.e., the real IP address) of the service instance. Based on this, even if the egress node is not directly connected to the service instance at the forwarding layer but is connected through one or more layer-3 forwarding devices, and one egress interface of the egress node points to multiple service instances, the egress node can still forward the original message directly to the service instance selected by the ingress node according to the real IP address of the service instance.
It can be understood that the sequence of the steps of the power routing method provided by the embodiment of the application can be properly adjusted, and the steps can be correspondingly increased or decreased according to the situation. For example, steps 211 to 214 described above may be performed before step 210; or the step 201 may be deleted according to the situation, that is, the first routing node may obtain the network address and the computational load of each service instance in other manners; or step 210 may be deleted as appropriate.
In summary, the embodiment of the present application provides a computing power routing method, where the computing power message received by the first routing node through the VPN tunnel includes the network address of a first service instance. Therefore, even if one VPN identifier allocated in the first routing node points to a plurality of service instances, the first routing node can accurately forward the original message obtained after tunnel decapsulation of the computing power message to the first service instance based on the network address of the first service instance, thereby effectively ensuring the reliability of computing power routing.
In addition, in the method provided by the embodiment of the application, because the metric proxy node and the first routing node can advertise the network location (namely the real location) of the service instance, locating network problems in computing power routing can be supported more intuitively. It can be understood that the method provided by the embodiment of the application further improves the Dyncast computing power routing scheme on the basis of VPN: it can not only support scenarios that the conventional scheme cannot support (for example, the egress node not being directly connected to the service instance at the forwarding layer), but is also applicable to scenarios that the conventional scheme does support, for example, the egress node being directly connected to the service instance at the forwarding layer.
Fig. 8 is a flowchart of yet another computing power routing method provided by an embodiment of the present application, where the method may be applied to a forwarding device in a computing power network. The forwarding device is respectively connected to the first routing node and to the service node to which at least two service instances belong, where the at least two service instances are used for providing the target computing power service. As shown in fig. 8, the method includes:
Step 301, receiving an original message sent by a first routing node, where the original message includes a destination address field and a network address of a first service instance.
In the embodiment of the application, the first routing node can send the original message to the forwarding device, where the original message is obtained by the first routing node tunnel-decapsulating a received computing power message. The original message includes a destination address field and the network address of the first service instance, where the destination address field may carry the identifier of the target computing power service. The first service instance belongs to the at least two service instances connected to the first routing node.
Alternatively, the original message may be an IPv6 message, and the network address of the first service instance may be carried in an option field of the HBH option header of the IPv6 message.
Step 302, ignoring the destination address field, and forwarding the original message to the first service instance based on the network address of the first service instance.
After the forwarding device receives the original message and recognizes the network address of the first service instance in the original message, the destination address field can be ignored, and the original message is forwarded to the first service instance based on the network address of the first service instance. For example, if the original packet is an IPv6 packet, the forwarding device may execute a new forwarding action corresponding to an option field in the HBH option header of the IPv6 packet based on the indication of the option field: ignoring the destination address field of the IPv6 message, and performing table lookup forwarding based on the network address in the option field.
The implementation process of this step 301 may refer to the related description of the step 209, and the implementation process of this step 302 may refer to the related description of the step 210.
In summary, the embodiment of the present application provides a computing power routing method, where the original message sent by the first routing node and received by the forwarding device includes the network address of the first service instance. The forwarding device can ignore the destination address field in the original message and forward the original message to the first service instance based on the network address of the first service instance. Therefore, the original message can be accurately forwarded to the first service instance, further effectively ensuring the reliability of computing power routing.
Fig. 9 is a schematic structural diagram of a first routing node according to an embodiment of the present application, where the first routing node may be applied to a computing network such as that shown in fig. 1 or fig. 7, and may implement the steps performed by the first routing node in the foregoing method embodiment. Referring to fig. 9, the first routing node includes:
The receiving module 401 is configured to receive a first computing power message sent by the second routing node through a VPN tunnel, where the first computing power message is obtained by the second routing node performing VPN tunnel encapsulation on a first original message sent by a client device, and the first computing power message includes the network address of a first service instance. The functional implementation of the receiving module 401 may refer to the relevant descriptions of step 103 and step 207 in the above method embodiment.
The sending module 402 is configured to, after tunnel-decapsulating the first computing power message, forward the first original message to the first service instance based on the network address of the first service instance. The functional implementation of the sending module 402 may refer to the relevant descriptions of steps 104, 208 and 209 in the above method embodiment.
Optionally, the first computing power message may further include: a VPN identifier allocated to the first routing node.
Alternatively, the VPN identification may point to at least two service instances, each for providing the target computing power service, the first service instance belonging to the at least two service instances.
Optionally, the receiving module 401 may be further configured to receive a second computing power message sent by the second routing node through a VPN tunnel, where the second computing power message is obtained by the second routing node performing VPN tunnel encapsulation on a second original message, and the second computing power message includes the VPN identifier and the network address of a second service instance, where the second service instance belongs to the at least two service instances. The functional implementation of the receiving module 401 may also refer to the relevant description of step 213 in the above method embodiment.
The sending module 402 may be further configured to, after tunnel-decapsulating the second computing power message, forward the second original message to the second service instance based on the network address of the second service instance. The functional implementation of the sending module 402 may also refer to the relevant description of step 214 in the above method embodiment.
Optionally, the sending module 402 may be further configured to advertise the first service route of the first service instance to the second routing node before the receiving module 401 receives the first computation packet sent by the second routing node through the VPN tunnel. The first traffic route includes an identification of a target computing power service provided by the first traffic instance, a VPN identification, a network address of the first traffic instance, and a computing power load of the first traffic instance. The functional implementation of the sending module 402 may also refer to the relevant description of step 202 in the above-described method embodiment.
Optionally, the computing power network may further include: a metric proxy node corresponding to the first service instance. The receiving module 401 may be further configured to receive the second service route of the first service instance advertised by the metric proxy node before the sending module 402 advertises the first service route of the first service instance to the second routing node. The second service route includes the identifier of the target computing power service, the network address of the first service instance, and the computing power load of the first service instance. The functional implementation of the receiving module 401 may also refer to the relevant description of step 201 in the above method embodiment.
Optionally, the next hop attribute in the second service route carries the network address of the first service instance; or the extended attribute or the extended TLV in the second service route carries the network address of the first service instance.
Optionally, the outer layer header of the first computation message includes the network address of the first service instance.
Optionally, the sending module 402 may be further configured to obtain, before forwarding the first original packet to the first service instance based on the network address of the first service instance, the network address of the first service instance from the outer layer header based on the indication of the VPN identification in the first computation packet.
Optionally, the VPN tunnel is an SRv6 tunnel, and the VPN segment identifier or the path segment identifier in the outer layer header includes the network address of the first service instance; or the VPN tunnel is an NVO3 tunnel, and an option field in the outer layer header includes the network address of the first service instance; or the VPN tunnel is an IPv6 tunnel, and the DOH in the outer layer header includes the network address of the first service instance; or the VPN tunnel is an MPLS tunnel, and the MPLS extension header in the outer layer header includes the network address of the first service instance.
Optionally, the first original packet may include a network address of the first service instance.
Optionally, the first original message is an IPv6 message, and the network address of the first service instance is encapsulated in an option field of an HBH option header of the IPv6 message.
Optionally, the sending module 402 may be configured to forward the first original packet to the first service instance through an IP tunnel after IP tunneling encapsulation is performed on the first original packet. Wherein the destination address of the IP tunnel is the network address of the first service instance.
Alternatively, the source address of the IP tunnel may belong to an address space corresponding to the VPN identifier.
Optionally, the receiving module 401 may be further configured to receive a backhaul packet sent by the first service instance through the IP tunnel. The sending module 402 may be further configured to send the backhaul packet to the second routing node through a VPN tunnel based on the VPN to which the ingress interface that receives the backhaul packet belongs.
Optionally, the first original message may include a destination address field and a network address of the first service instance. The sending module 402 may be configured to ignore the destination address field and forward the first original packet to the first service instance based on the network address of the first service instance.
Optionally, the first original message is an IPv6 message, and the network address of the first service instance is encapsulated in an option field of an HBH option header of the IPv6 message.
Optionally, the sending module 402 may be further configured to encapsulate the network address of the first service instance into the first original packet before forwarding the first original packet to the first service instance based on the network address of the first service instance.
Optionally, the power computing network may further include a forwarding device, through which the first routing node is connected to a service node to which at least two service instances belong. The sending module 402 may be configured to forward, based on the network address of the first service instance, the first original packet to a service node to which the first service instance belongs through the forwarding device.
Alternatively, the network address of the first service instance may be an IP address.
In summary, the embodiment of the present application provides a first routing node, where the computing power message received by the first routing node through the VPN tunnel includes the network address of a first service instance. Therefore, even if one VPN identifier allocated in the first routing node points to a plurality of service instances, the first routing node can accurately forward the original message obtained after tunnel decapsulation of the computing power message to the first service instance based on the network address of the first service instance, thereby effectively ensuring the reliability of computing power routing.
Fig. 10 is a schematic structural diagram of a second routing node according to an embodiment of the present application, where the second routing node may be applied to a computing network such as that shown in fig. 1 or fig. 7, and the steps performed by the second routing node in the foregoing method embodiment may be implemented. Referring to fig. 10, the second routing node includes:
a receiving module 501, configured to receive a first original packet sent by a client device, where the first original packet includes an identifier of a target computing service requested by the client device. The functional implementation of the receiving module 501 may refer to the relevant descriptions of step 101 and step 204 in the above method embodiments.
The encapsulation module 502 is configured to perform VPN tunnel encapsulation on the first original packet to obtain a first computing power packet, where the first computing power packet includes a network address of a first service instance for providing a target computing power service. The functional implementation of the encapsulation module 502 may refer to the relevant descriptions of steps 102 and 206 in the method embodiments described above.
The sending module 503 is configured to send the first computing power packet to the first routing node through the VPN tunnel, so that the first routing node, after performing tunnel decapsulation on the first computing power packet, forwards the first original packet to the first service instance based on the network address of the first service instance. The functional implementation of the sending module 503 may refer to the relevant descriptions of step 103 and step 207 in the above method embodiments.
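As a non-authoritative illustration of the encapsulation/decapsulation flow described above (not the patented implementation), the sketch below models the outer tunnel header as a small Python structure; the names `TunnelPacket`, `vpn_id`, `instance_addr`, and `vpn_table` are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TunnelPacket:
    """Simplified computing power packet: an original packet wrapped in a
    VPN tunnel header that also carries the chosen service instance's address."""
    vpn_id: int          # VPN identifier allocated to the egress (first) routing node
    instance_addr: str   # network address of the selected service instance
    payload: bytes       # the original packet from the client device

def encapsulate(original: bytes, vpn_id: int, instance_addr: str) -> TunnelPacket:
    # Ingress (second) routing node: VPN tunnel encapsulation.
    return TunnelPacket(vpn_id, instance_addr, original)

def decapsulate_and_forward(pkt: TunnelPacket, vpn_table: dict) -> tuple[str, bytes]:
    # Egress (first) routing node: even when the VPN identifier maps to
    # several service instances, the outer header pins down exactly one.
    candidates = vpn_table[pkt.vpn_id]
    if pkt.instance_addr not in candidates:
        raise ValueError("instance not reachable in this VPN")
    return pkt.instance_addr, pkt.payload
```

The point of the sketch is that forwarding at the egress node keys on the per-packet instance address rather than on the VPN identifier alone.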
Optionally, the first computing power packet further includes: a VPN identifier allocated to the first routing node, where the VPN identifier points to at least two service instances, the at least two service instances are used to provide the target computing power service, and the first service instance belongs to the at least two service instances.
Optionally, the receiving module 501 may be further configured to receive a second original packet, where the second original packet includes the identifier of the target computing power service. The functional implementation of the receiving module 501 may also refer to the relevant description of step 211 in the above method embodiment.
The encapsulation module 502 may be further configured to perform VPN tunnel encapsulation on the second original packet to obtain a second computing power packet, where the second computing power packet includes the VPN identifier and a network address of a second service instance, and the second service instance belongs to the at least two service instances. The functional implementation of the encapsulation module 502 may also refer to the relevant description of step 212 in the method embodiment described above.
The sending module 503 may be further configured to send the second computing power packet to the first routing node through the VPN tunnel, so that the first routing node, after performing tunnel decapsulation on the second computing power packet, forwards the second original packet to the second service instance based on the network address of the second service instance. The functional implementation of the sending module 503 may also refer to the description related to step 213 in the above method embodiment.
Optionally, the receiving module 501 may be further configured to receive, before the encapsulation module 502 performs VPN tunnel encapsulation on the first original packet to obtain the first computing power packet, at least one service route advertised by the first routing node. The at least one service route includes the identifier of the target computing power service, the VPN identifier, the network addresses of the at least two service instances, and the computing power loads of the at least two service instances.
And, the receiving module 501 may be further configured to determine the first service instance from the at least two service instances based on the computing power loads of the at least two service instances. The functional implementation of the receiving module 501 may also refer to the relevant descriptions of step 202, step 203 and step 205 in the above method embodiments.
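The load-based instance selection described above can be sketched as follows. Picking the least-loaded instance is one plausible policy — the embodiments do not mandate a specific algorithm — and the `(instance_addr, load)` route tuples are a hypothetical representation of the advertised service routes:

```python
def pick_instance(service_routes):
    """service_routes: iterable of (instance_addr, load) pairs advertised
    for one target computing power service; return the least-loaded one."""
    addr, _load = min(service_routes, key=lambda route: route[1])
    return addr
```

The chosen address is what the ingress node then writes into the outer tunnel header.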
The receiving module 501 may be further configured to determine, before the encapsulation module 502 performs VPN tunnel encapsulation on the first original packet to obtain the first computing power packet, the network address of the first service instance from a flow table of a service flow based on identification information of the service flow to which the first original packet belongs.
Optionally, the outer header of the first computing power packet includes the network address of the first service instance.
Optionally, the VPN tunnel is an SRv6 tunnel, and the VPN segment identifier or the path segment identifier in the outer header includes the network address of the first service instance; or the VPN tunnel is an NVO3 tunnel, and an option field in the outer header includes the network address of the first service instance; or the VPN tunnel is an IPv6 tunnel, and the destination options header (DOH) in the outer header includes the network address of the first service instance; or the VPN tunnel is an MPLS tunnel, and an MPLS extension header in the outer header includes the network address of the first service instance.
Optionally, the first original packet may be an IPv6 packet; the encapsulation module 502 may be configured to encapsulate the network address of the first service instance in an option field of the hop-by-hop (HBH) options header of the IPv6 packet, and perform VPN tunnel encapsulation on the IPv6 packet carrying the network address of the first service instance, to obtain the first computing power packet.
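A minimal sketch of carrying a 16-byte instance address in an IPv6 Hop-by-Hop options header, following the generic TLV layout of the IPv6 specification; the option type `0x3E` is an arbitrary placeholder, not a value defined by the embodiments:

```python
import struct
import ipaddress

def build_hbh_with_addr(next_header: int, addr: str, opt_type: int = 0x3E) -> bytes:
    """Build an IPv6 Hop-by-Hop options header whose single option carries
    the 16-byte address of the service instance (opt_type is a placeholder)."""
    opt_data = ipaddress.IPv6Address(addr).packed               # 16 bytes
    options = struct.pack("!BB", opt_type, len(opt_data)) + opt_data
    # Pad the header to a multiple of 8 octets (Pad1 / PadN options).
    pad = -(2 + len(options)) % 8
    if pad == 1:
        options += b"\x00"                                      # Pad1
    elif pad >= 2:
        options += struct.pack("!BB", 1, pad - 2) + b"\x00" * (pad - 2)
    # Hdr Ext Len counts 8-octet units, not including the first 8 octets.
    hdr_ext_len = (2 + len(options)) // 8 - 1
    return struct.pack("!BB", next_header, hdr_ext_len) + options
```

The egress node (or a downstream forwarding device) can then parse this option to recover the instance address after tunnel decapsulation.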
In summary, the embodiment of the present application provides a second routing node, where the computing power packet sent by the second routing node to the first routing node through the VPN tunnel includes the network address of the first service instance. Therefore, even if a VPN identifier allocated to the first routing node points to a plurality of service instances, the first routing node can accurately forward the original packet obtained by tunnel decapsulating the computing power packet to the first service instance based on the network address of the first service instance, thereby effectively ensuring the reliability of computing power routing.
Fig. 11 is a schematic structural diagram of a forwarding device provided in an embodiment of the present application, where the forwarding device may be applied to a computing network such as shown in fig. 1 or fig. 7, and the steps performed by the forwarding device in the foregoing method embodiment may be implemented. The forwarding device is respectively connected with the first routing node and the service node to which at least two service instances belong, wherein the at least two service instances are used for providing the target computing power service. Referring to fig. 11, the forwarding apparatus includes:
The receiving module 601 is configured to receive an original packet sent by a first routing node, where the original packet includes a destination address field and a network address of a first service instance, and the first service instance belongs to the at least two service instances. The functional implementation of the receiving module 601 may refer to the relevant descriptions of step 209 and step 301 in the above method embodiments.
The sending module 602 is configured to ignore the destination address field, and forward the original message to the first service instance based on the network address of the first service instance. The functional implementation of the sending module 602 may refer to the relevant descriptions of step 210 and step 302 in the above method embodiments.
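The "ignore the destination address field and forward by the carried instance address" behavior amounts to a secondary table lookup. This sketch uses hypothetical dictionary keys (`dst`, `instance_addr_option`) for the parsed packet:

```python
def select_egress_port(packet: dict, port_by_instance: dict) -> str:
    # The ordinary destination-address field is deliberately not consulted;
    # the lookup key is the service instance address carried in the
    # packet's option field (e.g. an HBH option).
    return port_by_instance[packet["instance_addr_option"]]
```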
Optionally, the network address of the first service instance may be carried in an option field of the HBH option header of the original packet.
In summary, the embodiment of the present application provides a forwarding device, where an original packet sent by a first routing node and received by the forwarding device includes a network address of a first service instance. And the forwarding device can ignore the destination address field in the original message and forward the original message to the first service instance based on the network address of the first service instance. Therefore, the original message can be ensured to be accurately forwarded to the first service instance, and the reliability of the calculation routing is further effectively ensured.
It may be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the first routing node, the second routing node, the forwarding device and the modules described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
It should be understood that the first routing node, the second routing node, and the forwarding device provided in the embodiments of the present application may also be implemented as application-specific integrated circuits (ASICs) or programmable logic devices (PLDs), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. In addition, the computing power routing method provided by the foregoing method embodiments may also be implemented by software; in that case, each functional module in the first routing node, the second routing node, and the forwarding device may be a software module.
Fig. 12 is a schematic structural diagram of a network device according to an embodiment of the present application. The network device may be applied to a computing network such as that shown in fig. 1 or 7, and may be a first routing node, a second routing node, or a forwarding device in the computing network. Referring to fig. 12, the network device includes: a processor 701, a memory 702, a network interface 703 and a bus 704.
In which a computer program 7021 is stored in the memory 702, the computer program 7021 is used to realize various application functions. The processor 701 is configured to execute the computer program 7021 to implement the method provided by the above-described method embodiment applied to the first routing node, the second routing node, or the forwarding device. For example, the processor 701 may be configured to execute the computer program 7021 to implement the functions of the respective modules shown in any one of fig. 9 to 11.
The processor 701 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA, a graphics processing unit (GPU) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor.
The memory 702 may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
There may be a plurality of network interfaces 703, and the network interface 703 is used to establish a communication connection (wired or wireless) with other devices. In this embodiment of the present application, the network interface 703 is used for receiving and sending packets. The other devices may be terminals, servers, VMs, or other network devices.
The bus 704 is used to connect the processor 701, the memory 702 and the network interface 703. In addition to a data bus, the bus 704 may include a power bus, a control bus, a status signal bus, and the like. However, for clarity of illustration, the various buses are all labeled as the bus 704 in the figure.
If the network device is a first routing node, the processor 701 may be configured to receive, through the network interface 703, a first computing power packet sent by a second routing node through a VPN tunnel, where the first computing power packet is obtained by the second routing node performing VPN tunnel encapsulation on a first original packet sent by a client device, and the first computing power packet includes a network address of a first service instance. The processor 701 is further configured to, after performing tunnel decapsulation on the first computing power packet, forward the first original packet to the first service instance through the network interface 703 based on the network address of the first service instance. For details of the processing procedure of the processor 701, refer to the steps executed by the first routing node in the method embodiments shown in fig. 5 and fig. 6, which are not described herein again.
If the network device is a second routing node, the processor 701 may be configured to receive, through the network interface 703, a first original packet sent by the client device, where the first original packet includes an identifier of a target computing power service requested by the client device; perform VPN tunnel encapsulation on the first original packet to obtain a first computing power packet, where the first computing power packet includes a network address of a first service instance for providing the target computing power service; and then send the first computing power packet to the first routing node through the VPN tunnel, so that the first routing node forwards the first original packet to the first service instance based on the network address of the first service instance after performing tunnel decapsulation on the first computing power packet. For details of the processing procedure of the processor 701, refer to the steps executed by the second routing node in the method embodiments shown in fig. 5 and fig. 6, which are not described herein again.
If the network device is a forwarding device, the processor 701 may be configured to receive, through the network interface 703, an original packet sent by the first routing node, where the original packet includes a destination address field and a network address of a first service instance, where the first service instance belongs to the at least two service instances; and then ignoring the destination address field and forwarding the original message to the first service instance based on the network address of the first service instance. For details of the processing procedure of the processor 701, please refer to the steps executed by the forwarding device in the method embodiments shown in fig. 6 and fig. 8, which are not described herein.
Fig. 13 is a schematic structural diagram of another network device according to an embodiment of the present application. The network device may be applied to a computing network such as that shown in fig. 1 or 7, and may be a first routing node, a second routing node, or a forwarding device in the computing network. As shown in fig. 13, the network device may include: a main control board 801 and at least one interface board (interface board is also called line card or service board), for example interface board 802 and interface board 803 are shown in fig. 13. The network device may also comprise a switching fabric 804 in the case of multiple interface boards, the switching fabric 804 being configured to perform data exchanges between the interface boards.
The main control board 801 is also called a main processing unit (MPU) or a route processor card, and is used for performing functions such as system management, device maintenance and protocol processing. The main control board 801 mainly has three types of functional units: a system management and control unit, a system clock unit, and a system maintenance unit. The main control board 801 includes: a central processor 8011 and a memory 8012.
The interface boards 802 and 803 are also called line processing units (LPUs), line cards, or service boards, and are used to provide various service interfaces and implement packet forwarding. The service interfaces provided by an interface board may include: a packet over SONET/SDH (POS) interface, a gigabit Ethernet (GE) interface, an asynchronous transfer mode (ATM) interface, and the like, where SONET refers to the synchronous optical network and SDH refers to the synchronous digital hierarchy. The main control board 801, the interface board 802 and the interface board 803 are connected to a system backplane through a system bus for intercommunication. As shown in fig. 13, the interface board 802 includes one or more central processors 8021. The central processor 8021 is used for controlling and managing the interface board 802 and communicating with the central processor 8011 on the main control board 801. The memory 8024 on the interface board 802 is used for storing forwarding entries, and the network processor 8022 forwards packets by looking up the forwarding entries stored in the memory 8024. The memory 8024 may also be used for storing program code.
The interface board 802 further includes one or more physical interface cards 8023, where the one or more physical interface cards 8023 are configured to receive a message sent by a previous hop node, and send a processed message to a next hop node according to an instruction of the central processor 8021.
Furthermore, it should be understood that the central processor 8021 and/or the network processor 8022 in the interface board 802 in fig. 13 may be dedicated hardware or chips, such as an ASIC, to implement the above functions; that is, the forwarding plane is processed by dedicated hardware or chips. In other embodiments, the central processor 8021 and/or the network processor 8022 may also be a general-purpose processor, such as a general-purpose CPU, to perform the functions described above.
It should be further understood that there may be one or more main control boards 801; when there are multiple, they may include an active main control board and a standby main control board. There may be one or more interface boards: the stronger the data processing capability of the network device, the more interface boards it provides. As shown in fig. 13, the network device includes an interface board 802 and an interface board 803. When a distributed forwarding mechanism is adopted, the structure of the interface board 803 is basically the same as that of the interface board 802, and the operations on the interface board 803 are basically similar to those on the interface board 802; for brevity, details are not described again. When the network device has multiple interface boards, the interface boards may communicate with each other through one or more switching fabrics 804, with load sharing and redundancy backup, to provide high-capacity data exchange and processing capability.
Under a centralized forwarding architecture, the network device may not need the switching fabric 804, and an interface board bears the processing function of the service data of the whole system. Therefore, the data access and processing capability of a network device with the distributed architecture is greater than that of a network device with the centralized architecture. The specific architecture to be employed depends on the specific networking deployment scenario and is not limited herein.
In an embodiment of the application, memory 8012 and memory 8024 may be, but are not limited to, ROM or other types of static storage devices that can store static information and instructions, RAM or other types of dynamic storage devices that can store information and instructions, EEPROM, compact disk read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), magnetic disk or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 8024 in the interface board 802 may be stand alone and connected to the central processor 8021 by a communication bus; or the memory 8024 may be integrated with the central processor 8021. The memory 8012 in the main control board 801 may exist independently and be connected to the central processing unit 8011 through a communication bus; or the memory 8012 may be integrated with the central processor 8011.
The central processor 8021 controls execution of the program code stored in the memory 8024, and the central processor 8011 controls execution of the program code stored in the memory 8012. By executing the program code, the central processor 8021 and/or the central processor 8011 may implement the methods performed by the first routing node, the second routing node, or the forwarding device provided by the above method embodiments. The program code stored in the memory 8024 and/or the memory 8012 may include one or more software elements, which may be the functional modules shown in any of figures 9 to 11.
In an embodiment of the present application, the physical interface card 8023 may be any apparatus, such as a transceiver, for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
Alternatively, the apparatus shown in any one of fig. 9 to 12 may be implemented using the structure shown in fig. 13.
The embodiment of the application also provides a computer readable storage medium, wherein instructions are stored in the computer readable storage medium, and when the instructions are executed by a processor, the instructions cause the processor to execute the steps executed by the first routing node, the second routing node or the forwarding device in the above-mentioned method embodiment.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a processor, cause the processor to perform the steps performed by the first routing node, the second routing node or the forwarding device as in the method embodiments described above.
The embodiment of the application also provides a chip which comprises a programmable logic circuit and/or program instructions, and the chip can realize the steps executed by the first routing node, the second routing node or the forwarding device in the embodiment of the method when running.
The embodiment of the application also provides a computing power routing system, which includes a first routing node, a second routing node, and at least one service node connected to the first routing node. The second routing node is connected to the client device, and a plurality of service instances deployed on the at least one service node are each configured to provide a target computing power service.
For example, referring to fig. 1, the power routing system may include a plurality of routing nodes 01, where a routing node 01 connected to a service node 02 of the plurality of routing nodes 01 may be a first routing node, i.e. an egress node, and the egress node may implement the steps performed by the first routing node in the above-described method embodiment. The routing node 01 connected to the client device 03 may be a second routing node, i.e. an ingress node, which may implement the steps performed by the second routing node in the above-described method embodiments.
The ingress node is a traffic scheduling node with network-level load balancing and session keeping functions. The first routing node and the second routing node may both be routing devices supporting VPN forwarding, for example, PE devices. Correspondingly, the second routing node is the ingress PE device of the VPN, and the first routing node is the egress PE device of the VPN.
Optionally, the power routing system may further comprise a forwarding device through which the first routing node is connected to the at least one service node. The forwarding device is configured to implement the steps performed by the forwarding device in the above-described method embodiments. For example, referring to fig. 7, the forwarding device may be a CE device. And, the forwarding device may support IP forwarding and/or secondary look-up table forwarding.
It will be appreciated that the first routing node may be structured as shown in fig. 9, 12 and 13, the second routing node may be structured as shown in fig. 10, 12 and 13, and the forwarding device may be structured as shown in fig. 11, 12 and 13.
In embodiments of the present application, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "at least one" means one or more, and "a plurality" means two or more. The term "and/or" in the present application is merely an association relation describing the association object, and indicates that three kinds of relations may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
While the application has been described in terms of various alternative embodiments, it will be apparent to those skilled in the art that various equivalent modifications and alterations can be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (44)

1. A computing power routing method, characterized in that the method is applied to a first routing node in a computing power network; the method comprises the following steps:
receiving a first computing power packet sent by a second routing node through a VPN tunnel, wherein the first computing power packet is obtained by the second routing node performing VPN tunnel encapsulation on a first original packet sent by a client device, and the first computing power packet comprises a network address of a first service instance;
and after performing tunnel decapsulation on the first computing power packet, forwarding the first original packet to the first service instance based on the network address of the first service instance.
2. The method of claim 1, wherein the first computing power packet further comprises: a VPN identifier allocated to the first routing node.
3. The method of claim 2, wherein the VPN identification points to at least two service instances, each for providing a target computing power service, the first service instance belonging to the at least two service instances.
4. A method according to claim 3, characterized in that the method further comprises:
receiving a second computing power packet sent by the second routing node through the VPN tunnel, wherein the second computing power packet is obtained by the second routing node performing VPN tunnel encapsulation on a second original packet, the second computing power packet comprises the VPN identifier and a network address of a second service instance, and the second service instance belongs to the at least two service instances;
and after performing tunnel decapsulation on the second computing power packet, forwarding the second original packet to the second service instance based on the network address of the second service instance.
5. The method according to any of claims 2 to 4, wherein before the receiving the first computing power packet sent by the second routing node through the VPN tunnel, the method further comprises:
And advertising a first service route of the first service instance to the second routing node, wherein the first service route comprises an identifier of a target computing power service provided by the first service instance, the VPN identifier, a network address of the first service instance and a computing power load of the first service instance.
6. The method of claim 5, wherein the computing power network further comprises: a metric proxy node corresponding to the first service instance; and before advertising the first service route of the first service instance to the second routing node, the method further comprises:
Receiving a second service route of the first service instance advertised by the metric proxy node;
Wherein the second service route includes the identification of the target computing power service, the network address of the first service instance, and a computing power load of the first service instance.
7. The method of claim 6, wherein a next hop attribute in the second service route carries the network address of the first service instance;
or an extended attribute or an extended type-length-value (TLV) in the second service route carries the network address of the first service instance.
8. The method according to any one of claims 1 to 7, wherein an outer header of the first computing power packet includes the network address of the first service instance.
9. The method of claim 8, wherein prior to forwarding the first original message to the first service instance based on the network address of the first service instance, the method further comprises:
and acquiring the network address of the first service instance from the outer header based on an indication of the VPN identifier in the first computing power packet.
10. The method according to claim 8 or 9, wherein the VPN tunnel is a segment routing over IPv6 (SRv6) tunnel, and the VPN segment identifier or the path segment identifier in the outer header comprises the network address of the first service instance;
or the VPN tunnel is a network virtualization over layer 3 (NVO3) tunnel, and an option field in the outer header comprises the network address of the first service instance;
or the VPN tunnel is an Internet Protocol version 6 (IPv6) tunnel, and a destination options header (DOH) in the outer header comprises the network address of the first service instance;
or the VPN tunnel is a multiprotocol label switching (MPLS) tunnel, and an MPLS extension header in the outer header comprises the network address of the first service instance.
11. The method according to any of claims 1 to 7, wherein the first original message comprises a network address of the first service instance.
12. The method of claim 11, wherein the first original message is an IPv6 message, and wherein the network address of the first service instance is encapsulated in an option field of a hop-by-hop HBH option header of the IPv6 message.
13. The method according to any one of claims 1 to 12, wherein forwarding the first original message to the first service instance based on the network address of the first service instance comprises:
after IP tunnel encapsulation is carried out on the first original message, the first original message is forwarded to the first service instance through an IP tunnel;
wherein, the destination address of the IP tunnel is the network address of the first service instance.
14. The method of claim 13, wherein the source address of the IP tunnel belongs to an address space corresponding to the VPN identification.
15. The method according to claim 13 or 14, characterized in that the method further comprises:
receiving a return message sent by the first service instance through the IP tunnel;
and transmitting the backhaul message to the second routing node through the VPN tunnel based on the VPN to which the ingress interface receiving the backhaul message belongs.
16. The method according to any of claims 1 to 12, wherein the first original message comprises a destination address field and a network address of the first service instance; the forwarding the first original packet to the first service instance based on the network address of the first service instance includes:
and ignoring the destination address field, and forwarding the first original message to the first service instance based on the network address of the first service instance.
17. The method of claim 16, wherein the first original message is an IPv6 message, and wherein the network address of the first service instance is encapsulated in an option field of an HBH option header of the IPv6 message.
18. The method according to claim 16 or 17, wherein before the forwarding the first original message to the first service instance based on the network address of the first service instance, the method further comprises:
encapsulating the network address of the first service instance into the first original message.
19. The method according to any one of claims 1 to 18, wherein the computing power network further comprises a forwarding device, and the first routing node is connected, through the forwarding device, to a service node to which at least two service instances belong; and the forwarding the first original message to the first service instance based on the network address of the first service instance comprises:
forwarding, through the forwarding device, the first original message to the service node to which the first service instance belongs based on the network address of the first service instance.
20. The method according to any of claims 1 to 19, wherein the network address of the first service instance is an IP address.
21. A computing power routing method, applied to a second routing node in a computing power network, the method comprising:
Receiving a first original message sent by client equipment, wherein the first original message comprises an identifier of a target computing power service requested by the client equipment;
performing virtual private network (VPN) tunnel encapsulation on the first original message to obtain a first computing power message, wherein the first computing power message comprises a network address of a first service instance that provides the target computing power service;
and sending the first computing power message to a first routing node through a VPN tunnel, so that after performing tunnel decapsulation on the first computing power message, the first routing node forwards the first original message to the first service instance based on the network address of the first service instance.
22. The method of claim 21, wherein the first computing power message further comprises: a VPN identifier allocated to the first routing node, wherein the VPN identifier points to at least two service instances, the at least two service instances are used to provide the target computing power service, and the first service instance belongs to the at least two service instances.
23. The method of claim 22, wherein the method further comprises:
receiving a second original message, wherein the second original message comprises an identifier of the target computing power service;
performing VPN tunnel encapsulation on the second original message to obtain a second computing power message, wherein the second computing power message comprises the VPN identifier and a network address of a second service instance, and the second service instance belongs to the at least two service instances;
and sending the second computing power message to the first routing node through the VPN tunnel, so that after performing tunnel decapsulation on the second computing power message, the first routing node forwards the second original message to the second service instance based on the network address of the second service instance.
24. The method according to claim 22 or 23, wherein before the performing VPN tunnel encapsulation on the first original message to obtain a first computing power message, the method further comprises:
receiving at least one service route advertised by the first routing node, wherein the at least one service route comprises the identifier of the target computing power service, the VPN identifier, the network addresses of the at least two service instances, and computing power loads of the at least two service instances;
and determining the first service instance from the at least two service instances based on the computing power loads of the at least two service instances.
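Claim 24 says the second routing node selects the first service instance "based on the computing power loads" without fixing a policy; choosing the least-loaded instance is one plausible reading. A sketch under that assumption, with illustrative data shapes:

```python
def pick_instance(routes):
    """routes: iterable of (instance_network_address, computing_power_load)
    pairs taken from the advertised service routes. Least-loaded-first
    is one plausible selection policy; the claim only requires that the
    choice be based on the load."""
    return min(routes, key=lambda route: route[1])[0]
```

Other load-based policies (weighted random, load thresholds with round-robin) would satisfy the claim language equally well.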
25. The method according to any one of claims 21 to 24, wherein before the performing VPN tunnel encapsulation on the first original message to obtain a first computing power message, the method further comprises:
determining the network address of the first service instance from a flow table of a service flow based on identification information of the service flow to which the first original message belongs.
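The flow table in claim 25 keeps every message of one service flow pinned to the same service instance. A minimal sketch, assuming the flow's identification information is a 5-tuple and the instance-selection callback is supplied externally; both are illustrative choices, not mandated by the claim.

```python
class FlowTable:
    """Map a service flow's identification information to the network
    address of the service instance chosen for that flow (claim 25)."""

    def __init__(self, select_instance):
        self._select = select_instance   # picks an instance for a new flow
        self._entries = {}               # flow key -> instance address

    def instance_for(self, flow_key):
        # The first message of a flow triggers instance selection; later
        # messages of the same flow reuse the cached entry, so the whole
        # flow is served by one instance.
        if flow_key not in self._entries:
            self._entries[flow_key] = self._select()
        return self._entries[flow_key]
```

In this reading, claim 24's load-based selection would run only once per flow, and claim 25's lookup handles every subsequent message.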
26. The method according to any one of claims 21 to 25, wherein an outer layer header of the first computing power message comprises the network address of the first service instance.
27. The method of claim 26, wherein the VPN tunnel is an Internet Protocol version 6 based segment routing (SRv6) tunnel, and a VPN segment identifier or a path segment identifier in the outer layer header comprises the network address of the first service instance;
or the VPN tunnel is a Network Virtualization over Layer 3 (NVO3) tunnel, and an option field in the outer layer header comprises the network address of the first service instance;
or the VPN tunnel is an Internet Protocol version 6 (IPv6) tunnel, and a destination options header (DOH) in the outer layer header comprises the network address of the first service instance;
or the VPN tunnel is a multiprotocol label switching (MPLS) tunnel, and an MPLS extension header in the outer layer header comprises the network address of the first service instance.
28. The method according to any one of claims 21 to 25, wherein the first original message is an IPv6 message, and the performing VPN tunnel encapsulation on the first original message to obtain a first computing power message comprises:
encapsulating the network address of the first service instance in an option field of a hop-by-hop (HBH) options header of the IPv6 message;
and performing tunnel encapsulation on the IPv6 message in which the network address of the first service instance is encapsulated, to obtain the first computing power message.
29. A computing power routing method, applied to a computing power network comprising a first routing node and a second routing node, the method comprising:
the second routing node receives a first original message sent by client equipment, wherein the first original message comprises an identifier of a target computing power service requested by the client equipment;
the second routing node performs virtual private network (VPN) tunnel encapsulation on the first original message to obtain a first computing power message, wherein the first computing power message comprises a network address of a first service instance;
the second routing node sends the first computing power message to the first routing node through a VPN tunnel;
and after performing tunnel decapsulation on the first computing power message, the first routing node forwards the first original message to the first service instance based on the network address of the first service instance.
30. The method of claim 29, wherein the first computing power message further comprises: a VPN identifier allocated to the first routing node, wherein the VPN identifier points to at least two service instances, the at least two service instances are used to provide the target computing power service, and the first service instance belongs to the at least two service instances.
31. The method of claim 30, wherein the method further comprises:
the second routing node receives a second original message, wherein the second original message comprises the identification of the target computing power service;
the second routing node performs VPN tunnel encapsulation on the second original message to obtain a second computing power message, wherein the second computing power message comprises the VPN identifier and a network address of a second service instance, and the second service instance belongs to the at least two service instances;
the second routing node sends the second computing power message to the first routing node through the VPN tunnel;
and after performing tunnel decapsulation on the second computing power message, the first routing node forwards the second original message to the second service instance based on the network address of the second service instance.
32. The method according to claim 30 or 31, wherein before the second routing node performs VPN tunnel encapsulation on the first original message to obtain a first computing power message, the method further comprises:
the first routing node advertises at least one first service route to the second routing node, wherein the at least one first service route comprises the identifier of the target computing power service, the VPN identifier, the network addresses of the at least two service instances, and computing power loads of the at least two service instances;
and the second routing node determines the first service instance from the at least two service instances based on the computing power loads of the at least two service instances.
33. The method of claim 32, wherein the computing power network further comprises a metric proxy node, and before the first routing node advertises the at least one first service route to the second routing node, the method further comprises:
the metric proxy node advertises at least one second service route to the first routing node;
wherein the at least one second service route comprises the identifier of the target computing power service, the network addresses of the at least two service instances, and the computing power loads of the at least two service instances.
34. The method according to any one of claims 29 to 33, wherein an outer layer header of the first computing power message comprises the network address of the first service instance, and before the first routing node forwards the first original message to the first service instance based on the network address of the first service instance, the method further comprises:
the first routing node obtains the network address of the first service instance from the outer layer header based on an indication of the VPN identifier in the first computing power message.
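The decapsulation side of claim 34 can be sketched for the plain IPv6 tunnel variant: after the VPN identifier directs the lookup, the first routing node reads the instance address from the outer header and strips it, leaving the original message. This assumes an IPv6-in-IPv6 outer header; the SRv6, NVO3, and MPLS variants of claim 27 would read the address from their own outer fields instead.

```python
import ipaddress


def decapsulate(packet: bytes):
    """Strip one outer IPv6 header (next header 41, IPv6-in-IPv6) and
    return (service_instance_address, inner_message). A minimal sketch
    with no extension-header handling."""
    if len(packet) < 40:
        raise ValueError("truncated outer IPv6 header")
    if packet[6] != 41:                  # outer Next Header field
        raise ValueError("not an IPv6-in-IPv6 computing power message")
    outer_dst = str(ipaddress.IPv6Address(packet[24:40]))
    return outer_dst, packet[40:]
```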
35. The method according to any of claims 29 to 33, wherein the first original message comprises a network address of the first service instance.
36. The method according to any one of claims 29 to 35, wherein the first routing node forwarding the first original message to the first service instance based on the network address of the first service instance comprises:
the first routing node performs IP tunnel encapsulation on the first original message and then forwards the encapsulated message to the first service instance through an IP tunnel, wherein a destination address of the IP tunnel is the network address of the first service instance.
37. The method according to any one of claims 29 to 35, wherein the computing power network further comprises a forwarding device, and the first routing node is connected, through the forwarding device, to a service node to which at least two service instances belong; the first original message comprises a destination address field and the network address of the first service instance;
The first routing node forwards the first original message to the first service instance based on the network address of the first service instance, including:
The first routing node ignores the destination address field and forwards the first original message to the forwarding device based on the network address of the first service instance;
The forwarding device ignores the destination address field and forwards the first original message to the first service instance based on the network address of the first service instance.
38. The method of claim 37, wherein prior to the first routing node forwarding the first original message to the first service instance based on the network address of the first service instance, the method further comprises:
the first routing node encapsulates a network address of the first service instance into the first original message.
39. The method of claim 37, wherein the second routing node performing VPN tunnel encapsulation on the first original message to obtain a first computing power message comprises:
the second routing node encapsulates the network address of the first service instance in the first original message;
and the second routing node performs VPN tunnel encapsulation on the first original message in which the network address of the first service instance is encapsulated, to obtain the first computing power message.
40. A network device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 28 when executing the computer program.
41. A computer readable storage medium having instructions stored therein which, when executed on a processor, cause the processor to perform the method of any of claims 1 to 28.
42. A computer program product comprising instructions for execution by a processor to implement the method of any one of claims 1 to 28.
43. A chip for implementing the method of any one of claims 1 to 28.
44. A computing power routing system, comprising: a first routing node, a second routing node, and at least one service node connected to the first routing node;
wherein the first routing node is configured to implement the method of any one of claims 1 to 20;
and the second routing node is configured to implement the method of any one of claims 21 to 28.
CN202211700619.4A 2022-10-17 2022-12-28 Calculation routing method, device and system Pending CN117914820A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022112689344 2022-10-17
CN202211268934 2022-10-17

Publications (1)

Publication Number Publication Date
CN117914820A true CN117914820A (en) 2024-04-19

Family

ID=90689912


Similar Documents

Publication Publication Date Title
US10938714B2 (en) Communication between distinct network domains
CN112470436B (en) Systems, methods, and computer-readable media for providing multi-cloud connectivity
RU2704714C1 (en) Technologies using ospf for providing maximum depth of node and/or communication link segment identifier
US20230078123A1 (en) Method for Forwarding Packet in SRV6 Service Function Chain and SF Device
US10084697B2 (en) Methods and apparatus for internet-scale routing using small-scale border routers
CN112953831A (en) Message forwarding method and device
US20230300070A1 (en) Packet Sending Method, Device, and System
WO2022001835A1 (en) Method and apparatus for sending message, and network device, system and storage medium
US11362954B2 (en) Tunneling inter-domain stateless internet protocol multicast packets
EP3054634A1 (en) Scheme for performing one-pass tunnel forwarding function on two-layer network structure
CN107872389A (en) Business load balance between symmetrical subnet in the networks for returning connection more
CN113660164A (en) Message forwarding method and network equipment
EP4246901A1 (en) Packet transmission method and apparatus
CN113542111A (en) Message forwarding method and network equipment
CN110022263B (en) Data transmission method and related device
CN113726652B (en) Notification information processing method and device and storage medium
US11949584B2 (en) Utilizing domain segment identifiers for inter-domain shortest path segment routing
US11909629B2 (en) Seamless segment routing for multiprotocol label switching (MPLS) interworking
WO2023274083A1 (en) Route publishing method and apparatus, packet forwarding method and apparatus, device, and storage medium
US11611508B2 (en) Packet forwarding method and network device
CN117914820A (en) Calculation routing method, device and system
CN111010344B (en) Message forwarding method and device, electronic equipment and machine-readable storage medium
US20230318966A1 (en) Packet Transmission Method, Correspondence Obtaining Method, Apparatus, and System
CN117615017A (en) Calculation force request method, device and system
US20230388228A1 (en) Packet forwarding method and device, and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination