CN112751766A - Message forwarding method and device and computer storage medium - Google Patents

Message forwarding method and device and computer storage medium

Info

Publication number
CN112751766A
Authority
CN
China
Prior art keywords: vpnsid, data center, proxy, vms, center routing
Prior art date
Legal status: Granted
Application number
CN201911046986.5A
Other languages
Chinese (zh)
Other versions
CN112751766B (en)
Inventor
闫朝阳
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201911046986.5A
Priority to PCT/CN2020/124463 (WO2021083228A1)
Publication of CN112751766A
Application granted
Publication of CN112751766B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/74 Address processing for routing
    • H04L45/24 Multipath
    • H04L45/38 Flow based routing
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Abstract

The application discloses a message forwarding method, a message forwarding apparatus, and a computer storage medium, belonging to the technical field of network function virtualization. In the method, a CSG receives a message, selects one proxy VPNSID from a plurality of proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID, and forwards the message to the data center routing node indicated by the target proxy VPNSID. That is, when sharing load, the CSG only needs to distribute the load across the data center routing nodes, and each data center routing node completes the actual load sharing. Because the maximum number of load-sharing paths of a data center routing node can be up to 128, the message forwarding method provided by the embodiments of the application is equivalent to increasing the number of paths over which the CSG ultimately shares load, thereby improving load-sharing efficiency.

Description

Message forwarding method and device and computer storage medium
Technical Field
The present application relates to the field of network function virtualization technologies, and in particular, to a method and an apparatus for forwarding a packet, and a computer storage medium.
Background
In 5G Regional Data Center (RDC) technology, a Virtual Network Function (VNF) may be deployed on multiple virtual machines, and each of the multiple virtual machines may individually execute the VNF to implement load sharing of the VNF. Therefore, when a base station service gateway (CSG) receives a packet carrying the identifier of the VNF, the CSG needs to forward the packet to one of the virtual machines, and the virtual machine executes the VNF based on the packet.
In the related art, for any VNF, a CSG obtains in advance the virtual private network segment identifier (VPNSID) of each of the multiple virtual machines deployed for that VNF, thereby obtaining multiple VPNSIDs, and also obtains the private network route of the VNF, which uniquely identifies the VNF. The CSG establishes a correspondence between the multiple VPNSIDs and the private network route of the VNF. When the CSG receives a message carrying the private network route of the VNF, it maps the message, through a multipath hash algorithm, to one of the multiple VPNSIDs according to the correspondence, and forwards the message with that VPNSID as the destination address, so that the message reaches the virtual machine indicated by that VPNSID.
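The related-art behaviour can be pictured with a minimal sketch (Python-style pseudocode; the table contents, field names, and hash input are illustrative assumptions rather than text from the patent):

    # Related-art CSG: private network route of a VNF -> at most 8 VM VPNSIDs
    related_art_table = {
        "vnf-private-route": ["VPNSID-1", "VPNSID-2", "VPNSID-3"],  # capped at 8 entries
    }

    def related_art_forward(private_route, flow_key):
        vpnsids = related_art_table[private_route]
        # multipath hash maps the flow onto exactly one VM VPNSID
        selected = vpnsids[hash(flow_key) % len(vpnsids)]
        return selected  # used as the destination address of the message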
However, in this message forwarding method, because the current CSG supports at most 8 load-sharing paths, the correspondence can include at most 8 VPNSIDs, which limits the efficiency of load sharing.
Disclosure of Invention
The application provides a message forwarding method, a message forwarding device and a computer storage medium, which can improve the efficiency of load sharing.
The technical scheme is as follows:
In a first aspect, a method for forwarding a packet is provided, where the method is applied to a CSG in a communication network. The communication network further comprises a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where each of the plurality of VRFs has one or more of the plurality of VMs connected to it. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
In the method, the CSG receives a message, where the message carries an identifier of the target VNF; the CSG selects one proxy VPNSID from the plurality of proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID and forwards the message with the target proxy VPNSID as the destination address, so that the message reaches the data center routing node indicated by the target proxy VPNSID, and that data center routing node is instructed to forward the message according to the plurality of VPNSIDs included in a second forwarding table. The first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the plurality of proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs configured corresponding to the VPNSIDs of the plurality of VMs. The plurality of VPNSIDs included in the second forwarding table are those VPNSIDs, among the VPNSIDs of the plurality of VMs, that correspond to the target proxy VPNSID.
In this embodiment, to avoid being limited by the maximum number of load-sharing paths of the CSG, one proxy VPNSID may be configured for each data center routing node, and for the VPNSID of any VM, a plurality of proxy VPNSIDs are configured correspondingly. The proxy VPNSIDs can therefore replace the original VM VPNSIDs in the local forwarding table of the CSG, so that when sharing load the CSG only needs to distribute the load across the data center routing nodes, and each data center routing node completes the actual load sharing. Because the maximum number of load-sharing paths of a data center routing node can be up to 128, the message forwarding method provided by the embodiments of the application is equivalent to increasing the number of paths over which the CSG ultimately shares load, thereby improving load-sharing efficiency.
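As a rough worked example of this effect (the node count is illustrative and not taken from the embodiments): if the CSG hashes across 8 data center routing nodes and each of those nodes can in turn hash across up to 128 VM VPNSIDs, the effective fan-out becomes 8 × 128 = 1024 paths, compared with the 8 paths available when the CSG hashes directly onto VM VPNSIDs.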
Optionally, in the method, the CSG acquires the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs, and adds the acquired proxy VPNSIDs to the first forwarding table.
In the embodiment of the present application, the proxy VPNSID is used to replace the VPNSID of the original VM in the local forwarding table of the CSG, so that the CSG needs to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs before forwarding the packet to construct the first forwarding table provided in the embodiment of the present application, thereby improving the efficiency of subsequent load sharing.
Optionally, the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs as follows: the CSG receives a first notification message sent by any VRF of the plurality of VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
In one implementation, the VRF may actively report to the CSG the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the CSG can construct the first forwarding table, which improves the efficiency with which the CSG constructs the first forwarding table.
Optionally, the communication network further comprises an RR. In this case, the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs as follows: the CSG receives a second notification message sent by the RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the VMs.
In another implementation, the RR reports the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs to the CSG, so that the CSG constructs the first forwarding table, which improves the flexibility with which the CSG constructs the first forwarding table.
In a second aspect, a packet forwarding method is provided, where the method is applied to any data center routing node in a plurality of data center routing nodes in a communication network. The communication network further includes a CSG, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each VRF has one or more of the plurality of VMs connected to it. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
In the method, any data center routing node receives a message sent by the CSG, where the message carries a target proxy VPNSID. When the target proxy VPNSID carried by the message is the proxy VPNSID of that data center routing node, the data center routing node selects one VPNSID from the VPNSIDs of the VMs in the second forwarding table and forwards the message with the selected VPNSID as the destination address. The multiple VPNSIDs included in the second forwarding table are the VPNSIDs of the multiple VMs corresponding to the proxy VPNSID of that data center routing node.
In the embodiment of the application, the proxy VPNSID is adopted to replace the original VPNSID of the VM in the local forwarding table of the CSG, so that the CSG only needs to be responsible for sharing the load to each data center routing node when sharing the load, and each data center routing node completes the actual load sharing. Therefore, when receiving the packet, the data center routing node needs to forward the packet to one VM of the multiple VMs according to the second forwarding table, so as to implement load sharing. Since the maximum load sharing path number of the data center routing node can be up to 128, the packet forwarding method provided by the embodiment of the present application is equivalent to increasing the path number of the final load sharing by the CSG, thereby improving the load sharing efficiency.
Optionally, in the method, the data center routing node acquires the VPNSIDs of the multiple VMs corresponding to its own proxy VPNSID, and adds the acquired VPNSIDs of the VMs to the second forwarding table.
In the embodiment of the present application, each data center routing node completes actual load sharing, so that the data center routing node needs to acquire VPNSIDs of multiple VMs corresponding to its own proxy VPNSID before forwarding a packet, so as to construct the second forwarding table provided in the embodiment of the present application, thereby improving the efficiency of subsequent load sharing.
Optionally, the data center routing node acquires the VPNSIDs of the multiple VMs corresponding to its own proxy VPNSID as follows: the data center routing node receives a third notification message sent by any VRF of the plurality of VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and the data center routing node obtains, from the third notification message and according to its own proxy VPNSID, the VPNSIDs of the VMs corresponding to its own proxy VPNSID.
In one implementation, the VRF may actively report to the data center routing node the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the data center routing node can construct the second forwarding table, which improves the efficiency with which the data center routing node constructs the second forwarding table.
Optionally, the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of any data center routing node are configured on any data center routing node by an administrator.
In another implementation manner, VPNSIDs of multiple VMs corresponding to the proxy VPNSID of any data center routing node may be directly configured manually, so that flexibility of the data center routing node in constructing the second forwarding table is improved.
In a third aspect, a message forwarding method is provided, where the method is applied to any one of multiple VRFs in a communication network, and the communication network further includes a CSG, multiple data center routing nodes, and multiple VMs for executing a target VNF, and each of the multiple VRFs is connected to one or more of the multiple VMs. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
In the method, for any VRF, the VRF acquires a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in a plurality of VMs connected with the VRF; the VRF issues a first notification message to the CSG, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in a plurality of VMs connected with the VRF.
In this embodiment of the application, the VRF may actively report to the CSG the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the CSG can construct the first forwarding table, which improves the efficiency with which the CSG constructs the first forwarding table.
Optionally, after the VRF obtains the plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the VRF, the VRF further issues a third notification message to the plurality of data center routing nodes, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the VRF.
In the embodiment of the application, the VRF may also actively report to the data center routing nodes the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the data center routing nodes can construct the second forwarding table, which improves the efficiency with which the data center routing nodes construct the second forwarding table.
In a fourth aspect, a method for forwarding a packet is provided, where the method is applied to an RR in a communication network. The communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each VRF has one or more of the plurality of VMs connected to it. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
In the method, the RR acquires the VPNSID of each VM connected to any VRF of the plurality of VRFs; for the acquired VPNSID of any VM, the RR determines, based on a locally stored correspondence between VPNSIDs and proxy VPNSIDs, the proxy VPNSIDs corresponding to that VPNSID, thereby obtaining the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and the RR sends a second notification message to the CSG, where the second notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to each of the plurality of VRFs.
When the RR reports the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs to the CSG so that the CSG constructs the first forwarding table, the RR first needs to acquire the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM and then send the second notification message to the CSG. This improves the flexibility with which the CSG constructs the first forwarding table.
Optionally, in the method, the correspondence between VPNSIDs and proxy VPNSIDs that is stored locally by the RR is configured on the RR by an administrator, which improves the flexibility with which the CSG constructs the first forwarding table.
In a fifth aspect, a CSG in a communication network is provided. The communication network further comprises a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each of the plurality of VRFs has one or more of the plurality of VMs connected to it. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
The CSG includes:
a receiving module, configured to receive a packet, where the packet carries an identifier of a target VNF;
a selection module for selecting one proxy VPNSID from the plurality of proxy VPNSIDs included in the first forwarding table as a target proxy VPNSID;
and a forwarding module, configured to forward the message with the target proxy VPNSID as the destination address, so that the message reaches the data center routing node indicated by the target proxy VPNSID, and that data center routing node is instructed to forward the message according to the plurality of VPNSIDs included in the second forwarding table. The first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the plurality of proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs configured corresponding to the VPNSIDs of the plurality of VMs. The plurality of VPNSIDs included in the second forwarding table are those VPNSIDs, among the VPNSIDs of the plurality of VMs, that correspond to the target proxy VPNSID.
Optionally, the CSG further includes an adding module, configured to obtain a proxy VPNSID corresponding to the VPNSID of the multiple VMs, and add the obtained proxy VPNSID to the first forwarding table.
Optionally, the adding module is specifically configured to: receive a first notification message sent by any one of the VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
Optionally, the communication network further comprises an RR. At this time, the adding module is specifically configured to: and receiving a second notification message sent by the RR, wherein the second notification message carries the proxy VPNSID corresponding to the VPNSID of the plurality of VMs.
Technical effects of the modules included in the CSG provided in the fifth aspect may refer to the packet forwarding method provided in the first aspect, and are not described in detail here.
In a sixth aspect, a data center routing node of a plurality of data center routing nodes in a communication network is provided. The communication network further includes a CSG, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each VRF has one or more of the plurality of VMs connected to it. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
The data center routing node comprises:
and the receiving module is used for receiving the message sent by the CSG, wherein the message carries the target proxy VPNSID.
And the selecting module is used for selecting one VPNSID from the VPNSIDs of the plurality of VMs in the second forwarding table under the condition that the target proxy VPNSID carried by the message is the proxy VPNSID of any data center routing node.
And the forwarding module is used for forwarding the message by taking the selected VPNSID as a destination address. The multiple VPNSIDs included in the second forwarding table refer to VPNSIDs of multiple VMs corresponding to a proxy VPNSID of the data center routing node.
Optionally, the data center routing node further includes:
and the adding module is used for acquiring the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node and adding the acquired VPNSIDs of the VMs to the second forwarding table.
Optionally, the adding module is specifically configured to: receive a third notification message sent by any VRF of the plurality of VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and obtain, from the third notification message and according to the proxy VPNSID of the data center routing node, the VPNSIDs of the VMs corresponding to the proxy VPNSID of the data center routing node.
Optionally, VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by an administrator.
The technical effects of the modules included in the data center routing node provided by the sixth aspect may refer to the packet forwarding method provided by the second aspect, and are not described in detail here.
A seventh aspect provides a VRF, being any one of a plurality of VRFs in a communication network, where the communication network further comprises a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, and each of the plurality of VRFs has one or more of the plurality of VMs connected to it. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
The VRF includes:
an obtaining module, configured to obtain a plurality of proxy VPNSIDs corresponding to a VPNSID of each of a plurality of VMs connected to the VRF;
and the issuing module is used for issuing a first notification message to the CSG, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in the plurality of VMs connected with the VRF.
Optionally, the issuing module is further configured to issue a third notification message to the multiple data center routing nodes, where the third notification message carries multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
Technical effects of the modules included in the VRF provided in the seventh aspect may refer to the packet forwarding method provided in the third aspect, and are not described in detail herein.
In an eighth aspect, an RR in a communication network is provided. The communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each VRF has one or more of the plurality of VMs connected to it. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
The RR includes:
an obtaining module, configured to obtain the VPNSID of each VM connected to any VRF of the plurality of VRFs, and, for the obtained VPNSID of any VM, determine, based on a locally stored correspondence between VPNSIDs and proxy VPNSIDs, the proxy VPNSIDs corresponding to that VPNSID, thereby obtaining the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF;
and the issuing module is used for sending a second notification message to the CSG, wherein the second notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in a plurality of VMs connected with each VRF in the plurality of VRFs.
Optionally, the correspondence between VPNSIDs and proxy VPNSIDs that is stored locally by the RR is configured on the RR by an administrator, which improves the flexibility with which the CSG constructs the first forwarding table.
Technical effects of the modules included in the RR provided in the eighth aspect may refer to the packet forwarding method provided in the fourth aspect, and are not described in detail here.
A ninth aspect provides a CSG in a communication network, the communication network further comprising a plurality of data center routing nodes, a plurality of Virtual Routing Forwarding (VRFs), and a plurality of VMs for executing a target Virtual Network Function (VNF), each VRF of the plurality of VRFs having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs being configured with a VPNSID, each data center routing node of the plurality of data center routing nodes being configured with a proxy VPNSID;
the CSG comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of the first aspects described above.
In a tenth aspect, a data center routing node in a communication network is provided, where the communication network includes a plurality of data center routing nodes, a CSG, a plurality of VRFs, and a plurality of VMs for executing a target VNF, each VRF is connected with one or more of the plurality of VMs, each VM in the plurality of VMs is configured with a VPNSID, and each data center routing node in the plurality of data center routing nodes is configured with a proxy VPNSID;
any one of the plurality of data center routing nodes comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of the second aspects described above.
In an eleventh aspect, a VRF in a communication network is provided, the communication network including a plurality of VRFs, a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, one or more of the plurality of VMs being connected to each VRF in the plurality of VRFs, each VM in the plurality of VMs being configured with a VPNSID, and each data center routing node in the plurality of data center routing nodes being configured with a proxy VPNSID;
any VRF in the plurality of VRFs comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of the above third aspects.
A twelfth aspect provides an RR in a communication network, the communication network further comprising a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs being configured with one VPNSID, each data center routing node of the plurality of data center routing nodes being configured with one proxy VPNSID;
the RR comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of the fourth aspects above.
In a thirteenth aspect, a chip is provided, where the chip is disposed in a CSG in a communication network, the communication network further includes a plurality of data center routing nodes, a plurality of virtual routing forwarding VRFs, and a plurality of VMs for executing a target virtual network function VNF, where each VRF in the plurality of VRFs is connected with one or more of the plurality of VMs, each VM in the plurality of VMs is configured with one VPNSID, and each data center routing node in the plurality of data center routing nodes is configured with one proxy VPNSID;
the chip comprises a processor and an interface circuit;
the interface circuit is used for receiving instructions and transmitting the instructions to the processor;
the processor is configured to perform the method of any of the first aspect above.
In a fourteenth aspect, a chip is provided, where the chip is disposed in any one of a plurality of data center routing nodes included in a communication network, the communication network further includes a CSG, a plurality of VRFs, and a plurality of VMs used for executing a target VNF, each VRF is connected to one or more of the plurality of VMs, each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
the chip comprises a processor and an interface circuit;
the interface circuit is used for receiving instructions and transmitting the instructions to the processor;
the processor is configured to perform the method of any of the second aspects described above.
In a fifteenth aspect, a chip is provided, where the chip is disposed in any one of a plurality of VRFs included in a communication network, the communication network further includes a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, one or more of the plurality of VMs are connected to each VRF in the plurality of VRFs, each VM in the plurality of VMs is configured with one VPNSID, and each data center routing node in the plurality of data center routing nodes is configured with one proxy VPNSID;
the chip comprises a processor and an interface circuit;
the interface circuit is used for receiving instructions and transmitting the instructions to the processor;
the processor is configured to perform the method of any of the third aspects described above.
A sixteenth aspect provides a chip disposed in an RR of a communication network, where the communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where each VRF is connected to one or more of the VMs, each VM of the VMs is configured with a VPNSID, and each data center routing node of the data center routing nodes is configured with a proxy VPNSID;
the chip comprises a processor and an interface circuit;
the interface circuit is used for receiving instructions and transmitting the instructions to the processor;
the processor is configured to perform the method of any of the fourth aspects described above.
A seventeenth aspect provides a packet forwarding system, where the system includes a CSG, multiple data center routing nodes, multiple virtual routing forwarding VRFs, and multiple VMs for executing a target virtual network function VNF, where each VRF in the multiple VRFs is connected with one or more of the multiple VMs, each VM in the multiple VMs is configured with a virtual private network segment identifier VPNSID, and each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID;
the CSG is used for acquiring proxy VPNSID corresponding to the VPNSID of the plurality of VMs and adding the acquired proxy VPNSID into the first forwarding table;
and any one of the data center routing nodes is used for acquiring the VPNSIDs of the VMs corresponding to the proxy VPNSID of the data center routing node and adding the acquired VPNSIDs of the VMs to the second forwarding table.
Optionally, the CSG is specifically configured to: receive a first notification message sent by any VRF of the VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
Optionally, the CSG is specifically configured to: receive a second notification message sent by an RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the VMs.
Optionally, the data center routing node is configured to: receive a third notification message sent by any one of the multiple VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and obtain, from the third notification message and according to the proxy VPNSID of the data center routing node, the VPNSIDs of the VMs corresponding to the proxy VPNSID of the data center routing node.
Optionally, VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by an administrator.
The technical effects of each node in the message forwarding system may also refer to the technical effects of the message forwarding methods provided in the first aspect, the second aspect, the third aspect, and the fourth aspect, which are not described herein again.
Drawings
Fig. 1 is a schematic architecture diagram of a communication network according to an embodiment of the present application;
fig. 2 is a schematic architecture diagram of another communication network provided in an embodiment of the present application;
fig. 3 is a flowchart of a message forwarding method according to an embodiment of the present application;
fig. 4 is a message forwarding flow diagram provided in an embodiment of the present application;
fig. 5 is a flowchart of a method for configuring a first forwarding table and a second forwarding table according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another network device provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of an interface board in the network device shown in fig. 7 according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a CSG provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a data center routing node according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another network device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference herein to "a plurality" means two or more. In the description of the present application, "/" indicates an OR relationship; for example, A/B may indicate A or B. "And/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, to facilitate a clear description of the technical solutions of the embodiments of the present application, terms such as "first" and "second" are used to distinguish between identical or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that the terms "first", "second", and the like do not denote any order, quantity, or importance.
Before explaining the message forwarding method provided by the embodiment of the present application, a communication network related to the embodiment of the present application is explained first.
Fig. 1 is a schematic architecture diagram of a communication network according to an embodiment of the present application. As shown in fig. 1, the communication network 100 includes a plurality of CSGs, a plurality of Provider Edges (PEs), a plurality of data center routing nodes, a plurality of data centers, and a plurality of Virtual Route Forwarding (VRF).
The data center may be an RDC, a Central Data Center (CDC) or an Edge Data Center (EDC), and fig. 1 illustrates an RDC as an example of the data center. The data center routing node may be a Data Center Gateway (DCGW) deployed between the PE and the data center, and may also be a DC Spine router deployed between the data center and the VRF, which is not specifically limited in this embodiment of the present application. Fig. 1 illustrates an example of a data center routing node as a DCGW deployed between a PE and a data center.
As shown in fig. 1, any CSG communicates with any DCGW through multiple PEs in the backbone network, and any DCGW communicates with any RDC. Any RDC communicates with each VRF, and one or more VMs are connected to each VRF (fig. 1 illustrates one VM connected to each VRF). There are multiple VMs executing the same VNF, and they may be connected to different VRFs. As shown in fig. 1, there are three VMs for executing the VNF shown in fig. 1, namely the three VMs connected to the first three VRFs from top to bottom in fig. 1.
To enable accurate load sharing for a certain VNF, an End.DX-type SID, also referred to as a VPNSID, is configured for each of the VMs executing the VNF. That is, in the embodiment of the present application, each VPNSID is used to uniquely identify one VM. In this way, the forwarding table of the CSG in the related art includes multiple VPNSIDs corresponding to the identifier of the VNF, so that the CSG forwards the packet to the VM indicated by one of the VPNSIDs according to the forwarding table.
As shown in fig. 1, the current CSG supports at most 8 paths of load sharing; that is, when the CSG receives packets, it can distribute them across at most 8 VMs based on a multipath hash algorithm, which seriously limits the efficiency of load sharing. The embodiment of the present application provides a packet forwarding method for this scenario, so as to improve the efficiency of load sharing.
In addition, the VNF shown in fig. 1 may be an Access and Mobility Management Function (AMF), a Session Management Function (SMF), a User Plane Function (UPF), or the like.
In addition, any of the above VRFs is connected to a VM through a designated access circuit (AC) Layer 3 interface or sub-interface, which is not described in detail herein.
It should be noted that the number of the devices shown in fig. 1 is only for illustration, and does not constitute a limitation to the architecture of the communication network provided in the embodiment of the present application.
For the sake of convenience in the following description, the communication network shown in fig. 1 is simplified, and the simplified communication network is shown in fig. 2. The subsequent method of forwarding the message is illustrated by the communication network shown in fig. 2. As shown in fig. 2, the communication network 200 includes a CSG, a plurality of data center routing nodes (illustrated in fig. 2 by taking two data center routing nodes as an example and respectively labeled as data center routing node 1 and data center routing node 2), a plurality of VRFs (illustrated in fig. 2 by taking two VRFs as an example and respectively labeled as VRF1 and VRF2), and a plurality of VMs for executing a target VNF, and one or more VMs of the plurality of VMs are connected to each VRF of the plurality of VRFs (illustrated in fig. 2 by taking two VMs connected to each VRF as an example).
Further, as shown in fig. 2, the network is divided into a plurality of domains, each domain includes a set of hosts and a set of routers, and the hosts and routers within one domain are collectively managed by one controller. The CSG, data center routing node 1, and data center routing node 2 in fig. 2 are located in the same domain, and data center routing node 1, data center routing node 2, VRF1, and VRF2 are located in another domain. A route reflector (RR) is also deployed within each domain, labeled RR1 and RR2 in fig. 2. The function of the route reflector in each domain is that any routing device in the domain can communicate with other routing devices through the route reflector without a direct network connection being established between the two routing devices, thereby reducing the consumption of network resources.
The functions of the nodes in the communication network shown in fig. 2 will be described in detail in the following embodiments and are not described here.
The following describes the packet forwarding method provided in the embodiment of the present application by taking the communication network shown in fig. 2 as an example; for other node deployments in the communication network of fig. 1, packet forwarding may be implemented with reference to the following embodiments.
Fig. 3 is a flowchart of a message forwarding method according to an embodiment of the present application. As shown in fig. 3, the method comprises the steps of:
step 301: and the CSG receives a message, wherein the message carries the identifier of the target VNF.
In this embodiment, to avoid being limited by the maximum number of load-sharing paths of the CSG, one proxy VPNSID may be configured for each data center routing node, and for the VPNSID of any VM, multiple proxy VPNSIDs may be configured correspondingly. The proxy VPNSIDs can therefore replace the original VM VPNSIDs in the local forwarding table of the CSG, so that when sharing load the CSG only needs to distribute the load across the data center routing nodes, and each data center routing node completes the actual load sharing; the maximum number of load-sharing paths of a data center routing node can be up to 128.
Therefore, the CSG stores a first forwarding table corresponding to the identifier of the target VNF, where the first forwarding table includes a plurality of proxy VPNSIDs, so that the CSG forwards the packet through steps 302 and 303 described below. The plurality of proxy VPNSIDs included in the first forwarding table refer to proxy VPNSIDs corresponding to VPNSIDs of a plurality of VMs for executing the target VNF. The configuration process of the first forwarding table will be described in the following embodiments, and will not be described here.
Fig. 4 is a schematic diagram of packet forwarding provided in an embodiment of the present application. As shown in fig. 4, the first forwarding table stored in the CSG includes two proxy VPNSIDs, DE::B100 and DF::B100. DE::B100 is the proxy VPNSID of data center routing node 1, and DF::B100 is the proxy VPNSID of data center routing node 2.
For the VPNSID of any VM, a plurality of proxy VPNSIDs are configured correspondingly. As shown in fig. 4, two corresponding proxy VPNSIDs, DE::B100 and DF::B100, are configured for the VPNSID of the first VM from top to bottom (whose VPNSID is A8:1::B100). The VPNSID of the second VM from top to bottom (A8:1::B101) is likewise configured with the two corresponding proxy VPNSIDs DE::B100 and DF::B100, and the same two proxy VPNSIDs can also be configured for the third and fourth VMs. That is, for the VPNSID of each VM shown in fig. 4, the two corresponding proxy VPNSIDs DE::B100 and DF::B100 are configured.
After the corresponding proxy VPNSIDs have been configured for the VPNSID of each VM as described above, the first forwarding table includes two proxy VPNSIDs (DE::B100 and DF::B100), which are the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs executing the target VNF. The process of configuring the proxy VPNSIDs corresponding to the VPNSID of each VM and the specific way in which the first forwarding table is generated are described in detail in the later embodiments on generating the first forwarding table and are not repeated here.
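For orientation only, the two tables in the fig. 4 example can be sketched as the following Python-style structures (the dictionary layout and the exact contents of the second forwarding tables are assumptions for illustration):

    # First forwarding table on the CSG: identifier of the target VNF -> proxy VPNSIDs
    first_forwarding_table = {
        "target-vnf-id": ["DE::B100", "DF::B100"],
    }

    # Second forwarding table on data center routing node 1:
    # own proxy VPNSID -> VPNSIDs of the VMs it load-shares to (see step 305 below)
    second_forwarding_table_node1 = {
        "DE::B100": ["A8:1::B100", "A8:1::B101"],
    }
    # Data center routing node 2 holds an analogous table keyed by DF::B100.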
Step 302: the CSG selects one proxy VPNSID from among the plurality of proxy VPNSIDs included in the first forwarding table as a target proxy VPNSID.
In a specific implementation, the CSG may select one proxy VPNSID from the plurality of proxy VPNSIDs included in the first forwarding table through a multipath hash algorithm. The multipath hash algorithm may be an equal-cost multi-path (ECMP) algorithm, in which case each of the plurality of proxy VPNSIDs in the first forwarding table is selected with equal probability, so that the CSG distributes received packets uniformly across the data center routing nodes. For example, in the forwarding flow shown in fig. 4, the proxy VPNSID selected by the CSG based on the multipath hash algorithm is DE::B100, which indicates that the packet is to be forwarded to data center routing node 1.
It is to be understood that, in another implementation, a different hash algorithm may be used so that the proxy VPNSIDs in the first forwarding table are selected with different probabilities; the specific type of hash algorithm may be determined according to the load-balancing policy.
Step 303: the CSG forwards the message using the target agent VPNSID as the destination address.
In the embodiment of the present application, since the proxy VPNSID is used to replace the VPNSID in the related art, after the CSG selects the proxy VPNSID from the first forwarding table as the target proxy VPNSID, the CSG can forward the packet by using the target proxy VPNSID as the destination address of the packet.
For example, in the forwarding flow shown in fig. 4, the CSG uses the proxy VPNSID DE::B100 as the destination address of the packet (labeled DA in fig. 4), and the source address of the packet (labeled SA in fig. 4) is the CSG. In addition, as shown in fig. 4, the packet also carries a payload. The step denoted by ① in fig. 4 illustrates the foregoing process.
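A minimal sketch of steps 302 and 303, assuming an ECMP-style hash over a per-flow key (which header fields feed the hash is an assumption; real devices typically hash on packet header fields):

    def csg_forward(packet, flow_key, first_forwarding_table):
        proxies = first_forwarding_table[packet["vnf_id"]]      # e.g. ["DE::B100", "DF::B100"]
        target_proxy = proxies[hash(flow_key) % len(proxies)]   # step 302: hash-based selection
        packet["DA"] = target_proxy                             # step 303: target proxy VPNSID as destination
        packet["SA"] = "CSG"
        return packet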
Through the above steps 301 to 303, the CSG may forward the packet to the data center routing node indicated by the selected target proxy VPNSID. A second forwarding table is configured in the data center routing node indicated by the target proxy VPNSID; the second forwarding table includes multiple VPNSIDs and is used to instruct that data center routing node to forward the message according to the multiple VPNSIDs it includes, where those VPNSIDs are the ones, among the VPNSIDs of the multiple VMs, that correspond to the target proxy VPNSID. Therefore, for any data center routing node, if the node is the one indicated by the target proxy VPNSID selected by the CSG, it may process the received packet through the following steps 304 to 306. The configuration process of the second forwarding table is explained in the later embodiments and is not explained here.
Step 304: for any data center routing node, the data center routing node receives a message sent by the CSG, and the message carries the target proxy VPNSID.
Since any data center routing node in the network can receive the message sent by the CSG, each data center routing node that receives the message needs to determine whether it should process the message itself. In a specific implementation, because the destination address carried in the packet is the target proxy VPNSID, the data center routing node compares the target proxy VPNSID carried in the packet with its own configured proxy VPNSID. If they are inconsistent, the message is to be processed by another data center routing node, and the message is forwarded onward to that node. If they are consistent, the message is handled by this node, which then continues forwarding the message through steps 305 and 306 below.
For example, in the forwarding process shown in fig. 4, when data center routing node 1 receives the packet, the target proxy VPNSID carried by the packet is DE::B100, which is its own proxy VPNSID, so data center routing node 1 continues to forward the packet through steps 305 and 306 below.
When data center routing node 2 receives the packet, the target proxy VPNSID carried by the packet is inconsistent with the proxy VPNSID of data center routing node 2, so data center routing node 2 forwards the packet toward the data center routing node indicated by the target proxy VPNSID.
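The decision in step 304 can be sketched as follows (a simplified illustration; how a mismatched packet is routed onward follows normal IPv6/SRv6 forwarding and is not modelled here):

    def on_packet_received(node_proxy_vpnsid, packet):
        if packet["DA"] == node_proxy_vpnsid:
            # the target proxy VPNSID is this node's own: continue with steps 305 and 306
            return "process_locally"
        # otherwise the packet is destined for another data center routing node
        return "forward_towards " + packet["DA"]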
Step 305: and under the condition that the target proxy VPNSID carried by the message is the proxy VPNSID of the data center routing node, the data center routing node selects one VPNSID from the VPNSIDs of the VMs in the second forwarding table, wherein the VPNSIDs of the VMs in the second forwarding table are the VPNSIDs of the VMs corresponding to the proxy VPNSID of any data center routing node.
Since the data center routing node locally stores a second forwarding table corresponding to its own proxy VPNSID, and the second forwarding table includes VPNSIDs of multiple VMs corresponding to its own proxy VPNSID, when a proxy VPNSID carried by a packet is a proxy VPNSID of any data center routing node, the data center routing node may directly select one VPNSID from the second forwarding table through a multipath hash algorithm, so as to forward the packet through the following step 306.
The multi-path hash algorithm may also be an ECMP algorithm. At this time, the probability that the VPNSID of each VM in the second forwarding table is selected is the same, so that the data center routing node uniformly shares the received message to each VM. It is to be understood that in another implementation manner, different hash algorithms may be used to make the probabilities that the VPNSIDs of the VMs in the second forwarding table are selected different, and a specific type of hash algorithm may be determined specifically according to a load balancing policy.
For example, in the forwarding flow shown in fig. 4, assume that the second forwarding table of data center routing node 1 includes two VPNSIDs, A8:1::B100 and A8:1::B101, so data center routing node 1 can select one of the two VPNSIDs through the ECMP algorithm.
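A sketch of step 305, including the optional non-uniform case mentioned above (the weighting scheme is an assumption used only to illustrate a load-balancing policy):

    import random

    def select_vm_vpnsid(second_forwarding_table, own_proxy, flow_key, weights=None):
        vm_vpnsids = second_forwarding_table[own_proxy]        # e.g. ["A8:1::B100", "A8:1::B101"]
        if weights is None:
            # ECMP-style: every VM VPNSID is selected with equal probability
            return vm_vpnsids[hash(flow_key) % len(vm_vpnsids)]
        # weighted variant: selection probabilities differ per VM according to policy
        return random.choices(vm_vpnsids, weights=weights, k=1)[0]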
Step 306: and the data center routing node forwards the message by taking the selected VPNSID as a destination address.
Since the message is finally processed by the VM, after selecting a VPNSID from the second forwarding table, the data center routing node may forward the message using the selected VPNSID as a destination address, so that the VM indicated by the selected VPNSID processes the message.
For example, in the forwarding flow shown in fig. 4, if the VPNSID selected by data center routing node 1 from the second forwarding table is A8:1::B100, then, as shown in fig. 4, data center routing node 1 forwards the packet with A8:1::B100 as the destination address (marked DA in fig. 4). If the selected VPNSID is A8:1::B101, data center routing node 1 forwards the packet with A8:1::B101 as the destination address, as shown in fig. 4. It should be noted that the two steps denoted by ② in fig. 4 describe the foregoing process and are alternatives to each other.
In addition, since each VM is attached to a VRF, when the data center routing node forwards the packet with the selected VPNSID as the destination address, the VRF to which the VM indicated by the selected VPNSID is connected first receives the packet and then forwards it to that VM. The step marked ③ in fig. 4 illustrates this process.
In the process of forwarding the packet shown in fig. 3, it is necessary to configure a first forwarding table on the CSG, and configure a second forwarding table in each data center routing node, where specific functions of the first forwarding table and the second forwarding table have been explained in the above embodiments, and then, the configuration processes of the first forwarding table and the second forwarding table are explained.
Fig. 5 is a flowchart of a method for configuring a first forwarding table and a second forwarding table according to an embodiment of the present application. As shown in fig. 5, the method includes the following steps:
step 501: for any VRF, the VRF acquires the VPNSID configured for any VM of a plurality of VMs connected with the VRF.
The VPNSID configured for any one of the VMs connected to the VRF may be configured by a network controller, or may be configured directly on the VRF by an administrator, which is not specifically limited in this application. If a network controller is used, the network controller, after configuring the VPNSID of any VM of the plurality of VMs connected to the VRF, issues the configured VPNSID to the VRF, so that the VRF acquires the VPNSID configured for each VM connected to it.
In a specific implementation, a network controller or administrator configures a VPNSID for each connected VM according to the location identifier (Locator) of the VRF. For example, in the communication network shown in fig. 4, the locator of VRF1 is A8:1::/64, and, as shown in fig. 4, a network controller or administrator can configure two VPNSIDs for the two VMs connected to VRF1: the VPNSID configured for the first VM from top to bottom in fig. 4 is A8:1::B100, and the VPNSID configured for the second VM from top to bottom is A8:1::B101.
Step 502: for any data center routing node, the data center routing node obtains a proxy VPNSID configured for the data center routing node.
The proxy VPNSID of the data center routing node configured on the data center routing node may be configured by a network controller, or may be configured on the data center routing node by an administrator, which is not specifically limited in this application. If the network controller is used for configuring the proxy VPNSID, the network controller issues the configured proxy VPNSID to the data center routing node after configuring the proxy VPNSID of the data center routing node, so that the data center routing node acquires the proxy VPNSID configured for the data center routing node.
In a specific implementation, a network controller or administrator configures the proxy VPNSID for the data center routing node according to the location identifier (Locator) of that node. For example, in the communication network shown in fig. 4, the locator of data center routing node 1 is DE::/64, and, as shown in fig. 4, a network controller or administrator may configure data center routing node 1 with the proxy VPNSID DE::B100. In the same manner, based on the locator DF::/64 of data center routing node 2 shown in fig. 4, data center routing node 2 is configured with the proxy VPNSID DF::B100.
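A configuration sketch for steps 501 and 502, assuming the SIDs are formed simply by appending a function part to the locator prefix (how a controller actually allocates the function part is an implementation detail and an assumption here):

    # Locators from fig. 4
    vrf1_locator = "A8:1::/64"
    node1_locator = "DE::/64"
    node2_locator = "DF::/64"

    def sid_from_locator(locator, function_part):
        # e.g. ("A8:1::/64", "B100") -> "A8:1::B100"
        return locator.split("/")[0] + function_part

    vm1_vpnsid = sid_from_locator(vrf1_locator, "B100")    # A8:1::B100
    vm2_vpnsid = sid_from_locator(vrf1_locator, "B101")    # A8:1::B101
    node1_proxy = sid_from_locator(node1_locator, "B100")  # DE::B100
    node2_proxy = sid_from_locator(node2_locator, "B100")  # DF::B100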
Step 503: the CSG acquires proxy VPNSIDs corresponding to VPNSIDs of the plurality of VMs for executing the target VNF, and adds the acquired proxy VPNSIDs to the first forwarding table.
In this application, the CSG may obtain the proxy VPNSID corresponding to the VPNSID of the multiple VMs for executing the target VNF through the following two specific implementation manners:
the first implementation mode comprises the following steps: for any VRF, the VRF acquires a plurality of proxy VPNSIDs configured corresponding to the VPNSID of any one of a plurality of VMs connected with the VRF. The VRF issues a first notification message to the CSG, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of any one of a plurality of VMs connected with the VRF. The CSG receives the first notification message sent by the VRF. The CSG may obtain the proxy VPNSID corresponding to the VPNSID of the multiple VMs according to the first advertisement message sent by each VRF.
For any VM, the plurality of proxy VPNSIDs configured corresponding to the VPNSID of the VM may be configured by the network controller, or may be configured directly on the VRF by an administrator, which is not specifically limited in this application. If the network controller is used, after configuring the plurality of proxy VPNSIDs corresponding to the VPNSIDs of the VMs, the network controller distributes the configured proxy VPNSIDs corresponding to the VPNSID of each VM to the VRFs, so that each VRF acquires the plurality of proxy VPNSIDs configured corresponding to the VPNSIDs of the VMs connected to it.
As shown in fig. 2, the VRF and the CSG are located in different domains. Therefore, any of the VRFs may publish the first advertisement message to the CSG through MP-BGP/EVPN (Multiprotocol Border Gateway Protocol (MP-BGP) with Ethernet Virtual Private Network (EVPN)).
In the first implementation manner, the VRF actively reports to the CSG the multiple proxy VPNSIDs configured corresponding to the VPNSID of any one of the multiple VMs connected to the VRF.
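As a rough sketch of this first implementation, the CSG could populate the first forwarding table, keyed by VNF identifier, from the (VM VPNSID, proxy VPNSIDs) pairs carried in the first notification messages. The message structure, the vnf_id association, and the function names below are assumptions for illustration and not the MP-BGP/EVPN encoding.

```python
from collections import defaultdict

# first_table[vnf_id] holds the proxy VPNSIDs the CSG load-shares over.
first_table = defaultdict(set)

def on_first_advertisement(vnf_id: str, vpnsid_to_proxies: dict) -> None:
    """Merge the proxy VPNSIDs advertised for each VM VPNSID into the first table."""
    for proxies in vpnsid_to_proxies.values():
        first_table[vnf_id].update(proxies)

# Example: VRF1 advertises its two VM VPNSIDs, each mapped to both proxy VPNSIDs.
on_first_advertisement("vnf-1", {
    "A8:1::B100": ["DE::B100", "DF::B100"],
    "A8:1::B101": ["DE::B100", "DF::B100"],
})
assert first_table["vnf-1"] == {"DE::B100", "DF::B100"}
```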
The second implementation mode comprises the following steps: the RR in the communication network locally stores in advance a correspondence relationship between the VPNSID and the proxy VPNSID, where the correspondence relationship includes VPNSIDs of a plurality of VMs and a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM, and a construction process of the correspondence relationship will be described in detail below and will not be described herein. For any VRF, RR obtains VPNSID of each VM in each VM connected with the VRF, and for the obtained VPNSID of any VM, RR can determine a plurality of proxy VPNSIDs corresponding to the VPNSID of the VM based on the corresponding relation between the VPNSID and the proxy VPNSID. After acquiring the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in each VM connected to each VRF, the RR may send a second notification message to the CSG, where the second notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in each VM connected to each VRF. And the CSG receives a second notification message sent by the RR, and can acquire the plurality of proxy VPNSIDs according to the second notification message.
The RR is specifically RR2 in fig. 4. In addition, the RR sends the second notification message to the CSG through the MP-BGP/EVPN.
In addition, for any VRF, the RR may acquire the VPNSID of each VM connected to the VRF in the following manner: the RR sends a VPNSID acquisition request to the VRF, where the VPNSID acquisition request instructs the VRF to send the VPNSIDs of the VMs connected to it to the RR. That is, in the second implementation, the VRF passively sends the VPNSIDs of the VMs connected to it to the RR.
In addition, in a second implementation, the correspondence between the VPNSID stored locally by the RR and the proxy VPNSID may be configured directly on the RR by a manager. In a specific implementation manner, a manager may configure, on the RR, a plurality of proxy VPNSIDs corresponding to the VPNSID of any one of the plurality of VMs connected to the VRF through a management system or a command line, so that the RR acquires the plurality of proxy VPNSIDs corresponding to the VPNSID of any one of the plurality of VMs connected to the VRF, thereby constructing a correspondence between the VPNSIDs and the proxy VPNSIDs.
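A minimal sketch of the RR side of this second implementation is shown below, assuming the correspondence is a simple in-memory mapping. The table contents, message shape, and function names are assumptions; the actual exchange would run over MP-BGP/EVPN as noted above.

```python
# Correspondence configured on the RR by the administrator (values hypothetical).
sid_to_proxies = {
    "A8:1::B100": ["DE::B100", "DF::B100"],
    "A8:1::B101": ["DE::B100", "DF::B100"],
}

def build_second_notification(vm_vpnsids_per_vrf: dict) -> dict:
    """Resolve each VM VPNSID collected from the VRFs to its proxy VPNSIDs."""
    return {
        vrf: {sid: sid_to_proxies.get(sid, []) for sid in vpnsids}
        for vrf, vpnsids in vm_vpnsids_per_vrf.items()
    }

# VPNSIDs obtained from VRF1 in response to the RR's acquisition request.
second_notification = build_second_notification({"VRF1": ["A8:1::B100", "A8:1::B101"]})
# The RR would then send this mapping to the CSG so that the CSG can build the first table.
```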
Step 504: for any data center routing node, the data center routing node acquires VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node, and the data center routing node adds the acquired VPNSIDs of the multiple VMs to a second forwarding table corresponding to the proxy VPNSID of the data center routing node.
The data center routing node may obtain a plurality of VPNSIDs corresponding to the proxy VPNSID of the data center routing node through the following two specific implementation manners:
the first implementation mode comprises the following steps: for any VRF, the VRF acquires a plurality of proxy VPNSIDs corresponding to the VPNSIDs of any one of a plurality of VMs connected with any VRF. And the VRF issues a third notification message to each data center routing node, wherein the third notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of any one of the VMs connected with the VRF. For any data center routing node, the data center routing node receives the third notification message sent by the VRF, and obtains the VPNSID of the VM corresponding to the proxy VPNSID of the data center routing node from the third notification message according to the proxy VPNSID of the data center routing node. After the data center routing node receives the third notification messages issued by all the VRFs, VPNSIDs of all VMs corresponding to the proxy VPNSID of the data center routing node can be determined according to all the third notification messages.
For the manner in which the VRF acquires the plurality of proxy VPNSIDs corresponding to the VPNSID of any one of the plurality of VMs connected to it, refer to the first implementation of step 503; details are not repeated here.
As can be seen from the communication network shown in fig. 2, the VRF and the data center routing nodes are located in the same domain. Therefore, the VRF issues the third notification message to each data center routing node through an Interior Gateway Protocol (IGP).
In the first implementation manner, the VRF actively reports, to the data center routing node, a plurality of proxy VPNSIDs configured corresponding to the VPNSID of any one of the plurality of VMs connected to the VRF.
The second implementation mode comprises the following steps: for any data center routing node, VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by a manager. For example, a manager may directly configure, on the data center routing node, VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node through a command line or a management system.
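For the advertisement-driven (first) implementation of step 504, the data center routing node only has to keep the VM VPNSIDs whose advertised proxy list contains its own proxy VPNSID, which the following sketch illustrates; the message shape and names are assumptions.

```python
MY_PROXY_VPNSID = "DE::B100"   # this node's proxy VPNSID (value from fig. 4)
second_table = set()           # VM VPNSIDs this node load-shares over

def on_third_advertisement(vpnsid_to_proxies: dict) -> None:
    """Keep only the VM VPNSIDs whose proxy list contains this node's proxy VPNSID."""
    for vpnsid, proxies in vpnsid_to_proxies.items():
        if MY_PROXY_VPNSID in proxies:
            second_table.add(vpnsid)

on_third_advertisement({
    "A8:1::B100": ["DE::B100", "DF::B100"],
    "A8:1::B101": ["DE::B100", "DF::B100"],
})
assert second_table == {"A8:1::B100", "A8:1::B101"}
```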
The two implementations of step 503 and step 504 described above may be used in combination, and when the first implementation of step 503 and the first implementation of step 504 are used to configure the first forwarding table and the second forwarding table, such a configuration may also be referred to as a fully dynamic configuration. In the full dynamic configuration mode, the VRF actively reports the VPNSID of the VM and the corresponding proxy VPNSID, so that the CSG and the data center routing node can acquire the VPNSID of the VM and the corresponding proxy VPNSID, and configure respective forwarding tables.
When the first forwarding table and the second forwarding table are configured through the first implementation manner in step 503 and the second implementation manner in step 504, such a configuration manner may also be referred to as a semi-dynamic configuration manner. In a semi-dynamic configuration mode, the VRF actively reports the VPNSID of the VM and the corresponding proxy VPNSID to the CSG so that the CSG configures a first forwarding table, and a manager directly configures the proxy VPNSID and the corresponding VPNSID of the VM at a data center routing node so that the data center routing node generates a second forwarding table.
When the first forwarding table and the second forwarding table are configured through the second implementation manner in step 503 and the second implementation manner in step 504, such a configuration manner may also be referred to as a static configuration manner. In the static configuration mode, the VRF does not actively report any information. Therefore, the RR needs to obtain the VPNSID of each VM connected to each VRF from the VRF, determine the proxy VPNSIDs corresponding to the VPNSID of each VM according to the locally stored correspondence between VPNSIDs and proxy VPNSIDs, and advertise them to the CSG, which then generates the first forwarding table. Since the VRF does not actively report any information, the VPNSIDs of the VMs corresponding to each data center routing node's proxy VPNSID can only be configured manually on that node by an administrator.
Through the different implementation modes, the flexibility of configuring the first forwarding table and the second forwarding table is improved.
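The three configuration modes can be summarized as a choice of which source populates each forwarding table. The sketch below is only a restatement of that choice in code; the mode names follow this description and are not a product configuration interface.

```python
from enum import Enum

class ConfigMode(Enum):
    FULLY_DYNAMIC = "fully dynamic"   # VRF advertises to both the CSG and the routing nodes
    SEMI_DYNAMIC = "semi-dynamic"     # VRF advertises to the CSG; second table configured manually
    STATIC = "static"                 # RR notifies the CSG; second table configured manually

def table_sources(mode: ConfigMode) -> dict:
    """Return which entity supplies each forwarding table under a given mode."""
    if mode is ConfigMode.FULLY_DYNAMIC:
        return {"first table": "VRF advertisement", "second table": "VRF advertisement"}
    if mode is ConfigMode.SEMI_DYNAMIC:
        return {"first table": "VRF advertisement", "second table": "manual configuration"}
    return {"first table": "RR notification", "second table": "manual configuration"}
```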
Fig. 6 is a schematic structural diagram of a network device according to an embodiment of the present application. The network device 600 may be any node in the communication networks of the embodiments shown in fig. 1 to 5, for example, a CSG, a data center routing node, or a VRF. The network device 600 may be a switch, a router, or another network device that forwards packets. In this embodiment, the network device 600 includes: a main control board 610, an interface board 630, and an interface board 640. When there are multiple interface boards, the device may further include a switch network board (not shown in the figure) for data exchange between the interface boards (an interface board is also called a line card or a service board).
The main control board 610 is used to complete functions such as system management, device maintenance, and protocol processing. The interface boards 630 and 640 are used to provide various service interfaces (e.g., POS interface, GE interface, ATM interface, etc.), and implement forwarding of messages. The main control board 610 mainly has 3 types of functional units: the system comprises a system management control unit, a system clock unit and a system maintenance unit. The main control board 610, the interface board 630 and the interface board 640 are connected to the system backplane through a system bus to realize intercommunication. The interface board 630 includes one or more processors 631 thereon. The processor 631 is used for controlling and managing the interface board, communicating with the central processor on the main control board, and forwarding and processing the message. The memory 632 of the interface board 630 is used for storing a forwarding table entry, and the processor 631 forwards the message by looking up the forwarding table entry stored in the memory 632.
The interface board 630 includes one or more network interfaces 633 for receiving messages sent by other devices, and sending the messages according to the instructions of the processor 631. The specific implementation process can refer to steps 301, 303, 304 and 306 in the embodiment shown in fig. 3. And are not described in detail herein.
The processor 631 is configured to execute the processing steps and functions of any node in the communication network described in the embodiments shown in fig. 1 to 5, and specifically refer to step 302 (processing when serving as a CSG) or step 305 (processing when serving as a data center routing node) in the embodiment shown in fig. 3, step 501 (processing when serving as a VRF), step 502 (processing when serving as a data center routing node), step 503 (processing when serving as a CSG), and step 504 (processing when serving as a data center routing node) in the embodiment shown in fig. 5. And are not described in detail herein.
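Conceptually, the fast path on the interface board reduces to looking up the packet's destination address in the forwarding entries held in the memory 632 and picking an egress interface, as the following abstract sketch shows; the port names and entries are hypothetical and do not represent line-card microcode.

```python
from typing import Optional

# Abstract model of the lookup that processor 631 performs against the forwarding
# entries held in memory 632; the entries and port names are hypothetical.
forwarding_entries = {
    "DE::B100": "port-1",   # towards data center routing node 1
    "DF::B100": "port-2",   # towards data center routing node 2
}

def lookup_egress(dest_address: str) -> Optional[str]:
    """Return the egress interface for a destination address, or None if unknown."""
    return forwarding_entries.get(dest_address)
```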
It can be understood that, as shown in fig. 6, this embodiment includes multiple interface boards and adopts a distributed forwarding mechanism; under this mechanism, operations on the interface board 640 are basically similar to those on the interface board 630, and are not described again for brevity. In addition, it is understood that the processors 631 and/or 641 in fig. 6 may be dedicated hardware or chips, such as a network processor or an application-specific integrated circuit (ASIC), to implement the above functions, which is a manner of processing the forwarding plane with dedicated hardware or chips. A specific implementation using such dedicated hardware or chips (a network processor) can be found in the embodiment shown in fig. 7 below. In other embodiments, the processors 631 and/or 641 may also be general-purpose processors, such as general-purpose CPUs, to implement the functions described above.
In addition, it should be noted that there may be one or more main control boards; when there are multiple main control boards, they may include an active main control board and a standby main control board. There may be one or more interface boards, and the more interface boards the device has, the stronger its data processing capability. When there are multiple interface boards, they can communicate with one another through one or more switch network boards, and when there are multiple switch network boards, load sharing and redundancy backup can be implemented jointly. Under the centralized forwarding architecture, the device does not need a switch network board, and the interface board undertakes the service data processing of the whole system. Under the distributed forwarding architecture, the device comprises multiple interface boards, and data exchange among them is implemented through the switch network board, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device in the distributed architecture is greater than that of a device in the centralized architecture. Which architecture is adopted depends on the specific networking deployment scenario, and is not limited here.
In particular embodiments, the memory 632 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 632 may be separate and coupled to the processor 631 via a communication bus. The memory 632 may also be integrated with the processor 631.
The memory 632 is used to store program code, and the processor 631 controls the execution of the program code to perform the packet forwarding method provided by the above embodiments. The processor 631 is configured to execute the program code stored in the memory 632. The program code may include one or more software modules. The one or more software modules may be the software modules provided in either of the embodiments of fig. 9 and fig. 10 below.
In an embodiment, the network interface 633 may be any device using a transceiver or the like, and is used to communicate with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), and the like.
Fig. 7 is a schematic structural diagram of another network device provided in an embodiment of the present application, where the network device 700 may be any node in the communication networks in the embodiments shown in fig. 1 to 5, for example, may be a CSG, a data center routing node, a VRF, and the like. The network device 700 may be a switch, router, or other network device that forwards messages. In this embodiment, the network device 700 includes: main control board 710, interface board 730, switch board 720 and interface board 740. The main control board 710 is used to complete functions of system management, device maintenance, protocol processing, and the like. The switch network board 720 is used to complete data exchange between interface boards (interface boards are also called line cards or service boards). The interface boards 730 and 740 are used to provide various service interfaces (e.g., POS interface, GE interface, ATM interface, etc.) and implement forwarding of data packets. The control plane is composed of the control units of the main control board 710, the control units on the interface boards 730 and 740, and the like. The main control board 710 mainly has 3 types of functional units: the system comprises a system management control unit, a system clock unit and a system maintenance unit. The main control board 710, the interface boards 730 and 740, and the switch board 720 are connected to the system backplane through the system bus for intercommunication. The central processor 731 on the interface board 730 is used for controlling and managing the interface board and communicating with the central processor on the main control board. The forwarding table entry storage 734 on the interface board 730 is used for storing a forwarding table entry, and the network processor 732 performs forwarding of the packet by looking up the forwarding table entry stored in the forwarding table entry storage 734.
The physical interface card 733 of the interface board 730 is configured to receive a packet. The specific implementation process can refer to steps 301 and 304 in the embodiment shown in fig. 3. And are not described in detail herein.
The network processor 732 is configured to execute the processing steps and functions of any node described in the embodiments shown in fig. 1 to 5, and specifically refer to step 302 (processing when serving as a CSG) or step 305 (processing when serving as a data center routing node) in the embodiment shown in fig. 3, step 501 (processing when serving as a VRF), step 502 (processing when serving as a data center routing node), step 503 (processing when serving as a CSG), and step 504 (processing when serving as a data center routing node) in the embodiment shown in fig. 5. And are not described in detail herein.
The processed message is then sent to other devices via the physical interface card 733. The specific implementation process can refer to the steps 303 and 306 in the embodiment shown in fig. 3. And are not described in detail herein.
It can be understood that, as shown in fig. 7, this embodiment includes multiple interface boards and adopts a distributed forwarding mechanism; under this mechanism, operations on the interface board 740 are basically similar to those on the interface board 730, and are not described again for brevity. In addition, as described above, the functions of the network processors 732 and 742 in fig. 7 may instead be implemented by an application-specific integrated circuit (ASIC).
In addition, it should be noted that there may be one or more main control boards; when there are multiple main control boards, they may include an active main control board and a standby main control board. There may be one or more interface boards, and the more interface boards the device has, the stronger its data processing capability. An interface board may also carry one or more physical interface cards. There may be one or more switch network boards, and when there are multiple switch network boards, load sharing and redundancy backup can be implemented jointly. Under the centralized forwarding architecture, the device does not need a switch network board, and the interface board undertakes the service data processing of the whole system. Under the distributed forwarding architecture, the device has at least one switch network board, and data exchange among the multiple interface boards is implemented through the switch network board, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device in the distributed architecture is greater than that of a device in the centralized architecture. Which architecture is adopted depends on the specific networking deployment scenario, and is not limited here.
Fig. 8 is a schematic structural diagram of an interface board 800 in the network device shown in fig. 7 according to an embodiment of the present application, where the network device where the interface board 800 is located may be any node in the communication network in the embodiments shown in fig. 1 to 5, and may be, for example, a CSG, a data center routing node, a VRF, and the like. The interface board 800 may include a Physical Interface Card (PIC) 830, a Network Processor (NP) 810, and a traffic management module (traffic management) 820.
PIC (physical interface card): used to implement the interconnection function of the physical layer, so that raw traffic enters the interface board of the network device and processed packets are sent out from the PIC card.
The network processor NP 810 is used to implement forwarding processing of packets. Specifically, uplink packet processing includes processing at the packet ingress interface and forwarding table lookup (which involves the first forwarding table or the second forwarding table described in the above embodiments); downlink packet processing includes forwarding table lookup (likewise involving the first forwarding table or the second forwarding table described in the above embodiments), and so on.
The traffic management module TM 820 is used to implement QoS, line-speed forwarding, large-capacity buffering, queue management, and other functions. Specifically, uplink traffic management includes uplink QoS processing (such as congestion management and queue scheduling) and slicing processing; downlink traffic management includes packet processing, multicast replication, and downlink QoS processing (such as congestion management and queue scheduling).
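As a toy illustration of the queue-scheduling part of the TM's work, the sketch below serves a high-priority queue before a best-effort queue; real TM hardware additionally performs shaping, congestion management, hierarchical QoS, and so on, so this is only a conceptual aid.

```python
from collections import deque
from typing import Optional

high_priority: deque = deque()   # e.g. latency-sensitive traffic
best_effort: deque = deque()

def enqueue(packet: dict) -> None:
    """Classify into one of two queues based on a (hypothetical) priority field."""
    queue = high_priority if packet.get("priority") == "high" else best_effort
    queue.append(packet)

def dequeue() -> Optional[dict]:
    """Strict-priority scheduling: always serve the high-priority queue first."""
    if high_priority:
        return high_priority.popleft()
    if best_effort:
        return best_effort.popleft()
    return None
```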
It is understood that, in the case of a network device having a plurality of interface boards 800, the plurality of interface boards 800 can communicate with each other through the switching network 840.
It should be noted that fig. 8 only shows an exemplary processing flow or module inside the NP, the processing order of each module in the specific implementation is not limited thereto, and other modules or processing flows may be deployed as required in practical applications. The examples of the present application are not to be construed as limiting.
Fig. 9 is a schematic structural diagram of a CSG provided in an embodiment of the present application. The CSG is located in a communication network that further includes a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where each VRF is connected to one or more of the plurality of VMs. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
As shown in fig. 9, the CSG 900 includes:
the receiving module 901 is configured to receive a message, where the message carries an identifier of a target VNF. The detailed implementation refers to step 301 in the embodiment of fig. 3.
A selecting module 902 is configured to select one proxy VPNSID from the plurality of proxy VPNSIDs included in the first forwarding table as the target proxy VPNSID. The detailed implementation refers to step 302 in the embodiment of fig. 3.
A forwarding module 903, configured to forward the packet using the target proxy VPNSID as a destination address, so as to forward the packet to the data center routing node indicated by the target proxy VPNSID, and to instruct the data center routing node indicated by the target proxy VPNSID to forward the packet according to the multiple VPNSIDs included in the second forwarding table. The first forwarding table is a forwarding table corresponding to the identifier of the target VNF, and the plurality of proxy VPNSIDs included in the first forwarding table refer to proxy VPNSIDs configured corresponding to VPNSIDs of the plurality of VMs. The plurality of VPNSIDs included in the second forwarding table refer to VPNSIDs of VMs corresponding to the target proxy VPNSID among VPNSIDs of the plurality of VMs. The detailed implementation refers to step 303 in the embodiment of fig. 3.
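Putting the three modules together, the CSG's per-packet behavior can be sketched as follows. The flow-hash selection and the field names (vnf_id, src, dst, dest_address) are assumptions made for illustration; the embodiment only requires that one proxy VPNSID be selected from the first forwarding table as the target proxy VPNSID.

```python
import hashlib

def csg_forward(packet: dict, first_table: dict) -> dict:
    """Pick a target proxy VPNSID for the packet's VNF and use it as destination."""
    proxies = first_table[packet["vnf_id"]]          # list of proxy VPNSIDs
    # Flow-based selection (an assumption): hash the flow key so that packets of
    # one flow keep going to the same data center routing node.
    flow_key = f'{packet["src"]}-{packet["dst"]}'.encode()
    index = int(hashlib.sha256(flow_key).hexdigest(), 16) % len(proxies)
    packet["dest_address"] = proxies[index]
    return packet

# Example usage with the values from fig. 4.
pkt = {"vnf_id": "vnf-1", "src": "198.51.100.7", "dst": "203.0.113.9"}
csg_forward(pkt, {"vnf-1": ["DE::B100", "DF::B100"]})
```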
Optionally, the CSG further includes an adding module, configured to obtain a proxy VPNSID corresponding to the VPNSID of the multiple VMs, and add the obtained proxy VPNSID to the first forwarding table.
Optionally, the adding module is specifically configured to: receiving a first notification message sent by any one of the VRFs, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in each VM connected with the VRF.
Optionally, the communication network further comprises an RR. At this time, the adding module is specifically configured to: and receiving a second notification message sent by the RR, wherein the second notification message carries the proxy VPNSID corresponding to the VPNSID of the plurality of VMs.
For technical effects of the modules included in the CSG provided in the fifth aspect, refer to the packet forwarding method provided in the first aspect; details are not described here.
In this embodiment, in order to avoid being limited by the maximum load sharing route number of the CSG, one proxy VPNSID may be configured for each data center routing node. For the VPNSID of any VM, a plurality of proxy VPNSIDs are correspondingly configured for the VPNSID of the VM. Therefore, the proxy VPNSID can be used to replace the original VPNSID of the VM in the local forwarding table of the CSG, so that the CSG only needs to be responsible for sharing the load to each data center routing node when sharing the load, and each data center routing node completes the actual load sharing. The maximum load sharing path number of the data center routing node can be up to 128, so that the message forwarding method provided by the embodiment of the application is equivalent to increasing the final load sharing path number of the CSG, thereby improving the load sharing efficiency.
It should be noted that: in the foregoing embodiment, when forwarding a packet, the CSG is described by only dividing the functional modules, and in practical applications, the function allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the CSG and the message forwarding method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
Fig. 10 is a schematic structural diagram of any data center routing node in a plurality of data center routing nodes in a communication network according to an embodiment of the present application. The communication network further includes a CSG, a plurality of VRFs, each VRF having one or more of the plurality of VMs connected thereto, and a plurality of VMs for executing the target VNF. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
As shown in fig. 10, the data center routing node 1000 includes:
the receiving module 1001 is configured to receive a message sent by the CSG, where the message carries the target proxy VPNSID. The detailed implementation refers to step 304 in the embodiment of fig. 3.
The selecting module 1002 is configured to select one VPNSID from VPNSIDs of multiple VMs included in the second forwarding table when the target proxy VPNSID carried in the packet is a proxy VPNSID of any data center routing node. The detailed implementation refers to step 305 in the embodiment of fig. 3.
A forwarding module 1003, configured to forward the packet using the selected VPNSID as a destination address. The multiple VPNSIDs included in the second forwarding table refer to VPNSIDs of multiple VMs corresponding to a proxy VPNSID of the data center routing node. The detailed implementation refers to step 306 in the embodiment of fig. 3.
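The second load-sharing stage at the data center routing node can be sketched in the same spirit; again, the flow-hash selection and field names are assumptions, and the second forwarding table is modeled as a plain list of VM VPNSIDs (the embodiment allows up to 128 such paths).

```python
import hashlib

MY_PROXY_VPNSID = "DE::B100"   # hypothetical value taken from fig. 4

def dc_node_forward(packet: dict, second_table: list) -> dict:
    """If the packet targets this node's proxy VPNSID, pick one VM VPNSID from
    the second forwarding table (up to 128 entries) as the new destination."""
    if packet["dest_address"] != MY_PROXY_VPNSID:
        return packet                                  # not addressed to this node
    flow_key = f'{packet["src"]}-{packet["dst"]}'.encode()
    index = int(hashlib.sha256(flow_key).hexdigest(), 16) % len(second_table)
    packet["dest_address"] = second_table[index]
    return packet

# Example usage continuing from the CSG stage above.
pkt = {"src": "198.51.100.7", "dst": "203.0.113.9", "dest_address": "DE::B100"}
dc_node_forward(pkt, ["A8:1::B100", "A8:1::B101"])
```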
Optionally, the data center routing node further includes:
and the adding module is used for acquiring the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node and adding the acquired VPNSIDs of the VMs to the second forwarding table.
Optionally, the adding module is specifically configured to: receive a third notification message sent by any VRF in the plurality of VRFs, where the third notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in each VM connected with the VRF; and acquire, from the third notification message according to the proxy VPNSID of the data center routing node, the VPNSID of the VM corresponding to the proxy VPNSID of the data center routing node.
Optionally, VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by an administrator.
In this embodiment, in order to avoid being limited by the maximum load sharing route number of the CSG, one proxy VPNSID may be configured for each data center routing node. For the VPNSID of any VM, a plurality of proxy VPNSIDs are correspondingly configured for the VPNSID of the VM. Therefore, the proxy VPNSID can be used to replace the original VPNSID of the VM in the local forwarding table of the CSG, so that the CSG only needs to be responsible for sharing the load to each data center routing node when sharing the load, and each data center routing node completes the actual load sharing. The maximum load sharing path number of the data center routing node can be up to 128, so that the message forwarding method provided by the embodiment of the application is equivalent to increasing the final load sharing path number of the CSG, thereby improving the load sharing efficiency.
It should be noted that: in the data center routing node provided in the foregoing embodiment, when forwarding a packet, only the division of each functional module is used for illustration, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, an internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the data center routing node and the message forwarding method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
In addition, an embodiment of the present application further provides any one of a plurality of VRFs in a communication network, where the communication network further includes a CSG, a plurality of data center routing nodes, and a plurality of VMs used for executing a target VNF, and each of the plurality of VRFs is connected to one or more of the plurality of VMs. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
The VRF includes:
an obtaining module, configured to obtain a plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the VRF.
And the issuing module is used for issuing a first notification message to the CSG, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in the plurality of VMs connected with the VRF.
Optionally, the issuing module is further configured to issue a third notification message to the multiple data center routing nodes, where the third notification message carries multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
In this embodiment of the application, the VRF may actively report, to the CSG, a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in each VM connected to the VRF, so that the CSG constructs the first forwarding table, and efficiency of constructing the first forwarding table by the CSG is improved.
It should be noted that: in the VRF provided in the foregoing embodiment, when forwarding a packet, only the division of each functional module is described as an example, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the VRF and the message forwarding method provided by the above embodiments belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiments and will not be described herein again.
In addition, the embodiment of the application also provides an RR in the communication network. The communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, each VRF having one or more of the plurality of VMs connected thereto, and a plurality of VMs for executing the target VNF. Each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID.
The RR includes:
the obtaining module is used for obtaining VPNSID of each VM in a plurality of VMs connected with any VRF in a plurality of VRFs; for the obtained VPNSID of any VM, the RR determines, based on the correspondence between the locally stored VPNSID and the proxy VPNSID, a proxy VPNSID corresponding to the VPNSID of the VM, and obtains a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in a plurality of VMs connected to any VRF;
and the issuing module is used for sending a second notification message to the CSG, wherein the second notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in a plurality of VMs connected with each VRF in the plurality of VRFs.
Optionally, the corresponding relationship between the VPNSID locally stored by the RR and the proxy VPNSID is configured on the RR by a manager, so that the flexibility of constructing the first forwarding table by the CSG is improved.
When the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs are reported to the CSG by the RR, and the CSG constructs the first forwarding table, the RR needs to first acquire the multiple proxy VPNSIDs corresponding to the VPNSIDs of each VM in the multiple VMs, and then send the second notification message to the CSG, so that the CSG constructs the first forwarding table, and the flexibility of the CSG in constructing the first forwarding table is improved.
It should be noted that: in the RR provided in the foregoing embodiment, only the division of each function module is used for illustration when forwarding a packet, and in practical applications, the function allocation may be completed by different function modules as needed, that is, an internal structure of a device is divided into different function modules to complete all or part of the functions described above. In addition, the RR provided by the above embodiment and the message forwarding method embodiment belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiment and is not described herein again.
In addition, the embodiment of the present application further provides a packet forwarding system, where the system includes a CSG, a plurality of data center routing nodes, a plurality of virtual routing forwarding VRFs, and a plurality of VMs used for executing a target virtual network function VNF, where each VRF in the plurality of VRFs is connected to one or more of the plurality of VMs, each VM in the plurality of VMs is configured with a virtual private network segment identifier VPNSID, and each data center routing node in the plurality of data center routing nodes is configured with an agent VPNSID;
the CSG is used for acquiring proxy VPNSID corresponding to the VPNSID of the plurality of VMs and adding the acquired proxy VPNSID into the first forwarding table;
and any one of the data center routing nodes is used for acquiring the VPNSIDs of the VMs corresponding to the proxy VPNSID of the data center routing node and adding the acquired VPNSIDs of the VMs to the second forwarding table.
Optionally, the CSG is specifically configured to: and receiving a first notification message sent by any VRF of the VRFs, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in each VM connected with the VRF.
Optionally, the CSG is specifically configured to: and the CSG receives a second notification message sent by the RR, wherein the second notification message carries the proxy VPNSID corresponding to the VPNSIDs of the VMs.
Optionally, the data center routing node is configured to receive a third notification message sent by any one of the multiple VRFs, where the third notification message carries multiple proxy VPNSIDs corresponding to the VPNSID of each VM in each VM connected to the any VRF; and acquiring the VPNSID of the VM corresponding to the proxy VPNSID of the data center routing node from the third notification message according to the proxy VPNSID of the data center routing node.
Optionally, VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by an administrator.
The functions of the nodes in the message forwarding system have been described in detail in the foregoing embodiments, and are not described here.
Fig. 11 is a schematic structural diagram of a network device 1100 according to an embodiment of the present application. Any node in the communication networks in the embodiments of fig. 1 to fig. 5, such as a CSG, a data center routing node, a VRF, and the like, may be implemented by the network device 1100 shown in fig. 11, in which case, the network device 1100 may be a switch, a router, or other network devices that forward a packet. In addition, the network controller in the embodiments of fig. 1 to fig. 5 may also be implemented by the network device 1100 shown in fig. 11, and at this time, specific functions of the network device 1100 may refer to a specific implementation manner of the network controller in any one of the embodiments of fig. 1 to fig. 5, which is not described herein again. Referring to fig. 11, the device includes at least one processor 1101, a communication bus 1102, memory 1103, and at least one communication interface 1104.
The processor 1101 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the present disclosure.
Communication bus 1102 may include a path that transfers information between the aforementioned components.
The memory 1103 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1103 may be separate and coupled to the processor 1101 through the communication bus 1102. The memory 1103 may also be integrated with the processor 1101.
The memory 1103 is used to store program code, and the processor 1101 controls the execution of the program code to perform the packet forwarding method provided by any one of the above embodiments. The processor 1101 is configured to execute the program code stored in the memory 1103. The program code may include one or more software modules, through which, together with the processor 1101, any node in the communication networks of the embodiments provided in fig. 1 to 5 can implement its functions. The one or more software modules may be the software modules provided in either of the embodiments of fig. 9 and fig. 10.
The communication interface 1104 may be any apparatus such as a transceiver, and is used to communicate with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
In particular implementations, a network device may include multiple processors, such as processor 1101 and processor 1105 as shown in FIG. 11, for one embodiment. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above-mentioned embodiments are provided not to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (21)

1. A message forwarding method is applied to a base station service gateway (CSG) in a communication network, wherein the communication network further comprises a plurality of data center routing nodes, a plurality of Virtual Routing Forwarding (VRFs) and a plurality of VMs (virtual machines) for executing a target Virtual Network Function (VNF), and each VRF in the VRFs is connected with one or more VMs; each VM of the plurality of VMs is configured with a Virtual Private Network Segment Identifier (VPNSID), and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
the method comprises the following steps:
the CSG receives a message, wherein the message carries the identifier of the target VNF;
the CSG selects one proxy VPNSID from a plurality of proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID, the first forwarding table is a forwarding table corresponding to the identifier of the target VNF, and the plurality of proxy VPNSIDs included in the first forwarding table refer to the proxy VPNSIDs configured corresponding to the VPNSIDs of the plurality of VMs;
and the CSG forwards the message by taking the target proxy VPNSID as a destination address so as to forward the message to a data center routing node indicated by the target proxy VPNSID, and the CSG is used for indicating the data center routing node indicated by the target proxy VPNSID to forward the message according to a plurality of VPNSIDs included by a second forwarding table, wherein the plurality of VPNSIDs included by the second forwarding table refer to the VPNSIDs of the VMs corresponding to the target proxy VPNSID in the VPNSIDs of the VMs.
2. The method of claim 1, wherein the method further comprises:
the CSG acquires proxy VPNSIDs corresponding to the VPNSIDs of the VMs, and adds the acquired proxy VPNSIDs to the first forwarding table.
3. The method of claim 2, wherein the CSG obtaining a proxy VPNSID corresponding to VPNSIDs of the plurality of VMs comprises:
and the CSG receives a first notification message sent by any VRF in the VRFs, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in each VM connected with the VRF.
4. The method of claim 2, wherein the communication network further comprises a route reflector RR; the CSG obtaining a proxy VPNSID corresponding to the VPNSID of the plurality of VMs, including:
and the CSG receives a second notification message sent by the RR, wherein the second notification message carries the proxy VPNSID corresponding to the VPNSIDs of the VMs.
5. A message forwarding method is applied to any data center routing node in a plurality of data center routing nodes in a communication network, the communication network further comprises a CSG, a plurality of VRFs and a plurality of VMs for executing a target VNF, and each VRF is connected with one or more of the VMs; each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID;
the method comprises the following steps:
the routing node of any data center receives a message sent by the CSG, wherein the message carries a target proxy VPNSID;
under the condition that a target proxy VPNSID carried by the message is a proxy VPNSID of any data center routing node, the any data center routing node selects one VPNSID from VPNSIDs of multiple VMs in a second forwarding table, wherein the multiple VPNSIDs in the second forwarding table refer to VPNSIDs of the multiple VMs corresponding to the proxy VPNSID of any data center routing node;
and the any data center routing node forwards the message by taking the selected VPNSID as a destination address.
6. The method of claim 5, wherein the method further comprises:
and the any data center routing node acquires VPNSIDs of a plurality of VMs corresponding to the proxy VPNSIDs of the any data center routing node, and adds the acquired VPNSIDs of the VMs to the second forwarding table.
7. The method of claim 6, wherein the obtaining, by the any data center routing node, the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the any data center routing node comprises:
the any data center routing node receives a third notification message sent by any VRF in the multiple VRFs, wherein the third notification message carries multiple proxy VPNSIDs corresponding to the VPNSIDs of each VM in each VM connected with the any VRF;
and the any data center routing node acquires the VPNSID of the VM corresponding to the proxy VPNSID of the any data center routing node from the third notification message according to the proxy VPNSID of the any data center routing node.
8. The method of claim 6, wherein the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of any of the data center routing nodes are configured on the any of the data center routing nodes by an administrator.
9. A message forwarding method is applied to any VRF in a plurality of VRFs in a communication network, wherein the communication network further comprises a CSG, a plurality of data center routing nodes and a plurality of VMs for executing a target VNF, and each VRF in the plurality of VRFs is connected with one or more VMs; each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID;
the method comprises the following steps:
the method comprises the steps that any VRF obtains a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in a plurality of VMs connected with the VRF;
and the any VRF issues a first notification message to the CSG, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in a plurality of VMs connected with the any VRF.
10. The method of claim 9, wherein after the obtaining, by the any VRF, a plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the any VRF, further comprising:
and the any VRF issues a third notification message to the plurality of data center routing nodes, wherein the third notification message carries a plurality of agent VPNSIDs corresponding to the VPNSIDs of each VM in the plurality of VMs connected with the any VRF.
11. A message forwarding method is characterized in that the method is applied to RR in a communication network, the communication network further comprises a CSG, a plurality of data center routing nodes, a plurality of VRFs and a plurality of VMs for executing a target VNF, and each VRF is connected with one or more of the VMs; each of the plurality of VMs is configured with a VPNSID, and each of the plurality of data center routing nodes is configured with a proxy VPNSID;
the method comprises the following steps:
the RR acquires VPNSID of each VM in a plurality of VMs connected with any VRF in the plurality of VRFs;
for the obtained VPNSID of any VM, the RR determines, based on a correspondence between a locally stored VPNSID and a proxy VPNSID, a proxy VPNSID corresponding to the VPNSID of any VM, and obtains a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in a plurality of VMs connected to any VRF;
and the RR sends a second notification message to the CSG, wherein the second notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSIDs of each VM in a plurality of VMs connected with each VRF in the plurality of VRFs.
12. The method of claim 11, wherein a correspondence between the VPNSID and a proxy VPNSID is configured on the RR by a manager.
13. A CSG in a communication network, the communication network further comprising a plurality of data center routing nodes, a plurality of virtual route forwarding VRFs, and a plurality of VMs to execute a target virtual network function VNF, each VRF of the plurality of VRFs having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
wherein the CSG comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of claims 1-4.
14. A data center routing node in a communication network, the communication network comprising a plurality of data center routing nodes, a CSG, a plurality of VRFs, a plurality of VMs to execute a target VNF, one or more of the plurality of VMs being connected to each VRF, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
wherein any of the plurality of data center routing nodes comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of claims 5-8.
15. A VRF in a communication network, the communication network comprising a plurality of VRFs, a CSG, a plurality of data center routing nodes, a plurality of VMs to execute a target VNF, one or more of the plurality of VMs connected to each VRF of the plurality of VRFs, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
wherein any VRF in the plurality of VRFs comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of claims 9-10.
16. An RR in a communication network, the communication network further comprising a CSG, a plurality of data center routing nodes, a plurality of VRFs, a plurality of VMs for executing a target VNF, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
wherein the RR comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any of claims 11-12.
17. A chip disposed in a CSG in a communication network, the communication network further comprising a plurality of data center routing nodes, a plurality of virtual route forwarding VRFs, and a plurality of VMs for executing a target virtual network function VNF, one or more of the plurality of VMs being connected to each VRF in the plurality of VRFs, one VPNSID being configured for each VM in the plurality of VMs, one proxy VPNSID being configured for each data center routing node in the plurality of data center routing nodes;
wherein the chip includes a processor and an interface circuit;
the interface circuit is used for receiving instructions and transmitting the instructions to the processor;
the processor is configured to perform the method of any one of claims 1-4.
18. A chip, the chip being disposed in any one of a plurality of data center routing nodes included in a communication network, the communication network further including a CSG, a plurality of VRFs, and a plurality of VMs for executing a target VNF, one or more of the plurality of VMs being connected to each VRF, each VM of the plurality of VMs being configured with a VPNSID, each data center routing node of the plurality of data center routing nodes being configured with a proxy VPNSID;
wherein the chip includes a processor and an interface circuit;
the interface circuit is configured to receive instructions and transmit the instructions to the processor;
the processor is configured to perform the method of any one of claims 5-8.
19. A chip disposed in any one of a plurality of VRFs in a communication network, the communication network further comprising a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, one or more of the plurality of VMs being connected to each VRF in the plurality of VRFs, one VPNSID being configured for each VM in the plurality of VMs, and one proxy VPNSID being configured for each data center routing node in the plurality of data center routing nodes;
wherein the chip includes a processor and an interface circuit;
the interface circuit is configured to receive instructions and transmit the instructions to the processor;
the processor is configured to perform the method of any one of claims 9-10.
20. A chip disposed in an RR of a communication network, the communication network further comprising a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
wherein the chip includes a processor and an interface circuit;
the interface circuit is configured to receive instructions and transmit the instructions to the processor;
the processor is configured to perform the method of any one of claims 11-12.
21. A message forwarding system, the system comprising a CSG, a plurality of data center routing nodes, a plurality of virtual routing and forwarding instances (VRFs), and a plurality of virtual machines (VMs) for executing a target virtual network function VNF, each VRF of the plurality of VRFs having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs being configured with a virtual private network segment identifier (VPNSID), and each data center routing node of the plurality of data center routing nodes being configured with a proxy VPNSID;
wherein the CSG is configured to acquire proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs and add the acquired proxy VPNSIDs to a first forwarding table; and
any one of the plurality of data center routing nodes is configured to acquire VPNSIDs of a plurality of VMs corresponding to the proxy VPNSID of the data center routing node, and add the acquired VPNSIDs of the VMs to a second forwarding table.
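To make the two-table arrangement of claim 21 easier to follow, the sketch below models a first forwarding table held by the CSG and a second forwarding table held by a data center routing node, with a hash of a flow key standing in for the load-sharing selection step. This is a minimal illustrative sketch only: the class names, SID values, and hashing scheme are assumptions made for the example and are not taken from the claims.

# Illustrative sketch (Python). All identifiers and SID values below are
# hypothetical; they only show how a CSG-level table of proxy VPNSIDs and a
# node-level table of VM VPNSIDs could be populated and consulted.
import hashlib

class Csg:
    def __init__(self):
        # First forwarding table: proxy VPNSIDs of the data center routing
        # nodes that can reach the target VNF.
        self.first_forwarding_table = []

    def add_proxy_vpnsid(self, proxy_vpnsid):
        if proxy_vpnsid not in self.first_forwarding_table:
            self.first_forwarding_table.append(proxy_vpnsid)

    def select_proxy_vpnsid(self, flow_key):
        # Hash-based load sharing across data center routing nodes.
        digest = int(hashlib.md5(flow_key.encode()).hexdigest(), 16)
        return self.first_forwarding_table[digest % len(self.first_forwarding_table)]

class DataCenterRoutingNode:
    def __init__(self, proxy_vpnsid):
        self.proxy_vpnsid = proxy_vpnsid
        # Second forwarding table: VM VPNSIDs reachable behind this node's
        # proxy VPNSID.
        self.second_forwarding_table = []

    def add_vm_vpnsid(self, vm_vpnsid):
        if vm_vpnsid not in self.second_forwarding_table:
            self.second_forwarding_table.append(vm_vpnsid)

    def select_vm_vpnsid(self, flow_key):
        # Hash-based load sharing across the VMs behind this node.
        digest = int(hashlib.md5(flow_key.encode()).hexdigest(), 16)
        return self.second_forwarding_table[digest % len(self.second_forwarding_table)]

# Example usage with hypothetical SID values.
csg = Csg()
node_a = DataCenterRoutingNode("A2::100")
node_b = DataCenterRoutingNode("A3::100")
for node in (node_a, node_b):
    csg.add_proxy_vpnsid(node.proxy_vpnsid)
node_a.add_vm_vpnsid("A2::1")
node_a.add_vm_vpnsid("A2::2")
node_b.add_vm_vpnsid("A3::1")

chosen_proxy = csg.select_proxy_vpnsid("flow-1")                  # CSG-level choice
node = node_a if chosen_proxy == node_a.proxy_vpnsid else node_b
chosen_vm = node.select_vm_vpnsid("flow-1")                       # node-level choice

In this sketch the CSG only chooses among proxy VPNSIDs, and each routing node independently chooses among the VM VPNSIDs behind it, reflecting the two-level split described in the claim.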
CN201911046986.5A 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip Active CN112751766B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911046986.5A CN112751766B (en) 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip
PCT/CN2020/124463 WO2021083228A1 (en) 2019-10-30 2020-10-28 Message forwarding method, device, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046986.5A CN112751766B (en) 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip

Publications (2)

Publication Number Publication Date
CN112751766A true CN112751766A (en) 2021-05-04
CN112751766B CN112751766B (en) 2023-07-11

Family

ID=75640813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046986.5A Active CN112751766B (en) 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip

Country Status (2)

Country Link
CN (1) CN112751766B (en)
WO (1) WO2021083228A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106034077B (en) * 2015-03-18 2019-06-28 华为技术有限公司 A kind of dynamic route collocating method, apparatus and system
CN106487695B (en) * 2015-08-25 2019-10-01 华为技术有限公司 A kind of data transmission method, virtual network managing device and data transmission system
CN106101023B (en) * 2016-05-24 2019-06-28 华为技术有限公司 A kind of VPLS message processing method and equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150381452A1 (en) * 2014-06-25 2015-12-31 Comcast Cable Communications, Llc Detecting virtual private network usage
US20170104679A1 (en) * 2015-10-09 2017-04-13 Futurewei Technologies, Inc. Service Function Bundling for Service Function Chains
CN107547339A (en) * 2017-06-14 2018-01-05 新华三技术有限公司 A kind of gateway media access control MAC address feedback method and device
CN109873760A (en) * 2017-12-01 2019-06-11 华为技术有限公司 Handle the method and apparatus of routing and the method and apparatus of data transmission
CN108718278A (en) * 2018-04-13 2018-10-30 新华三技术有限公司 A kind of message transmitting method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU JIANFENG et al.: "Improving BGP to Achieve Load Balancing in Large and Complex IP Networks", Telecommunications Science *
WANG YUWEI et al.: "A High-Performance Load Balancing Mechanism for Network Function Virtualization", Journal of Computer Research and Development *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334045A (en) * 2022-08-12 2022-11-11 迈普通信技术股份有限公司 Message forwarding method, device, gateway equipment and storage medium
CN115334045B (en) * 2022-08-12 2023-12-19 迈普通信技术股份有限公司 Message forwarding method, device, gateway equipment and storage medium

Also Published As

Publication number Publication date
CN112751766B (en) 2023-07-11
WO2021083228A1 (en) 2021-05-06

Similar Documents

Publication Publication Date Title
JP7417825B2 (en) slice-based routing
EP3399703B1 (en) Method for implementing load balancing, apparatus, and network system
US20190007322A1 (en) Virtual network device and related method
US20170264496A1 (en) Method and device for information processing
US10237179B2 (en) Systems and methods of inter data center out-bound traffic management
US20210289436A1 (en) Data Processing Method, Controller, and Forwarding Device
US10404773B2 (en) Distributed cluster processing system and packet processing method thereof
CN109660442B (en) Method and device for multicast replication in Overlay network
US11663052B2 (en) Adaptive application assignment to distributed cloud resources
EP4037265A1 (en) Packet forwarding method, apparatus, storage medium, and system
US9755909B2 (en) Method, device and system for controlling network path
US20230208751A1 (en) Packet forwarding method, device, and system
WO2018161795A1 (en) Routing priority configuration method, device, and controller
US20230412508A1 (en) Packet processing method and related apparatus
CN113254148A (en) Virtual machine migration method and cloud management platform
CN112751766B (en) Message forwarding method and system, related equipment and chip
EP4040745A1 (en) Service packet forwarding method, device, and computer storage medium
CN113595915A (en) Method for forwarding message and related equipment
JP7273130B2 (en) Communication method and device
WO2022037330A1 (en) Method and device for transmitting virtual private network segment identification (vpn sid), and network device
WO2022143572A1 (en) Message processing method and related device
WO2023138351A1 (en) Traffic forwarding method, packet sending method, message sending method, and apparatus
CN114301833A (en) Route notification method, route notification device, equipment and storage medium
CN116938800A (en) Transmission path determining method and device
CN115473765A (en) Message transmission method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant