WO2024065481A1 - A data processing method, apparatus, network device and storage medium
- Publication number: WO2024065481A1 (application PCT/CN2022/122841)
- Authority: WO (WIPO (PCT))
- Prior art keywords: message, scheduling queue, scheduling, data center, network device
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
Definitions
- the present application relates to the field of communication technology, and in particular to a data processing method, apparatus, network equipment and storage medium.
- RDMA: Remote Direct Memory Access.
- the purpose of the embodiments of the present application is to provide a data processing method, apparatus, network device and storage medium to reduce the packet loss problem caused by delay and jitter uncertainty, and improve the performance of RDMA services and other services that require high network transmission reliability when applied to wide area networks.
- the specific technical solutions are as follows:
- an embodiment of the present application provides a data processing method, which is applied to a network device in a wide area network, wherein the network device is located on a designated path from a gateway of a first data center to a gateway of a second data center, and the method includes:
- the first message of the target service type obtained is stored in a first scheduling queue corresponding to the deterministic flow to which the first message belongs, the source address of the first message is the address of the first host in the first data center, the destination address of the first message is the address of the second host in the second data center, and the forwarding path of the deterministic flow to which the first message belongs is the designated path;
- when the scheduling period of the first scheduling queue is reached, the message in the first scheduling queue is forwarded.
- an embodiment of the present application provides a data processing device, which is applied to a network device in a wide area network, wherein the network device is located on a designated path from a gateway of a first data center to a gateway of a second data center, and the device includes:
- a first storage unit used to store the acquired first message of the target service type into a first scheduling queue corresponding to the deterministic flow to which the first message belongs, the source address of the first message is the address of the first host in the first data center, the destination address of the first message is the address of the second host in the second data center, and the forwarding path of the deterministic flow to which the first message belongs is the designated path;
- a forwarding unit is used to forward the message in the first scheduling queue when the scheduling period of the first scheduling queue is reached.
- an embodiment of the present application provides a network device, comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores a computer program that can be executed by the processor, and the processor is prompted by the computer program to implement any of the above-mentioned steps of the data processing method.
- an embodiment of the present application provides a machine-readable storage medium, wherein the machine-readable storage medium stores a computer program, and when the computer program is executed by a processor, any of the above-mentioned steps of the data processing method is implemented.
- an embodiment of the present application provides a computer program, which, when executed by a processor, implements any of the above-mentioned data processing method steps.
- the network device in the wide area network stores the target service type messages that require high network transmission reliability, such as RDMA services, in the designated scheduling queue, and forwards the messages in the scheduling queue when the scheduling period of the scheduling queue is reached. Because the scheduling period of the designated scheduling queue is determined, forwarding the target service type messages that require high network transmission reliability in the scheduling period of the designated scheduling queue can ensure that the transmission delay and jitter of the target service type messages are determined, and the transmission of the target service type messages with a deterministic bounded delay is achieved, which reduces the packet loss problem caused by the uncertainty of delay and jitter, and improves the performance of services that require high network transmission reliability, such as RDMA services, when applied to the wide area network.
- FIG. 1 is a schematic diagram showing the throughput of RDMA service read and write operations;
- FIG. 2 is a schematic diagram of RDMA service transmission within a data center;
- FIG. 3 is a schematic diagram of RDMA service transmission between data centers;
- FIG. 4 is a schematic diagram of a network architecture provided in an embodiment of the present application;
- FIG. 5 is a schematic diagram of a first flow chart of a data processing method provided in an embodiment of the present application;
- FIG. 6 is a schematic diagram of a second flow chart of a data processing method provided in an embodiment of the present application;
- FIG. 7 is a detailed schematic diagram of step S62 in FIG. 6;
- FIG. 8 is a schematic diagram of a third flow chart of a data processing method provided in an embodiment of the present application;
- FIG. 9 is a detailed schematic diagram of step S63 in FIG. 6 and step S83 in FIG. 8;
- FIG. 10 is a schematic diagram of a fourth flow chart of a data processing method provided in an embodiment of the present application;
- FIG. 11 is a detailed schematic diagram of step S104 in FIG. 10;
- FIG. 12 is a schematic diagram of a network topology for actual testing provided by an embodiment of the present application;
- FIG. 13 is a schematic diagram of the structure of a data processing device provided in an embodiment of the present application;
- FIG. 14 is a schematic diagram of the structure of a network device provided in an embodiment of the present application.
- RDMA (Remote Direct Memory Access): the RDMA service allows user-mode applications to directly read or write remote memory without kernel intervention or memory copying.
- RoCE: RDMA over Converged Ethernet.
- SRv6 (Segment Routing over IPv6): segment routing based on the IPv6 forwarding plane, a new-generation IP (Internet Protocol) bearer protocol. SRv6 reuses the existing IPv6 forwarding technology and realizes network programmability through flexible IPv6 extension headers.
- RDMA services place very high demands on network transmission, which is mainly reflected in their sensitivity to network packet loss.
- Figure 1 shows the throughput of RDMA service read and write operations under different packet loss rates: the short dashed line represents the read throughput, and the long dashed line represents the write throughput.
- RDMA services are extremely sensitive to Ethernet packet loss. When the packet loss rate in Ethernet exceeds 10^-3, the effective throughput of the network drops sharply (to only about 75%), and when the packet loss rate reaches 1%, the throughput of the RDMA service drops to 0. Based on Figure 1, if the throughput of the RDMA service is to be unaffected, the packet loss rate needs to be kept below one in 100,000 (10^-5), and ideally there should be no packet loss at all.
- RDMA services are currently applied mainly within the data center, where they are transmitted over a lossless network combined with RoCE technology.
- the lossless network includes multiple spine nodes, such as spine1 and spine2 in Figure 2, and multiple leaf nodes, such as leaf1 and leaf2 in Figure 2.
- RDMA network cards are provided on both the sender and the receiver. After the sender copies the RDMA service message data of the application from the buffer to the RDMA network card, the network card driver drives the RDMA network card to send the message data through the lossless network; at the receiver, the network card driver drives the RDMA network card to receive the message data through the lossless network and copy it to the application's buffer.
- RDMA services place similar requirements on the WAN interconnecting data centers and edge computing, mainly in terms of packet loss and latency.
- Packet loss: there are many reasons for packet loss in the WAN. One of them is excessive jitter caused by congestion, which leads an RDMA service message to be misjudged as lost and triggers retransmission of the entire block of RDMA service message data.
- Latency: if the latency fluctuates, RDMA service performance also drops sharply.
- the WAN carries a large number of various types of services, including video, voice, file transfer, etc. When the WAN provides the best-effort network capability, different services affect each other.
- RDMA service messages are easily affected by various large-bandwidth services, resulting in large jitter and even packet loss, which in turn causes a sharp drop in RDMA service performance.
- an embodiment of the present application provides a data processing method, which is applied to a network device in a wide area network, and the network device is located on a designated path from the gateway of a first data center to the gateway of a second data center.
- the specific network architecture can be seen in Figure 4.
- the wide area network includes PE (Provider Edge, network side edge device) 1-2 and P (Provider, network side device) 1-6, PE1 is the gateway of data center 1, and PE2 is the gateway of data center 2.
- the first data center and the second data center are any two data centers connected by a wide area network, that is, the first data center and the second data center are located in different regions, such as data center 1 and data center 2 shown in Figure 4.
- the designated path is a pre-designated path for transmitting messages from the gateway of the first data center to the gateway of the second data center, such as the thick solid line with double arrows in Figure 4.
- the network device can be the gateway of the first data center, the gateway of the second data center, or an intermediate device, and the intermediate device is a network device other than the gateway of the first data center and the gateway of the second data center on the designated path, such as P1, P2, P5 and P6 in Figure 4.
- the network device can be a router, a switch, a firewall device, and other devices with communication functions.
- the network device in the wide area network stores the target service type messages that require high network transmission reliability, such as RDMA services, in the designated scheduling queue, and forwards the messages in the scheduling queue when the scheduling period of the scheduling queue is reached. Because the scheduling period of the designated scheduling queue is determined, forwarding the target service type messages that require high network transmission reliability in the scheduling period of the designated scheduling queue can ensure that the transmission delay and jitter of the target service type messages are determined, and the transmission of the target service type messages with a deterministic bounded delay is achieved, which reduces the packet loss problem caused by the uncertainty of delay and jitter, and improves the performance of services that require high network transmission reliability, such as RDMA services, when applied to the wide area network.
- Figure 5 is a first flow chart of a data processing method provided by an embodiment of the present application. The method is applied to a network device in a wide area network, and the network device is located on a designated path from a gateway of a first data center to a gateway of a second data center, and includes the following steps:
- Step S51 storing the acquired first message of the target service type into the first scheduling queue corresponding to the deterministic flow to which the first message belongs, the source address of the first message is the address of the first host in the first data center, the destination address of the first message is the address of the second host in the second data center, and the forwarding path of the deterministic flow to which the first message belongs is the designated path;
- Step S52 when the scheduling period of the first scheduling queue is reached, forward the message in the first scheduling queue.
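- The two steps above can be illustrated with a minimal Python sketch (the queue class, flow key, period value and path names are invented for illustration and are not part of the embodiment): a packet of the target service type is cached in the scheduling queue bound to its deterministic flow, and the whole queue is drained each time the scheduling period of that queue arrives.

```python
import time
from collections import deque

class SchedulingQueue:
    """Minimal sketch of a scheduling queue with a fixed scheduling period."""

    def __init__(self, period_s, designated_path):
        self.period_s = period_s                  # scheduling period of this queue
        self.designated_path = designated_path    # pre-designated forwarding path
        self.buffer = deque()
        self.next_deadline = time.monotonic() + period_s

    def store(self, packet):
        """Step S51: cache the packet until the scheduling period is reached."""
        self.buffer.append(packet)

    def forward_if_due(self, send_on_path):
        """Step S52: when the scheduling period is reached, forward the cached packets."""
        if time.monotonic() >= self.next_deadline:
            while self.buffer:
                send_on_path(self.buffer.popleft(), self.designated_path)
            self.next_deadline += self.period_s   # keeps the period, and thus the delay, bounded

# One scheduling queue per deterministic flow (the flow key is illustrative only).
flow_to_queue = {
    ("10.0.1.2", "10.0.2.3", "rdma"): SchedulingQueue(0.001, ["PE1", "P1", "P2", "PE2"]),
}
```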
- the network device in the wide area network stores the target service type messages that require high network transmission reliability, such as RDMA services, in the designated scheduling queue, and forwards the messages in the scheduling queue when the scheduling period of the scheduling queue is reached. Because the scheduling period of the designated scheduling queue is determined, forwarding the target service type messages that require high network transmission reliability in the scheduling period of the designated scheduling queue can ensure that the transmission delay and jitter of the target service type messages are determined, and the transmission of the target service type messages with a deterministic bounded delay is achieved, which reduces the packet loss problem caused by the uncertainty of delay and jitter, and improves the performance of services that require high network transmission reliability, such as RDMA services, when applied to the wide area network.
- the technical solution provided in the embodiment of the present application reduces the requirements for network devices and saves network deployment costs.
- the business flow of the target business type can be a business flow of the RDMA business, or a business flow of other business types that require high network transmission reliability.
- the business flow of the target business type can be one or more.
- a business flow of the target business type corresponds to a scheduling queue, and a scheduling queue can correspond to one or more business flows of the target business type.
- the scheduling queue is used to cache the received message.
- the first message is a message of the business flow of the target business type.
- the first message can be a message of the target business type sent by any host in the first data center to any host in the second data center.
- the business flow to which the first message belongs is a deterministic flow
- the forwarding path of the deterministic flow to which the first message belongs is the above-mentioned specified path, that is, the first message is forwarded along the above-mentioned specified path.
- any host in the first data center is the first host
- any host in the second data center is the second host.
- For different deterministic flows, the forwarding paths can be the same or different, and the corresponding scheduling queues can be the same or different.
- a lossless network can be used to transmit messages within the data center.
- the host in the first data center transmits messages to the gateway of the first data center through a lossless network.
- the gateway of the second data center transmits messages to the host in the second data center through a lossless network.
- the structure of the lossless network can refer to the lossless network structure in Figures 2 and 3 above.
- the lossless network can be configured using technologies such as ECN (Explicit Congestion Notification), PFC (Priority Flow Control), and DCBX (Data Center Bridging Exchange), without limitation.
- After acquiring the first message of the target service type, the network device performs mapping and deterministic processing scheduling, that is, it determines the scheduling queue corresponding to the service to which the first message belongs, which is the scheduling queue corresponding to the deterministic flow to which the first message belongs.
- the scheduling queue corresponding to the deterministic flow to which the message of the target service type belongs is referred to as the first scheduling queue, which does not serve as a limitation.
- the network device stores the first message in the first scheduling queue.
- each scheduling queue corresponding to the deterministic flow of the target service type is configured with a scheduling period, and the scheduling period represents the period for forwarding the messages in the corresponding scheduling queue.
- the scheduling period of the scheduling queue can be set by the controller according to the demand for transmitting messages, or can be configured by the network device itself according to the initial time of starting the scheduling queue, link delay and other information.
- The network device performs smooth shaping of messages on a periodic basis, for example by monitoring the scheduling period of each scheduling queue in real time, and forwarding the messages stored in the first scheduling queue, such as the above first message, when the scheduling period of the first scheduling queue is reached.
- the forwarding path of the deterministic flow to which the first message belongs is the specified path, therefore, when forwarding the first message stored in the first scheduling queue, the first message is actually forwarded along the specified path.
- The network device may be the gateway of the first data center, an intermediate device, or the gateway of the second data center, and the data processing method differs accordingly.
- When the network device is the gateway of the first data center, based on the embodiment shown in FIG. 5, an embodiment of the present application further provides a data processing method, as shown in FIG. 6, which may include the following steps:
- Step S61 receiving a second message
- the source address of the second message is the address of the first host in the first data center
- the destination address of the second message is the address of the second host in the second data center.
- Step S62 If the feature of the second message matches the message feature of the preconfigured target service type, a first message to be forwarded along a designated path is generated based on the second message.
- Step S63 store the first message into a first scheduling queue, where the first scheduling queue is a scheduling queue corresponding to the deterministic flow to which the first message belongs.
- Step S64 when the scheduling period of the first scheduling queue is reached, forward the message in the first scheduling queue. This is the same as the above step S52.
- In this way, the network device, i.e., the gateway of the first data center, identifies messages of the target business type and transmits only those messages deterministically, while messages of other business types are not transmitted deterministically, thereby reducing the requirements on the network device and saving its bandwidth resources.
- the second message may be a message sent by any host in the first data center to any host in the second data center. That is, any host in the first data center may use lossless network technology to send the original message to the gateway of the first data center.
- the gateway of the first data center receives the original message sent by any host in the first data center, and the original message received by the gateway of the first data center is the second message.
- any host in the first data center is taken as the first host
- any host in the second data center is taken as the second host for illustration, which does not serve as a limitation.
- the message features may include but are not limited to port numbers, quintuples, and other information.
- the gateway of the first data center extracts the features of the second message and identifies the message of the target business type, that is, matches the features of the second message with the pre-configured message features of the target business type; if the features of the second message match the pre-configured message features of the target business type, that is, the features of the second message are the same as the pre-configured message features of the target business type, it means that the second message is a message of the target business type, and a first message is generated based on the second message, and the first message is forwarded along the above-mentioned specified path.
- the gateway of the first data center can determine the first scheduling queue corresponding to the deterministic flow to which the first message belongs, that is, determine the first scheduling queue corresponding to the first message, and then store the first message in the first scheduling queue.
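- As an illustration only, the feature match in step S62 could be sketched as follows. The concrete features are left to configuration by the embodiment; the RoCEv2 UDP destination port 4791 is used here merely as an example feature, and all class and variable names are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FlowFeature:
    """One preconfigured message feature; None means 'match any value'."""
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    protocol: Optional[str] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None

    def matches(self, pkt: dict) -> bool:
        return all(expected is None or pkt.get(name) == expected
                   for name, expected in vars(self).items())

# Preconfigured features of the target service type (example: RoCEv2 over UDP/4791).
target_features = [FlowFeature(protocol="udp", dst_port=4791)]

second_message = {"src_ip": "10.0.1.2", "dst_ip": "10.0.2.3",
                  "protocol": "udp", "src_port": 50212, "dst_port": 4791}

# If any preconfigured feature matches, the second message is of the target type.
is_target_type = any(f.matches(second_message) for f in target_features)
```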
- the step of generating a first message to be forwarded along a designated path based on the second message in the above step S62 may include steps S71 - S72 .
- Step S71 obtaining a target segment routing list corresponding to a specified path, wherein the segment identifier of the network device in the target segment routing list corresponds to a first scheduling queue.
- the gateway of the first data center can preset and store a segment routing list (Segment List) corresponding to the deterministic flow to which the message of the target service type belongs, that is, a SID (Segment Identifier) list, and the SID list corresponds to the specified path.
- the SID information of the gateway of the first data center indicates the scheduling queue corresponding to the corresponding deterministic flow.
- After receiving the second message and determining that the second message is a message of the target service type, the gateway of the first data center obtains, from the pre-stored segment routing lists, the segment routing list corresponding to the deterministic flow to which the second message belongs, that is, the target segment routing list corresponding to the specified path.
- the gateway of the first data center pre-stores the correspondence between the message features and the segment routing list.
- the gateway of the first data center extracts the features of the second message, and then obtains the segment routing list corresponding to the features of the second message from the pre-stored correspondence between the message features and the segment routing list as the target segment routing list.
- Step S72 based on the target segment routing list, encapsulate the second message into a first message, where the first message is an SRv6 message.
- After obtaining the target segment routing list, the gateway of the first data center encapsulates the second message into the first message; for example, it encapsulates the second message with an SRv6 header that includes the target segment routing list, and the second message encapsulated with the SRv6 header is the first message.
- the gateway of the first data center encapsulates the message from the first data center into an SRv6 message.
- the header of the SRv6 message includes a segment routing list, and the segment routing list can uniquely indicate a path, namely the above-mentioned specified path, thereby ensuring the deterministic transmission of the first message.
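- A minimal sketch of the encapsulation in step S72 is given below. The container types, helper function and SID values are hypothetical, and only an SRH-carrying form of the SRv6 header is modelled; the embodiment also allows an SRH-less form, described later.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SRv6Header:
    src_sid: str                               # SID of the first data center's gateway
    dst_sid: str                               # SID of the second data center's gateway
    segment_list: List[str] = field(default_factory=list)  # target segment routing list

@dataclass
class SRv6Message:
    header: SRv6Header
    inner_packet: bytes                        # the original second message, unchanged

def encapsulate(second_message: bytes, target_segment_list: List[str]) -> SRv6Message:
    # Assumption for this sketch: the first and last SIDs of the target segment
    # routing list belong to the gateways of the first and second data centers.
    header = SRv6Header(src_sid=target_segment_list[0],
                        dst_sid=target_segment_list[-1],
                        segment_list=list(target_segment_list))
    return SRv6Message(header=header, inner_packet=second_message)

first_message = encapsulate(b"original RDMA payload",
                            ["fd00:1::e1", "fd00:2::a1", "fd00:3::e2"])
```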
- the controller can pre-calculate the latency and jitter of each path from the gateway of the first data center to the gateway of the second data center, determine a path that meets the transmission indicators of the deterministic flow to which the first message belongs, that is, the designated path, and then configure the routing in each intermediate device on the designated path, so that when the intermediate device queries the route to forward the first message after receiving the first message, it can forward the first message along the designated path.
- the gateway of the first data center can obtain the first SID of the gateway of the first data center and the second SID of the gateway of the second data center, and encapsulate the second message into the first message based on the first SID and the second SID, and the first message is an SRv6 message.
- the SRv6 header of the first message may include SRH (Segment Routing Header), or may not include SRH.
- If the SRv6 header of the first message includes an SRH, the SRH includes the first SID and the second SID, the source IP address of the IPv6 basic header in the SRv6 header of the first message is the first SID, and the destination IP address is the second SID.
- If the SRv6 header of the first message does not include an SRH, the source IP address of the IPv6 basic header in the SRv6 header of the first message is the first SID, and the destination IP address is the second SID.
- the gateway of the first data center can determine the first scheduling queue based on the source IP address of the IPv6 basic header, and then store the first message in the first scheduling queue.
- the gateway of the second data center can determine the first scheduling queue based on the destination IP address of the IPv6 basic header, and then store the first message in the first scheduling queue.
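- The SID-to-queue lookup described above could look like the following sketch (SID and queue names are invented): the gateway of the first data center keys the lookup on the source IP address of the IPv6 basic header (its own SID), and the gateway of the second data center keys it on the destination IP address (its own SID).

```python
from typing import Optional

# Per-device mapping from the device's own SID to the first scheduling queue.
sid_to_scheduling_queue = {
    "fd00:1::e1": "queue-1",   # SID of the first data center's gateway
    "fd00:3::e2": "queue-1",   # SID of the second data center's gateway
}

def select_queue_at_ingress(ipv6_src_address: str) -> Optional[str]:
    # The ingress gateway's SID is the source IP of the IPv6 basic header.
    return sid_to_scheduling_queue.get(ipv6_src_address)

def select_queue_at_egress(ipv6_dst_address: str) -> Optional[str]:
    # The egress gateway's SID is the destination IP of the IPv6 basic header.
    return sid_to_scheduling_queue.get(ipv6_dst_address)
```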
- the gateway of the first data center may also use other methods to generate the first message to be forwarded along the specified path, which is not limited to this.
- the embodiment of the present application further provides a data processing method, as shown in FIG. 8 , which may include the following steps:
- Step S81 receiving a second message
- the source address of the second message is the address of the first host in the first data center
- the destination address of the second message is the address of the second host in the second data center.
- Step S82 If the feature of the second message matches the message feature of the preconfigured target service type, a first message to be forwarded along a designated path is generated based on the second message.
- Step S83 store the first message into a first scheduling queue, where the first scheduling queue is a scheduling queue corresponding to the deterministic flow to which the first message belongs.
- Step S84 when the scheduling period of the first scheduling queue is reached, forward the message in the first scheduling queue.
- Steps S81-S84 are the same as the above-mentioned steps S61-S64.
- Step S85 If the feature of the second message does not match the pre-configured message feature of the target service type, a second scheduling queue is determined according to the reception status of messages of the target service type within a preset time period before the current moment.
- Step S86 storing the second message in the second scheduling queue.
- the gateway of the first data center can determine a scheduling queue in the scheduling queue of the network device as the second scheduling queue based on the reception of messages of the target business type within a preset time before the current moment, and store the second message in the second scheduling queue.
- Alternatively, the gateway of the first data center can determine the second scheduling queue corresponding to the characteristics of the second message based on a pre-stored correspondence between message characteristics and scheduling queues, and then store the second message in the second scheduling queue.
- Alternatively, the gateway of the first data center can determine the second scheduling queue corresponding to the time slot in which the second message is received based on a pre-stored correspondence between time slots and scheduling queues, and then store the second message in the second scheduling queue.
- the gateway of the first data center may also determine the second scheduling queue in other ways, such as randomly selecting a scheduling queue as the second scheduling queue, and this is not limited.
- the scheduling queue range of the second scheduling queue can be adjusted according to the actual situation.
- For example, the gateway of the first data center can detect whether a message of the target business type has been received within the preset time before the current moment. If no message of the target business type has been received within that preset time, it means that there is currently no message of the target business type in the wide area network, and the range of the second scheduling queue is all scheduling queues in the network device, that is, the second scheduling queue is determined from all scheduling queues in the network device. If a message of the target business type has been received within that preset time, it means that there are messages of the target business type in the current wide area network, and the range of the second scheduling queue is all scheduling queues in the network device except the first scheduling queue, that is, the second scheduling queue is determined from all scheduling queues in the network device except the first scheduling queue.
- In this way, the gateway of the first data center gives priority to ensuring the deterministic transmission of messages of the target business type.
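- A minimal sketch of this candidate-set rule follows (the window length and queue names are invented): whether the reserved first scheduling queue may also be used for non-target traffic depends on whether a target-type message was seen within the preset window.

```python
import time

PRESET_WINDOW_S = 0.5              # illustrative "preset time period"
last_target_packet_seen = 0.0      # updated whenever a target-type message is received

def candidate_queues(all_queues: set, first_queue: str, now: float) -> set:
    if now - last_target_packet_seen > PRESET_WINDOW_S:
        return all_queues                      # no target traffic recently: any queue may be used
    return all_queues - {first_queue}          # target traffic present: keep the reserved queue free

# The embodiment also allows picking the second queue at random; here the
# smallest name is chosen purely so the example is deterministic.
second_queue = sorted(candidate_queues({"q1", "q2", "q3"}, "q1", time.time()))[0]
```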
- a message forwarding mode of a non-target service type may be pre-configured in the gateway of the first data center.
- the gateway of the first data center may process the second message according to the pre-configured message forwarding mode of the non-target service type, and store the processed second message in the second scheduling queue.
- the above message forwarding mode can be set according to the actual needs of the non-target service type.
- the message forwarding mode may be to forward the original message.
- the gateway of the first data center directly stores the received second message into the second scheduling queue.
- the message forwarding mode may be forwarding an SRv6 message.
- the gateway of the first data center encapsulates the received second message into an SRv6 message, and stores the encapsulated second message in the second scheduling queue.
- the SRv6 header of the encapsulated second message may include an SRH or may not include an SRH.
- the encapsulated second message is an SRv6 message
- the first message is also an SRv6 message
- the information carried in the SRv6 header of the encapsulated second message and the first message will be different.
- the segment identifier of the gateway of the first data center in the segment routing list of the first message corresponds to a scheduling queue
- the segment identifier of the gateway of the first data center in the segment routing list of the second message does not have a corresponding scheduling queue. Based on this, it is convenient for the intermediate devices on the specified path and the gateway of the second data center to distinguish between messages of the target business type and messages of non-target business types.
- Step S87 forward the message in the second scheduling queue according to the forwarding policy corresponding to the second scheduling queue.
- the forwarding strategy corresponding to the second scheduling queue can be first-in-first-out, that is, the gateway of the first data center can extract the message in the second scheduling queue in a first-in-first-out manner, and then query the routing table, and forward the message in the second scheduling queue according to the query result.
- the gateway of the first data center stores two scheduling queues, namely queue 1 and queue 2.
- the gateway of the first data center first stores the message of the non-target business type in queue 1, and then stores the message of the non-target business type in queue 2.
- the gateway of the first data center first forwards the messages stored in queue 1, and then forwards the messages stored in queue 2.
- the forwarding strategy corresponding to the second scheduling queue can also be forwarding according to the scheduling period of the scheduling queue, that is, the gateway of the first data center can extract the message in the second scheduling queue when the scheduling period of the second scheduling queue is reached, and then query the routing table, and forward the message in the second scheduling queue according to the query result.
- the forwarding strategy corresponding to the second scheduling queue may also be in other forms, which is not limited to this.
- the gateway of the first data center queries the routing table according to the corresponding forwarding strategy to realize the forwarding of the message of the non-target business type without occupying fixed time and space resources, thereby ensuring the deterministic forwarding of the message of the target business type.
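- The first-in-first-out policy with a routing-table lookup could be sketched as follows; the routing table and lookup are simple placeholders rather than a real FIB implementation, and all names are invented.

```python
from collections import deque

routing_table = {"10.0.2.0/24": "eth1"}        # illustrative routing entry

def lookup_route(dst_ip: str) -> str:
    # Placeholder lookup; a real device would perform a longest-prefix match on its FIB.
    return routing_table.get("10.0.2.0/24", "eth0")

def forward_fifo(second_queue: deque, transmit) -> None:
    while second_queue:
        packet = second_queue.popleft()        # extract messages first-in-first-out
        transmit(packet, lookup_route(packet["dst_ip"]))  # forward per the query result
```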
- step S63 and step S83 may include steps S91 - S93 .
- Step S91 detect whether messages of other service types are stored in the first scheduling queue. If yes, execute step S92; if no, the first scheduling queue can be regarded as already cleared, and step S93 is executed.
- Step S92 clear the first scheduling queue.
- the first scheduling queue is a scheduling queue reserved for the target service type's message.
- the gateway of the first data center can store other service types' messages in the first scheduling queue, as described in the relevant description of steps S85-S86.
- the gateway of the first data center clears the first scheduling queue, such as discarding messages of other business types stored in the first scheduling queue, or transferring messages of other business types stored in the first scheduling queue to other scheduling queues.
- Step S93 store the first message in the cleared first scheduling queue.
- After the gateway of the first data center clears the first scheduling queue, the first message is stored in the cleared first scheduling queue, thereby preventing messages of other business types from occupying the resources for deterministic transmission and ensuring deterministic transmission of messages of the target business type.
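- Steps S91-S93 could be sketched as follows; the queue is modelled as a deque, and the optional migrate callback (an invented name) moves displaced messages to another queue instead of discarding them.

```python
from collections import deque

def store_in_first_queue(first_queue: deque, first_message, migrate=None) -> None:
    if first_queue:                            # step S91: other-type messages are present
        displaced = list(first_queue)
        first_queue.clear()                    # step S92: clear the first scheduling queue
        if migrate is not None:
            for pkt in displaced:              # transfer to another queue, or simply discard
                migrate(pkt)
    first_queue.append(first_message)          # step S93: store into the cleared queue
```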
- an embodiment of the present application further provides a data processing method, as shown in FIG. 10 , which may include the following steps:
- Step S101 receiving a second message
- the source address of the second message is the address of the first host in the first data center
- the destination address of the second message is the address of the second host in the second data center.
- Step S102 If the second message is an SRv6 message and the network device supports SRv6, a target segment routing list is obtained from the second message.
- Step S103 if the segment identifier of the network device in the target segment routing list corresponds to the first scheduling queue, the second message is used as the first message of the target service type, and the first message is stored in the first scheduling queue, which is the scheduling queue corresponding to the deterministic flow to which the first message belongs.
- Step S104 when the scheduling period of the first scheduling queue is reached, forward the message in the first scheduling queue. This is the same as the above step S52.
- the gateway of the first data center encapsulates the message from the first data center into an SRv6 message.
- the header of the SRv6 message includes a segment routing list, and the segment routing list can uniquely indicate a path, namely the above-mentioned specified path, thereby ensuring the deterministic transmission of the message of the target business type.
- the operations performed by the intermediate device and the gateway of the second data center are similar.
- the following description takes the intermediate device as the execution subject, which does not serve as a limitation.
- the second message received by the intermediate device or the gateway of the second data center is a message forwarded by the host in the first data center through the gateway of the first data center.
- the second message received by the intermediate device or the gateway of the second data center may be an original message of a non-target service type forwarded by the gateway of the first data center, or may be a first message of a target service type processed by the gateway of the first data center, or a message after encapsulation and processing of a message of a non-target service type.
- the message of the target service type is an SRv6 message.
- If the intermediate device detects that the second message is an SRv6 message and the intermediate device supports SRv6, it parses the second message and obtains the target segment routing list carried by the second message.
- the gateway of the first data center encapsulates the message of the target service type into an SRv6 message, and then transmits the SRv6 message.
- the way in which the gateway of the first data center obtains the SRv6 message of the target service type can be seen in the embodiment shown in Figure 7 above.
- If the second message is not an SRv6 message, the second message can be considered as a message of a non-target service type.
- the intermediate device determines the second scheduling queue based on the reception of messages of the target service type within the preset time before the current moment, and stores the second message in the second scheduling queue; forwards the message in the second scheduling queue according to the forwarding policy corresponding to the second scheduling queue.
- the specific implementation of storing the second message in the second scheduling queue and forwarding the message in the second scheduling queue can be found in the relevant description of the above steps S85-S87, which will not be repeated here.
- the intermediate device can preset and store the SID list corresponding to the deterministic flow to which the message of the target business type belongs.
- the SID information of the intermediate device indicates the scheduling queue corresponding to the corresponding deterministic flow.
- After the intermediate device obtains the target segment routing list, it determines, based on the pre-stored SID list, the scheduling queue corresponding to the segment identifier of the intermediate device in the target segment routing list, that is, the first scheduling queue corresponding to the deterministic flow to which the first message belongs. If the first scheduling queue is determined, it indicates that the second message is the first message of the target business type, and the first message is then stored in the first scheduling queue.
- the SRv6 message of the target service type may also carry indication information, which indicates that the segment identifier of the intermediate device in the target segment routing list corresponds to the first scheduling queue.
- the indication information can be added in the SID list or in any position of the payload of the message. In this case, after the intermediate device obtains the target segment routing list, it determines whether the first message carries the above indication information. If it does, the first message can be stored in the first scheduling queue indicated by the indication information.
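- The processing at an intermediate device (steps S101-S103 plus the non-target fallback) could be sketched as follows, with hypothetical message fields and callbacks:

```python
def process_at_intermediate(pkt: dict, my_sid_to_queue: dict,
                            enqueue_target, enqueue_other) -> None:
    # Only consider the SID list if the packet is SRv6 (a device without SRv6
    # support would skip straight to the non-target branch).
    segment_list = pkt.get("segment_list") if pkt.get("is_srv6") else None
    if segment_list:
        for sid in segment_list:
            queue = my_sid_to_queue.get(sid)
            if queue is not None:              # this device's SID is bound to a queue
                enqueue_target(queue, pkt)     # treat as the first message of the target type
                return
    enqueue_other(pkt)                         # otherwise handle as non-target traffic
```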
- If the second message is an SRv6 message and the intermediate device supports SRv6, but the target segment routing list is not obtained from the second message, or the first scheduling queue corresponding to the segment identifier of the intermediate device in the target segment routing list is not determined, that is, the segment identifier of the intermediate device in the target segment routing list does not have a corresponding scheduling queue, the second message can also be considered as a message of a non-target service type.
- the intermediate device determines the second scheduling queue based on the reception of messages of the target service type within a preset time before the current moment, and stores the second message in the second scheduling queue; forwards the message in the second scheduling queue according to the forwarding policy corresponding to the second scheduling queue.
- the specific implementation of storing the second message in the second scheduling queue and forwarding the message in the second scheduling queue can be found in the relevant description of the above steps S85-S87, which will not be repeated here.
- If the intermediate device does not support SRv6, the intermediate device does not need to determine whether the second message is a message of the target business type; it can directly determine the second scheduling queue based on the reception status of messages of the target business type within a preset time period before the current moment, store the second message in the second scheduling queue, and forward the message in the second scheduling queue according to the forwarding policy corresponding to the second scheduling queue.
- the gateway of the second data center can directly forward the original messages in the second scheduling queue when forwarding the messages in the second scheduling queue.
- the gateway of the second data center can obtain the original message corresponding to the message in the second scheduling queue; then, forward the original message to the second data center.
- the gateway of the second data center can strip the SRv6 encapsulation of the message in the second scheduling queue to obtain the original message, and then forward the original message to the second data center.
- step S103 storing the first message in the first scheduling queue may include: if the first scheduling queue stores messages of other service types, the intermediate device clears the first scheduling queue; and storing the first message in the cleared first scheduling queue.
- the intermediate device clears the first scheduling queue.
- step S104 in FIG. 10 may include steps S111 - S112 .
- Step S111 strip the SRv6 encapsulation of the message in the first scheduling queue to obtain the original message.
- Since the message of the target business type is an SRv6 message and the message in the first scheduling queue is a message of the target business type, when the gateway of the second data center forwards the message in the first scheduling queue, it strips the SRv6 encapsulation of the message to obtain the original message.
- Step S112 forwarding the original message to the second host in the second data center.
- The gateway of the second data center strips the SRv6 encapsulation of the first message to obtain the original message, and then forwards the original message to the second host in the second data center, so that the second host in the second data center can process the original message.
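- Reusing the SRv6Message container from the earlier encapsulation sketch, steps S111-S112 reduce to the following; send_to_host is an invented callback standing in for delivery into the second data center.

```python
def decapsulate_and_forward(first_message, send_to_host) -> None:
    original = first_message.inner_packet      # step S111: strip the SRv6 encapsulation
    send_to_host(original)                     # step S112: forward to the second host
```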
- a controller can be deployed in a wide area network, such as the controller in Figure 4, and the controller can be an SDN (Software Defined Network) controller or other types of controllers.
- the controller sends the first path information to each network device, the first path information includes: scheduling information and a scheduling period of a first scheduling queue, the scheduling information indicates that the deterministic flow to which the message of the target business type belongs corresponds to the first scheduling queue; the network device stores the received first path information, and based on the first path information, associates the deterministic flow to which the message of the target business type belongs with the first scheduling queue in the network device, and configures the scheduling period of the first scheduling queue in the network device according to the scheduling period included in the first path information.
- the controller sends the second path information to each network device, the second path information including: a target segment routing list corresponding to the specified path, a message feature of the target service type, and a correspondence between the first scheduling queue and the segment identifier of the network device in the target segment routing list; the network device stores the received second path information, and performs at least one of the following operations based on the second path information:
- the message characteristics of the target service type are associated with the deterministic flow to which the message of the target service type belongs;
- a correspondence between the first scheduling queue and the segment identifier of the network device in the target segment routing list is established.
- All of the information included in the above first path information and second path information can be issued by the controller; alternatively, only part of the information included in the first path information and the second path information is issued by the controller, and the information not issued by the controller is supplemented by information locally stored in the network device.
- For example, when the network device forwards the message of the target service type, it configures the relevant information in the network device based on the first path information sent by the controller and the second path information pre-stored by the network device, so as to assist the network device in forwarding the message.
- the configuration of the path information in each network device is completed through the above controller, which facilitates the unified management of each network device in the wide area network and ensures the deterministic transmission of the message of the target business type.
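- The first and second path information could be represented as plain records like the following sketch (field names are invented), together with one possible way a network device might apply them:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FirstPathInfo:
    flow_id: str                       # deterministic flow of the target service type
    first_queue: str                   # scheduling queue bound to that flow
    scheduling_period_s: float         # scheduling period to configure on that queue

@dataclass
class SecondPathInfo:
    target_segment_list: List[str]     # segment routing list of the designated path
    target_flow_features: Dict[str, object]   # message features of the target service type
    sid_to_queue: Dict[str, str]       # this device's segment identifier -> first scheduling queue

def apply_path_info(device_state: dict, first: FirstPathInfo, second: SecondPathInfo) -> None:
    device_state.setdefault("flow_to_queue", {})[first.flow_id] = first.first_queue
    device_state.setdefault("queue_period", {})[first.first_queue] = first.scheduling_period_s
    device_state.setdefault("feature_to_flow", {})[str(second.target_flow_features)] = first.flow_id
    device_state.setdefault("sid_to_queue", {}).update(second.sid_to_queue)
```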
- the controller can collect designated network resource information of the wide area network and establish a resource model for the entire network.
- the designated network resource information may include one or more of network topology information, network bandwidth, network delay, and network jitter.
- the controller collects transmission indicators of deterministic flows of the target service type, and calculates the deterministic path that meets the transmission indicators and the encapsulation information of the SRv6 message based on the collected designated network resource information and the transmission indicators of the deterministic flow, combined with the message characteristics of the target service type, to obtain the first path information and the second path information.
- The controller may use in-band network telemetry to probe the wide area network and obtain the designated network resource information. Based on the network resource information obtained in this way, the delay and jitter of each path can be accurately estimated, the deterministic path that meets the transmission indicators can be determined, and the first path information and the second path information corresponding to the deterministic path are then sent to each network device.
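- The controller-side path choice could be sketched as follows (the candidate paths and all numbers are invented): among the paths whose estimated delay and jitter satisfy the transmission indicators of the deterministic flow, a shortest one is selected as the designated path.

```python
candidate_paths = [
    {"hops": ["PE1", "P1", "P2", "PE2"], "delay_ms": 8.0, "jitter_ms": 0.3},
    {"hops": ["PE1", "P3", "P4", "PE2"], "delay_ms": 6.5, "jitter_ms": 1.2},
]

def pick_deterministic_path(paths, max_delay_ms, max_jitter_ms):
    feasible = [p for p in paths
                if p["delay_ms"] <= max_delay_ms and p["jitter_ms"] <= max_jitter_ms]
    if not feasible:
        return None
    # Prefer fewer hops, then lower delay, among the feasible paths.
    return min(feasible, key=lambda p: (len(p["hops"]), p["delay_ms"]))

designated_path = pick_deterministic_path(candidate_paths, max_delay_ms=10.0, max_jitter_ms=0.5)
```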
- Even if the intermediate device does not execute the above data processing method, it can be ensured that the selected deterministic path is valid within the QoS (Quality of Service) constraint range and that deterministic transmission is achieved, which reduces the requirements on the intermediate device and further saves network deployment costs.
- the controller can flexibly schedule the resources of the wide area network according to specific business needs without occupying fixed network resources such as bandwidth, thereby improving network utilization efficiency.
- the data processing method provided by the embodiment of the present application has high network utilization efficiency and low network equipment requirements, realizes deterministic transmission with bounded delay, and can support more application scenarios in the future that require lossless wide area networks.
- Step 1 The RDMA service host in data center 1 sends the original message to gateway PE1 through the lossless network technology inside data center 1.
- PE1 starts processing WAN data after receiving the original message, see step 2.
- Step 2 PE1 identifies the RDMA service message according to the message characteristics of the RDMA service configured by the controller, maps it, performs deterministic processing and scheduling, and performs smooth shaping on the message in a periodic manner, encapsulates it into the corresponding SRv6 message, and sends it to the intermediate device P.
- Step 3 The P device parses the SID information carried in the SRv6 message, performs deterministic related processing scheduling, and sends the SRv6 message to the gateway PE2 of the data center 2.
- the P device here may be P1, P2, P5 or P6 in FIG. 4 above.
- Step 4 PE2 performs deterministic processing and scheduling based on the SID information carried in the SRv6 message, removes the SRv6 encapsulation, and sends the original message to the device inside data center 2. At this point, the data processing in the WAN is completed.
- For details of the above processing, please refer to the description of Figures 5 and 10-11 above.
- Step 5 The device inside the data center 2 sends the original message to the RDMA service host through the internal lossless network technology.
- the deterministic solution of the wide area network is combined with the lossless network technology in the data center, so that the packet loss rate of the RoCE service message transmission process can be reduced and the deterministic bounded delay can be achieved.
- the lossless network technology is extended from within the data center to the wide area network, and the effect of an end-to-end lossless network across data centers can be achieved.
- the network equipment supports the scheduling queue reservation function, that is, when there are no messages of the target service type in the wide area network, messages of other service types can be stored in the scheduling queue corresponding to the deterministic flow to which the messages of the RDMA service belong.
- the data processing method provided by the embodiment of the present application is actually tested using a laboratory environment.
- RoCE service traffic is run between server 1 and server 2.
- Server 1 and server 2 are connected to switches SW1 and SW2 respectively, and a network impairment emulator is inserted between SW1 and SW2 to simulate the delay and jitter of the wide area network.
- The ports connecting these devices each run at 10 Gbit/s.
- Server 1 and server 2 send RoCE service packets, and delay and jitter parameters are configured on the impairment emulator to observe and record the throughput of the RoCE service.
- the data processing method provided in the embodiment of the present application can achieve zero packet loss and deterministic bounded latency for RoCE services in wide-area transmission, thereby achieving RDMA service availability across data centers.
- an embodiment of the present application provides a data processing device, as shown in FIG13, which is applied to a network device in a wide area network, and the network device is located on a designated path from a gateway of a first data center to a gateway of a second data center, and the device includes:
- the first storage unit 131 is used to store the acquired first message of the target service type into a first scheduling queue corresponding to the deterministic flow to which the first message belongs, the source address of the first message is the address of the first host in the first data center, the destination address of the first message is the address of the second host in the second data center, and the forwarding path of the deterministic flow to which the first message belongs is the designated path;
- the forwarding unit 132 is configured to forward the message in the first scheduling queue when the scheduling period of the first scheduling queue is reached.
- the network device is a gateway of the first data center; the first storage unit 131 can be specifically used for:
- receiving a second message, wherein the source address of the second message is the address of the first host in the first data center, and the destination address of the second message is the address of the second host in the second data center;
- if the feature of the second message matches the message feature of the preconfigured target service type, generating a first message to be forwarded along the designated path based on the second message;
- storing the first message into a first scheduling queue, where the first scheduling queue is the scheduling queue corresponding to the deterministic flow to which the first message belongs.
- when generating the first message, the first storage unit 131 may be specifically used for:
- obtaining the target segment routing list corresponding to the designated path, where the segment identifier of the network device in the target segment routing list corresponds to the first scheduling queue, and encapsulating the second message into the first message based on the target segment routing list, where the first message is an SRv6 message.
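- For ease of understanding only, the following non-limiting Python sketch shows how the gateway of the first data center might classify a second message and attach a segment routing list; the UDP port 4791 feature (RoCEv2) is only one example of a message feature, and the dataclass names and SID values are assumptions of this illustration rather than a real SRv6 implementation.

```python
from dataclasses import dataclass

@dataclass
class RawMessage:          # the "second message" received from the first host
    src: str
    dst: str
    dport: int
    payload: bytes

@dataclass
class SRv6Message:         # the "first message" forwarded along the designated path
    segment_list: list     # target segment routing list (SIDs of PE1, P..., PE2)
    inner: RawMessage

# preconfigured message feature of the target service type; RoCEv2 traffic, for example,
# can be recognized by UDP destination port 4791 (a five-tuple is equally possible)
TARGET_FEATURE_DPORT = 4791
# illustrative segment routing list for the designated path PE1 -> P1 -> P2 -> PE2
TARGET_SEGMENT_LIST = ["fd00:100::1", "fd00:200::1", "fd00:300::1", "fd00:400::1"]

def ingress_gateway_handle(msg: RawMessage):
    """Return the SRv6-encapsulated first message if the feature matches, else None."""
    if msg.dport == TARGET_FEATURE_DPORT:
        # feature matches: encapsulate the second message into the first message,
        # carrying the segment routing list that pins it to the designated path
        return SRv6Message(segment_list=list(TARGET_SEGMENT_LIST), inner=msg)
    return None            # non-target traffic is handled separately (best effort)

first = ingress_gateway_handle(RawMessage("10.0.1.2", "10.0.2.2", 4791, b"roce"))
print(first.segment_list[0])   # SID of the first gateway; it maps to the first queue
```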
- the first storage unit 131 may also be used to: if the feature of the second message does not match the preconfigured message feature of the target service type, determine a second scheduling queue according to whether messages of the target service type have been received within a preset time period before the current moment, and store the second message in the second scheduling queue;
- the forwarding unit 132 may also be configured to forward the messages in the second scheduling queue according to the forwarding policy corresponding to the second scheduling queue.
- in some embodiments, the network device is an intermediate device on the designated path or the gateway of the second data center, and the messages of the target service type are SRv6 messages;
- the first storage unit 131 may be specifically used for:
- receiving a second message, where the source address of the second message is the address of the first host in the first data center, and the destination address of the second message is the address of the second host in the second data center;
- if the second message is an SRv6 message and the network device supports SRv6, obtaining the target segment routing list from the second message;
- if the segment identifier of the network device in the target segment routing list corresponds to a first scheduling queue, taking the second message as the first message of the target service type and storing the first message in the first scheduling queue, where the first scheduling queue is the scheduling queue corresponding to the deterministic flow to which the first message belongs.
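- For ease of understanding only, the following non-limiting Python sketch shows how an on-path device might use its segment identifier to decide whether an incoming second message belongs in the first scheduling queue; the dictionary-based message representation, queue names and SID values are assumptions of this illustration.

```python
from collections import deque

# correspondence, configured by the controller, between this device's segment
# identifier in the target segment routing list and the first scheduling queue
LOCAL_SID = "fd00:200::1"                    # illustrative SID of this on-path device
SID_TO_QUEUE = {LOCAL_SID: "q1"}
queues = {"q1": deque(), "q_best_effort": deque()}

def classify(second_message: dict, supports_srv6: bool = True) -> str:
    """Decide which scheduling queue an incoming second message is stored in (simplified)."""
    seg_list = second_message.get("segment_list")
    if supports_srv6 and seg_list and LOCAL_SID in seg_list and LOCAL_SID in SID_TO_QUEUE:
        # the SID has a corresponding scheduling queue, so the second message is
        # treated as a first message of the target service type
        qid = SID_TO_QUEUE[LOCAL_SID]
    else:
        # not SRv6 / SRv6 unsupported / no segment list / no queue for the SID:
        # fall back to a second scheduling queue
        qid = "q_best_effort"
    queues[qid].append(second_message)
    return qid

print(classify({"segment_list": ["fd00:100::1", "fd00:200::1", "fd00:400::1"], "payload": b"x"}))
print(classify({"payload": b"video"}))       # ordinary traffic goes to the second queue
```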
- the first storage unit 131 may also be used for:
- if the second message is not an SRv6 message, or the network device does not support SRv6, or the target segment routing list is not obtained, or the segment identifier of the network device in the target segment routing list has no corresponding scheduling queue, determining a second scheduling queue according to whether messages of the target service type have been received within a preset time period before the current moment, and storing the second message in the second scheduling queue;
- the forwarding unit 132 may also be configured to forward the messages in the second scheduling queue according to the forwarding policy corresponding to the second scheduling queue.
- when the network device is the gateway of the second data center, the forwarding unit 132 may be specifically configured to: obtain the original messages corresponding to the messages in the second scheduling queue, and forward the original messages to the second host in the second data center according to the forwarding policy corresponding to the second scheduling queue.
- when the network device is the gateway of the second data center, the forwarding unit 132 may further be specifically configured to: strip the SRv6 encapsulation from the messages in the first scheduling queue to obtain the original messages, and forward the original messages to the second host in the second data center.
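- For ease of understanding only, the following non-limiting Python sketch shows the decapsulate-and-forward step at the gateway of the second data center; the simplified dictionary encapsulation and the `send_to_host` callback are assumptions of this illustration.

```python
def strip_srv6(first_message: dict) -> dict:
    """Remove the simplified SRv6 encapsulation and return the original message."""
    original = dict(first_message)
    original.pop("segment_list", None)       # drop the segment routing list / outer header
    return original

def egress_forward(first_queue: list, send_to_host) -> None:
    # when the scheduling period of the first scheduling queue is reached, every
    # buffered message is decapsulated and forwarded towards the second host
    while first_queue:
        send_to_host(strip_srv6(first_queue.pop(0)))

egress_forward(
    [{"segment_list": ["fd00:100::1", "fd00:400::1"], "dst": "10.0.2.2", "payload": b"x"}],
    send_to_host=lambda m: print("to host", m["dst"], m["payload"]),
)
```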
- when determining the second scheduling queue, the first storage unit 131 may be specifically used for: if no message of the target service type has been received within the preset time period before the current moment, determining the second scheduling queue from all scheduling queues in the network device; and if a message of the target service type has been received within that preset time period, determining the second scheduling queue from all scheduling queues in the network device except the first scheduling queue.
- the forwarding unit 132 may be specifically configured to: forward the messages in the second scheduling queue in a first-in-first-out manner; or forward the messages in the second scheduling queue when the scheduling period of the second scheduling queue is reached; both points are illustrated in the sketch below.
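- For ease of understanding only, the following non-limiting Python sketch combines the two points above: selecting a second scheduling queue depending on whether target-type traffic has been seen recently, and forwarding it either first-in-first-out or on that queue's own period. Picking the least-loaded candidate queue is an assumption of this illustration; the description leaves the exact choice open.

```python
import time
from collections import deque

class BestEffortScheduler:
    """Handling of non-target traffic via a second scheduling queue (illustrative)."""

    def __init__(self, queues: dict, first_queue_id: str, guard_s: float = 1.0):
        self.queues = queues                 # queue id -> deque
        self.first_queue_id = first_queue_id
        self.guard_s = guard_s               # the preset time period before "now"
        self.last_target_seen = None         # when a target-type message was last received

    def note_target_message(self) -> None:
        self.last_target_seen = time.monotonic()

    def pick_second_queue(self) -> str:
        seen_recently = (self.last_target_seen is not None
                         and time.monotonic() - self.last_target_seen < self.guard_s)
        # target traffic seen recently -> keep the first queue reserved;
        # otherwise the reserved queue may also hold other service types
        candidates = [q for q in self.queues if not seen_recently or q != self.first_queue_id]
        return min(candidates, key=lambda q: len(self.queues[q]))   # least loaded (assumed)

    def forward(self, qid: str, send, policy: str = "fifo", period_s: float = 0.01) -> None:
        # forwarding policy of the second scheduling queue: plain FIFO, or
        # forwarding once that queue's own scheduling period is reached
        if policy == "period":
            time.sleep(period_s)
        while self.queues[qid]:
            send(self.queues[qid].popleft())

queues = {"q1": deque(), "q2": deque(), "q3": deque()}
sched = BestEffortScheduler(queues, first_queue_id="q1")
qid = sched.pick_second_queue()          # no target traffic seen yet, so any queue is allowed
queues[qid].append({"type": "video", "payload": b"best-effort"})
sched.forward(qid, send=print, policy="fifo")
```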
- when storing the first message, the first storage unit 131 may be specifically used for: if messages of other service types are stored in the first scheduling queue, clearing the first scheduling queue, and storing the first message in the cleared first scheduling queue.
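- For ease of understanding only, the following non-limiting Python sketch illustrates the clear-then-store behaviour; moving the displaced messages to another queue (instead of dropping them) is one of the two options mentioned above, and the dictionary message format is an assumption of this illustration.

```python
from collections import deque

def store_target_message(first_queue: deque, fallback_queue: deque, msg: dict) -> None:
    """Store a first message of the target type into the (possibly occupied) first queue."""
    if any(m.get("type") != "target" for m in first_queue):
        # the reserved queue was temporarily holding other service types: clear it,
        # here by transferring the displaced messages to another scheduling queue
        while first_queue:
            fallback_queue.append(first_queue.popleft())
    first_queue.append(msg)        # store the first message into the cleared first queue

q1, q2 = deque([{"type": "video"}]), deque()
store_target_message(q1, q2, {"type": "target", "payload": b"roce"})
print(list(q1), list(q2))
```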
- the data processing device may further include:
- a first receiving unit, used to receive first path information sent by the controller, where the first path information includes scheduling information and a scheduling period of the first scheduling queue, and the scheduling information indicates that the deterministic flow to which messages of the target service type belong corresponds to the first scheduling queue;
- a first configuration unit, used to associate the deterministic flow to which messages of the target service type belong with the first scheduling queue in the network device, and to configure the scheduling period of the first scheduling queue in the network device according to the scheduling period included in the first path information.
- the data processing device may further include:
- a second receiving unit, used to receive second path information sent by the controller, where the second path information includes one or more of: the target segment routing list corresponding to the designated path, the message feature of the target service type, and the correspondence between the first scheduling queue and the segment identifier of the network device in the target segment routing list;
- a second configuration unit, configured to perform at least one of the following operations according to the second path information:
- associating the designated path with the target segment routing list;
- associating the message feature of the target service type with the deterministic flow to which messages of the target service type belong;
- establishing the correspondence between the first scheduling queue and the segment identifier of the network device in the target segment routing list.
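- For ease of understanding only, the following non-limiting Python sketch shows how a network device might apply the first and second path information delivered by the controller; the field names (`flow_id`, `queue_id`, `period_s`, `sid_to_queue`, and so on) describe an assumed message shape, not a defined protocol.

```python
def apply_first_path_info(state: dict, info: dict) -> None:
    # associate the deterministic flow of the target service type with the first
    # scheduling queue, and configure that queue's scheduling period
    state["flow_to_queue"][info["flow_id"]] = info["queue_id"]
    state["queue_period_s"][info["queue_id"]] = info["period_s"]

def apply_second_path_info(state: dict, info: dict) -> None:
    # each field is optional: the controller may send one or more of them
    if "segment_list" in info:      # associate the designated path with the list
        state["path_to_segment_list"][info["path_id"]] = info["segment_list"]
    if "message_feature" in info:   # associate the feature with the deterministic flow
        state["feature_to_flow"][info["message_feature"]] = info["flow_id"]
    if "sid_to_queue" in info:      # first queue <-> this device's segment identifier
        state["sid_to_queue"].update(info["sid_to_queue"])

state = {"flow_to_queue": {}, "queue_period_s": {}, "path_to_segment_list": {},
         "feature_to_flow": {}, "sid_to_queue": {}}
apply_first_path_info(state, {"flow_id": "flow-rdma-1", "queue_id": "q1", "period_s": 0.01})
apply_second_path_info(state, {
    "path_id": "dc1-dc2",
    "segment_list": ["fd00:100::1", "fd00:200::1", "fd00:400::1"],
    "message_feature": ("udp", 4791),            # e.g. protocol plus destination port
    "flow_id": "flow-rdma-1",
    "sid_to_queue": {"fd00:200::1": "q1"},
})
print(state["flow_to_queue"], state["sid_to_queue"])
```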
- the first path information and the second path information may be determined by the controller according to designated network resource information of the wide area network and transmission indicators of deterministic flows of the target service type.
- the specified network resource information may be obtained by the controller by probing the wide area network using an in-band network telemetry method.
- the specified network resource information may include one or more of network topology information, network bandwidth, network delay, and network jitter.
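- For ease of understanding only, the following non-limiting Python sketch shows one way a controller could pick a designated path whose measured delay and jitter satisfy the transmission indicators of the deterministic flow; the greedy selection, the candidate structure, and the numeric values are assumptions of this illustration, not a prescribed path-computation algorithm.

```python
def pick_deterministic_path(paths: list, max_delay_ms: float, max_jitter_us: float):
    """Choose a path meeting the flow's transmission indicators, or None if none does."""
    feasible = [p for p in paths
                if p["delay_ms"] <= max_delay_ms and p["jitter_us"] <= max_jitter_us]
    # among feasible candidates, prefer the lowest measured delay (assumed tie-break)
    return min(feasible, key=lambda p: p["delay_ms"]) if feasible else None

# candidate paths with metrics collected, e.g., via in-band network telemetry
candidates = [
    {"hops": ["PE1", "P1", "P2", "PE2"], "delay_ms": 6.5, "jitter_us": 15},
    {"hops": ["PE1", "P5", "P6", "PE2"], "delay_ms": 9.8, "jitter_us": 80},
]
print(pick_deterministic_path(candidates, max_delay_ms=7.0, max_jitter_us=20))
```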
- in the technical solution provided by the embodiments of the present application, the network device in the wide area network stores messages of a target service type with high network transmission reliability requirements, such as the RDMA service, in a designated scheduling queue, and forwards the messages in that queue when its scheduling period is reached. Because the scheduling period of the designated scheduling queue is fixed, forwarding the target-service-type messages within that period ensures that their transmission delay and jitter are deterministic, so that transmission with a deterministic bounded delay is achieved. This reduces the packet loss caused by uncertain delay and jitter, and improves the performance of services with high network transmission reliability requirements, such as RDMA services, when they are applied to the wide area network.
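- As a rough way to see why a fixed scheduling period bounds both delay and jitter, the following back-of-envelope Python calculation assumes that every on-path device releases the designated queue once per period, so a message waits at most one full period per hop; this model and all numeric parameters are assumptions of this sketch and are not figures taken from the description.

```python
def delay_bounds(hops: int, period_s: float, propagation_s: float):
    """Illustrative bound for cycle-scheduled forwarding (assumed model)."""
    best = propagation_s                         # message released immediately at each hop
    worst = propagation_s + hops * period_s      # message waits one full period per hop
    return best, worst                           # jitter bound = worst - best

best, worst = delay_bounds(hops=4, period_s=5e-6, propagation_s=3.5e-3)
print(f"delay in [{best * 1e3:.3f} ms, {worst * 1e3:.3f} ms], "
      f"jitter <= {(worst - best) * 1e6:.0f} us")
```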
- in addition, only a software improvement on the network device is required to forward the messages in the corresponding scheduling queue according to the designated scheduling period, and thus to transmit service messages with high network transmission reliability requirements, such as RDMA service messages, with a deterministic bounded delay. Compared with deploying complex techniques and expensive dedicated hardware on network devices for such services, the technical solution provided in the embodiments of the present application reduces the requirements on network devices and saves network deployment cost.
- An embodiment of the present application also provides a network device, as shown in FIG. 14, comprising a processor 141 and a machine-readable storage medium 142, where the machine-readable storage medium 142 stores machine-executable instructions that can be executed by the processor 141, and the processor 141 is caused by the machine-executable instructions to implement the method steps described in any of the embodiments of Figures 4 to 12 above.
- the network device may be a gateway of the first data center, an intermediate device, or a gateway of the second data center.
- the machine-readable storage medium may include a random access memory (RAM) or a non-volatile memory (NVM), such as at least one disk storage.
- the machine-readable storage medium may also be at least one storage device located away from the aforementioned processor.
- the processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it can also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
- in yet another embodiment provided by the present application, a machine-readable storage medium is provided, in which a computer program is stored; when the computer program is executed by a processor, the method steps described in any of the embodiments of Figures 2 to 12 are implemented.
- in yet another embodiment provided by the present application, a computer program is also provided; when the computer program is executed by a processor, the method steps described in any of the embodiments of Figures 2 to 12 are implemented.
- when implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product; the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
- the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means.
- the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media.
- the available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), etc.
Abstract
A data processing method, apparatus, network device and storage medium. The method is applied to a network device in a wide area network, where the network device is located on a designated path from a gateway of a first data center to a gateway of a second data center, and includes: storing an acquired first message of a target service type into a first scheduling queue corresponding to the deterministic flow to which the first message belongs, where the source address of the first message is the address of a first host in the first data center, the destination address of the first message is the address of a second host in the second data center, and the forwarding path of the deterministic flow to which the first message belongs is the designated path; and, when the scheduling period of the first scheduling queue is reached, forwarding the messages in the first scheduling queue. By applying the technical solution provided in the embodiments of the present application, the packet loss caused by uncertain delay and jitter can be reduced, and the performance of services with high network transmission reliability requirements, such as RDMA services, when applied to a wide area network can be improved.
Description
本申请涉及通信技术领域,特别是涉及一种数据处理方法、装置、网络设备及存储介质。
为了解决网络传输中服务器端数据处理的延迟,RDMA(Remote Direct Memory Access,远程直接数据存取)业务应运而生。RDMA业务对网络传输的可靠性要求非常高,主要表现在丢包和时延上。然而,广域网中导致丢包的原因分很多种,可靠性较差,这导致基于广域网传输RDMA业务时,RDMA业务性能的急剧下降。
发明内容
本申请实施例的目的在于提供一种数据处理方法、装置、网络设备及存储介质,以减少因时延和抖动不确定带来的丢包问题,提高RDMA业务等对网络传输可靠性要求高的业务应用于广域网时的性能。具体技术方案如下:
第一方面,本申请实施例提供了一种数据处理方法,应用于广域网中的网络设备,所述网络设备位于第一数据中心的网关到第二数据中心的网关的指定路径上,所述方法包括:
将获取到的目标业务类型的第一报文存储至所述第一报文所属确定性流对应的第一调度队列,所述第一报文的源地址为所述第一数据中心中的第一主机的地址,所述第一报文的目的地址为所述第二数据中心中的第二主机的地址,所述第一报文所属确定性流的转发路径为所述指定路径;
当到达所述第一调度队列的调度周期时,转发所述第一调度队列中的报文。
第二方面,本申请实施例提供了一种数据处理装置,应用于广域网中的网络设备,所述网络设备位于第一数据中心的网关到第二数据中心的网关的指定路径上,所述装置包括:
第一存储单元,用于将获取到的目标业务类型的第一报文存储至所述第一报文所属确定性流对应的第一调度队列,所述第一报文的源地址为所述第一数据中心中的第一主机的地址,所述第一报文的目的地址为所述第二数据中心中的第二主机的地址,所述第一报文所属确定性流的转发路径为所述指定路径;
转发单元,用于当到达所述第一调度队列的调度周期时,转发所述第一调度队列中的报文。
第三方面,本申请实施例提供了一种网络设备,包括处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的计算机程序,所述处理器被所述计算机程序促使:实现上述任一所述数据处理方法步骤。
第四方面,本申请实施例提供了一种机器可读存储介质,所述机器可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时,实现上述任一所述数据处理方法步骤。
第五方面,本申请实施例提供了一种计算机程序,所述计算机程序被处理器执行时,实现上述任一所述数据处理方法步骤。
本申请实施例提供的技术方案中,广域网中的网络设备将RDMA业务等对网络传输可靠性要求高的目标业务类型的报文存储在指定的调度队列中,当到达该调度队列的调度周期时,转发该调度队列中的报文。因为,指定的调度队列的调度周期是确定的,在指定的调度队列的调度周期,转发对网络传输 可靠性要求高的目标业务类型的报文,可以保证目标业务类型的报文的传输时延和抖动是确定的,实现目标业务类型的报文的确定性的有界时延的传输,减少了因时延和抖动不确定带来的丢包问题,提高了RDMA业务等对网络传输可靠性要求高的业务应用于广域网时的性能。
当然,实施本申请的任一产品或方法并不一定需要同时达到以上所述的所有优点。
为了更清楚地说明本发明实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为RDMA业务读、写操作的吞吐量的表现情况的一种示意图;
图2为数据中心内部RDMA业务传输的一种示意图;
图3为数据中心间RDMA业务传输的一种示意图;
图4为本申请实施例提供的网络架构的一种示意图;
图5为本申请实施例提供的数据处理方法的第一种流程示意图;
图6为本申请实施例提供的数据处理方法的第二种流程示意图;
图7为图6中步骤S62的一种细化示意图;
图8为本申请实施例提供的数据处理方法的第三种流程示意图;
图9为图6中步骤S63和图8中步骤S83的一种细化示意图;
图10为本申请实施例提供的数据处理方法的第四种流程示意图;
图11为图10中步骤S104的一种细化示意图;
图12为本申请实施例提供的实际测试的网络拓扑的一种示意图;
图13为本申请实施例提供的数据处理装置的一种结构示意图;
图14为本申请实施例提供的网络设备的一种结构示意图。
为使本发明的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本发明进一步详细说明。显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
为便于理解,下面对本申请实施例中出现的词语进行解释说明。
RDMA(Remote Direct Memory Access,远程直接数据存取):是为了解决网络传输中服务器端数据处理的延迟而产生的;RDMA业务允许用户态的应用程序直接读取或写入远程内存,而无内核干预和内存拷贝发生。
RoCE(RDMA over Converged Ethernet,在以太网络使用RDMA):是一种允许通过以太网使用RDMA的网络协议。
SRv6(Segment Routing IPv6,基于IPv6转发平面的分段路由):是新一代IP(Internet Protocol,网际协议)承载协议。SRv6采用现有的IPv6转发技术,通过灵活的IPv6扩展头,实现网络可编程。
RDMA业务对网络的传输提出了非常高的要求,主要表现在对网络的丢包上。对于不同的丢包率情况下,RDMA业务读、写操作的吞吐量的表现情况如图1所示。图1中,短虚线表示读操作的吞吐量的表现情况,长虚线表示写操作的吞吐量的表现情况。从图1可知,RDMA业务对于以太网丢包异常敏感,以太网中丢包率超过>10
-3时将,导致网络有效吞吐量急剧下降(有效吞吐量仅约75%),以太网中丢包率降低至1%时,RDMA业务的吞吐量下降为0。基于图1可知,如果要使得RDMA业务的吞吐量不受影响,丢包率需要保证在十万分之一以下,最好为无丢包。
当前RDMA业务的主要应用在数据中心内部,并通过无损网络结合RoCE技术进行传输,如图2所示,通信方(包括发送端和接收端)均属于同一数据中心1,无损网络包括多个脊节点,如图2中的spine1和spine2,还包括多个叶节点,如图2中leaf1和leaf2。发送端和接收端上分别设置有RDMA网卡,发送端在将应用的RDMA业务的报文数据从缓冲器复制至RDMA网卡后,由网卡驱动器驱动DMA网卡,通过无损网络完成RDMA业务的报文数据发送,接收端由网卡驱动器驱动DMA网卡,通过无损网络完成RDMA业务的报文数据的接收,并将RDMA业务的报文数据复制至应用的缓冲器。
随着边缘计算、5G MEC(Mobile Edge Computing,移动边缘计算)的发展使算力下沉到边缘,以及一体化大数据中心的大力发展,数据中心、边缘计算中心之间的业务交互也越来越密切。RDMA业务的应用场景也从数据中心内部扩展至数据中心、边缘计算中心之间,在这种场景下,需要广域网来对数据中心、边缘计算中心进行连接,这使得RDMA业务的通信方分布在不同的位置如下图3。
在不改变现有RDMA业务实现机制的前提下,RDMA业务对数据中心、边缘计算之间互连的广域网也提出了同样的要求,主要表现在丢包和时延上。广域网中导致丢包的原因分很多种,其中的一种就是因为拥塞导致抖动过大,RDMA业务的报文被误判丢失,进而导致RDMA业务的报文数据整体被重传;另外在时延上,如果时延忽大忽小,也将导致RDMA业务性能的急剧下降。广域网承载着大量各种类型的业务,包括视频、语音、文件传输等,广域网提供best-effort(尽力服务)网络的能力的情况下,不同的业务之间相互影响,RDMA业务的报文非常容易收到各种大带宽传输业务的影响,产生大抖动,甚至丢包,导致RDMA业务性能的急剧下降。
为解决上述问题,本申请实施例提供了一种数据处理方法,该方法应用于广域网中的网络设备,该网络设备位于第一数据中心的网关至第二数据中心的网关的指定路径上,具体的网络架构可参见图4所示。图4中,广域网包括PE(Provider Edge,网络侧边缘设备)1-2以及P(Provider,网络侧设备)1-6,PE1为数据中心1的网关,PE2为数据中心2的网关。当PE1-2和P1-6中的任一网络设备位于指定路径上时,均可实施本申请实施例提供的数据处理方法。
本申请实施例中,第一数据中心和第二数据中心为通过广域网连接的任意两个数据中心,即第一数据中心和第二数据中心位于不同的地域,如图4中所示的数据中心1和数据中心2。指定路径为预先指定的从第一数据中心的网关向第二数据中心的网关传输报文的路径,如图4中的带双箭头的粗实线。网络设备可以为第一数据中心的网关、第二数据中心的网关、或中间设备,中间设备为指定路径上除第一数据中心的网关和第二数据中心的网关外的网络设备,如图4中P1、P2、P5和P6。网络设备可以为路由器、交换机、防火墙设备等等具有通信功能的设备。
本申请实施例提供的数据处理方法中,广域网中的网络设备将RDMA业务等对网络传输可靠性要求高的目标业务类型的报文存储在指定的调度队列中,当到达该调度队列的调度周期时,转发该调度队列中的报文。因为,指定的调度队列的调度周期是确定的,在指定的调度队列的调度周期,转发对网络 传输可靠性要求高的目标业务类型的报文,可以保证目标业务类型的报文的传输时延和抖动是确定的,实现目标业务类型的报文的确定性的有界时延的传输,减少了因时延和抖动不确定带来的丢包问题,提高了RDMA业务等对网络传输可靠性要求高的业务应用于广域网时的性能。
下面通过具体实施例,对本申请实施例提供的数据处理方法进行详细说明。
如图5所示,图5为本申请实施例提供的数据处理方法的第一种流程示意图。该方法应用于广域网中的网络设备,网络设备位于第一数据中心的网关到第二数据中心的网关的指定路径上,包括如下步骤:
步骤S51,将获取到的目标业务类型的第一报文存储至第一报文所属确定性流对应的第一调度队列,第一报文的源地址为第一数据中心中的第一主机的地址,第一报文的目的地址为第二数据中心中的第二主机的地址,第一报文所属确定性流的转发路径为指定路径;
步骤S52,当到达第一调度队列的调度周期时,转发第一调度队列中的报文。
本申请实施例提供的数据处理方法中,广域网中的网络设备将RDMA业务等对网络传输可靠性要求高的目标业务类型的报文存储在指定的调度队列中,当到达该调度队列的调度周期时,转发该调度队列中的报文。因为,指定的调度队列的调度周期是确定的,在指定的调度队列的调度周期,转发对网络传输可靠性要求高的目标业务类型的报文,可以保证目标业务类型的报文的传输时延和抖动是确定的,实现目标业务类型的报文的确定性的有界时延的传输,减少了因时延和抖动不确定带来的丢包问题,提高了RDMA业务等对网络传输可靠性要求高的业务应用于广域网时的性能。
另外,本申请实施例提供的数据处理方法中,只需要对网络设备进行软件上的改进,就可以实现按照指定的调度周期转发相应的调度队列中的报文,实现对网络传输可靠性要求高的业务报文的确定性的有界时延的传输。相对于采用复杂度技术,在网络设备上设置昂贵的硬件,以专门用于传输对网络传输可靠性要求高的业务报文,本申请实施例提供的技术方案降低了对网络设备的要求,节约了网络部署成本。
上述步骤S51中,目标业务类型的业务流可以为RDMA业务的业务流,也可以为其他对网络传输可靠性要求高的业务类型的业务流。目标业务类型的业务流可以为一条或多条。目标业务类型的一条业务流对应一个调度队列,一个调度队列可以对应目标业务类型的一条或多条业务流,调度队列用于缓存所接收的报文。第一报文为目标业务类型的业务流的一个报文,第一报文可以为第一数据中心中的任一主机向第二数据中心中的任一主机发送的目标业务类型的报文,第一报文所属的业务流为确定性流,第一报文所属确定性流的转发路径为上述指定路径,也就是,第一报文沿上述指定路径转发。本申请实施例中,以第一数据中心中的任一主机为第一主机,第二数据中心中的任一主机为第二主机为例进行说明,并不起限定作用。第一主机与第二主机之间传输的目标业务类型的确定性流可以有多条,这条确定性流的转发路径可以相同,也可以不同,且这条确定性流对应的调度队列可以相同,也可以不同。
为了减少网络中丢包,实现无损传输,实现确定性传输,数据中心内部可以采用无损网络传输报文。例如,第一数据中心内的主机通过无损网络,向第一数据中心的网关传输报文。再例如,第二数据中心的网关通过无损网络,向第二数据中心内的主机传输报文。无损网络的结构可参见上述图2和图3中的无损网络结构。本申请实施例中,无损网络可以采用ECN(Explicit Congestion Notification,显示拥塞通告)、PFC(Priority Flow Control,基于优先级的流控)、DCBX((Data Center Bridging Exchange,数据中心桥交换)等技术进行配置,对此不进行限定。
网络设备在获取到目标业务类型的第一报文后,进行映射、确定性处理调度,即确定该第一报文所 属业务对应的调度队列,也就是确定该第一报文所属确定性流对应的调度队列。为便于区分和理解,本申请实施例中,将目标业务类型的报文所属确定性流对应的调度队列称为第一调度队列,并不起限定作用。网络设备将第一报文存储至第一调度队列中。
上述步骤S52中,目标业务类型的确定性流对应的每个调度队列配置有调度周期,调度周期表示转发相应调度队列中的报文的周期。本申请实施例中,网络设备中,调度队列的调度周期可以为控制器根据传输报文的需求设置的,也可以为网络设备自身根据自身启动调度队列的初始时间、链路时延等信息配置的。
网络设备按周期对报文做平滑整形,如可以实时监测各个调度队列的调度周期,当监测到到达第一调度队列的调度周期时,转发第一调度队列中存储的报文,如上述第一报文。如上述步骤S51中的描述,第一报文所属确定性流的转发路径为指定路径,因此,在转发第一调度队列中存储的第一报文时,第一报文实际为沿指定路径转发。
本申请实施例中,网络设备可以为第一数据中心的网关、中间设备或第二数据中心的网关。当网络设备为不同类型的设备,其数据处理方式有所不同。
在一些实施例中,当网络设备为第一数据中心的网关时,基于图5所示实施例,本申请实施例还提供了一种数据处理方法,如图6所示,可以包括如下步骤:
步骤S61,接收第二报文,第二报文的源地址为第一数据中心中的第一主机的地址,第二报文的目的地址为第二数据中心中的第二主机的地址。
步骤S62,若第二报文的特征与预配置的目标业务类型的报文特征匹配,则基于第二报文,生成沿指定路径转发的第一报文。
步骤S63,将第一报文存储至第一调度队列,第一调度队列为第一报文所属确定性流对应的调度队列。
步骤S64,当到达第一调度队列的调度周期时,转发第一调度队列中的报文。与上述步骤S52相同。
本申请实施例中,网络设备,即第一数据中心的网关将目标业务类型的报文识别出,仅仅对目标业务类型的报文进行确定性传输,对其他业务类型的报文不做确定性传输,降低了网络设备的要求,节约网络设备的带宽资源。
上述步骤S61中,第二报文可以为第一数据中心中的任一主机向第二数据中心中的任一主机发送的报文。也就是,第一数据中心中的任一主机可以采用无损网络技术,将原始报文发送第一数据中心的网关。第一数据中心的网关接收第一数据中心中的任一主机发送的原始报文,第一数据中心的网关接收的原始报文即为第二报文。本申请实施例中,以第一数据中心中的任一主机为第一主机,第二数据中心中的任一主机为第二主机为例进行说明,并不起限定作用。
上述步骤S62中,报文特征可以包括但不限于端口号、五元组等信息。第一数据中心的网关在接收到第二报文后,提取第二报文的特征,识别目标业务类型的报文,即将第二报文的特征与预配置的目标业务类型的报文特征进行匹配;若第二报文的特征与预配置的目标业务类型的报文特征匹配,即第二报文的特征与预配置的目标业务类型的报文特征相同,则说明第二报文为目标业务类型的报文,基于第二报文生成第一报文,该第一报文沿上述指定路径转发。
上述步骤S63中,在步骤S62中生成第一报文后,第一数据中心的网关可以确定该第一报文所属确定性流对应的第一调度队列,也就是确定该第一报文对应的第一调度队列,进而将第一报文存储至第 一调度队列中。
在一些实施例中,如图7所示,上述步骤S62中基于所述第二报文,生成沿指定路径转发的第一报文的步骤,可以包括步骤S71-S72。
步骤S71,获取指定路径对应的目标分段路由列表,目标分段路由列表中的网络设备的分段标识对应第一调度队列。
本申请实施例中,第一数据中心的网关可以预设存储目标业务类型的报文所属确定性流对应的分段路由列表(Segment List),即SID(Segment Identifier,分段标识)列表,SID列表与指定的路径对应。该SID列表中,第一数据中心的网关的SID信息指示了相应确定性流对应的调度队列。
第一数据中心的网关在接收到第二报文,确定第二报文为目标业务类型的报文后,从预先存储的分段路由列表中,获取第二报文所属确定性流对应的分段路由列表,即指定路径对应的目标分段路由列表。
例如,第一数据中心的网关中预先存储有报文特征与分段路由列表的对应关系。第一数据中心的网关提取第二报文的特征,进而从预先存储的报文特征与分段路由列表的对应关系,获取第二报文的特征对应的分段路由列表,作为目标分段路由列表。
步骤S72,基于目标分段路由列表,将第二报文封装为第一报文,第一报文为SRv6报文。
第一数据中心的网关在获得目标分段路由列表后,将第二报文封装为第一报文,例如,为第二报文封装SRv6头,SRv6头包括目标分段路由列表,封装了SRv6头后的第二报文即为第一报文。
本申请实施例提供的技术方案中,第一数据中心的网关将来自第一数据中心的报文封装为SRv6报文,SRv6报文的头中包括分段路由列表,而分段路由列表能够唯一的指示一条路径,即上述指定路径,进而确保了第一报文的确定性传输。
在一些实施例中,控制器可以预先计算从第一数据中心的网关到第二数据中心的网关的每条路径的时延和抖动,确定出满足第一报文所属确定性流的传输指标的路径,即指定路径,进而配置指定路径上每个中间设备中路由,使得中间设备在接收到第一报文后查询路由转发第一报文时,可以沿指定路径转发第一报文。
这种情况下,第一数据中心的网关可以获取第一数据中心的网关的第一SID,并获取第二数据中心的网关的第二SID,基于第一SID和第二SID,将第二报文封装为第一报文,第一报文为SRv6报文。此时,第一报文的SRv6头可以包括SRH(Segment Routing Header,分段路由头),也可以不包括SRH。若第一报文的SRv6头包括SRH,则第一报文的SRv6头中的SRH包括第一SID和第二SID,第一报文的SRv6头中的IPv6基本头的源IP地址为第一SID,目的IP地址为第二SID;若第一报文的SRv6头包括SRH,则第一报文的SRv6头中的IPv6基本头的源IP地址为第一SID,目的IP地址为第二SID。
此时,第一数据中心的网关可以基于IPv6基本头的源IP地址确定第一调度队列,进而将第一报文存储至第一调度队列。相应的,第二数据中心的网关可以基于IPv6基本头的目的IP地址确定第一调度队列,进而将第一报文存储至第一调度队列。
本申请实施例中,第一数据中心的网关还可以采用其他方式生成沿指定路径转发的第一报文,对此不进行限定。
在一些实施例中,基于图6所示实施例,本申请实施例还提供了一种数据处理方法,如图8所示,可以包括如下步骤:
步骤S81,接收第二报文,第二报文的源地址为第一数据中心中的第一主机的地址,第二报文的目 的地址为第二数据中心中的第二主机的地址。
步骤S82,若第二报文的特征与预配置的目标业务类型的报文特征匹配,则基于第二报文,生成沿指定路径转发的第一报文。
步骤S83,将第一报文存储至第一调度队列,第一调度队列为第一报文所属确定性流对应的调度队列。
步骤S84,当到达第一调度队列的调度周期时,转发第一调度队列中的报文。
步骤S81-S84与上述步骤S61-S64相同。
步骤S85,若第二报文的特征与预配置的目标业务类型的报文特征不匹配,则根据在当前时刻之前的预设时长内目标业务类型的报文的接收情况,确定第二调度队列。
步骤S86,将第二报文存储至第二调度队列。
若第二报文的特征与预配置的目标业务类型的报文特征不匹配,即第二报文的特征与预配置的目标业务类型的报文特征不同,则说明第二报文不是目标业务类型的报文,第一数据中心的网关可以根据在当前时刻之前的预设时长内所述目标业务类型的报文的接收情况,在网络设备的调度队列中确定一个调度队列,作为第二调度队列,并将第二报文存储至第二调度队列。
本申请实施例中,在第二报文不是目标业务类型的报文时,第一数据中心的网关也可以根据预先存储的报文特征与调度周期的对应关系,确定第二报文的特征对应的第二调度队列,进而将第二报文存储至第二调度队列。
在第二报文不是目标业务类型的报文时,第一数据中心的网关还可以根据预先存储的时隙与调度周期的对应关系,确定接收第二报文的时隙对应的第二调度队列,进而将第二报文存储至第二调度队列。
第一数据中心的网关还可以采用其他方式,确定第二调度队列,如随机选择一个调度队列作为第二调度队列,对此不进行限定。
另外,本申请实施例中,确定第二调度队列的调度队列范围可以根据实际情况进行调整。举例来说,第一数据中心的网关可以检测在当前时刻之前的预设时长内是否接收到目标业务类型的报文;若在当前时刻之前的预设时长内未接收到目标业务类型的报文,则说明当前广域网中存在目标业务类型的报文,确定第二调度队列的调度队列范围为网络设备中的所有调度队列,即从网络设备中的所有调度队列中,确定第二调度队列;若在当前时刻之前的预设时长内接收到目标业务类型的报文,则说明当前广域网中不存在目标业务类型的报文,确定第二调度队列的调度队列范围为网络设备中除第一调度队列外的所有调度队列,即从网络设备中除第一调度队列外的所有调度队列中,确定第二调度队列。通过本申请实施例,第一数据中心的网关将第一调度队列优先为目标业务类型的报文使用,保证了目标业务类型的报文的确定性传输,同时提高了队列资源的利用率。
此外,本申请实施例中,第一数据中心的网关中可以预先配置非目标业务类型的报文转发模式。在将第二报文存储至第二调度队列时,第一数据中心的网关可以按照预先配置非目标业务类型的报文转发模式,对第二报文处理,并将处理后第二报文存储至第二调度队列。
上述报文转发模式可以根据非目标业务类型的实际需求进行设定。
例如,报文转发模式可以为转发原始报文,此时,第一数据中心的网关会将接收的第二报文直接存储至第二调度队列。
再例如,报文转发模式可以为转发SRv6报文,此时,第一数据中心的网关会将接收的第二报文封 装为SRv6报文,并将封装后的第二报文存储至第二调度队列。其中,封装后的第二报文的SRv6头中可以包括SRH,也可以不包括SRH。
本申请实施例中,若封装后的第二报文为SRv6报文,且第一报文也是SRv6报文,则即使第二报文和第一报文的转发路径相同,该封装后的第二报文与第一报文的SRv6头中所携带的信息也会不同。例如,第一报文的分段路由列表中第一数据中心的网关的分段标识对应一个调度队列,而第二报文的分段路由列表中第一数据中心的网关的分段标识没有对应的调度队列。基于此,方便指定路径上的中间设备和第二数据中心的网关区分目标业务类型的报文和非目标业务类型的报文。
步骤S87,按照第二调度队列对应的转发策略,转发第二调度队列中的报文。
本申请实施例中,第二调度队列对应的转发策略可以为先进先出,即第一数据中心的网关可以按照先进先出的方式,提取第二调度队列中的报文,进而查询路由表,按照查询结果,转发第二调度队列中的报文。例如,第一数据中心的网关中存储了包括2个调度队列,分别为队列1和队列2。第一数据中心的网关先向队列1中存储了非目标业务类型的报文,后向队列2中存储了非目标业务类型的报文,则在转发报文时,第一数据中心的网关先转发队列1中存储的报文,后转换队列2中存储的报文。
第二调度队列对应的转发策略也可以为按照调度队列的调度周期转发,即第一数据中心的网关可以在到达第二调度队列的调度周期时,提取第二调度队列中的报文,进而查询路由表,按照查询结果,转发第二调度队列中的报文。
本申请实施例中,第二调度队列对应的转发策略还可以为其他形式,对此不进行限定。
本申请实施例提供的技术方案中,第一数据中心的网关在第二报文不是目标业务类型的报文时,按照相应的转发策略查询路由表,来实现对非目标业务类型的报文的转发,而不会固定占用时间和空间资源,保证了目标业务类型的报文的确定性转发。
在一些实施例中,如图9所示,上述步骤S63和步骤S83可以包括步骤S91-S93。
步骤S91,检测第一调度队列中是否存储有其他业务类型的报文。若是,则执行步骤S92;若否,则说明第一调度队列就是清空后的第一调度队列,执行步骤S93。
步骤S92,清空第一调度队列。
第一调度队列是为目标业务类型的报文预留的调度队列。为提高了网络设备的存储空间的利用率,在当前广域网中不存在目标业务类型的报文的情况下,第一数据中心的网关可以将其他业务类型的报文存储在第一调度队列,如步骤S85-S86部分的相关描述。
在接收到目标业务类型的报文时,如上述第一报文,说明当前广域网中不存在目标业务类型的报文,第一数据中心的网关清空第一调度队列,如丢弃第一调度队列中存储的其他业务类型的报文,或,将第一调度队列中存储的其他业务类型的报文转存至其他调度队列中。
步骤S93,将第一报文存储至清空后的第一调度队列。
本申请实施例提供的技术方案中,第一数据中心的网关清空第一调度队列后,再将第一报文存储至第一调度队列,避免了其他业务类型的报文占用确定性传输的资源,保证了目标业务类型的报文的确定性传输。
在一些实施例中,当网络设备为指定路径上的中间设备或第二数据中心的网关时,基于图5所示实施例,本申请实施例还提供了一种数据处理方法,如图10所示,可以包括如下步骤:
步骤S101,接收第二报文,第二报文的源地址为第一数据中心中的第一主机的地址,第二报文的 目的地址为第二数据中心中的第二主机的地址。
步骤S102,若第二报文为SRv6报文,且网络设备支持SRv6,则从第二报文中,获取目标分段路由列表。
步骤S103,若目标分段路由列表中的网络设备的分段标识对应第一调度队列,则将第二报文作为目标业务类型的第一报文,并将第一报文存储至第一调度队列,第一调度队列为第一报文所属确定性流对应的调度队列。
步骤S104,当到达第一调度队列的调度周期时,转发第一调度队列中的报文。与上述步骤S52相同。
本申请实施例提供的技术方案中,第一数据中心的网关将来自第一数据中心的报文封装为SRv6报文,SRv6报文的头中包括分段路由列表,而分段路由列表能够唯一的指示一条路径,即上述指定路径,进而确保了目标业务类型的报文的确定性传输。
中间设备和第二数据中心的网关执行的操作相似,以下以中间设备为执行主体进行说明,并不起限定作用。
上述步骤S101中,中间设备或第二数据中心的网关接收的第二报文为第一数据中心内的主机通过第一数据中心的网关转发的报文。中间设备或第二数据中心的网关接收的第二报文可能是经过第一数据中心的网关转发的非目标业务类型的原始报文,也可能是通过第一数据中心的网关处理后的目标业务类型的第一报文,或对非目标业务类型的报文封装处理后的报文。
上述步骤S102中,目标业务类型的报文为SRv6报文。中间设备在监测到第二报文为SRv6报文,且中间设备支持SRv6时,解析第二报文,获取第二报文携带的目标分段路由列表。第一数据中心的网关将目标业务类型的报文封装为SRv6报文,进而传输该SRv6报文,第一数据中心的网关获得目标业务类型的SRv6报文的方式,可参见上述图7所示实施例。
这种情况下,若第二报文不是SRv6报文,则可以认为该第二报文为非目标业务类型的报文,中间设备根据在当前时刻之前的预设时长内目标业务类型的报文的接收情况,确定第二调度队列,将第二报文存储至第二调度队列;按照第二调度队列对应的转发策略,转发第二调度队列中的报文。这里,将第二报文存储至第二调度队列,以及转发第二调度队列中的报文的具体实现,可参见上述步骤S85-S87部分的相关描述,此处不再赘述。
上述步骤S103中,中间设备可以预设存储目标业务类型的报文所属确定性流对应的SID列表,该SID列表中,中间设备的SID信息指示了相应确定性流对应的调度队列。中间设备获得目标分段路由列表后,基于预先存储的SID列表,确定目标分段路由列表中的该中间设备的分段标识对应调度队列,即第一报文所属确定性流对应的第一调度队列。若确定了第一调度队列,则可以说明二报文就是目标业务类型的第一报文,进而将第一报文存储至第一调度队列。
目标业务类型的SRv6报文中也可以携带指示信息,该指示信息表示目标分段路由列表中的该中间设备的分段标识对应第一调度队列。指示信息可以添加在SID列表中,也可以添加在报文的载荷的任一位置。这种情况下,中间设备获得目标分段路由列表后,确定第一报文是否携带上述指示信息。若携带有,则可以将第一报文存储至指示信息所指示的第一调度队列。
若第二报文是SRv6报文,中间设备支持SRv6,但从第二报文中未获取到目标分段路由列表,或未确定出目标分段路由列表中的该中间设备的分段标识对应的第一调度队列,即目标分段路由列表中的 中间设备的分段标识没有对应的调度队列,则可以认为该第二报文为非目标业务类型的报文,中间设备根据在当前时刻之前的预设时长内目标业务类型的报文的接收情况,确定第二调度队列,将第二报文存储至第二调度队列;按照第二调度队列对应的转发策略,转发第二调度队列中的报文。这里,将第二报文存储至第二调度队列,以及转发第二调度队列中的报文的具体实现,可参见上述步骤S85-S87部分的相关描述,此处不再赘述。
在一些实施例中,若中间设备不支持SRv6,则中间设备不需要确定第二报文是否为目标业务类型的报文,可以直接根据在当前时刻之前的预设时长内目标业务类型的报文的接收情况,确定第二调度队列,将第二报文存储至第二调度队列;按照第二调度队列对应的转发策略,转发第二调度队列中的报文。
本申请实施例中,对于非目标业务类型的报文分为两种:原始报文和封装处理后的报文。当非目标业务类型的报文为原始报文时,第二数据中心的网关在转发第二调度队列中的报文时,可以直接转发第二调度队列中的原始报文。
当非目标业务类型的报文为封装处理后的报文时,第二数据中心的网关可以获得第二调度队列中的报文对应的原始报文;之后,再向第二数据中心内转发原始报文。例如,封装处理后的报文为SRv6报文,第二数据中心的网关在转发第二调度队列中的报文时,可以剥离第二调度队列中的报文的SRv6封装,得到原始报文,之后,向第二数据中心内转发原始报文。
在一些实施例中,步骤S103将第一报文存储至第一调度队列可以包括:若第一调度队列中存储有其他业务类型的报文,则中间设备清空第一调度队列;将第一报文存储至清空后的第一调度队列。具体实现可参见上述图9部分的相关描述,此处不再赘述。
在一些实施例中,如图11所示,当网络设备为第二数据中心的网关时,图10中步骤S104可以包括步骤S111-S112。
步骤S111,剥离第一调度队列中的报文的SRv6封装,得到原始报文。
其中,目标业务类型的报文为SRv6报文,第一调度队列中的报文为目标业务类型的报文,第二数据中心的网关在转发第一调度队列中的报文时,剥离第一调度队列中的报文的SRv6封装,得到原始报文。
步骤S112,向第二数据中心内的第二主机转发原始报文。
本申请实施例中,第二数据中心的网关剥离第一报文的SRv6封装,得到原始报文后,向第二数据中心内的第二主机转发该原始报文,便于数据中心内部的第二主机对原始报文的处理。
本申请实施例中,广域网中可以部署控制器,如图4中的控制器,控制器可以为SDN(Software Defined Network,软件定义网络)控制器或其他类型的控制器。
上述第一数据中心的网关、中间设备、第二数据中心的网关中预先存储的信息,如SID列表、目标业务类型的报文所属确定性流对应的第一调度队列、第一调度队列的调度周期、目标业务类型的报文特征、第一调度队列与目标分段路由列表中网络设备的分段标识的对应关系、SRv6报文的封装信息等,可以由控制器统一下发。
具体可以为:
控制器向各个网络设备下发的第一路径信息,第一路径信息包括:调度信息和第一调度队列的调度周期,调度信息指示目标业务类型的报文所属确定性流对应第一调度队列;网络设备存储接收到第一路径信息,并基于第一路径信息,将目标业务类型的报文所属确定性流与网络设备中的第一调度队列关联, 并按照第一路径信息包括的调度周期,配置网络设备中的第一调度队列的调度周期。
控制器向各个网络设备下发的第二路径信息,第二路径信息包括:指定路径对应的目标分段路由列表、目标业务类型的报文特征、以及第一调度队列与目标分段路由列表中网络设备的分段标识的对应关系中的一种或多种;网络设备存储接收到第二路径信息,并基于第一路径信息,执行以下至少一项操作:
将指定路径与目标分段路由列表关联;
目标业务类型的报文特征与目标业务类型的报文所属确定性流关联;
建立第一调度队列与目标分段路由列表中网络设备的分段标识的对应关系。
本申请实施例中,上述第一路径信息和第二路径信息包括的所有信息可以均由控制器下发;第一路径信息和第二路径信息包括的部分信息由控制器下发,控制器未下发的信息由网络设备本地存储的信息替代。
例如,控制器下发了第一路径信息,未下发第二路径信息,则网络设备在转发目标业务类型的报文时,基于控制器下发的第一路径信息,网络设备预先存储的第二路径信息,配置网络设备中的相关信息,以协助网络设备转发报文。
通过上述控制器完成对各个网络设备中路径信息的配置,便于对广域网中的各个网络设备进行统一管理,保证目标业务类型的报文的确定性传输。
为了确定准确的路径信息,控制器可以收集广域网的指定网络资源信息,建立全网资源模型。其中,指定网络资源信息可以包括网络拓扑信息、网络带宽、网络时延、网络抖动中的一种或多种。另外,控制器收集对目标业务类型的确定性流的传输指标,根据收集的指定网络资源信息和确定性性流的传输指标,结合目标业务类型的报文特征,计算满足传输指标的确定性路径和SRv6报文的封装信息,得到第一路径信息和第二路径信息。
在一些实施例中,控制器可以采用带内网络遥测方式,对广域网进行探测,得到指定网络资源信息,基于带内网络遥测方式探测得到的指定网络资源信息,可以准确的估算出每条路径所需的时延和抖动,进而确定满足传输指标的确定性路径,并将确定性路径对应的第一路径信息和第二路径信息下发给各个网络设备。该实施例中,通过带内网络遥测方式,可以准确的估算出每条路径所需的时延和抖动,此时,即使只有两端的数据中心的网关执行上述数据处理方法,中间设备不执行上述数据处理方法,也可以保证所选择的确定性路径在QoS(Quality of Service,服务质量)约束范围内有效,实现确定性传输,这降低了对中间设备的要求,进一步节约了网络部署成本。
本申请实施例中,控制器可以根据具体的业务需求,灵活的调度广域网的资源,无需占用固定的带宽等网络资源,提升了网络使用效率。
另外,本申请实施例提供的网络使用效率较高、网络设备要求较低的数据处理方法,实现了有界时延的确定性传输,可以支撑未来更多的对广域网无损有要求的应用场景。
下面结合图4对本申请实施例提供对数据处理方法进行说明。
步骤1,数据中心1内的RDMA业务主机通过数据中心1内部的无损网络技术,将原始报文发送至网关PE1。
PE1从接收到原始报文开始,进行广域网数据处理流程,参见步骤2。
步骤2,PE1根据控制器配置的RDMA业务的报文特征,将RDMA业务的报文识别出来,进行映射、确定性处理调度后,并按周期对报文做平滑整形,封装得到相应的SRv6报文,并发送至中间的设 备P。具体可参见上述图5-9部分的描述。
步骤3,P设备解析SRv6报文中所携带的SID信息,进行确定性相关的处理调度,并发送SRv6报文至数据中心2的网关PE2。具体可参见上述图5、图10部分的描述。
这里的P设备可以为上述图4中的P1、P2、P5或P6。
步骤4,PE2根据SRv6报文中携带的SID信息,进行确定性的处理调度后,剥离SRv6封装,将原始报文发送至数据中心2内部的设备,至此,广域网内的数据处理完毕。具体可参见上述图5、图10-11部分的描述。
步骤5,数据中心2内部的设备通过内部无损网络技术,将原始报文发送至RDMA业务主机。
本申请实施例中,对于其他业务类型的数据处理过程中,只需要按照best-effort方式进行处理即可。
本申请实施例中,将广域网的确定性方案与数据中心内的无损网络技术结合,可以实现RoCE业务的报文的传输过程丢包率降低和确定性有界时延,将无损网络技术从数据中心内部拓展到广域网,可以达到跨数据中心的端到端无损网络的效果。
另外,网络设备支持调度队列预留功能,即在广域网中不存在目标业务类型的报文时,可以将其他业务类型的报文存储在RDMA业务的报文所属确定性流对应的调度队列,通过对RDMA业务的报文的精确识别、映射、封装、调度,实现了端到端的确定性无损传输。
基于图12所示的网络拓扑,利用实验室环境,对本申请实施例提供的数据处理方法进行实际测试。图12中,在服务器1和服务器2上运行RoCE业务的报文。服务器1和服务器2分别连接交换机SW1和SW2,在SW1和SW2中间插入损伤仪,模拟广域网的时延和抖动。设备间连接的端口速率分别为10G。
在测试时,服务器1和服务器2发送RoCE业务的报文,在损伤仪上模拟时延和抖动参数,来观察记录RoCE业务的吞吐量。
基于测试结果进行分析得到:
1、抖动对于RoCE业务的报文最终体现的效果是时延,从测试的结果来看,造成RoCE业务的吞吐量受到的影响体现在总体时延:固定时延+抖动的结果。
2、当总体时延在7ms(700km传输距离)以内时,RoCE业务的报文不受影响,RoCE业务的吞吐量为100%。
3、当总体时延在10ms(1000km传输距离)时,RoCE业务的吞吐量为65%。
4、当总体时延在15ms(1500km传输距离)时,RoC业务的E吞吐量为45%。
从实际测试可以确定,本申请实施例提供的数据处理方法在实现广域网确定性传输时,可以严格将抖动控制在20us之内。
综上所述,本申请实施例提供的数据处理方法,可以实现RoCE业务在广域传输0丢包和确定性的有界时延,达到跨数据中心的RDMA业务可用性。
与上述数据处理方法对应,本申请实施例提供了一种数据处理装置,如图13所示,应用于广域网中的网络设备,网络设备位于第一数据中心的网关到第二数据中心的网关的指定路径上,该装置包括:
第一存储单元131,用于将获取到的目标业务类型的第一报文存储至第一报文所属确定性流对应的第一调度队列,第一报文的源地址为第一数据中心中的第一主机的地址,第一报文的目的地址为第二数据中心中的第二主机的地址,所述第一报文所属确定性流的转发路径为所述指定路径;
转发单元132,用于当到达第一调度队列的调度周期时,转发第一调度队列中的报文。
在一些实施例中,网络设备为第一数据中心的网关;第一存储单元131,具体可以用于:
接收第二报文,第二报文的源地址为第一数据中心中的第一主机的地址,第二报文的目的地址为第二数据中心中的第二主机的地址;
若第二报文的特征与预配置的目标业务类型的报文特征匹配,则基于第二报文,生成沿指定路径转发的第一报文;
将第一报文存储至第一调度队列,第一调度队列为第一报文所属确定性流对应的调度队列。
在一些实施例中,第一存储单元131,具体可以用于:
获取指定路径对应的目标分段路由列表,目标分段路由列表中的网络设备的分段标识对应第一调度队列;
基于目标分段路由列表,将第二报文封装为第一报文,第一报文为SRv6报文。
在一些实施例中,第一存储单元131,还可以用于若第二报文的特征与预配置的目标业务类型的报文特征不匹配,则根据在当前时刻之前的预设时长内目标业务类型的报文的接收情况,确定第二调度队列;将第二报文存储至第二调度队列;
转发单元132,还可以用于按照第二调度队列对应的转发策略,转发第二调度队列中的报文。
在一些实施例中,网络设备为指定路径上的中间设备或第二数据中心的网关,目标业务类型的报文为SRv6报文;
第一存储单元131,具体可以用于:
接收第二报文,第二报文的源地址为第一数据中心中的第一主机的地址,第二报文的目的地址为第二数据中心中的第二主机的地址;
若第二报文为SRv6报文,且网络设备支持SRv6,则从第二报文中,获取目标分段路由列表;
若目标分段路由列表中的网络设备的分段标识对应第一调度队列,则将第二报文作为目标业务类型的第一报文,并将第一报文存储至第一调度队列,第一调度队列为第一报文所属确定性流对应的调度队列。
在一些实施例中,第一存储单元131,还可以用于:
若第二报文不是SRv6报文,或者,网络设备不支持SRv6,或者,未获取到目标分段路由列,或者,目标分段路由列表中的网络设备的分段标识没有对应的调度队列,则根据在当前时刻之前的预设时长内目标业务类型的报文的接收情况,确定第二调度队列;将第二报文存储至第二调度队列;
转发单元132,还可以用于按照第二调度队列对应的转发策略,转发第二调度队列中的报文。
在一些实施例中,当网络设备为第二数据中心的网关时,转发单元132,具体可以用于:
获得第二调度队列中的报文对应的原始报文;
按照所述第二调度队列对应的转发策略,向第二数据中心内的第二主机转发原始报文。
在一些实施例中,当网络设备为第二数据中心的网关时,转发单元132,具体可以用于:
剥离第一调度队列中的报文的SRv6封装,得到原始报文;
向第二数据中心内的第二主机转发原始报文。
在一些实施例中,第一存储单元131,具体可以用于:
若在当前时刻之前的预设时长内未接收到目标业务类型的报文,则从网络设备中的所有调度队列中, 确定第二调度队列;
若在当前时刻之前的预设时长内接收到目标业务类型的报文,则从网络设备中除第一调度队列外的所有调度队列中,确定第二调度队列。
在一些实施例中,转发单元132,具体可以用于:
按照先进先出的方式,转发第二调度队列中的报文;或者,
当到达第二调度队列的调度周期时,转发第二调度队列中的报文。
在一些实施例中,第一存储单元131,具体可以用于:
若第一调度队列中存储有其他业务类型的报文,则清空第一调度队列;
将第一报文存储至清空后的第一调度队列。
在一些实施例中,上述数据处理装置还可以包括:
第一接收单元,用于接收控制器下发的第一路径信息,第一路径信息包括调度信息和第一调度队列的调度周期,调度信息指示目标业务类型的报文所属确定性流对应第一调度队列;
第一配置单元,用于将目标业务类型的报文所属确定性流与网络设备中的第一调度队列关联,并按照第一路径信息包括的调度周期,配置网络设备中的第一调度队列的调度周期。
在一些实施例中,上述数据处理装置还可以包括:
第二接收单元,用于接收控制器下发的第二路径信息,第二路径信息包括:指定路径对应的目标分段路由列表、目标业务类型的报文特征、以及第一调度队列与目标分段路由列表中网络设备的分段标识的对应关系中的一种或多种;
第二配置单元,用于根据第二路径信息,执行以下至少一项操作:
将指定路径与目标分段路由列表关联;
目标业务类型的报文特征与目标业务类型的报文所属确定性流关联;
建立第一调度队列与目标分段路由列表中网络设备的分段标识的对应关系。
在一些实施例中,第一路径信息和第二路径信息可以为控制器根据广域网的指定网络资源信息和目标业务类型的确定性流的传输指标确定。
在一些实施例中,指定网络资源信息可以为控制器采用带内网络遥测方式对广域网进行探测得到的。
在一些实施例中,指定网络资源信息可以包括网络拓扑信息、网络带宽、网络时延、网络抖动中的一种或多种。
本申请实施例提供的数据处理装置中,广域网中的网络设备将RDMA业务等对网络传输可靠性要求高的目标业务类型的报文存储在指定的调度队列中,当到达该调度队列的调度周期时,转发该调度队列中的报文。因为,指定的调度队列的调度周期是确定的,在指定的调度队列的调度周期,转发对网络传输可靠性要求高的目标业务类型的报文,可以保证目标业务类型的报文的传输时延和抖动是确定的,实现目标业务类型的报文的确定性的有界时延的传输,减少了因时延和抖动不确定带来的丢包问题,提高了RDMA业务等对网络传输可靠性要求高的业务应用于广域网时的性能。
本申请实施例提供的数据处理装置中,只需要对网络设备进行软件上的改进,就可以实现按照指定的调度周期转发相应的调度队列中的报文,实现对网络传输可靠性要求高的业务报文的确定性的有界时延的传输。相对于采用复杂度技术,在网络设备上设置昂贵的硬件,以专门用于传输对网络传输可靠性要求高的业务报文,本申请实施例提供的技术方案降低了对网络设备的要求,节约了网络部署成本。
本申请实施例还提供了一种网络设备,如图14所示,包括处理器141和机器可读存储介质142,所述机器可读存储介质142存储有能够被所述处理器141执行的机器可执行指令,所述处理器141被所述机器可执行指令促使:实现上述图4-图12任一实施例所述的方法步骤。
本申请实施例中,网络设备可以为第一数据中心的网关、中间设备或第二数据中心的网关。
机器可读存储介质可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,机器可读存储介质还可以是至少一个位于远离前述处理器的存储装置。
处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
在本申请提供的又一实施例中,还提供了一种机器可读存储介质,该机器可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述图2-图12任一实施例所述的方法步骤。
在本申请提供的又一实施例中,还提供了一种计算机程序,计算机程序被处理器执行时实现上述图2-图12任一实施例所述的方法步骤。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置、网络设备、存储介质和计算机程序实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所作的任何修改、等同替换、改进等,均包含在本申请的保护范围内。
Claims (35)
- 一种数据处理方法,其特征在于,应用于广域网中的网络设备,所述网络设备位于第一数据中心的网关到第二数据中心的网关的指定路径上,所述方法包括:将获取到的目标业务类型的第一报文存储至所述第一报文所属确定性流对应的第一调度队列,所述第一报文的源地址为所述第一数据中心中的第一主机的地址,所述第一报文的目的地址为所述第二数据中心中的第二主机的地址,所述第一报文所属确定性流的转发路径为所述指定路径;当到达所述第一调度队列的调度周期时,转发所述第一调度队列中的报文。
- 根据权利要求1所述的方法,其特征在于,所述网络设备为所述第一数据中心的网关;所述将获取到的目标业务类型的第一报文存储至所述第一报文所属确定性流对应的第一调度队列的步骤,包括:接收第二报文,所述第二报文的源地址为所述第一数据中心中的第一主机的地址,所述第二报文的目的地址为所述第二数据中心中的第二主机的地址;若所述第二报文的特征与预配置的目标业务类型的报文特征匹配,则基于所述第二报文,生成沿所述指定路径转发的第一报文;将所述第一报文存储至第一调度队列,所述第一调度队列为所述第一报文所属确定性流对应的调度队列。
- 根据权利要求2所述的方法,其特征在于,所述基于所述第二报文,生成沿所述指定路径转发的第一报文的步骤,包括:获取所述指定路径对应的目标分段路由列表,所述目标分段路由列表中的所述网络设备的分段标识对应第一调度队列;基于所述目标分段路由列表,将所述第二报文封装为第一报文,所述第一报文为SRv6报文。
- 根据权利要求2所述的方法,其特征在于,所述方法还包括:若所述第二报文的特征与预配置的目标业务类型的报文特征不匹配,则根据在当前时刻之前的预设时长内所述目标业务类型的报文的接收情况,确定第二调度队列;将所述第二报文存储至所述第二调度队列;按照所述第二调度队列对应的转发策略,转发所述第二调度队列中的报文。
- 根据权利要求1所述的方法,其特征在于,所述网络设备为所述指定路径上的中间设备或所述第二数据中心的网关,所述目标业务类型的报文为SRv6报文;所述将获取到的目标业务类型的第一报文存储至所述第一报文所属确定性流对应的第一调度队列的步骤,包括:接收第二报文,所述第二报文的源地址为所述第一数据中心中的第一主机的地址,所述第二报文的目的地址为所述第二数据中心中的第二主机的地址;若所述第二报文为SRv6报文,且所述网络设备支持SRv6,则从所述第二报文中,获取目标分段路由列表;若所述目标分段路由列表中的所述网络设备的分段标识对应第一调度队列,则将所述第二报文作为目标业务类型的第一报文,并将所述第一报文存储至第一调度队列,所述第一调度队列为所述第一报文所属确定性流对应的调度队列。
- 根据权利要求5所述的方法,其特征在于,所述方法还包括:若所述第二报文不是SRv6报文,或者,所述网络设备不支持SRv6,或者,未获取到所述目标分段路由列,或者,所述目标分段路由列表中的所述网络设备的分段标识没有对应的调度队列,则根据在当前时刻之前的预设时长内所述目标业务类型的报文的接收情况,确定第二调度队列;将所述第二报文存储至所述第二调度队列;按照所述第二调度队列对应的转发策略,转发所述第二调度队列中的报文。
- 根据权利要求6所述的方法,其特征在于,当所述网络设备为所述第二数据中心的网关时,所述按照所述第二调度队列对应的转发策略,转发所述第二调度队列中的报文的步骤,包括:获得所述第二调度队列中的报文对应的原始报文;按照所述第二调度队列对应的转发策略,向所述第二数据中心内的第二主机转发所述原始报文。
- 根据权利要求6或7所述的方法,其特征在于,当所述网络设备为所述第二数据中心的网关时,所述转发所述第一调度队列中的报文的步骤,包括:剥离所述第一调度队列中的报文的SRv6封装,得到原始报文;向所述第二数据中心内的第二主机转发所述原始报文。
- 根据权利要求4或6所述的方法,其特征在于,所述根据在当前时刻之前的预设时长内所述目标业务类型的报文的接收情况,确定第二调度队列的步骤,包括:若在当前时刻之前的预设时长内未接收到所述目标业务类型的报文,则从所述网络设备中的所有调度队列中,确定第二调度队列;若在当前时刻之前的所述预设时长内接收到所述目标业务类型的报文,则从所述网络设备中除所述第一调度队列外的所有调度队列中,确定第二调度队列。
- 根据权利要求4或6所述的方法,其特征在于,所述按照所述第二调度队列对应的转发策略,转发所述第二调度队列中的报文的步骤,包括:按照先进先出的方式,转发所述第二调度队列中的报文;或者,当到达第二调度队列的调度周期时,转发所述第二调度队列中的报文。
- 根据权利要求2-7任一项所述的方法,其特征在于,所述将所述第一报文存储至第一调度队列的步骤,包括:若第一调度队列中存储有其他业务类型的报文,则清空所述第一调度队列;将所述第一报文存储至清空后的第一调度队列。
- 根据权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:接收控制器下发的第一路径信息,所述第一路径信息包括调度信息和所述第一调度队列的调度周期,所述调度信息指示所述目标业务类型的报文所属确定性流对应所述第一调度队列;将所述目标业务类型的报文所属确定性流与所述网络设备中的第一调度队列关联,并按照所述第一路径信息包括的调度周期,配置所述网络设备中的第一调度队列的调度周期。
- 根据权利要求12所述的方法,其特征在于,所述方法还包括:接收所述控制器下发的第二路径信息,所述第二路径信息包括:所述指定路径对应的目标分段路由列表、所述目标业务类型的报文特征、以及所述第一调度队列与所述目标分段路由列表中所述网络设备的分段标识的对应关系中的一种或多种;根据所述第二路径信息,执行以下至少一项操作:将所述指定路径与所述目标分段路由列表关联;所述目标业务类型的报文特征与所述目标业务类型的报文所属确定性流关联;建立所述第一调度队列与所述目标分段路由列表中所述网络设备的分段标识的对应关系。
- 根据权利要求13所述的方法,其特征在于,所述第一路径信息和第二路径信息为所述控制器根据所述广域网的指定网络资源信息和所述目标业务类型的确定性流的传输指标确定。
- 根据权利要求14所述的方法,其特征在于,所述指定网络资源信息为所述控制器采用带内网络遥测方式对所述广域网进行探测得到的。
- 根据权利要求14所述的方法,其特征在于,所述指定网络资源信息包括网络拓扑信息、网络带宽、网络时延、网络抖动中的一种或多种。
- 一种数据处理装置,其特征在于,应用于广域网中的网络设备,所述网络设备位于第一数据中心的网关到第二数据中心的网关的指定路径上,所述装置包括:第一存储单元,用于将获取到的目标业务类型的第一报文存储至所述第一报文所属确定性流对应的第一调度队列,所述第一报文的源地址为所述第一数据中心中的第一主机的地址,所述第一报文的目的地址为所述第二数据中心中的第二主机的地址,所述第一报文所属确定性流的转发路径为所述指定路径;转发单元,用于当到达所述第一调度队列的调度周期时,转发所述第一调度队列中的报文。
- 根据权利要求17所述的装置,其特征在于,所述网络设备为所述第一数据中心的网关;所述第一存储单元,具体用于:接收第二报文,所述第二报文的源地址为所述第一数据中心中的第一主机的地址,所述第二报文的目的地址为所述第二数据中心中的第二主机的地址;若所述第二报文的特征与预配置的目标业务类型的报文特征匹配,则基于所述第二报文,生成沿所述指定路径转发的第一报文;将所述第一报文存储至第一调度队列,所述第一调度队列为所述第一报文所属确定性流对应的调度队列。
- 根据权利要求18所述的装置,其特征在于,所述第一存储单元,具体用于:获取所述指定路径对应的目标分段路由列表,所述目标分段路由列表中的所述网络设备的分段标识对应第一调度队列;基于所述目标分段路由列表,将所述第二报文封装为第一报文,所述第一报文为SRv6报文。
- 根据权利要求18所述的装置,其特征在于,所述第一存储单元,还用于若所述第二报文的特征与预配置的目标业务类型的报文特征不匹配,则根据在当前时刻之前的预设时长内所述目标业务类型的报文的接收情况,确定第二调度队列;将所述第二报文存储至所述第二调度队列;所述转发单元,还用于按照所述第二调度队列对应的转发策略,转发所述第二调度队列中的报文。
- 根据权利要求17所述的装置,其特征在于,所述网络设备为所述指定路径上的中间设备或所述第二数据中心的网关,所述目标业务类型的报文为SRv6报文;所述第一存储单元,具体用于:接收第二报文,所述第二报文的源地址为所述第一数据中心中的第一主机的地址,所述第二报文的 目的地址为所述第二数据中心中的第二主机的地址;若所述第二报文为SRv6报文,且所述网络设备支持SRv6,则从所述第二报文中,获取目标分段路由列表;若所述目标分段路由列表中的所述网络设备的分段标识对应第一调度队列,则将所述第二报文作为目标业务类型的第一报文,并将所述第一报文存储至第一调度队列,所述第一调度队列为所述第一报文所属确定性流对应的调度队列。
- 根据权利要求21所述的装置,其特征在于,所述第一存储单元,还用于:若所述第二报文不是SRv6报文,或者,所述网络设备不支持SRv6,或者,未获取到所述目标分段路由列,或者,所述目标分段路由列表中的所述网络设备的分段标识没有对应的调度队列,则根据在当前时刻之前的预设时长内所述目标业务类型的报文的接收情况,确定第二调度队列;将所述第二报文存储至第二调度队列;所述转发单元,还用于按照所述第二调度队列对应的转发策略,转发所述第二调度队列中的报文。
- 根据权利要求22所述的装置,其特征在于,当所述网络设备为所述第二数据中心的网关时,所述转发单元,具体用于:获得所述第二调度队列中的报文对应的原始报文;按照所述第二调度队列对应的转发策略,向所述第二数据中心内的第二主机转发所述原始报文。
- 根据权利要求22或23所述的装置,其特征在于,当所述网络设备为所述第二数据中心的网关时,所述转发单元,具体用于:剥离所述第一调度队列中的报文的SRv6封装,得到原始报文;向所述第二数据中心内的第二主机转发所述原始报文。
- 根据权利要求20或22所述的装置,其特征在于,所述第一存储单元,具体用于:若在当前时刻之前的预设时长内未接收到所述目标业务类型的报文,则从所述网络设备中的所有调度队列中,确定第二调度队列;若在当前时刻之前的所述预设时长内接收到所述目标业务类型的报文,则从所述网络设备中除所述第一调度队列外的所有调度队列中。
- 根据权利要求20或22所述的装置,其特征在于,所述转发单元,具体用于:按照先进先出的方式,转发所述第二调度队列中的报文;或者,当到达所述第二调度队列的调度周期时,转发所述第二调度队列中的报文。
- 根据权利要求18-23任一项所述的装置,其特征在于,所述第一存储单元,具体用于:若第一调度队列中存储有其他业务类型的报文,则清空所述第一调度队列;将所述第一报文存储至清空后的第一调度队列。
- 根据权利要求17-23任一项所述的装置,其特征在于,所述装置还包括:第一接收单元,用于接收控制器下发的第一路径信息,所述第一路径信息包括调度信息和所述第一调度队列的调度周期,所述调度信息指示所述目标业务类型的报文所属确定性流对应所述第一调度队列;第一配置单元,用于将所述目标业务类型的报文所属确定性流与所述网络设备中的第一调度队列关联,并按照所述第一路径信息包括的调度周期,配置所述网络设备中的第一调度队列的调度周期。
- 根据权利要求28所述的装置,其特征在于,所述装置还包括:第二接收单元,用于接收所述控制器下发的第二路径信息,所述第二路径信息包括:所述指定路径对应的目标分段路由列表、所述目标业务类型的报文特征、以及所述第一调度队列与所述目标分段路由列表中所述网络设备的分段标识的对应关系中的一种或多种;第二配置单元,用于根据所述第二路径信息,执行以下至少一项操作:将所述指定路径与所述目标分段路由列表关联;所述目标业务类型的报文特征与所述目标业务类型的报文所属确定性流关联;建立所述第一调度队列与所述目标分段路由列表中所述网络设备的分段标识的对应关系。
- 根据权利要求29所述的装置,其特征在于,所述第一路径信息和第二路径信息为所述控制器根据所述广域网的指定网络资源信息和所述目标业务类型的确定性流的传输指标确定。
- 根据权利要求30所述的装置,其特征在于,所述指定网络资源信息为所述控制器采用带内网络遥测方式对所述广域网进行探测得到的。
- 根据权利要求30所述的装置,其特征在于,所述指定网络资源信息包括网络拓扑信息、网络带宽、网络时延、网络抖动中的一种或多种。
- 一种网络设备,其特征在于,包括处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的计算机程序,所述处理器被所述计算机程序促使:实现权利要求1-16任一所述的方法步骤。
- 一种机器可读存储介质,其特征在于,所述机器可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时,实现权利要求1-16任一所述的方法步骤。
- 一种计算机程序,其特征在于,所述计算机程序被处理器执行时,实现权利要求1-16任一所述的方法步骤。