CN113746746A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN113746746A
CN113746746A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN202010480984.3A
Other languages
Chinese (zh)
Inventor
任江兴
王临春
夏洪淼
喻径舟
何子键
亚利克斯·塔尔
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010480984.3A priority Critical patent/CN113746746A/en
Priority to PCT/CN2021/096537 priority patent/WO2021244404A1/en
Publication of CN113746746A publication Critical patent/CN113746746A/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/28 Flow control; Congestion control in relation to timing considerations
    • H04L47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames


Abstract

The application discloses a data processing method and device. The method includes: obtaining a credit request message and determining, according to the credit request message, the expected arrival time of the data message corresponding to the credit request message; determining, according to the expected arrival time, a target sending time period for the data message corresponding to the credit request message; and then obtaining the data message corresponding to the credit request message and sending it in the target sending time period. The technical solution provided by the application can effectively avoid collision of data messages and achieve congestion control.

Description

Data processing method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data processing method and device.
Background
In a statistical multiplexing network, end-to-end congestion control dynamically controls how much traffic, or how fast traffic, enters the network according to the capacity and congestion degree of the network. Congestion control is a key technology for reducing packet loss, improving network bandwidth utilization, and reducing transmission delay. It can also reduce the size of the message cache of a network device, thereby reducing device cost and design difficulty. Conventional congestion control is mainly implemented by the host through the Transmission Control Protocol (TCP). For example, a network device may explicitly feed back congestion signals to assist the host in rate adjustment. However, rate-based congestion control in principle adjusts the sending rate of the source according to network congestion information, and cannot accurately control the amount (e.g., the number of bytes) of traffic sent.
A credit (credit) -based congestion control mechanism can control how much traffic is sent, such as the method shown in fig. 1. The destination end receives credit request messages sent by a plurality of source ends; a dispatcher at the destination end then performs flow control on the source ends based on the bandwidth and congestion conditions of the egress port, and distributes credits to the source ends through credit response messages. After receiving a credit response message, a source end may send a data message of the corresponding number of bytes.
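The dispatcher behavior described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class name, the per-port credit pool, and the byte size of one credit are all assumptions made for the example.

```python
BYTES_PER_CREDIT = 1024  # assumed: one credit authorizes this many bytes


class Scheduler:
    """Hypothetical destination-side dispatcher granting credits per egress port."""

    def __init__(self, port_capacity_credits):
        # credits still available on the egress port in the current round
        self.available = port_capacity_credits

    def handle_credit_request(self, requested_credits):
        # grant as much of the request as the egress port can absorb;
        # the granted count is carried back in the credit response message
        granted = min(requested_credits, self.available)
        self.available -= granted
        return granted


sched = Scheduler(port_capacity_credits=8)
assert sched.handle_credit_request(5) == 5  # full grant
assert sched.handle_credit_request(5) == 3  # partial grant: only 3 credits left
```

A source end receiving a grant of `g` credits may then send up to `g * BYTES_PER_CREDIT` bytes of data.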
However, in the above method, when a plurality of source ends send data messages to a destination end, collisions may still occur and cause congestion.
Disclosure of Invention
The application provides a data processing method and equipment, which can effectively avoid collision of data messages and realize congestion control.
In a first aspect, an embodiment of the present application provides a data processing method applied to a first network device. The method includes: obtaining a credit request message; determining, according to the credit request message, the expected arrival time of the data message corresponding to the credit request message; determining, according to the expected arrival time, a target sending time period for the data message corresponding to the credit request message; obtaining the data message corresponding to the credit request message; and sending the data message corresponding to the credit request message in the target sending time period.
According to the technical solution provided by the embodiment of the application, the first network device determines the target sending time period of the data message according to the expected arrival time of the data message corresponding to the credit request message. On the one hand, because the target sending time period is determined from the expected arrival time, the time differences of data messages sent by different source ends to the destination end are taken into account; even if the static-path round-trip delays from different source ends to the destination end differ, the data messages will not collide, congestion and packet loss are avoided, and congestion control is accurately achieved. On the other hand, the technical solution may be applied not only to the source end and destination end of the data message but also to intermediate nodes that transmit it; that is, the target sending time period at each intermediate node is also considered, so that flow control is achieved at all nodes that transmit the data message.
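The first-aspect steps can be sketched end to end. Everything here is an illustrative assumption: the request is a dict whose `"qdelay"` field stands in for the first time information, time is modeled as integer sending-time-period indices, and the single-period selection rule is a simplification of the patent's more general scheme.

```python
def process_credit_request(req, now, periods):
    """Hypothetical sketch of the first-aspect method on the first network device.

    req     -- dict with 'qdelay' (stand-in for first time information) and
               'credits' (number of credits the data message needs)
    now     -- acquisition time of the credit request message (period index)
    periods -- mutable list of available credits per sending time period
    """
    # step 1: expected arrival time of the corresponding data message
    eta = now + req["qdelay"]
    # step 2: pick the first period at/after the ETA with enough available credit
    for t in range(eta, len(periods)):
        if periods[t] >= req["credits"]:
            periods[t] -= req["credits"]  # reserve the credit
            return t                      # the target sending time period
    return None                           # no suitable period: request not granted


periods = [0, 0, 2, 5, 5]  # available credits per sending time period
target = process_credit_request({"qdelay": 2, "credits": 3}, now=0, periods=periods)
assert target == 3  # ETA is period 2, but period 2 holds only 2 credits
```

When the data message later arrives, it is sent in period `target`; the reservation guarantees the available credit there satisfies its requirement.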
In one possible implementation, the available credit in the target sending time period satisfies the requirement of the data message corresponding to the credit request message.
According to the technical solution provided by the embodiment of the application, sending the data message in the target sending time period accurately controls when the data message is sent, avoiding congestion and packet loss. Moreover, because the available credit in the target sending time period satisfies the requirement of the data message corresponding to the credit request message, the amount of data sent (which may also be called the amount of sending traffic) is controlled, further ensuring freedom from congestion and packet loss.
In a possible implementation manner, the determining a target sending time period of a data packet corresponding to the credit request packet according to the expected arrival time includes: determining a target sending time period of a data message corresponding to the credit request message at the first sending port according to the expected arrival time; the sending of the data message corresponding to the credit request message in the target sending time period includes: and transmitting the data message corresponding to the credit request message through the first transmitting port in the target transmitting time period.
In a possible implementation manner, the credit request message includes an identifier of the first sending port. Or, the credit request message includes an identifier of the first sending port of the one or more network devices, and the identifier of the first sending port of the one or more network devices includes an identifier of the first sending port of the first network device.
According to the technical solution provided by the embodiment of the application, the credit request message includes the identifier of the first sending port; an intermediate node can forward the credit request message to other nodes according to this identifier, and the data message is sent through the first sending port. Because the data message is sent in a source-routing forwarding mode, the first network device can determine the expected arrival time of the data message in advance.
In one possible implementation, the first sending port includes M sending time periods, each sending time period including available credits, and M is greater than or equal to 2.
In this embodiment of the present application, that the first sending port includes M sending time periods can also be understood as: the first sending port is provided with M sending time periods.
In a possible implementation manner, the determining, according to the expected arrival time, a target transmission time period of the data packet corresponding to the credit request packet at the first transmission port includes: obtaining available credits of N sending time periods after an expected arrival time in M sending time periods, wherein M is greater than or equal to N, and N is greater than or equal to 1; and determining at least one transmission time period as a target transmission time period according to the available credit in the N transmission time periods.
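Selecting one or more of the N sending time periods after the expected arrival time can be sketched as below. This is a hedged illustration only: the greedy splitting of a request across consecutive periods is one plausible reading of "at least one transmission time period", not a rule stated by the patent, and all names are invented for the example.

```python
def select_target_periods(periods, eta, need, n):
    """Pick one or more of the N sending time periods after eta whose available
    credits jointly satisfy the request. Returns a list of (period, credits)
    pairs, or None if the N periods cannot cover the request."""
    picked = []
    for t in range(eta, min(eta + n, len(periods))):
        if need == 0:
            break
        take = min(periods[t], need)  # take what this period can offer
        if take:
            picked.append((t, take))
            need -= take
    return picked if need == 0 else None


# M = 5 sending time periods with their available credits; ETA = period 2,
# request needs 5 credits, and N = 3 periods after the ETA may be considered.
periods = [0, 0, 2, 4, 8]
assert select_target_periods(periods, eta=2, need=5, n=3) == [(2, 2), (3, 3)]
```

Here two target sending time periods are chosen because no single period after the ETA holds enough available credit on its own.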
In a possible implementation manner, the credit request message further includes first time information; the determining the expected arrival time of the data packet corresponding to the credit request packet according to the credit request packet includes: and determining the expected arrival time according to the acquisition time of the credit request message and the first time information.
In one possible implementation, the method further includes: updating first time information included in the credit request message according to the expected arrival time and the target sending time period; and sending the credit request message to the next-stage network equipment of the first network equipment, wherein the credit request message comprises the updated first time information.
In the embodiment of the application, by updating the first time information, the first network device provides the next-stage network device with more accurate information for determining the expected arrival time, improving the precision and accuracy of determining the target sending time period.
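The hop-by-hop update of the first time information can be sketched as accumulating each hop's buffer queuing delay before the credit request message is forwarded. The field layout and the assumption that a hop's queuing delay equals its target sending period minus its expected arrival time (both as integer period indices) are illustrative, not prescribed by the patent.

```python
def update_first_time_info(qdelay, eta, target_period):
    """Add this hop's buffer queuing delay (expected sending time minus expected
    arrival time) to the accumulated Qdelay carried in the credit request
    message, before forwarding it to the next-stage network device."""
    return qdelay + (target_period - eta)


# A hop whose data message must wait 3 sending time periods adds 3 to the
# accumulated first time information:
assert update_first_time_info(qdelay=4, eta=10, target_period=13) == 7
```

The next-stage device then derives its own expected arrival time from the message's acquisition time and this updated accumulated delay.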
In one possible implementation, the first time information is used to indicate a buffer queuing delay and/or a static path round trip delay of one or more network devices.
In a possible implementation manner, the first sending port includes P buffer queues, and different buffer queues correspond to different sending time periods, where P is greater than or equal to 2.
In a possible implementation manner, a data message corresponding to the credit request message includes an identifier of a target sending time period of one or more network devices, where the identifier of the target sending time period of the one or more network devices includes an identifier of a target sending time period of a first network device; the sending, in the target sending time period, the data packet corresponding to the credit request packet through the first sending port includes: caching the data message corresponding to the credit request message into a corresponding cache queue according to the identification of the target sending time period of the first network equipment; and at the first sending port, sending the data message corresponding to the credit request message through the corresponding cache queue.
In the technical solution provided in the embodiment of the present application, because the data message corresponding to the credit request message includes the identifiers of the target sending time periods of one or more network devices, each device can cache the data message according to its identifier and send it through the corresponding cache queue. The one or more network devices are the devices through which the data message passes during transmission. The technical solution can be applied not only to the source end and destination end of the data message but also to intermediate nodes, so that each node caches the data message in the corresponding cache queue according to the identifier of the target sending time period it determined, achieving flow control at all nodes that transmit the data message.
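The queue dispatch described above can be sketched as a port with one cache queue per sending time period. The class, method names, and the drain-at-period-start discipline are assumptions made for the example.

```python
from collections import defaultdict


class SendPort:
    """Hypothetical first sending port with one cache queue per sending
    time period (the P buffer queues of the implementation above)."""

    def __init__(self):
        self.queues = defaultdict(list)

    def enqueue(self, packet, period_id):
        # cache the data message in the queue matching the identifier of its
        # target sending time period, as carried in the message itself
        self.queues[period_id].append(packet)

    def drain(self, period_id):
        # when a sending time period begins, send everything cached for it
        sent, self.queues[period_id] = self.queues[period_id], []
        return sent


port = SendPort()
port.enqueue("pkt-A", period_id=7)
port.enqueue("pkt-B", period_id=7)
port.enqueue("pkt-C", period_id=8)
assert port.drain(7) == ["pkt-A", "pkt-B"]
assert port.drain(8) == ["pkt-C"]
```

Because each hop enqueues by the period identifier it determined for itself, every node along the path exercises its own flow control independently.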
In a possible implementation manner, the data message corresponding to the credit request message further includes an identifier of the first sending port of the one or more network devices, and the identifier of the first sending port of the one or more network devices includes the identifier of the first sending port of the first network device.
In this way, the first network device may send the data packet corresponding to the credit request packet through the first sending port of the first network device within the target sending time period of the first network device.
In a possible implementation manner, after the obtaining of the credit request packet and before the obtaining of the data packet corresponding to the credit request packet, the method further includes: receiving a credit response message corresponding to the credit request message, wherein the credit response message comprises the identification of the target sending time period of one or more network devices and corresponding available credit; and/or sending a credit response message corresponding to the credit request message, wherein the credit response message comprises the identification of the target sending time period of one or more network devices and the corresponding available credit.
In a second aspect, an embodiment of the present application provides a first network device, configured to execute the method in the first aspect or any possible implementation manner of the first aspect. The first network device comprises corresponding means with instructions to perform the method of the first aspect or any possible implementation of the first aspect.
For example, the first network device may include a transceiving unit and a processing unit.
In a third aspect, an embodiment of the present application provides a first network device, where the first network device includes a processor, configured to execute a program stored in a memory, and when the program is executed, the first network device is caused to perform a method as shown in the first aspect or any possible implementation manner of the first aspect.
In one possible implementation, the memory is located outside the first network device.
In one possible implementation, the memory is located within the first network device.
In one possible implementation, the first network device further includes a transceiver for receiving signals or transmitting signals.
In a fourth aspect, an embodiment of the present application provides a first network device, where the first network device includes a processing circuit and an interface circuit, where the processing circuit is configured to obtain a credit request packet; determining the expected arrival time of the data message corresponding to the credit request message according to the credit request message; determining a target sending time period of the data message corresponding to the credit request message according to the expected arrival time; the interface circuit is used for acquiring a data message corresponding to the credit request message; and outputting the data message corresponding to the credit request message in the target sending time period.
It is understood that reference may also be made to the following for specific implementations of the processing circuit and the interface circuit, which are not described in detail here.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program, which when run on a computer causes the method shown in the first aspect or any possible implementation manner of the first aspect to be performed.
The embodiment of the present application is not limited to a specific form of a computer. Illustratively, the computer may comprise a first network device.
Alternatively, the computer readable storage medium is used to store a computer program which, when executed, causes the method illustrated in the first aspect or any possible implementation of the first aspect described above to be performed.
In a sixth aspect, embodiments of the present application provide a computer program product comprising a computer program or computer code which, when run on a computer, causes the method illustrated in the first aspect or any possible implementation manner of the first aspect to be performed.
Alternatively, the computer program product comprises a computer program or computer code which, when run, causes the method illustrated in the first aspect or any possible implementation of the first aspect described above to be performed.
In a seventh aspect, an embodiment of the present application provides a computer program, which when running on a computer, performs the method shown in the first aspect or any possible implementation manner of the first aspect.
Alternatively, the computer program may be executed to perform a method as illustrated in the first aspect or any possible implementation manner of the first aspect.
Drawings
Fig. 1 is a schematic diagram of credit congestion control based on a network egress port according to an embodiment of the present application;
fig. 2a is a schematic diagram of different source-to-destination static RTTs provided by an embodiment of the present application;
fig. 2b is a schematic diagram illustrating different static RTTs from a source end to a destination end according to an embodiment of the present application;
fig. 3 is a schematic view of a scenario of a statistical multiplexing network according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 5a is a schematic diagram of a network structure provided in an embodiment of the present application;
fig. 5b is a schematic diagram of a network architecture provided in an embodiment of the present application;
FIG. 6a is a diagram illustrating a sliding window of credit provided by an embodiment of the present application;
fig. 6b is a schematic scheduling diagram of a buffer queue according to an embodiment of the present application;
fig. 7a is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 7b is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 7c is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 8a is a schematic diagram of a process for determining a target sending time period and a credit hop by hop according to an embodiment of the present application;
fig. 8b is a schematic diagram of a processing flow of a signaling message according to an embodiment of the present application;
fig. 8c is a schematic diagram of a forwarding flow of a data packet according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a first network device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a first network device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a first network device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is further described below with reference to the accompanying drawings.
The terms "first" and "second" and the like in the description, claims, and drawings of the present application are used solely to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprising" and "having," and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In this application, "at least one" means one or more, "a plurality" means two or more, and "at least two" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following items" or similar expressions refer to any combination of these items. For example, at least one (item) of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c.
Terms referred to in the present application are described below.
Credit request (credit request) message: a message used in a credit-based flow control mechanism. In some implementations, before the source end sends a data message to the destination end, the source end may send a credit request message to the destination end. That is, the credit request message may be used to request that the destination end authorize the source end to send the data message, and it may include the requested number of credits, where the number of credits corresponds to the data amount of the data message. In other implementations, before the source end sends the data message to the destination end, the source end may send a credit request message to the destination end through one or more intermediate nodes. In that case, the credit request message may be used to request that each of the one or more intermediate nodes authorize the source end to send the data message, and it likewise includes the requested number of credits. Optionally, the credit request message may also be used to request that the destination end authorize the source end to send the data message. An intermediate node may be understood as another node through which the credit request message passes in transmission from the source end to the destination end.
The credit response message corresponding to the credit request message: used to respond to the credit request message. For example, the credit response message may acknowledge (e.g., grant) the credit request message, or it may reject the credit request message. Where the credit response message acknowledges the credit request message, it includes available credits, which may be used to indicate the data amount of the data message. For example, suppose one credit represents a data message of a preset number of bytes, and the credit request message requests to send a data message corresponding to 5 credits; the credit request message is then used to ask the destination end to authorize the source end to send a data message of 5 preset-byte units. If the credit response message includes 5 credits, it indicates that the source end may send a data message of the corresponding size (e.g., 5 preset-byte units). If the credit response message includes 4 credits, it indicates that the source end may send a data message of 4 preset-byte units. It is understood that the application does not limit whether the number of credits included in the credit request message matches the available credits included in the credit response message.
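The byte-count arithmetic in this example is straightforward; the sketch below assumes a concrete value for the "preset bytes" per credit, which the patent leaves unspecified.

```python
PRESET_BYTES = 1024  # assumed size of the preset-byte unit one credit represents


def authorized_bytes(granted_credits):
    """Data amount a source end may send, given the credits in the response."""
    return granted_credits * PRESET_BYTES


assert authorized_bytes(5) == 5120  # full grant of the 5 requested credits
assert authorized_bytes(4) == 4096  # partial grant of 4 credits
```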
It is understood that the credit request message and the credit response message may also be collectively referred to as a signaling message.
The data message corresponding to the credit request message: the data message that the credit request message requests to send; its data amount (e.g., number of bytes) can be determined by the available credits included in the credit response message.
For example, as shown in fig. 5a, before node a sends a data packet to node D, node a may send a credit request packet to node D, and node D may feed back a credit response packet to node a. The node a then sends a data packet to node D. Alternatively, as shown in fig. 5b, the node a may send a credit request message to the node D through the node w, the node x, and the node z, and then the node D feeds back a credit response message to the node a through the node z, the node x, and the node w. Therefore, the node A can send a data message to the node D through the node w, the node x and the node z.
It can be understood that the node a, the node w, the node x, the node z, and the node D shown in fig. 5b are five network devices, for example, the credit request message may include identifiers of first sending ports of the five network devices, or a data message corresponding to the credit request message includes identifiers of target sending time periods of the five network devices. Each of the five network devices may perform the methods shown in fig. 4, 7a, 7b, 7c, etc. In other words, the above five network devices may also be referred to as a first network device. For this description, fig. 3, fig. 5a, fig. 8a to fig. 8c, and the like are equally applicable, and are not described in detail below.
Static path Round Trip Time (RTT): it may also be understood as a static RTT, e.g. the static path round trip delay may comprise link transmission delay and/or forwarding delay.
Buffer queuing delay: may also be referred to as queuing delay (queue delay) or dynamic delay. It is determined by the expected time of transmission (ETD) and the expected time of arrival (ETA) of the data message; that is, the difference between the expected sending time and the expected arrival time of the data message is the buffer queuing delay. The buffer queuing delay of the data message accumulated hop by hop may also be referred to as Qdelay. As shown in fig. 6a, there are 3 sending time periods between the ETD and the ETA, so the buffer queuing delay is 3 sending time periods. For convenience of description, queuing delay, dynamic delay, and the like are collectively referred to as buffer queuing delay and denoted Qdelay in the drawings. It can be understood that the buffer queuing delay shown in the present application may further include delay introduced by uncertain factors such as thread scheduling in the forwarding process.
Target sending time period: the time period during which the data message is sent; it may also be understood as a span of time. For example, the first sending port may include M sending time periods, each of which includes available credits. The target sending time period is at least one of the M sending time periods, determined by the first network device, and is used to send the data message corresponding to the credit request message. In the present application, a sending time period may exist in the form of a cache queue. That is, different sending time periods may be represented by different cache queues, and each cache queue may cache the data messages that may be sent in its sending time period.
First time information: for indicating a cache queuing delay and/or a static path round trip delay of one or more network devices, including the first network device. The cache queuing delay may be understood as an accumulated cache queuing delay of the plurality of network devices.
Second time information: for indicating a target transmission time period for one or more network devices, including the first network device. For example, the target transmission time period of one or more network devices may be included in the second time information; or, include an identification of a target transmission time period for one or more network devices. Wherein, one network device corresponds to one target sending time period; or, one target transmission time period corresponds to a plurality of network devices, etc., which is not limited in this application. It is understood that the identifier of the target transmission time period may also be referred to as a time tag or a time slot tag, and the name of the identifier is not limited in this application.
Expected Time of Arrival (ETA): the expected (or expected) arrival time of the data message.
Expected time of transmission (ETD): the time at which the data message is expected (or expected) to be sent.
Identification of the first transmission port: an identification of a port used to send the data message. It is understood that the application is not limited to whether the second sending port is included in the first network device, and the specific role of the second sending port. It is understood that the identifier of the first transmitting port may also be referred to as a first transmitting port identifier, a tag of the first transmitting port, or a first egress port tag, and the name of the first transmitting port is not limited in this application.
For example, as shown in fig. 3, when node E needs to send a data message to node C, node E may first determine an end-to-end path node E -> node y -> node z -> node C; node E then adds a link label stack for the on-path nodes, such as 100/200, to the data message, and sends the data message to node y. Node y looks up a table based on the outer label 100 to obtain port 3 (i.e., the identifier of the sending port of node y), pops (pop) one layer of label, and sends the data message to node z through port 3. Based on the outer label 200, node z looks up a table to obtain port 2 (i.e., the identifier of the sending port of node z), and sends the data message to node C through port 2. It can be understood that when multiple end-to-end paths are available, node E may also perform load sharing among the multiple paths; the present application does not limit how node E determines the end-to-end path. That is, the present application does not limit how the source end determines the end-to-end path from the source end to the destination end, nor the specific form of the identifier of the first sending port. The payload in the data message shown in fig. 3 may be traffic data or the like, which is not limited in this application. Similarly, the payload in the data message shown in fig. 8c may also be traffic data.
It is understood that the method for determining an end-to-end path shown above is only an example, and should not be construed as limiting the embodiments of the present application.
As shown in fig. 1, congestion control through an egress port (e.g., destination leaf node 2) is a common flow control technique in a switching network, and may also be applied in a data center network, etc. However, the method shown in fig. 1 generally requires the entire network to be a non-blocking network and is not applicable to arbitrary topologies. Moreover, the method shown in fig. 1 only performs flow control on the bandwidth of the egress port (e.g., leaf node (leaf) 2 in fig. 1), and does not consider flow control at the intermediate nodes (e.g., aggregation node (Agg) 0 and aggregation node 2 in fig. 1). That is, the whole network can be guaranteed to be congestion-free and lossless only when the network is non-blocking and load sharing is balanced. Further, when credits are allocated based on the bandwidth of the egress port, if the paths from the source ends to the destination end are not of equal length, the congestion control shown in fig. 1 is also inaccurate for burst traffic, so that problems such as overlapping traffic bursts and packet loss may occur. It is to be understood that the bandwidth shown in the embodiments of the present application may also be understood as physical bandwidth, and the flow control shown in the embodiments of the present application may also be understood as congestion control; therefore, flow control hereinafter may be replaced by congestion control, and vice versa.
As shown in fig. 2a, a credit counter is set in an egress port (e.g., leaf node 2 in fig. 1), and the credit counter can be used to fill credit periodically, for example, credit allocation can be performed based on the bandwidth of the egress port when a credit request message arrives. If the static RTTs from all the source ends to the destination ends are the same, the data messages do not collide, and the flow control is effective. The scenarios shown in fig. 2a and/or fig. 2b are: the destination terminal performs credit allocation according to the credit request messages sent by the three source terminals (source terminal 1, source terminal 2 and source terminal 3), and then feeds back credit response messages to the three source terminals respectively. The source end 1, the source end 2 and the source end 3 respectively send data messages to the destination end after receiving the credit response messages. As shown in fig. 2a, because the static RTTs from the source end 1 to the destination end, from the source end 2 to the destination end, and from the source end 3 to the destination end are the same, when a data packet sent by the source end 1, a data packet sent by the source end 2, and a data packet sent by the source end 3 reach the destination end according to a certain time sequence, congestion will not be generated. As shown in fig. 2b, when the static RTTs from the source end 1 to the destination end, from the source end 2 to the destination end, and from the source end 3 to the destination end are different, the time for the data packet sent by the source end 1, the data packet sent by the source end 2, and the data packet sent by the source end 3 to reach the destination end is uncertain, so that the three data packets may collide, cause flow control failure, and cause congestion.
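The egress-port credit counter of fig. 2a can be sketched as follows (the class, the per-period refill policy, and the byte-sized credits are assumptions used for illustration; the application only states that credits are filled periodically based on egress bandwidth):

```python
# Sketch of the egress-port credit counter described above: credits are
# replenished periodically up to the port bandwidth, and a credit request
# is granted only while credits remain. All names are assumptions.
class CreditCounter:
    def __init__(self, bandwidth_per_period: int):
        self.bandwidth = bandwidth_per_period
        self.credits = bandwidth_per_period

    def tick(self) -> None:
        """Periodic refill at the start of each period."""
        self.credits = self.bandwidth

    def grant(self, requested: int) -> int:
        """Grant up to `requested` credits, bounded by what remains."""
        granted = min(requested, self.credits)
        self.credits -= granted
        return granted

counter = CreditCounter(bandwidth_per_period=3000)
assert counter.grant(2000) == 2000   # source 1 fully served
assert counter.grant(2000) == 1000   # source 2 only partly served
counter.tick()                       # next period: credits refilled
assert counter.grant(2000) == 2000
```

Note that this counter bounds only the egress bandwidth; as the text explains, it cannot prevent collisions when the static RTTs of the sources differ.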
In view of this, the present application provides a data processing method and device, which can effectively avoid congestion and packet loss, and accurately implement flow control.
By way of example, the application can be applied to a statistical multiplexing network under any topology. As shown in fig. 3, each of the node a, the node B, the node C, the node D, the node E, and the node F is an edge network device (or may also be referred to as a first network device), and the edge network device may be a router or a switch, or a host. Each of the node w, the node x, the node y, and the node z is a core network device (or may also be referred to as a first network device), and the core network device may be a router or a switch, etc. It is understood that the method provided in the present application may be applied not only to the network shown in fig. 1 or fig. 3, but also to a network including two nodes, and the like, and the present application is not limited thereto.
For example, in a statistical multiplexing network such as an IP network, all service flows may share the network bandwidth; if traffic bursts are superimposed, packet congestion queuing may occur in the network, resulting in packet loss and increased network transmission delay. Therefore, to reduce congestion and packet loss, an RTT-level packet buffer is generally provided in a network device to absorb burst traffic. However, as the bandwidth of network devices keeps increasing, the required capacity and bandwidth of the corresponding packet buffer increase accordingly, while the development of memory cache technology generally lags behind the growth of network bandwidth, so high-performance network devices face great challenges in manufacturing technology and cost. That is, even if an RTT-level packet buffer is provided in the network device, it is still insufficient to solve the packet congestion problems shown in fig. 1 and/or fig. 2b.
Therefore, the method provided by the application can solve the congestion problem under any topology network, avoid packet loss caused by congestion, and meanwhile, core network equipment (such as a node w, a node x, a node y and a node z in fig. 3) does not need large-capacity message cache at the RTT level.
The data processing method provided by the application can be applied to a first network device, and the first network device can include a switch, a router, a wireless Access Point (AP), a host, or the like. Alternatively, the first network device may also be a device with similar functions, such as a wired device or a wireless device, which is not limited in this application.
Fig. 4 is a flowchart of a data processing method provided in an embodiment of the present application, where the method may be applied to a network shown in fig. 5a, or the method may also be applied to a network shown in fig. 5b, or the method may also be applied to a network shown in fig. 3, and so on. As shown in fig. 4, the method includes:
401. the first network equipment acquires a credit request message; and determining the expected arrival time of the data message corresponding to the credit request message according to the credit request message.
In some implementations, the first network device may generate a credit request message. For example, when the first network device is an edge network device, the first network device may generate the credit request packet by itself. In this case, the time of acquiring the credit request packet may be the time of generating the credit request packet by the first network device. Illustratively, the edge network device may be node a, node B, node C, node D, node E, or node F in fig. 3, etc. The time when the node A acquires the credit request message is the time when the node A generates the credit request message. It can be understood that, in the embodiment of the present application, there is no limitation on how the first network device generates the credit request message.
In other implementations, the first network device may receive a credit request message sent by a higher-level network device. For example, the first network device may be a core network device in fig. 3, or the like. In this case, the time of acquiring the credit request packet is the time when the first network device receives the credit request packet from the upper level network device. Illustratively, the time when the node w acquires the credit request message is the time when the credit request message is received from the node a.
For example, the time when the first network device receives the credit request message may be understood as: the time when the first network device receives the credit request message through a transceiver or a transceiver unit or an interface circuit; alternatively, it can also be understood that: after the first network device receives the credit request message through a transceiver or a transceiver unit or an interface circuit, etc., other internal devices or a protocol stack (e.g., a physical layer or a data link layer, etc.) of the first network device receive the time of the credit request message.
It can be understood that, in the embodiment of the present application, how the first network device obtains the credit request message is not limited.
In some implementations, the first network device may determine the expected arrival time of the data packet based on the acquisition time of the credit request packet and/or the static path round trip delay.
For example, the first network device stores the round-trip delay of the static path from the source end to the destination end, and the first network device may determine the expected arrival time of the data packet directly according to the acquisition time of the credit request packet and the stored round-trip delay of the static path from the source end to the destination end. For another example, the first network device does not store the static path round trip delay from the source end to the destination end, and the first network device may determine the expected arrival time of the data packet according to the link transmission delay of the passing intermediate node and/or the forwarding delay of the intermediate node, and the acquisition time of the credit request packet. For another example, the credit request packet includes address information of the destination end, and then the first network device may determine the round trip delay of the static path from the source end to the destination end according to the address information of the destination end after obtaining the address information of the destination end. For another example, if the credit request packet includes an identifier of a first sending port of one or more network devices, the first network device may determine a static path round trip delay from the source end to the destination end according to the identifier of the first sending port of the one or more network devices. Wherein, the identifier of the first sending port of the one or more network devices includes the identifier of the first sending port of the first network device. It can be understood that, in the embodiments of the present application, there is no limitation on how the first network device obtains the static path round trip delay from the source end to the destination end. It is to be understood that the first network device may be a network device from a source peer to a destination peer as illustrated above. 
For example, the first network device may be the source end, the destination end, or an intermediate node.
Illustratively, as shown in fig. 5a, the static path round trip delay is the static path round trip delay from node a to node D. As shown in fig. 5b, the static path round trip delay is the static path round trip delay of node a- > node w- > node x- > node z- > node D. For example, node a may determine the expected arrival time of the data packet based on the acquisition time of the credit request packet and/or the static path round trip delay. For another example, node D may also determine the expected arrival time of the data packet according to the acquisition time of the credit request packet and/or the static path round-trip delay. For another example, an intermediate node, such as node w, may also determine the expected arrival time of the data packet according to the acquisition time of the credit request packet and/or the static path round trip delay.
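The two ways of obtaining the ETA described above can be sketched as follows (a hedged illustration: the function name, the choice of time periods as the unit, and the per-hop delay lists are assumptions, not the application's format):

```python
# Sketch: the expected arrival time (ETA) of the data message. If the
# static path round trip delay is stored, ETA is the acquisition time of
# the credit request message plus that delay; otherwise it is rebuilt from
# the link transmission delays and forwarding delays of the on-path nodes.
def expected_arrival_time(acquire_time, static_rtt=None,
                          link_delays=(), forward_delays=()):
    if static_rtt is None:
        static_rtt = sum(link_delays) + sum(forward_delays)
    return acquire_time + static_rtt

# Using a stored end-to-end static path round trip delay:
assert expected_arrival_time(100, static_rtt=12) == 112
# Rebuilt per hop (e.g. node A -> node w -> node x -> node z -> node D):
assert expected_arrival_time(100, link_delays=[3, 3, 3],
                             forward_delays=[1, 1, 1]) == 112
```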
For convenience of description, the method provided by the embodiment of the present application will be described below by taking fig. 5b as an example.
In other implementations, the credit request message may include first time information, in which case the first network device may determine the expected arrival time of the data message according to the acquisition time of the credit request message and the first time information. The first time information is used for indicating the buffer queuing delay and/or the static path round-trip delay of one or more network devices.
In this embodiment of the application, the first time information may be used to indicate round-trip delay of the static path, and in this case, the specific implementation manner of the first network device determining the expected arrival time of the data packet may refer to the above description. Optionally, the first time information may also be used to indicate accumulated buffer queuing delay of one or more network devices, in this case, the first network device may obtain the static path round-trip delay according to the above-described method, so as to determine the expected arrival time of the data packet according to the buffer queuing delay and the static path round-trip delay. Optionally, the first time information may also be used to indicate accumulated buffer queuing delay and static path round-trip delay of one or more network devices, and in this case, the first network device may obtain the expected arrival time of the data packet more accurately. I.e. not only the static path round trip delay but also the buffer queuing delay when forwarding through each intermediate node is taken into account.
Optionally, in order to enable the next-stage network device after the first network device to accurately obtain the expected arrival time of the data packet, the method shown in fig. 4 may further include: the first network equipment updates the first time information included in the credit request message according to the expected arrival time and the target sending time period; and sending a credit request message to the next-stage network equipment of the first network equipment, wherein the credit request message comprises the updated first time information.
For example, the first network device may update the accumulated buffer queuing delay of the one or more network devices according to the expected arrival time and the target transmission time period. For example, as shown in fig. 5b, node a sends a credit request packet to node w, and the credit request packet may include the cache queuing delay of node a and/or the static path round trip delay from node a to node D. After the node w (i.e., the next-level node of the node a) receives the credit request packet and determines the expected arrival time of the data packet according to the acquisition time of the credit request packet, the cache queuing delay of the node a, and the static path round-trip delay from the node a to the node D, the node w may update the cache queuing delay in the first time information according to the determined expected arrival time of the data packet and the determined target transmission time period. In other words, the cache queuing delay included in the first time information in the credit request message sent by the node w to the node x is the cache queuing delay of the node a + the cache queuing delay of the node w. It can be understood that the next-level network device shown in the embodiment of the present application may be understood as: if the node a is a first network device, the node w may be referred to as a next-level network device of the first network device; if node w is a first network device, node x may be referred to as a next level network device of the first network device.
In the embodiment of the application, after obtaining the expected arrival time, the first network device determines the target sending time period according to the expected arrival time. That is, when the data packet arrives at the first network device, the data packet is not immediately sent out by the first network device, but has a certain buffering time. In other words, there is a certain buffering queuing delay between the expected arrival time of the data packet and the expected transmission time of the data packet. Therefore, in order to enable the next-level network device to obtain the expected arrival time of the data packet at the next-level network device more accurately, the first network device may update the cache queuing delay in the first time information.
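The update of the first time information described above can be sketched as follows (the dictionary fields and the convention that the local buffer queuing delay equals the target sending time period minus the ETA are assumptions chosen to match the fig. 5b example):

```python
# Sketch: each node adds its own buffer queuing delay (target sending time
# period minus expected arrival time) to the accumulated Qdelay carried in
# the first time information of the credit request message. Field names
# are assumptions for illustration.
def update_first_time_info(info: dict, eta: int,
                           target_period_start: int) -> dict:
    local_qdelay = target_period_start - eta
    updated = dict(info)                      # leave the input untouched
    updated["qdelay"] = updated.get("qdelay", 0) + local_qdelay
    return updated

# Node w: packet expected at period 10, scheduled to depart at period 12.
info = {"qdelay": 2, "static_rtt": 12}        # Qdelay accumulated by node A
info = update_first_time_info(info, eta=10, target_period_start=12)
assert info["qdelay"] == 4   # node A's 2 periods + node w's 2 periods
```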
402. And the first network equipment determines the target sending time period of the data message corresponding to the credit request message according to the expected arrival time.
In the embodiment of the application, the available credit in the target sending time period can meet the requirement of the data message corresponding to the credit request message.
In some implementations, the credit request message includes an identifier of a first sending port of one or more network devices, and the identifier of the first sending port of the one or more network devices includes the identifier of the first sending port of the first network device. For example, the one or more network devices are network devices passing from a source end to a destination end in the network, such as a source end, a destination end, or an intermediate node. In other implementations, the first network device may further determine the identifier of the first sending port according to a condition such as load sharing. In still other implementation manners, the first network device may further receive an identifier of a first sending port sent by the network management device, so as to determine that the data packet corresponding to the credit request packet is sent through the first sending port. The embodiment of the present application does not limit how the first network device obtains the identifier of the first sending port. Fig. 7a illustrates a method in which an identifier of a first transmission port including one or more network devices in a credit request message is taken as an example, but should not be construed as limiting the embodiment of the present application.
Therefore, the step 402 can be replaced by: and the first network equipment determines a target sending time period of the data message corresponding to the credit request message at the first sending port according to the expected arrival time.
The credit request message may include an identifier of the first transmission port of the one or more network devices, where the identifier of the first transmission port of the one or more network devices includes the identifier of the first transmission port of the first network device. Illustratively, as shown in fig. 5b, the credit request message may include an identifier of the first transmitting port of node a, such as 90, an identifier of the first transmitting port of node w, such as 100, an identifier of the first transmitting port of node x, such as 200, an identifier of the first transmitting port of node z, such as 300, and an identifier of the first transmitting port of node D, such as 400. For example, the identifier of the first sending port of the node D may be used for the node D to send a data packet to a server. In other words, the node D may comprise a port for sending credit response messages, and/or a port for sending data messages, etc.
In this embodiment, the first sending port may be configured to send a credit request message, receive a credit response message, and send a data message. Illustratively, the first sending port may be a port through which the first network device sends a data packet. As shown in fig. 5b, each of the node a, the node w, the node x, the node z, or the node D may include a plurality of sending ports, where a port used for sending the data packet corresponding to the credit request packet may be a first sending port, and other sending ports (e.g., a second sending port, etc.) may be used for sending other data or sending data packets corresponding to other credit request packets, and the embodiment of the present invention is not limited thereto.
403. The first network equipment acquires a data message corresponding to the credit request message; and transmitting the data message corresponding to the credit request message in the target transmission time period.
The data message corresponding to the credit request message sent by the first network device in the target sending time period can be replaced by: and the first network equipment sends the data message corresponding to the credit request message through the first sending port in the target sending time period.
In some implementations, after determining the target sending time period, the first network device may store an association relationship between an identifier of the target sending time period and the credit request packet; or, the association relationship between the identifier of the target sending time period and the data packet may be stored. Therefore, the first network device can send the data message within the target sending time period after acquiring the data message. For example, for uniform rate low latency traffic, after the first network device determines the target transmission time period, the first network device may reserve available credits for the target transmission time period. Therefore, when the data message is sent again subsequently, the first network device can directly send the data message according to a certain rate without a signaling interaction process. It is understood that the above illustrated approach can be understood as a static mapping approach.
In other implementation manners, the data packet may include an identifier of a target sending time period, and thus the first network device may also cache the data packet corresponding to the credit request packet in a corresponding cache queue according to the identifier of the target sending time period included in the data packet after acquiring the data packet; and then sending the data message through the corresponding buffer queue. In other words, the first network device may perform signaling interaction to apply for credit and determine a target transmission time period before transmitting the data packet. The above illustrated approach may be understood as a dynamic mapping approach.
In this embodiment of the application, the data packet may include an identifier of a target transmission time period of one or more network devices, where the identifier of the target transmission time period of the one or more network devices includes an identifier of a target transmission time period of the first network device. Illustratively, as shown in fig. 5b, the data packet may include an identifier of a target transmission time period determined by node a, an identifier of a target transmission time period determined by node w, an identifier of a target transmission time period determined by node x, an identifier of a target transmission time period determined by node z, and an identifier of a target transmission time period determined by node D. As to whether the identifiers of the target sending time periods determined by the nodes are the same or not, the embodiments of the present application are not limited. And how the node and the target sending time period are encapsulated in the data message, the embodiment of the application is not limited. For example, as shown in fig. 8c, the identities of the node and the target transmission time period may be encapsulated in a data packet in a one-to-one correspondence. For another example, the identifiers of the nodes and the target transmission time periods may also be encapsulated in the data packet in a one-to-many manner, that is, the identifier of one target transmission time period may correspond to a plurality of nodes. As shown in fig. 8c, the identification 4 of the target transmission time period may correspond to node w and node D.
In still other implementations, after the first network device determines the target transmission time period, available credits within the target transmission time period may also be reserved for a predetermined time (e.g., for 5 minutes). Therefore, when the first network equipment sends the data message again within the preset time, the signaling interaction process is not needed. After the predetermined time, the first network device may apply for credit again for signaling interaction and determine a target sending time period before sending the data packet. It is understood that the above-described manner can be understood as a static-dynamic mapping manner.
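The dynamic mapping mode above can be sketched as follows (the packet layout, field names, and drain-on-period-start behavior are assumptions for illustration; the application only requires that the data message carry the identifier of the target sending time period and be buffered in the corresponding buffer queue):

```python
from collections import defaultdict, deque

# Sketch of "dynamic mapping": each data message carries the identifier of
# its target sending time period, and the first network device buffers it
# into the matching queue on arrival. All names are assumptions.
queues = defaultdict(deque)

def on_data_packet(packet: dict) -> None:
    # Identifier of the target sending time period for *this* node.
    period_id = packet["target_period_id"]
    queues[period_id].append(packet["payload"])

def on_period_start(period_id: int) -> list:
    """Drain and send everything buffered for this sending time period."""
    return list(queues.pop(period_id, ()))

on_data_packet({"target_period_id": 4, "payload": b"chunk-1"})
on_data_packet({"target_period_id": 4, "payload": b"chunk-2"})
assert on_period_start(4) == [b"chunk-1", b"chunk-2"]
assert on_period_start(5) == []   # nothing scheduled in period 5
```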
The method provided by the embodiment of the application can be applied to a source end, a destination end, an intermediate node and the like in a network, and for different modes of sending data messages by the source end, the embodiment of the application also provides the following two modes.
In some implementation manners, after obtaining the data packet, the source end may cache the data packet into a packet cache at an RTT level, and then apply for a credit based on a data amount of the data packet in the packet cache at the RTT level. That is, the first network device sends a credit request message to the destination, and then obtains a credit response message, etc. It is to be understood that the RTT-level packet buffer may also be understood as a network-level Virtual Output Queue (VOQ), and other names or understanding of the RTT-level packet buffer are not limited in this embodiment of the present application. In this implementation, the data packet at least needs to wait for a static RTT time in the packet buffer at the RTT level. Alternatively, if other buffering queuing delay factors are considered, the minimum delay of the data packet may be 1.5 static RTT times. However, this implementation can effectively avoid resource waste.
In other implementations, the source may reduce latency by pre-allocating credits. For example, credit is applied in advance according to the sending rate of the data message in the message buffer of the RTT level. In this implementation, if the source end receives the credit response packet and there is a data packet in the packet buffer at the RTT level, the data packet may be directly sent. If the source end receives the credit response packet and there is no data packet in the packet buffer at RTT level, there may be a situation that resources (such as reserved credit) are wasted. But the implementation mode can effectively reduce the waiting time delay of the data message.
In the embodiment of the application, the first network device determines the expected arrival time of the data message according to the credit request message, and determines the target sending time period of the data message according to that expected arrival time, so that the first network device can send the data message within the target sending time period. By implementing the embodiment of the application, because the first network device determines the target sending time period according to the expected arrival time of the data message, the time differences with which different data messages reach the destination end are taken into account, and congestion caused by differences in static path round-trip delay is avoided; furthermore, because the target sending time periods at the intermediate nodes are also considered, flow control is exercised at every node that transmits the data message, realizing a non-blocking network.
Step 402 shown in fig. 4 is described in detail below, referring to fig. 6a, and fig. 6a is a schematic diagram of a credit sliding window according to an embodiment of the present application.
Wherein the first transmitting port may include M transmitting time periods, and each transmitting time period includes available credits, and M is greater than or equal to 2. In other words, the first transmission port may be set to M transmission periods; alternatively, it may be understood that the first transmission port is provided with M transmission periods. The available credit allocated per transmission time period may be determined based on the bandwidth of the first network device. Each transmission period shown above includes available credits, which means that each transmission period may be allocated available credits, as to whether the available credits within the transmission period are allocated; or as to whether there are remaining credits in the sending time period, the embodiment of the present application is not limited.
Fig. 6a shows M transmission time periods included in the first transmission port, and the available credit status included in each transmission time period. As shown in fig. 6a, the available credits in some transmission time periods are already occupied and the available credits in some transmission time periods are still left.
The determining, by the first network device according to the expected arrival time, a target transmission time period of the data packet corresponding to the credit request packet at the first transmission port includes: the first network equipment acquires available credits of N sending time periods after the expected arrival time in the M sending time periods, wherein M is larger than or equal to N, and N is larger than or equal to 1; and determining at least one transmission time period as a target transmission time period according to the available credits of the N transmission time periods.
Among the M sending time periods, the N sending time periods located after the expected arrival time may be understood in two ways. First way: the N sending time periods are the N sending time periods starting from the expected arrival time among the M sending time periods. Second way: the N sending time periods are the N sending time periods from the i-th sending time period after the expected arrival time to the (i+(N-1))-th sending time period among the M sending time periods, where i is an integer and i is greater than or equal to 0. The N sending time periods between the i-th and the (i+(N-1))-th sending time period may include the i-th sending time period, the (i+1)-th sending time period, …, and the (i+(N-1))-th sending time period. For example, i = 0, i = 1, i = 2, i = 3, and so on. When i = 0, the second way is the same as the first way. When i = 1, the N sending time periods are those from the 1st sending time period after the expected arrival time to the (1+(N-1))-th (that is, the N-th) sending time period among the M sending time periods. When i = 2, the N sending time periods are those from the 2nd sending time period after the expected arrival time to the (2+(N-1))-th (that is, the (N+1)-th) sending time period among the M sending time periods.
As shown in fig. 6a, the expected arrival time (ETA in fig. 6a) of the data packet can be obtained by adding the static path round-trip delay and the buffering queuing delay to the acquisition time of the credit request message (Request in fig. 6a). The N sending time periods after the expected arrival time among the M sending time periods may be understood as the 5 sending time periods included in the message buffer window in fig. 6a. As can be seen from fig. 6a, the message buffer window includes sending time period 1, sending time period 2, sending time period 3, sending time period 4, and sending time period 5. Part of the credits in sending time period 2 are already occupied, while the available credits in sending time period 1, sending time period 3, sending time period 4 (the target sending time period shown in fig. 6a is sending time period 4, i.e., part of the credits of sending time period 4 in fig. 6a are reserved), and sending time period 5 are not occupied. Accordingly, the first network device may determine one of sending time period 1, sending time period 3, sending time period 4, and sending time period 5 as the target sending time period. Alternatively, the first network device may determine two or more of these sending time periods as target sending time periods. In that case, the first network device may distinguish the determined target sending time periods by priority, at random, or in another manner. For example, when sending the data packet corresponding to the credit request message, the data packet is sent in the target sending time period with the highest priority.
Further, when the first network device updates the buffering queuing delay in the first time information, it may update the first time information according to the target sending time period with the highest priority, and so on; this is not limited in this embodiment of the present application.
Optionally, if the remaining available credit in sending time period 2 can meet the requirement of the data packet, sending time period 2 may also be used as the target sending time period. In other words, in fig. 6a, M may be 18 and N may be 4 or 5. In fig. 6a, the buffering queuing delay of the data packet in the first network device is the duration of 3 sending time periods. Optionally, the message buffer window includes at least 4 sending time periods; the longer the message buffer window, the stronger the burst-absorbing capability of the first network device.
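The selection of a target sending time period within the message buffer window can be sketched as a simple scan over the N periods after the ETA. The function name, the credit values, and the flat array indexing are illustrative assumptions, not the on-device representation.

```python
def find_target_period(remaining, eta, n, need, i=0):
    """Return the first of the N sending time periods after the expected
    arrival time whose remaining credit covers the request, else None.
    `remaining` is indexed by sending-time-period number."""
    for p in range(eta + i, eta + i + n):
        if p < len(remaining) and remaining[p] >= need:
            return p
    return None

# Mirrors fig. 6a: a 5-period window after the ETA (index 6), with the
# second period (index 7) partly occupied; values are hypothetical.
remaining = [0, 0, 0, 0, 0, 0, 100, 40, 100, 100, 100]
assert find_target_period(remaining, 6, 5, 50) == 6
assert find_target_period(remaining, 6, 5, 50, i=1) == 8  # period at index 7 lacks credit
```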
The N sending time periods shown in fig. 6a may be understood as the N sending time periods starting from the expected arrival time. However, the first network device may also acquire the N sending time periods from the i-th sending time period to the (i+(N-1))-th sending time period after the expected arrival time among the M sending time periods. For example, the first network device may obtain the N sending time periods from the 3rd sending time period after the expected arrival time to the (3+(N-1))-th sending time period, that is, determine the target sending time period among the N sending time periods after ETA + 3. If the target sending time period determined by the first network device is still sending time period 4 of the N sending time periods, the buffering queuing delay of the data packet in the first network device is the duration of 3 + 3 sending time periods.
The "3" shown above can be understood as a jitter buffer, which may be used to absorb jitter caused by misalignment of sending time periods, or jitter caused by uncertain factors such as thread scheduling and memory contention during forwarding in the first network device. It is understood that "3" is only an example; in a specific implementation the value may differ, and this embodiment of the present application is not limited. For example, if the expected arrival time of the data packet is not aligned with a sending time period, the first network device can still guarantee the expected sending time of the data packet based on the method provided in this embodiment, for example by using ETA + 3.
Optionally, the sending time periods may exist in the form of buffer queues. The first network device can then cache the data packet into the corresponding buffer queue (also called the target buffer queue) according to the identifier of the target sending time period, and send the data packet from that buffer queue. The first sending port may include P buffer queues, different buffer queues corresponding to different sending time periods, where P is greater than or equal to 2.
That is to say, after the first network device determines the target sending time period, the data packet may be buffered in the target buffer queue, which corresponds to the target sending time period. Illustratively, the identifier of the target sending time period may be the same as the identifier of the target buffer queue. The 5 different sending time periods included in the message buffer window shown in fig. 6a may also be understood as 5 different buffer queues; in that case, P is equal to N. For example, if the message buffer window shown in fig. 6a includes sending time period 1 through sending time period 5, the buffer queues may be buffer queue 1 through buffer queue 5, with sending time period m corresponding to buffer queue m, where m is greater than or equal to 1 and less than or equal to 5. Alternatively, N may be less than P; that is, the number of sending time periods included in the message buffer window may be less than or equal to the number of buffer queues. For example, the length of the message buffer window may equal the number of buffer queues, and the length of the message buffer window may be used to determine the size of the message buffer of the first network device.
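The mapping from target sending time period to buffer queue can be sketched as follows. The class layout is illustrative; in particular, reusing the P queues cyclically via a modulo on the period identifier is an assumption for when period identifiers exceed P, not something the text states.

```python
from collections import deque

class PortQueues:
    """P buffer queues on one sending port; each queue corresponds to one
    sending time period (P >= 2)."""

    def __init__(self, p):
        self.queues = [deque() for _ in range(p)]

    def enqueue(self, packet, target_period_id):
        # cache the packet in the queue matching its target sending time
        # period; cyclic reuse by modulo is an assumption of this sketch
        self.queues[target_period_id % len(self.queues)].append(packet)

port = PortQueues(p=5)
port.enqueue("pkt-A", target_period_id=3)
assert len(port.queues[3]) == 1
```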
For example, the duration of one transmission period may be determined by the caching capability of the first network device, the hardware implementation cost of the first network device, or the time precision that the first network device can control, and the like, for example, the duration of one transmission period may be 10 microseconds, and the like. The length of a transmission time period is not limited in this embodiment.
Alternatively, as shown in fig. 6a, the sending port of the first network device may be configured with a credit sliding window, where the credit sliding window may be a credit array, each element of which records the remaining available credit and corresponds to the identifier of one sending time period. That is, the 18 sending time periods included in the credit sliding window can be understood as the M sending time periods shown in fig. 4.
The length of the credit sliding window may satisfy the following formula:
C = RTT_max / T + P × Hop_max
where C represents the length of the credit sliding window; RTT_max represents the maximum end-to-end static RTT; T represents the duration of a sending time period; P represents the number of configured buffer queues; and Hop_max represents the maximum hop count of the network.
In other words, the length of the credit sliding window may be determined by the maximum static RTT from end to end in the network, the duration of the sending time period, the number of buffer queues, and the maximum hop count of the network.
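The window-length formula above can be checked numerically; the function below is a direct transcription, using the example values from the parameter analysis later in this text (4 ms maximum static RTT, 10 us periods, 4 buffer queues, 8 hops).

```python
def credit_window_length(rtt_max_us, period_us, num_queues, hop_max):
    # C = RTT_max / T + P * Hop_max (the formula above); integer division
    # assumes the RTT is a whole number of sending time periods
    return rtt_max_us // period_us + num_queues * hop_max

assert credit_window_length(rtt_max_us=4000, period_us=10,
                            num_queues=4, hop_max=8) == 432
```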
It can be understood that the sending time periods shown in this embodiment of the present application all have the same duration; in a specific implementation, however, different sending time periods may also have different durations, which is not limited in this embodiment of the present application.
Step 403 shown in fig. 4 is described in detail below, referring to fig. 6b, where fig. 6b is a schematic diagram of scheduling a buffer queue according to an embodiment of the present application. As shown in fig. 6b, fig. 6b shows a process of sending a data packet by a first network device according to a buffer queue.
The first sending port of each first network device is provided with P buffer queues, each buffer queue corresponds to one sending time period, and each sending time period is a relatively small time slice (e.g., 10 microseconds). It can be understood that the length of each buffer queue needs to ensure that data packets arriving within a transmission time period can be buffered. It can be understood that, in the embodiment of the present application, no limitation is made on whether the data volumes of different data packets buffered in one buffer queue are the same.
As shown in fig. 6b, when a data packet arrives, the first network device may place it into the corresponding buffer queue according to the identifier of the target sending time period included in the data packet, where the buffer queues may range, for example, from 1 to P (that is, buffer queue 1 through buffer queue P). The scheduler of the sending port adopts time-based cyclic scheduling and, in one sending time period, sends only the data packets buffered in one buffer queue. Illustratively, the first network device may send the data packets in buffer queue 1 during one sending time period (e.g., duration n in fig. 6b) and the data packets in buffer queue 2 during the next sending time period (e.g., duration n + 1 in fig. 6b).
For example, when the duration of one sending time period has not ended but the corresponding buffer queue is already empty, the first network device does not go on to send the data packets in the next buffer queue (although a best-effort queue, if one is configured, may send packets within the remaining duration of the sending time period). That is, the buffered data packets in the next buffer queue are sent only when the next sending time period begins. If the sending port of the first network device is provided with P buffer queues, a data packet is certain to be sent within the duration of P sending time periods; that is, the maximum buffering queuing delay of a data packet is the duration of P sending time periods. As shown in fig. 6b, if the duration of one sending time period is 10 microseconds, the first network device is currently sending the data packets in buffer queue 1, and it finishes sending them within 7 microseconds, then the first network device waits 3 microseconds before sending the data packets in buffer queue 2.
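The time-based cyclic scheduling just described can be sketched as below. The model is simplified: slots are iterated rather than timed, and a queue that drains early simply leaves the rest of its slot idle; names and queue contents are illustrative.

```python
import itertools
from collections import deque

def run_slots(queues, num_slots):
    """In slot n, only queue (n mod P) may send; a queue that drains
    before its slot ends does not hand the slot to the next queue."""
    sent = []
    for slot, q in zip(range(num_slots), itertools.cycle(queues)):
        while q:
            sent.append((slot, q.popleft()))
        # if q drained early, the scheduler still waits out the slot
    return sent

queues = [deque(["a1"]), deque(["b1", "b2"]), deque()]
assert run_slots(queues, 3) == [(0, "a1"), (1, "b1"), (1, "b2")]
```

Note that packet "b2" is still sent in slot 1 even though queue 0 emptied early in slot 0, matching the rule that queue 2's (empty) slot is not borrowed.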
According to the technical solution provided by this embodiment of the present application, the time-based credit allocation mechanism ensures that, no matter when a data packet arrives within the maximum jitter range, it can be sent out within the determined target sending time period. As shown in fig. 6b, if a data packet arrives carrying target sending time period identifier 3, it is buffered in buffer queue 3 no matter which buffer queue (which may be 1, 2, or 4) is currently sending. Because this method is adopted, the time at which the data packet arrives at the next-hop network device is predictable.
In this embodiment of the present application, RTT-level packet buffers may exist only in the edge network devices (e.g., node A in fig. 3), and one edge network device, or one egress interface of an edge network device, may correspond to one RTT-level packet buffer. Dividing the RTT-level packet buffer into queues of smaller granularity helps avoid the head-of-line blocking problem, and in actual deployment a suitable RTT-level packet buffer can be selected according to the network scale. In other words, with the method provided by this embodiment of the present application, the core network devices may not need RTT-level buffers.
This embodiment of the present application may forward packets based on source routing: a data packet sent by the source end includes the identifier of the first sending port of each along-path network device (such as the intermediate nodes or core network devices described above), and each along-path device sends the data packet through its first sending port. Illustratively, source-routed forwarding based on the SR-TE/SRv6 protocols is stateless and highly scalable, is well suited to traffic engineering, and currently has wide application in wide area networks, which provides an application foundation for the present application.
For further understanding of the method provided by the embodiment of the present application, referring to fig. 7a, fig. 7a is a schematic flowchart of a data processing method provided by the embodiment of the present application. Taking the network shown in fig. 5b as an example, it can be understood that node a, node w, node x, node z and node D may be network devices, and all may perform the method shown in fig. 4. As shown in fig. 7a, the method comprises:
701. the node A determines a target sending time period of the data message at the first sending port.
It can be understood that the specific implementation manner for step 701 may be: the node A acquires a credit request message; determining the expected arrival time of the data message according to the credit request message; and determining a target sending time period of the data message at the first sending port according to the expected arrival time.
For example, the step 701 may be specifically as shown in fig. 7 b:
7011. and the node A caches the acquired data message.
When a data packet arrives at node a (which may also be referred to as an edge network device), the data packet may be cached in a packet cache at the RTT level of the node a. In this case, the node a may first send a credit request message to apply for credit hop by hop, and the credit request message may include the first time information.
7012. And the node A determines the expected sending time of the data message according to the round-trip delay of the static path from the node A to the node D.
In this embodiment of the present application, a data packet may be forwarded based on a source routing manner, that is, when a node a needs to send the data packet to a node D, the node a may first select an end-to-end path node a- > node w- > node x- > node z- > node D, and a static RTT of the path may be measured in advance.
7013. The node a determines an available transmission time period as a target transmission time period in the packet buffer window from ETA +3, for example, the identifier of the target transmission time period is 2.
7014. Node A generates a credit request message, where the credit request message includes the first sending port identifiers from node A to node D and the first time information. The first time information may include the static path round-trip delay and the buffering queuing delay from node A to node D. The first sending port identifiers from node A to node D may include the identifier of the first sending port of node A, e.g., 90; of node w, e.g., 100; of node x, e.g., 200; of node z, e.g., 300; and of node D, e.g., 400. It can be understood that whether the identifier of the first sending port of node D is included is not limited in this embodiment of the present application. The method provided by this embodiment is illustrated below using the case in which the identifier of the first sending port of node D is included.
Illustratively, the buffering queuing delay (Qdelay in the figure) in the credit request message generated by node A may initially be 0. However, since node A may incur a buffering queuing delay when sending the data packet, node A may obtain Qdelay from the ETA and the ETD it determines, where the ETD is obtained from the ETA and the target sending time period.
It is understood that reference may be made to steps 401 and 402 shown in fig. 4 for a specific implementation of step 701. And for the specific implementation of step 701, reference may also be made to the method illustrated in fig. 6a, etc.
702. The node A sends a credit Request message (Request) to the node w, wherein the credit Request message comprises an identifier 2 (determined by the node A) of a target sending time period, a first sending port identifier from the node A to the node D and first time information. Correspondingly, the node w receives the credit request message.
703. The node w determines a target transmission time period and updates the first time information.
For example, the node w may determine the target sending time period according to the first time information and the acquisition time of the credit request packet. The identification of the target transmission time period as determined by node w is 4. For example, the node w may further determine the cache queuing delay of the data packet at the node w according to the ETA and the ETD of the data packet, so as to update the cache queuing delay Qdelay included in the first time information according to the cache queuing delay of the node w.
For example, the step 703 may be specifically as shown in fig. 7 c:
7031. and the node w determines the expected arrival time of the data message according to the static path round-trip delay and the cache queuing delay included in the credit request message.
7032. Starting from ETA +3, the node w determines an available transmission time period as a target transmission time period in the message buffer window, for example, the identifier of the target transmission time period is 4. For example, the available transmission time period may indicate that the remaining credits are greater than or equal to the number of credits requested by the credit request message. For example, the node w may search for a transmission time period corresponding to the ETA of the data packet according to the static RTT and the Qdelay.
7033. Node w updates the first time information included in the credit request message according to the expected arrival time of the data packet and the target sending time period; for example, Qdelay in the first time information is updated to Qdelay + 3. Illustratively, node w may perform the update based on the Qdelay carried in the first time information. As shown in fig. 8a, if node w determines that the identifier of its target sending time period is 4, Qdelay in the first time information is updated to Qdelay + 3 according to the target sending time period and the ETA.
For example, after receiving the credit request message sent by the node a, the node w may further repackage the credit request message, for example, the node w encapsulates the identifier of the determined target sending time period in the credit request message; for example, the node w updates the first time information included in the credit request message.
Optionally, if node w cannot determine a target sending time period within the message buffer window, a failure message is returned. The failure message may be returned to node A immediately along the original path, releasing the credits already allocated. Alternatively, the credit request message may still be sent to the next node, such as node x, and a failure message is then returned by node D. Or, node w may select another load-sharing path and send the credit request message to another node.
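Steps 7031-7033 at an intermediate node can be sketched as one function. This is a hedged simplification: all times are expressed in sending-time-period units, the jitter-buffer offset of 3 from the examples above is hard-coded, and the data structures are illustrative, not the on-device representation.

```python
def process_credit_request(t_now, static_rtt, qdelay, remaining, n, need):
    """Compute the ETA from the request's first time information, pick a
    target period from ETA + 3 onward within an n-period window, reserve
    the credit, and return (target period, updated Qdelay); on failure
    return (None, qdelay), corresponding to the failure message above."""
    JITTER = 3                                   # jitter-buffer offset
    eta = t_now + static_rtt + qdelay            # expected arrival time
    for p in range(eta + JITTER, eta + JITTER + n):
        if remaining.get(p, 0) >= need:
            remaining[p] -= need                 # reserve the credit
            return p, qdelay + (p - eta)         # Qdelay += ETD - ETA
    return None, qdelay

remaining = {10: 0, 11: 8, 12: 8}
period, qdelay = process_credit_request(
    t_now=5, static_rtt=2, qdelay=0, remaining=remaining, n=5, need=5)
assert (period, qdelay) == (11, 4)   # ETA = 7; ETA + 3 had no credit left
```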
704. The node w sends a credit request message to the node x, wherein the credit request message comprises identifications 2 (determined by the node A) and 4 (determined by the node w) of target sending time periods, first sending port identifications from the node A to the node D and first time information. Accordingly, node x receives the credit request message.
705. Node x determines a target transmission time period and updates the first time information.
It can be understood that, for the method for node x to determine the target transmission time period and update the first time information, reference may be made to the above description, and detailed description thereof is omitted here.
706. Node x sends a credit request message to node z, where the credit request message includes the identifiers 2 (determined by node A), 4 (determined by node w), and 1 (determined by node x) of the target sending time periods, the first sending port identifiers from node A to node D, and the first time information. Accordingly, node z receives the credit request message.
707. Node z determines a target transmission time period and updates the first time information.
708. Node z sends a credit request message to node D, where the credit request message includes the identifiers 2 (determined by node A), 4 (determined by node w), 1 (determined by node x), and 3 (determined by node z) of the target sending time periods, the first sending port identifiers from node A to node D, and the first time information. Accordingly, node D receives the credit request message.
As shown in fig. 8a, node w takes the third sending time period after the ETA (sending time period 1 in fig. 8a) as the target sending time period, i.e., ETA + 3 is the target sending time period; for example, node w determines that the identifier of the target sending time period is 4. For node x, however, the available credits in the third sending time period after the ETA (e.g., sending time period 4 after the ETA in fig. 8a) are fully occupied, so node x may take the fourth sending time period after the ETA as the target sending time period, e.g., determining that its identifier is 1. Similarly, the implementations of node z and node D may refer to node x and/or node A, and are not described again in this embodiment of the present application. It can be understood that whether each node uses ETA + 3 as the target sending time period is not limited in this embodiment. For node z and node x, the specific implementation may also refer to the method described for node w, or to fig. 8a and/or fig. 8b, and is not described in detail here.
709. Node D determines a target transmission time period.
710. Node D sends a credit response message (e.g., Grant in fig. 8 b) to node a, where the credit response message includes the second time information and the available credit. Accordingly, node a receives the credit response message.
The second time information may be used to indicate the identifier of the target sending time period determined by each node: for example, 4 for node D, 3 for node z, 1 for node x, 4 for node w, and 2 for node A. As shown in fig. 8b, the second time information may include the identifier of the target sending time period determined by each node.
For example, node D may determine the path from node D to node a by the transmitting port id from node a to node D included in the credit request message.
711. Node A sends a data packet (e.g., Data in fig. 8c) to node D, where the data packet includes the second time information and the first sending port identifiers from node A to node D. Correspondingly, node D receives the data packet.
For example, node a may send the data packet through a source routing manner, that is, a path of node a- > node w- > node x- > node z- > node D.
It can be understood that when a data packet passes through each node, each node may cache the data packet in a corresponding cache queue according to a first sending port identifier from node a to node D and an identifier of a target sending time period from node a to node D included in the data packet, and then send the data packet through the corresponding cache queue.
It is understood that reference may be made to step 403 shown in fig. 4 for a specific implementation of step 711; alternatively, reference may also be made to the method shown in fig. 6b, etc., which will not be described in detail here.
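The per-hop lookup each along-path node performs on the data packet can be sketched as follows. The dict layout and field names are illustrative assumptions, not the on-wire packet format; the identifier values come from the example above.

```python
def per_hop_fields(packet, hop_index):
    """Each along-path node reads its own first sending port identifier and
    target sending time period identifier from the data packet."""
    return packet["ports"][hop_index], packet["periods"][hop_index]

# node A .. node D, with the port and period identifiers used above
pkt = {"ports": [90, 100, 200, 300, 400], "periods": [2, 4, 1, 3, 4]}
assert per_hop_fields(pkt, 1) == (100, 4)   # node w: port 100, period 4
assert per_hop_fields(pkt, 3) == (300, 3)   # node z: port 300, period 3
```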
It is understood that the message form shown in fig. 8b and/or fig. 8c is only an example, and should not be construed as limiting the embodiment of the present application.
It can be understood that the sending port identifier shown in this embodiment may also be referred to as a hop-by-hop link identifier or a path identifier, and this is not limited in this embodiment.
For example, the method provided by the present application may be applied to a network with any topology, where the network may include a traffic router, a core router, and the like, and the traffic router and the core router may also be collectively referred to as a first network device. The key parameters were analyzed as follows:
assuming that the maximum scale of the network is 8 hops, the total fiber length of the longest end-to-end path is 400 km, and the propagation rate in fiber is 1000 km per 5 ms, the static RTT when the network transmits a data packet is
static RTT = 400 km × (5 ms / 1000 km) × 2 = 4 ms
For example, if the duration of one sending time period is 10 microseconds and the sending port of the first network device is provided with 4 buffer queues, the buffering capability of the first network device needs to be at least 10 × 4 = 40 microseconds.
The calculation method of the bandwidth of the signaling message and the bandwidth of the data message is as follows:
assuming that 20Kb of credit can be applied once per credit request message, the duration of each transmission period is 10us, which 10us can transmit about 1Mb of data messages for a 100G port. Therefore, the credit in one sending time period can be distributed by about 51(1Mb is 1024Kb, 1024Kb/20Kb is approximately equal to 51) credit request messages, and better link utilization rate can be ensured.
Assuming that the combined length of a credit request message and a credit response message is 64 Byte × 8 × 2 = 1024 bits = 1 Kb, the bandwidth ratio of data packets to signaling messages is 20 Kb : 1 Kb = 20 : 1; that is, the first network device may reserve about 5% of the bandwidth for signaling messages.
The length of the credit sliding window is: maximum static RTT of the network (4 ms = 4000 us) / duration of a sending time period (10 us) + number of buffer queues supported by the first network device (4) × maximum hop count of the network (8) = 400 + 32 = 432 elements.
According to the above calculation, the credit in each sending time period can be distributed among 51 credit request messages, so each element of the credit array can be stored in an 8-bit memory cell, and the memory occupied by the credit array is 432 bytes. That is, when the first network device stores the credit sliding window shown in fig. 6a, the credit sliding window occupies 432 bytes of memory.
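The parameter analysis above can be reproduced in a few lines of arithmetic; all figures are taken directly from the text.

```python
# 100 Gb/s port over a 10 us sending time period -> 1 Mb of data
data_bits_per_period = (100 * 10**9) * 10 // 10**6
# 1 Mb = 1024 Kb of credit, 20 Kb per credit request message
requests_per_period = 1024 // 20
# one credit request + one credit response, 64 bytes each, in bits
signaling_bits = 64 * 8 * 2
# credit array: 4000 us RTT / 10 us periods + 4 queues * 8 hops, one byte each
window_bytes = 4000 // 10 + 4 * 8

assert data_bits_per_period == 1_000_000
assert requests_per_period == 51
assert signaling_bits == 1024        # 1 Kb of signaling per 20 Kb of data, ~5%
assert window_bytes == 432
```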
The technical solution provided by the present application is as follows: compared with the methods shown in fig. 1 and/or fig. 2b, the time-based credit allocation method provided by the present application considers the different times at which different data packets reach the destination, can avoid congestion caused by static RTT differences, and at the same time improves link utilization. The method provided by the present application is applicable to any network topology: congestion control is implemented on the edge network devices, and network congestion can be avoided by setting only a small number of buffer queues (e.g., P buffer queues) on the core network devices, which spares the core network devices from providing RTT-level buffers and reduces their cost.
It can be understood that the above-illustrated embodiments have respective emphasis, and the implementation manner not described in detail in one embodiment may refer to other embodiments, which are not described in detail here. Furthermore, the various embodiments described herein may be implemented as stand-alone solutions or combined in accordance with inherent logic and are intended to fall within the scope of the present application.
The following describes a first network device provided in an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a first network device according to an embodiment of the present application, where the first network device may be configured to perform the operations performed by the first network device in the method shown in fig. 4; alternatively, the first network device may be further configured to perform the operations performed by node a, node w, node x, node z, or node D shown in fig. 7a to 7c, and the like. Illustratively, the first network device may be a switch or a router, etc. The switches may also include access switches, aggregation switches, core switches, or the like. Alternatively, the first network device may be another device having the same function as a switch or a router, or the like. Alternatively, the first network device may also be a general-purpose computer, a network element device, or the like, for example, the first network device may be a wired device, and may also be a wireless device, or the like.
As shown in fig. 9, the first network device includes a processing unit 901 and a transceiving unit 902, where:
a processing unit 901, configured to obtain a credit request packet; then, according to the credit request message, the expected arrival time of the data message corresponding to the credit request message is determined; determining a target sending time period of the data message corresponding to the credit request message according to the expected arrival time; acquiring a data message corresponding to the credit request message;
the transceiving unit 902 is configured to send a data packet corresponding to the credit request packet in the target sending time period.
In this embodiment of the application, in some implementation manners, the processing unit 901 may be configured to obtain a credit request packet and obtain a data packet corresponding to the credit request packet. In other implementations, the processing unit 901 may further be configured to receive the credit request message and receive a data message corresponding to the credit request message through the transceiving unit 902.
In one possible implementation, the available credit in the target sending time period satisfies the requirement of the data message corresponding to the credit request message.
In a possible implementation manner, the processing unit 901 is specifically configured to determine, according to an expected arrival time, a target sending time period of a data packet corresponding to a credit request packet at a first sending port;
the transceiving unit 902 is specifically configured to send a data packet corresponding to the credit request packet through the first sending port in the target sending time period.
In one possible implementation, the first sending port includes M sending time periods, each sending time period includes available credits, and M is greater than or equal to 2.
In a possible implementation manner, the processing unit 901 is specifically configured to obtain available credits in N transmission time periods after an expected arrival time in M transmission time periods, where M is greater than or equal to N, and N is greater than or equal to 1; and determining at least one transmission time period as a target transmission time period according to the available credits within the N transmission time periods.
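The selection among the N sending time periods after the expected arrival time can be illustrated as follows. This is a hedged sketch, not the embodiment's implementation: it assumes each period is a `(start_time, available_credit)` tuple and that several consecutive periods may together supply the required credit; the function name `select_target_periods` is invented for illustration:

```python
def select_target_periods(expected: float,
                          periods: list[tuple[float, int]],
                          needed_credit: int,
                          n: int) -> list[float]:
    # Among the M sending time periods, consider up to N that start at or
    # after the expected arrival time, and keep adding periods until their
    # combined available credit covers the data packet, so that at least
    # one period is determined as a target sending time period.
    candidates = sorted(p for p in periods if p[0] >= expected)[:n]
    chosen, total = [], 0
    for start, credit in candidates:
        chosen.append(start)
        total += credit
        if total >= needed_credit:
            return chosen
    raise RuntimeError("insufficient credit in the next N sending time periods")

periods = [(100.0, 4), (110.0, 6), (120.0, 8), (130.0, 8)]
print(select_target_periods(105.0, periods, needed_credit=12, n=3))
# → [110.0, 120.0]
```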
In a possible implementation manner, the credit request message further includes first time information;
the processing unit 901 is specifically configured to determine an expected arrival time according to the acquisition time of the credit request packet and the first time information.
In a possible implementation manner, the processing unit 901 is further configured to update the first time information according to the expected arrival time and the target transmission time period; and sending a credit request message to a next-stage network device of the first network device through the transceiving unit 902, where the credit request message includes the updated first time information.
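The hop-by-hop update of the first time information might look like the following sketch. It assumes the update simply folds the local wait (from the expected arrival until the chosen target sending period) into the delay carried downstream; the function name and the additive model are assumptions made for illustration:

```python
def updated_first_time_info(first_time_info: float,
                            expected: float,
                            target_period_start: float) -> float:
    # The local buffer queuing delay is the wait between the expected
    # arrival of the data packet and the start of its target sending
    # time period; fold it into the time information before relaying
    # the credit request to the next-stage network device.
    local_queuing_delay = max(0.0, target_period_start - expected)
    return first_time_info + local_queuing_delay

print(updated_first_time_info(8.0, expected=108.0, target_period_start=110.0))
# → 10.0
```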
In one possible implementation, the first time information is used to indicate a buffer queuing delay and/or a static path round trip delay of one or more network devices.
In a possible implementation manner, the credit request message includes an identifier of the first sending port of the one or more network devices, and the identifier of the first sending port of the one or more network devices includes an identifier of the first sending port of the first network device.
In a possible implementation manner, the first sending port includes P buffer queues, and different buffer queues correspond to different sending time periods, where P is greater than or equal to 2.
In a possible implementation manner, a data message corresponding to the credit request message includes an identifier of a target sending time period of one or more network devices, where the identifier of the target sending time period of the one or more network devices includes an identifier of a target sending time period of a first network device; a transceiving unit 902, configured to cache, by using the processing unit 901, a data packet corresponding to the credit request packet in a corresponding cache queue according to the identifier of the target sending time period of the first network device; and transmitting the data message corresponding to the credit request message through the corresponding cache queue at the first transmitting port.
In a possible implementation manner, the data packet corresponding to the credit request packet further includes an identifier of the first sending port of the one or more network devices.
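The P cache queues at the first sending port can be sketched as a map from target-sending-time-period identifiers to FIFO queues, with packets enqueued by the identifier they carry and drained when that period opens. The class and method names below are illustrative assumptions, not taken from the embodiments:

```python
from collections import defaultdict, deque

class SendPort:
    """First sending port with one cache queue per sending-time-period
    identifier (the P >= 2 queues of the embodiment)."""

    def __init__(self):
        self._queues = defaultdict(deque)

    def enqueue(self, period_id: str, packet: str) -> None:
        # Cache the data packet in the queue matching its target
        # sending time period identifier.
        self._queues[period_id].append(packet)

    def drain(self, period_id: str):
        # Send, in FIFO order, the packets cached for the period
        # that has just opened.
        q = self._queues[period_id]
        while q:
            yield q.popleft()

port = SendPort()
port.enqueue("t2", "pkt-a")
port.enqueue("t1", "pkt-b")
port.enqueue("t2", "pkt-c")
print(list(port.drain("t2")))  # → ['pkt-a', 'pkt-c']
```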
In a possible implementation manner, the transceiving unit 902 is further configured to receive a credit response packet corresponding to the credit request packet, where the credit response packet includes an identifier of a target transmission time period of one or more network devices and a corresponding available credit; and/or,
the transceiving unit 902 is further configured to send a credit response packet corresponding to the credit request packet, where the credit response packet includes an identifier of a target sending time period of one or more network devices and a corresponding available credit.
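The credit response packet described above — carrying, per network device, the identifier of the chosen target sending time period and the corresponding available credit — might be modeled as in the following sketch; the `CreditResponse` name and the dictionary layout are assumptions made for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CreditResponse:
    # (device_id, target_period_id) -> available credit granted at that hop
    grants: dict = field(default_factory=dict)

    def add_grant(self, device_id: str, period_id: str, credit: int) -> None:
        self.grants[(device_id, period_id)] = credit

    def credit_for(self, device_id: str, period_id: str) -> int:
        return self.grants.get((device_id, period_id), 0)

resp = CreditResponse()
resp.add_grant("node-w", "t3", 16)
print(resp.credit_for("node-w", "t3"))  # → 16
```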
It is understood that the above specific descriptions of the target sending time period, the identification of the first sending port, the first time information, the expected arrival time or the static path round trip delay, etc. may refer to the description of the above method embodiments, and are not detailed here. For example, for a specific description of the identification of the first transmission port, reference may be made to the method shown in fig. 3, fig. 5a, fig. 5b, or the like. For a detailed description of the credit request message, the credit response message or the data message, reference may be made to fig. 5a and/or fig. 5b, etc. Reference may be made to the methods shown in fig. 6a, 6b, 8a to 8c for static path round trip delay, expected arrival time, M transmission time periods or N transmission time periods, etc.
It is to be understood that, when the first network device shown in fig. 9 is the first network device in the above method embodiments, or a component of the first network device that implements the above functions, the processing unit 901 may be one or more processors, and the transceiving unit 902 may be a transceiver; alternatively, the transceiving unit 902 may be split into a transmitting unit and a receiving unit, where the transmitting unit may be a transmitter, the receiving unit may be a receiver, and the two may be integrated into one device, such as a transceiver.
When the first network device is a circuit system, such as a chip or an integrated circuit, the processing unit 901 may be one or more processors, and the transceiving unit 902 may be an input/output interface, also referred to as a communication interface, or an interface circuit, or an interface, and so on. Or the transceiving unit 902 may also be a transmitting unit and a receiving unit, the transmitting unit may be an output interface, the receiving unit may be an input interface, and the transmitting unit and the receiving unit are integrated into one unit, such as an input-output interface.
In some implementations, the first network device shown in fig. 9 may be the first network device in the above method embodiments, or a component of the first network device that implements the above functions. Alternatively, the first network device shown in fig. 9 may be a node (such as a source node, an intermediate node, or a destination node) in the foregoing method embodiments, or a component in such a node that implements the foregoing functions. In this case, the transceiving unit 902 may be implemented by a transceiver, and the processing unit 901 may be implemented by a processor. As shown in fig. 10, the first network device 100 includes one or more processors 1020 and a transceiver 1010. The processor and the transceiver may be configured to perform the functions or operations performed by the first network device described above. Alternatively, the processor and the transceiver may be configured to perform the functions or operations performed by the source end, an intermediate node, or the destination end, or those performed by node a, node w, node x, node z, or node D, as described above.
In this case, the functions or operations performed by the transceiver 1010 and/or the processor 1020, etc. may be referred to fig. 9, and are not described in detail here.
In various implementations of the first network device shown in fig. 10, the transceiver may include a receiver that performs the function (or operation) of receiving and a transmitter that performs the function (or operation) of transmitting; the transceiver is used for communicating with other devices/apparatuses over a transmission medium. The processor 1020 transmits and receives data and/or signaling through the transceiver 1010, and is configured to implement the corresponding method described in fig. 4, fig. 7a, fig. 7b, or fig. 7c in the above method embodiments.
Optionally, the first network device 100 may further include one or more memories 1030 for storing program instructions and/or data, where the memory 1030 is coupled to the processor 1020. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units, or modules, which may be in an electrical, mechanical, or other form and is used for information interaction between the devices, units, or modules. The processor 1020 may operate in conjunction with the memory 1030 and may execute the program instructions stored in the memory 1030. Optionally, at least one of the one or more memories may be included in the processor.
The specific connection medium among the transceiver 1010, the processor 1020, and the memory 1030 is not limited in the embodiments of the present application. In fig. 10, the memory 1030, the processor 1020, and the transceiver 1010 are connected by a bus 1040, which is represented by a thick line; the connection manner between other components is merely illustrative and is not limited thereto. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus.
In the embodiments of the present application, the processor may be a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like, which may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in a processor.
It is understood that the first network device shown in the embodiment of the present application may also have more components than those shown in fig. 10, and the embodiment of the present application does not limit this.
It will be appreciated that the methods performed by the processor and transceiver shown above are merely examples, and reference may be made to the methods described above for the steps specifically performed by the processor and transceiver.
In other implementations, the first network device shown in fig. 9 may be circuitry. In this case, the processing unit 901 may be implemented by a processing circuit, and the transmitting/receiving unit 902 may be implemented by an interface circuit. As shown in fig. 11, the first network device may include a processing circuit 1101 and an interface circuit 1102. The processing circuit 1101 may be a chip, a logic circuit, an integrated circuit or a system on chip (SoC) chip, and the interface circuit 1102 may be a communication interface, an input/output interface, and the like. Illustratively, the first network device may be a circuit system of the first network device in the above method embodiment. Illustratively, the first network device may also be a circuit system in the source end, the intermediate node, or the destination end, etc. in the above method embodiments. Illustratively, the first network device may also be a circuit system in node a, node w, node x, node z or node D, and the like in the above method embodiments.
For example, the processing circuit 1101 is configured to obtain a credit request message; then, according to the credit request message, the expected arrival time of the data message corresponding to the credit request message is determined; determining a target sending time period of the data message corresponding to the credit request message according to the expected arrival time; acquiring a data message corresponding to the credit request message; the interface circuit 1102 is configured to output a data packet corresponding to the credit request packet in the target sending time period.
In this embodiment of the application, in some implementation manners, the processing circuit 1101 may be configured to obtain a credit request packet and obtain a data packet corresponding to the credit request packet. In other implementations, the processing circuit 1101 may be further configured to obtain the credit request message and obtain a data message corresponding to the credit request message through the interface circuit 1102.
For another example, the processing circuit 1101 is specifically configured to determine, according to the expected arrival time, a target transmission time period of the data packet corresponding to the credit request packet at the first transmission port; the interface circuit 1102 is specifically configured to output a data packet corresponding to the credit request packet through the first sending port in the target sending time period.
For another example, the processing circuit 1101 is specifically configured to obtain available credits in N transmission time periods after the expected arrival time in the M transmission time periods, where M is greater than or equal to N, and N is greater than or equal to 1; and determining at least one transmission time period as a target transmission time period according to the available credits within the N transmission time periods.
For another example, the credit request message further includes first time information; the processing circuit 1101 is specifically configured to determine the expected arrival time according to the acquisition time of the credit request packet and the first time information.
Also for example, the processing circuit 1101 is further configured to update the first time information included in the credit request message according to the expected arrival time and the target transmission time period, and output the credit request message through the interface circuit 1102.
For another example, the interface circuit 1102 is specifically configured to cache the data packet corresponding to the credit request packet in the corresponding cache queue according to the identifier of the target sending time period; and outputting the data message corresponding to the credit request message through the corresponding cache queue at the first sending port.
For another example, the interface circuit 1102 is further configured to obtain a credit response packet corresponding to the credit request packet, where the credit response packet includes an identifier of a target sending time period of one or more network devices and a corresponding available credit; and/or the interface circuit 1102 is further configured to output a credit response packet corresponding to the credit request packet, where the credit response packet includes an identifier of a target transmission time period of the one or more network devices and corresponding available credit.
It is understood that the above specific descriptions of the target sending time period, the identification of the first sending port, the first time information, the expected arrival time or the static path round trip delay, etc. may refer to the description of the above method embodiments, and are not detailed here. For example, for a specific description of the identification of the first transmission port, reference may be made to the method shown in fig. 3, fig. 5a, fig. 5b, or the like. For a detailed description of the credit request message, the credit response message or the data message, reference may be made to fig. 5a and/or fig. 5b, etc. Reference may be made to the methods shown in fig. 6a, 6b, 8a, 8b or 8c for static path round trip delay, expected arrival time, M transmission time periods or N transmission time periods, etc.
In the embodiments of the present application, the processing circuit may be a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application.
It will be appreciated that the methods performed by the interface circuit and the processing circuit shown above are merely examples, and reference may be made to the methods described above for the steps specifically performed by the interface circuit and the processing circuit.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the technical effects of the solutions provided by the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that essentially contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In addition, an embodiment of the present application further provides a computer program, where the computer program is used to implement the operation and/or the processing performed by the first network device in the data processing method provided in the embodiment of the present application.
Embodiments of the present application also provide a computer-readable storage medium, in which computer code is stored, and when the computer code runs on a computer, the computer is caused to execute operations and/or processes performed by a first network device in the data processing method provided by the embodiments of the present application.
Embodiments of the present application also provide a computer program product, which includes computer code or a computer program, and when the computer code or the computer program runs on a computer, the computer program causes operations and/or processes performed by a first network device in the data processing method provided by the embodiments of the present application to be performed.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (28)

1. A data processing method applied to a first network device, the method comprising:
acquiring a credit request message;
determining the expected arrival time of the data message corresponding to the credit request message according to the credit request message;
determining a target sending time period of the data message corresponding to the credit request message according to the expected arrival time;
acquiring a data message corresponding to the credit request message;
and transmitting the data message corresponding to the credit request message in the target transmission time period.
2. The method of claim 1, wherein the available credit in the target sending time period satisfies the requirement of the data packet corresponding to the credit request packet.
3. The method according to claim 1 or 2, wherein the determining the target transmission time period of the data packet corresponding to the credit request packet according to the expected arrival time comprises:
determining the target sending time period of the data message corresponding to the credit request message at a first sending port according to the expected arrival time;
the sending the data message corresponding to the credit request message in the target sending time period includes:
and transmitting the data message corresponding to the credit request message through the first transmitting port in the target transmitting time period.
4. The method of claim 3, wherein the first transmitting port comprises M transmitting time periods, each of the transmitting time periods comprising available credits, and wherein M is greater than or equal to 2.
5. The method according to claim 4, wherein the determining the target transmission time period of the data packet corresponding to the credit request packet at the first transmission port according to the expected arrival time comprises:
obtaining available credits in N sending time periods after the expected arrival time in the M sending time periods, wherein M is greater than or equal to N, and N is greater than or equal to 1;
and determining at least one sending time period as the target sending time period according to the available credit in the N sending time periods.
6. The method according to any of claims 3-5, wherein the credit request message includes first time information;
the determining the expected arrival time of the data message corresponding to the credit request message according to the credit request message includes:
and determining the expected arrival time according to the acquisition time of the credit request message and the first time information.
7. The method of claim 6, further comprising:
updating the first time information included in the credit request message according to the expected arrival time and the target sending time period;
and sending the credit request message to the next-stage network equipment of the first network equipment, wherein the credit request message comprises the updated first time information.
8. The method according to claim 6 or 7, wherein the first time information is used to indicate a buffer queuing delay and/or a static path round trip delay of one or more network devices.
9. The method according to any of claims 3-8, wherein the credit request message includes an identification of the first sending port of one or more network devices, and wherein the identification of the first sending port of the one or more network devices includes an identification of the first sending port of the first network device.
10. The method according to any of claims 3-9, wherein the first sending port comprises P buffer queues, and wherein different buffer queues correspond to different sending time periods, and wherein P is greater than or equal to 2.
11. The method according to claim 10, wherein a data packet corresponding to the credit request packet includes an identifier of a target transmission time period of one or more network devices, and the identifier of the target transmission time period of the one or more network devices includes an identifier of a target transmission time period of the first network device;
the sending the data message corresponding to the credit request message through the first sending port in the target sending time period includes:
caching the data message corresponding to the credit request message into a corresponding cache queue according to the identification of the target sending time period of the first network equipment;
and at the first sending port, sending the data message corresponding to the credit request message through the corresponding cache queue.
12. The method of claim 11, wherein a data packet corresponding to the credit request packet further includes an identifier of a first transmission port of one or more network devices.
13. The method according to any one of claims 1-12, wherein after the obtaining the credit request message and before the obtaining the data message corresponding to the credit request message, the method further comprises:
receiving a credit response message corresponding to the credit request message, wherein the credit response message comprises the identification of the target sending time period of one or more network devices and corresponding available credit; and/or,
and sending a credit response message corresponding to the credit request message, wherein the credit response message comprises the identification of the target sending time period of one or more network devices and the corresponding available credit.
14. A first network device, wherein the first network device comprises:
the processor is used for acquiring a credit request message;
the processor is further configured to determine an expected arrival time of a data packet corresponding to the credit request packet according to the credit request packet;
the processor is further configured to determine a target sending time period of the data packet corresponding to the credit request packet according to the expected arrival time;
the processor is further configured to obtain a data packet corresponding to the credit request packet;
and the transceiver is used for transmitting the data message corresponding to the credit request message in the target transmission time period.
15. The first network device of claim 14, wherein the available credit in the target sending time period satisfies a requirement of a data packet corresponding to the credit request packet.
16. The first network device of claim 14 or 15,
the processor is specifically configured to determine, according to the expected arrival time, the target transmission time period of the data packet corresponding to the credit request packet at the first transmission port;
the transceiver is specifically configured to send, through the first sending port, a data packet corresponding to the credit request packet in the target sending time period.
17. The first network device of claim 16, wherein the first sending port comprises M sending time periods, each sending time period comprising available credits, and wherein M is greater than or equal to 2.
18. The first network device of claim 17,
the processor is specifically configured to acquire available credits in N transmission time periods following the expected arrival time among the M transmission time periods, where M is greater than or equal to N, and N is greater than or equal to 1; and determining at least one transmission time period as the target transmission time period according to the available credit in the N transmission time periods.
19. The first network device according to any of claims 16-18, wherein the credit request message comprises first time information;
the processor is specifically configured to determine the expected arrival time according to the acquisition time of the credit request packet and the first time information.
20. The first network device of claim 19,
the processor is further configured to update the first time information included in the credit request packet according to the expected arrival time and the target sending time period;
the transceiver is further configured to send the credit request packet to a next-stage network device of the first network device, where the credit request packet includes the updated first time information.
21. The first network device of claim 19 or 20, wherein the first time information is used to indicate a buffer queuing delay and/or a static path round trip delay of one or more network devices.
22. The first network device according to any of claims 16-21, wherein the credit request message includes an identifier of the first sending port of one or more network devices, and wherein the identifier of the first sending port of the one or more network devices includes an identifier of the first sending port of the first network device.
23. The first network device of any of claims 16-22, wherein the first sending port comprises P buffer queues, and wherein different buffer queues correspond to different sending time periods, and wherein P is greater than or equal to 2.
24. The first network device according to claim 23, wherein a data packet corresponding to the credit request packet includes an identifier of a target transmission time period of one or more network devices, and the identifier of the target transmission time period of the one or more network devices includes an identifier of a target transmission time period of the first network device;
the transceiver is specifically configured to cache, by the processor, a data packet corresponding to the credit request packet in a corresponding cache queue according to the identifier of the target transmission time period of the first network device; and at the first sending port, sending the data message corresponding to the credit request message through the corresponding cache queue.
25. The first network device of claim 24, wherein a data packet corresponding to the credit request packet further includes an identifier of a first transmission port of one or more network devices.
26. The first network device of any of claims 14-25,
the transceiver is further configured to receive a credit response packet corresponding to the credit request packet, where the credit response packet includes an identifier of a target transmission time period of one or more network devices and a corresponding available credit; and/or,
the transceiver is further configured to send a credit response packet corresponding to the credit request packet, where the credit response packet includes an identifier of a target sending time period of one or more network devices and a corresponding available credit.
27. A computer-readable storage medium comprising a computer program which, when run on a computer, causes the method of any of claims 1-13 to be performed.
28. A computer program product which, when run on a computer, causes the method of any of claims 1-13 to be performed.
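Claims 23-24 describe a sending port that maintains P buffer queues (P ≥ 2), one per sending time period, caches each data packet in the queue matching the target-sending-time-period identifier carried with its credit request, and transmits from the matching queue. The sketch below illustrates that queueing and credit-bounded sending; all class, method, and parameter names are illustrative assumptions, not terminology from the patent, and the one-credit-per-packet accounting is a simplification.

```python
# Illustrative sketch of per-time-period buffer queues (claims 23-24).
# Names and the credit-accounting granularity are assumptions for clarity.
from collections import deque

class SendingPort:
    def __init__(self, num_periods):
        assert num_periods >= 2                # claim 23: P is at least 2
        # one buffer queue per sending time period
        self.queues = [deque() for _ in range(num_periods)]

    def cache_packet(self, packet, target_period):
        """Cache a data packet in the queue for its target sending time period."""
        self.queues[target_period].append(packet)

    def send_in_period(self, current_period, available_credit):
        """Send queued packets for the current period, bounded by granted credit."""
        sent = []
        queue = self.queues[current_period]
        while queue and available_credit > 0:
            sent.append(queue.popleft())
            available_credit -= 1              # one credit per packet (simplified)
        return sent

port = SendingPort(num_periods=3)
port.cache_packet("pkt-A", target_period=1)
port.cache_packet("pkt-B", target_period=1)
port.cache_packet("pkt-C", target_period=2)
print(port.send_in_period(1, available_credit=1))  # prints ['pkt-A']
```

In this reading, the available credit returned in the credit response packet (claim 26) bounds how much of a queue drains during its own time period, so packets destined for other periods stay buffered rather than contending for the port.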
CN202010480984.3A 2020-05-30 2020-05-30 Data processing method and device Pending CN113746746A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010480984.3A CN113746746A (en) 2020-05-30 2020-05-30 Data processing method and device
PCT/CN2021/096537 WO2021244404A1 (en) 2020-05-30 2021-05-27 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010480984.3A CN113746746A (en) 2020-05-30 2020-05-30 Data processing method and device

Publications (1)

Publication Number Publication Date
CN113746746A 2021-12-03

Family

ID=78727860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010480984.3A Pending CN113746746A (en) 2020-05-30 2020-05-30 Data processing method and device

Country Status (2)

Country Link
CN (1) CN113746746A (en)
WO (1) WO2021244404A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102255815A (en) * 2011-08-11 2011-11-23 杭州华三通信技术有限公司 Data transmission method and device
US10999882B2 (en) * 2014-09-05 2021-05-04 Telefonaktiebolaget Lm Ericsson (Publ) Multipath control of data streams
JP2017201756A (en) * 2016-05-06 2017-11-09 富士通株式会社 Control device, rate control method, and network system
CN106230540B (en) * 2016-06-30 2019-03-26 电信科学技术第五研究所有限公司 High-precision NTP message method of reseptance and sending method

Also Published As

Publication number Publication date
WO2021244404A1 (en) 2021-12-09

Similar Documents

Publication Publication Date Title
CN108390820B (en) Load balancing method, equipment and system
EP2684321B1 (en) Data blocking system for networks
US11968111B2 (en) Packet scheduling method, scheduler, network device, and network system
US11785113B2 (en) Client service transmission method and apparatus
CN112585914B (en) Message forwarding method and device and electronic equipment
US9608927B2 (en) Packet exchanging device, transmission apparatus, and packet scheduling method
US10374959B2 (en) Method for transmitting data in a packet-oriented communications network and correspondingly configured user terminal in said communications network
KR100463697B1 (en) Method and system for network processor scheduling outputs using disconnect/reconnect flow queues
US9197570B2 (en) Congestion control in packet switches
JP7487316B2 (en) Service level configuration method and apparatus
JP2020072336A (en) Packet transfer device, method, and program
WO2002098047A2 (en) System and method for providing optimum bandwidth utilization
CN109995608B (en) Network rate calculation method and device
US8660001B2 (en) Method and apparatus for providing per-subscriber-aware-flow QoS
CN112751776A (en) Congestion control method and related device
CN111434079B (en) Data communication method and device
WO2023274165A1 (en) Parameter configuration method and apparatus, controller, communication device, and communication system
CN113746746A (en) Data processing method and device
JP2024519555A (en) Packet transmission method and network device
CN114095431A (en) Queue management method and network equipment
KR101681613B1 (en) Apparatus and method for scheduling resources in distributed parallel data transmission system
US20230254264A1 (en) Software-defined guaranteed-latency networking
WO2024016327A1 (en) Packet transmission
WO2024036476A1 (en) Packet forwarding method and apparatus
WO2020143509A1 (en) Method for transmitting data and network device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination