CN117155874A - Data packet transmitting method, forwarding node, transmitting terminal and storage medium - Google Patents

Data packet transmitting method, forwarding node, transmitting terminal and storage medium

Info

Publication number
CN117155874A
Authority
CN
China
Prior art keywords
scheduling
scheduling queue
object block
queue
data packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210575912.6A
Other languages
Chinese (zh)
Inventor
徐安民
于德雷
程宏涛
李凤凯
孟锐
王闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210575912.6A priority Critical patent/CN117155874A/en
Priority to PCT/CN2023/092332 priority patent/WO2023226716A1/en
Publication of CN117155874A publication Critical patent/CN117155874A/en
Pending legal-status Critical Current


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 — Traffic control in data switching networks
    • H04L 47/10 — Flow control; Congestion control
    • H04L 47/24 — Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2425 — Traffic characterised by specific attributes, e.g. priority or QoS, for supporting services specification, e.g. SLA
    • H04L 47/50 — Queue scheduling
    • H04L 47/62 — Queue scheduling characterised by scheduling criteria
    • H04L 47/625 — Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 — Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Abstract

The application discloses a data packet sending method, a forwarding node, a sending end and a storage medium, belonging to the field of communication technology. The method comprises the following steps: the forwarding node determines a first scheduling queue from a plurality of scheduling queues based on the priority of each of the plurality of scheduling queues. Each scheduling queue is used to buffer the data packets of at least one object block, data packets of the same object block are buffered in the same scheduling queue, and the priority of each scheduling queue is determined based on a target attribute of the object block to which the data packets buffered in that queue belong, the target attribute being an attribute that remains unchanged while the data packets of the corresponding object block are being sent. The forwarding node then sends the data packets in the first scheduling queue. Compared with reducing the frequency of interleaved transmission among data packets of different object blocks by means of a comparator and a matcher, forwarding the data packets of object blocks in this application can be realized with only the scheduling queues and their corresponding priorities, which incurs low hardware overhead and achieves high forwarding efficiency.

Description

Data packet transmitting method, forwarding node, transmitting terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to a data packet sending method, a forwarding node, a sending end and a storage medium.
Background
The basic unit in which an upper-layer application processes data is the object block; one object block comprises a complete piece of data. For example, the data corresponding to one video frame in a media-class application is an object block. The basic unit in which the network transmits data is the packet, so when the sending end transmits data to the receiving end, the object block provided by the upper-layer application needs to be divided into a plurality of data packets; each data packet is then transmitted to a forwarding node of the network, which forwards each data packet to the receiving end.
In the related art, a best-effort queue is configured on the forwarding node. Each time the forwarding node receives a data packet, it adds the packet to the best-effort queue. When sending data, the forwarding node selects, through a comparator, the data packet with the earliest enqueue time from the best-effort queue, determines, through a matcher, the data packets in the best-effort queue that belong to the same object block as that packet, and then sends these data packets in sequence. In this way, data packets belonging to the same object block can be transmitted together as far as possible, reducing the frequency of interleaved transmission among data packets of different object blocks.
However, this packet-sending flow is complex, which results in high algorithmic overhead on the forwarding node and thus reduces its forwarding efficiency.
Disclosure of Invention
The embodiments of the present application provide a data packet sending method, a forwarding node, a sending end and a storage medium, which can improve the forwarding efficiency of a forwarding node. The technical solutions are as follows:
In a first aspect, a method for forwarding a data packet is provided. In the method, a forwarding node determines a first scheduling queue from a plurality of scheduling queues based on the priority of each of the plurality of scheduling queues. Each scheduling queue is used to buffer data packets of at least one object block, and data packets of the same object block are buffered in the same scheduling queue. The priority of each scheduling queue is determined based on a target attribute of the object block to which the data packets buffered in that queue belong, the target attribute being an attribute that remains unchanged while the data packets of the corresponding object block are being sent. The forwarding node then sends the data packets in the first scheduling queue.
Because the data packets of the same object block are buffered in the same scheduling queue, and the priority of a scheduling queue depends on a fixed target attribute of the object block buffered in it rather than on any specific data packet, the relative order of the queue priorities remains essentially unchanged over a short period of time. While the priorities are unchanged, the forwarding node selects the data packets to send from the same scheduling queue each time. And because the data packets of the same object block are buffered in the same scheduling queue, the data packets sent by the forwarding node during that period are very likely to belong to the same object block, so the data packets of one object block are sent together as far as possible and the frequency of interleaved transmission among data packets of different object blocks is reduced.
Compared with reducing the frequency of interleaved transmission among data packets of different object blocks by means of a comparator and a matcher, forwarding the data packets of object blocks in the embodiments of the present application can be realized with only the scheduling queues and their corresponding priorities, which incurs low hardware overhead and achieves high forwarding efficiency.
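The forwarding step above can be sketched in a few lines. This is an illustrative assumption about one possible realization, not the patent's implementation: the class name, the numeric priority encoding (smaller value means higher priority) and the in-memory deques are all invented for the example.

```python
from collections import deque

class SchedulingQueue:
    """A scheduling queue with a priority; smaller value = higher priority."""
    def __init__(self, priority):
        self.priority = priority
        self.packets = deque()   # buffered data packets of its object block(s)

def select_and_send(queues):
    """Determine the first scheduling queue among the non-empty queues by
    priority, then send (pop) the data packet at its head."""
    candidates = [q for q in queues if q.packets]
    if not candidates:
        return None
    first_queue = min(candidates, key=lambda q: q.priority)
    return first_queue.packets.popleft()
```

Because the priorities stay fixed over a short period, repeated calls keep draining the same queue, which is what keeps the data packets of one object block together.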
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the forwarding node receives a first data packet, where the first data packet carries an identifier of a target object block, and the target object block is the object block to which the first data packet belongs. If the first data packet does not carry a first-packet label, the forwarding node adds the first data packet, based on the identifier of the target object block, to a second scheduling queue, where the second scheduling queue is the scheduling queue among the plurality of scheduling queues in which data packets of the target object block are already buffered, and the first-packet label is used to indicate that a data packet is the first data packet of the target object block.
If the first data packet does not carry the first-packet label, it is not the first packet of the target object block. In this scenario, the first data packet simply needs to be added to the scheduling queue in which data packets of the target object block were buffered before the current time. In this way, data packets of the same object block can be buffered in the same scheduling queue.
In a possible implementation of the data packet forwarding method according to the first aspect, after the forwarding node receives the first data packet, if the first data packet carries the first-packet label, the forwarding node selects one scheduling queue from the plurality of scheduling queues as a third scheduling queue and adds the first data packet to the third scheduling queue.
If the first data packet carries the first-packet label, this indicates that the first data packet is the first packet of the target object block. In this scenario, the target object block is a new object block from the forwarding node's perspective, and a scheduling queue needs to be allocated to it at this time, so that subsequent data packets of the target object block are all buffered in the allocated scheduling queue.
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue as follows: the forwarding node selects, from the plurality of scheduling queues, a scheduling queue in which no data packet is buffered as the third scheduling queue.
To buffer different object blocks in different scheduling queues, the forwarding node may, for a newly received target object block, select one of the remaining empty scheduling queues as the third scheduling queue. This enqueuing mode is subsequently referred to as the first enqueuing mode.
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, each scheduling queue in the plurality of scheduling queues buffers the data packets of at most one object block. With the first enqueuing mode, each scheduling queue can be made to buffer only one object block.
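The first enqueuing mode can be sketched as follows. The packet representation (a dict with hypothetical "block_id" and "is_first" fields) and the block-to-queue map are assumptions made purely for illustration.

```python
def enqueue_first_mode(queues, block_map, packet):
    """First enqueuing mode: a first packet claims an empty scheduling queue;
    every later packet of the same object block joins the queue recorded for
    that block, so each queue holds at most one object block."""
    block_id = packet["block_id"]
    if packet.get("is_first"):
        empty_queue = next(q for q in queues if not q)  # a queue with no buffered packets
        block_map[block_id] = empty_queue
        empty_queue.append(packet)
    else:
        block_map[block_id].append(packet)
```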
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the target attribute is the network allowed delay of the corresponding object block, and the first packet further carries the network allowed delay of the target object block.
In this scenario, after selecting an empty scheduling queue from the plurality of scheduling queues as the third scheduling queue, the forwarding node updates the priority of the third scheduling queue so that its updated priority is higher than the priority of any fourth scheduling queue and lower than the priority of any fifth scheduling queue, where a fourth scheduling queue is a scheduling queue whose buffered object block has a network allowed delay greater than that of the target object block, and a fifth scheduling queue is a scheduling queue whose buffered object block has a network allowed delay less than that of the target object block.
In the first enqueuing mode, this implementation gives a higher priority to the scheduling queue holding an object block with a stricter network allowed delay requirement, thereby ensuring that such object blocks are sent out preferentially.
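One way to realize the delay-based ordering above is to re-rank the occupied queues by the network allowed delay of their object block; a new queue then automatically lands above every queue with a larger delay (the fourth queue) and below every queue with a smaller delay (the fifth queue). The dict-based queue representation below is an assumption for illustration.

```python
def update_priorities_by_delay(queues):
    """Rank occupied queues so that a smaller network allowed delay yields a
    higher priority (rank 0). Empty queues (delay is None) are left unranked."""
    occupied = [q for q in queues if q.get("delay") is not None]
    for rank, q in enumerate(sorted(occupied, key=lambda q: q["delay"])):
        q["priority"] = rank
```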
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue as follows: the forwarding node determines the remaining capacity of a sixth scheduling queue, where the sixth scheduling queue is the scheduling queue that received the most recent first packet before the current time. If the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, the forwarding node selects, from the plurality of scheduling queues, a scheduling queue in which no data packet is buffered as the third scheduling queue. Correspondingly, if after determining the remaining capacity of the sixth scheduling queue the forwarding node finds it sufficient to buffer the target object block, the forwarding node takes the sixth scheduling queue as the third scheduling queue.
With this implementation, each scheduling queue can buffer a plurality of object blocks, and when one scheduling queue is full, a new object block enters the next empty scheduling queue. This enqueuing mode is subsequently referred to as the second enqueuing mode.
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the forwarding node determines the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the number of buffered object blocks, that is, the number of object blocks to which the data packets buffered in the sixth scheduling queue belong, and takes the difference between a first threshold and this number as the remaining capacity, where the first threshold is the number of object blocks that the sixth scheduling queue can buffer. In this case, the remaining capacity being insufficient to buffer the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is 0.
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the forwarding node determines the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the total size of the buffered data, that is, the total size of the data packets buffered in the sixth scheduling queue, and takes the difference between a second threshold and this total size as the remaining capacity, where the second threshold is the total size of the data packets that the sixth scheduling queue can buffer. In this case, the first packet further carries the data size of the target object block, and the remaining capacity being insufficient to buffer the target object block means that the remaining capacity of the sixth scheduling queue is less than the data size of the target object block.
In the second enqueuing mode, these two implementations allow whether a queue is full to be determined by either the maximum number of object blocks or the maximum data size that the scheduling queue can carry, which improves the flexibility of first-packet enqueuing.
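The two capacity checks of the second enqueuing mode can be sketched as follows. The function names and the concrete thresholds are illustrative assumptions; only the "first threshold" (maximum object blocks) and "second threshold" (maximum total data size) come from the text above.

```python
def remaining_by_block_count(cached_block_ids, first_threshold):
    """Remaining capacity = first threshold - number of cached object blocks;
    the queue is full exactly when this reaches 0."""
    return first_threshold - len(set(cached_block_ids))

def remaining_by_bytes(cached_packet_sizes, second_threshold):
    """Remaining capacity = second threshold - total size of cached data."""
    return second_threshold - sum(cached_packet_sizes)

def can_buffer_by_count(remaining):
    # insufficient iff the remaining capacity is 0
    return remaining > 0

def can_buffer_by_bytes(remaining, block_size):
    # insufficient iff the remaining capacity is less than the block's data size
    return remaining >= block_size
```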
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the target attribute is the arrival time of the first packet of the corresponding object block.
In this scenario, after selecting an empty scheduling queue from the plurality of scheduling queues as the third scheduling queue, the forwarding node updates the priority of the third scheduling queue so that its updated priority is lower than the priorities of the other scheduling queues in which data packets are buffered.
In the first enqueuing mode or the second enqueuing mode, this gives a higher priority to the scheduling queue holding the object block whose first packet arrived earlier.
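For the arrival-time attribute, the newly occupied queue simply takes the lowest priority among the occupied queues. A minimal sketch follows; the rank bookkeeping (a dict from queue id to rank, larger rank meaning lower priority) is an assumption.

```python
def demote_new_queue(priorities, new_queue_id):
    """Give a newly occupied scheduling queue a priority lower than every
    scheduling queue that already buffers data packets."""
    lowest = max(priorities.values(), default=-1)
    priorities[new_queue_id] = lowest + 1   # larger rank = lower priority
```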
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, if all the data packets of all the object blocks buffered in any scheduling queue of the plurality of scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues in which data packets are buffered.
In the first enqueuing mode, since one scheduling queue buffers only one object block, the priority of a scheduling queue needs to be updated once all the data packets of the object block buffered in it (that is, of all its object blocks) have been sent.
In the second enqueuing mode, the priority of a scheduling queue is determined based on the target attribute of the first object block buffered in it, one scheduling queue buffers a plurality of object blocks, and the first packets of these object blocks arrive consecutively; therefore, the priority of a scheduling queue needs to be updated when all the data packets of all the object blocks buffered in it have been sent.
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the plurality of scheduling queues are arranged in order. In this scenario, the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue as follows: the forwarding node determines a sixth scheduling queue from the plurality of scheduling queues, where the sixth scheduling queue is the scheduling queue that received the most recent first packet before the current time, and then determines the scheduling queue ordered after the sixth scheduling queue as the third scheduling queue.
With this implementation, each scheduling queue can buffer a plurality of object blocks, and new object blocks enter the scheduling queues in turn. This enqueuing mode is subsequently referred to as the third enqueuing mode.
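The third enqueuing mode rotates first packets across the ordered queues; the index bookkeeping below is an illustrative assumption.

```python
def assign_round_robin(num_queues, first_packet_block_ids):
    """Third enqueuing mode: each new block's first packet goes to the queue
    ordered after the one that received the previous first packet, wrapping
    around after the last queue."""
    assignment, last_index = {}, -1
    for block_id in first_packet_block_ids:
        last_index = (last_index + 1) % num_queues
        assignment[block_id] = last_index
    return assignment
```

With three queues, five consecutive object blocks A..E land in queues 0, 1, 2, 0, 1 in turn.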
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the target attribute is the arrival time of the first packet of the corresponding object block. In this scenario, after the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue, if the third scheduling queue buffers no data packets other than the first data packet, the forwarding node updates the priority of the third scheduling queue so that its updated priority is lower than the priorities of the other scheduling queues in which data packets are buffered.
In the third enqueuing mode, this likewise gives a higher priority to the scheduling queue holding the object block whose first packet arrived earlier.
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, if all the data packets of one of the object blocks buffered in any scheduling queue of the plurality of scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues in which data packets are buffered.
In the third enqueuing mode, the priority of a scheduling queue is determined based on the target attributes of the object blocks buffered in it; one scheduling queue buffers a plurality of object blocks, and the first packets of these object blocks do not arrive consecutively, that is, first packets received adjacently by the forwarding node are buffered in different scheduling queues. In this scenario, to prevent an early-arriving object block from taking too long to transmit, the priority of a scheduling queue needs to be updated whenever all the data packets of one of the object blocks buffered in it have been sent.
Based on the data packet forwarding method provided in the first aspect, in a possible implementation, the priority of a scheduling queue may be updated as follows: the priority of the scheduling queue is updated only if that scheduling queue currently has the highest priority.
In this way, a priority update is triggered only when all the data packets of some object block in the highest-priority scheduling queue have been sent, which reduces the data processing pressure on the forwarding node.
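The optimization above reduces to a single comparison; the queue ids and the rank encoding (0 = highest priority) are assumptions made for the sketch.

```python
def should_update_priorities(priorities, drained_queue_id):
    """Trigger a priority update only if the object block that just finished
    sending was buffered in the highest-priority scheduling queue."""
    return priorities[drained_queue_id] == min(priorities.values())
```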
In a second aspect, a data packet sending method is provided. In the method, a sending end determines a plurality of data packets of an object block to be sent, where each of the plurality of data packets carries an identifier of the object block, and the first packet among the plurality of data packets carries a first-packet tag used to indicate that the corresponding data packet is the first data packet of the object block to be sent. The sending end then sends the plurality of data packets.
Based on the data packet sending method provided in the second aspect, in a possible implementation, the first packet further carries the network allowed delay of the object block to be sent.
Based on the data packet sending method provided in the second aspect, in a possible implementation, the first packet further carries the data size of the object block to be sent.
Based on the data packet sending method provided in the second aspect, in a possible implementation, the tail packet among the plurality of data packets carries a tail-packet tag used to indicate that the corresponding data packet is the last data packet of the object block to be sent.
For the technical effects of the data packet sending method provided in the second aspect, reference may be made to the first aspect; details are not repeated here.
In a third aspect, a forwarding node is provided, where the forwarding node has the function of implementing the behavior of the data packet sending method in the first aspect. The forwarding node comprises at least one module, and the at least one module is used to implement the data packet sending method provided in the first aspect.
In a fourth aspect, a sending end is provided, where the sending end has the function of implementing the behavior of the data packet sending method in the second aspect. The sending end comprises at least one module, and the at least one module is used to implement the data packet sending method provided in the second aspect.
In a fifth aspect, a forwarding node is provided, where the forwarding node comprises a processor and a memory, the memory being configured to store a program that supports the forwarding node in performing the data packet sending method provided in the first aspect and to store the data involved in implementing that method. The processor is configured to execute the program stored in the memory. The forwarding node may further comprise a communication bus for establishing a connection between the processor and the memory.
In a sixth aspect, a sending end is provided, where the sending end comprises a processor and a memory, the memory being configured to store a program that supports the sending end in performing the data packet sending method provided in the second aspect and to store the data involved in implementing that method. The processor is configured to execute the program stored in the memory. The sending end may further comprise a communication bus for establishing a connection between the processor and the memory.
In a seventh aspect, there is provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the data packet transmission method of the first or second aspect described above.
In an eighth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of data packet transmission of the first or second aspect described above.
The technical effects obtained in the second to eighth aspects are similar to those obtained in the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
Fig. 1 is a schematic diagram of a queue scheduling process based on the FIFO technique according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a FIFO-based scheduling result according to an embodiment of the present application;
Fig. 3 is a schematic diagram of an object-block-based scheduling result for the three object blocks of Fig. 2;
Fig. 4 is a schematic diagram of a network architecture according to an embodiment of the present application;
Fig. 5 is a flowchart of a data packet sending method according to an embodiment of the present application;
Fig. 6 is a flowchart of a data packet sending method according to an embodiment of the present application;
Fig. 7 is a flowchart of a data packet sending method according to an embodiment of the present application;
Fig. 8 is a flowchart of a data packet sending method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a forwarding node according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a sending end according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
Before explaining the embodiment of the present application in detail, an application scenario of the embodiment of the present application is described.
Currently, when processing data, an upper layer application (or an application program or a service) of a network usually processes the data by taking an object block (or an object or object data) as a basic unit. An object block may be understood as a complete piece of data processed by an upper layer application. For example, when the upper layer application is a media class application, an object block may be understood as an entire video frame. For another example, when the upper layer application is a picture class application, an object block may be understood as a whole picture.
However, the basic unit of network data transmission supported by current network protocols is the data packet (also called a data block or message), so the basic process of transmitting data from the sending end to the receiving end may be as follows: the upper-layer application of the sending end provides the object block to the network transport layer of the sending end; the network transport layer of the sending end divides the object block into a plurality of data packets and transmits each data packet to the network transport layer of the receiving end; after receiving the data packets, the network transport layer of the receiving end reassembles them into the object block and delivers the reassembled object block to the upper-layer application of the receiving end.
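The split and reassembly steps above amount to slicing the object block into bounded-size pieces and concatenating them on the receiving side. The following sketch is an assumption for illustration; in particular the `mtu` parameter and function names are invented, and real transport layers also add headers, sequencing and retransmission.

```python
def split_into_packets(object_block: bytes, mtu: int):
    """Sending side: divide one object block into data packets of at most
    `mtu` bytes each."""
    return [object_block[i:i + mtu] for i in range(0, len(object_block), mtu)]

def reassemble(packets):
    """Receiving side: recombine the data packets into the object block."""
    return b"".join(packets)
```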
As networks grow ever larger, data packets sent by a sending end are typically forwarded to the receiving end by a forwarding node (also called an intermediate node, forwarding end, or forwarding device) in the network. The efficiency with which the forwarding node forwards data packets therefore affects, to a certain extent, the final transmission efficiency of the object block, and in turn the efficiency with which the upper-layer application processes the object block. How a forwarding node forwards data packets is thus a major focus of current research.
First-in first-out (FIFO) is one technique a forwarding node can use to forward data packets. In the FIFO technique, each time the forwarding node receives a data packet, it buffers the packet in a best-effort queue. When sending data, the forwarding node simply sends the data packets in the best-effort queue one by one, from the earliest enqueue time to the latest. That is, the packets that enter the best-effort queue first are sent out first, which is why it is called the FIFO technique.
Fig. 1 is a schematic diagram of a queue scheduling process based on the FIFO technique according to an embodiment of the application. As shown in Fig. 1, the data packets received by the forwarding node enter the best-effort queue sequentially from the left; that is, received data packets are added in turn to the tail of the best-effort queue. The forwarding node sends the packets in turn from the head of the best-effort queue (the right side of the queue in Fig. 1).
In Fig. 1, the squares marked 1 represent data packets of object block 1, the squares marked 2 represent data packets of object block 2, and the squares marked 3 represent data packets of object block 3.
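The FIFO behaviour of Fig. 1 can be reproduced with a plain deque; the packet tuples (block id, sequence number) and the particular arrival order are illustrative assumptions.

```python
from collections import deque

# Arrival order interleaves packets of three object blocks, as in Fig. 1.
arrivals = [("blk1", 0), ("blk2", 0), ("blk1", 1), ("blk3", 0), ("blk2", 1)]

best_effort = deque()
for packet in arrivals:
    best_effort.append(packet)                # enqueue at the tail on arrival

send_order = []
while best_effort:
    send_order.append(best_effort.popleft())  # dequeue from the head to send
```

The send order equals the arrival order, so the interleaving among object blocks is preserved rather than removed, which is exactly the drawback discussed below.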
It should be noted that the service classes of different terminals may differ, and the service class of each terminal is planned by the operator. On this basis, the forwarding node configures one best-effort queue per service class, so that data packets sent by different terminals belonging to the same service class are buffered in the same best-effort queue.
Because data packets sent by different terminals of the same service class share one best-effort queue, the forwarding node cannot ensure that data packets with special requirements reach the receiving end preferentially; that is, neither the quality of service (QoS) level nor any particular priority of a user is guaranteed. In other words, the forwarding node simply sends the data packets in turn based on their enqueue times in the best-effort queue, and if the current network traffic load is heavy, for example when the data packets of many terminals are buffered in the best-effort queue, the data transmission bit rate and transmission time perceived on the user side of a given terminal are not fixed.
Based on the description of the FIFO technology, the FIFO technology has the following problems:
(1) In the FIFO technology, data packets sent by different terminals belonging to the same service class are buffered in the same best effort queue, so that data packets of different object blocks sent by different upper layer applications of different terminals or different upper layer applications of the same terminal are correspondingly buffered in the same best effort queue, which easily results in the loss of data packets of a certain object block. However, for most upper layer applications, the object block must be complete and cannot lack any data packets, otherwise the upper layer application cannot process the object block. That is, FIFO techniques do not take into account the transmission requirements of upper layer applications for the object blocks, but simply schedule based on data packets.
(2) In FIFO technology, a forwarding node simply sends individual packets in turn based on the enqueue time of the individual packets in the best-effort queue. If the sender has a requirement on the Completion Time (CT) of the object block, for example, the sender requires that the CT be minimized or that the CT not exceed the Deadline (DL), the requirement of the sender cannot be satisfied by FIFO technology in this scenario.
The DL is understood as a requirement of the sender for the network transmission duration of the data, and therefore, the DL is hereinafter referred to as a network allowed time delay.
Fig. 2 is a schematic diagram of a FIFO-based scheduling result according to an embodiment of the application. As shown in fig. 2, it is assumed that there are currently three object blocks, labeled object block 1, object block 2, and object block 3. The data size of object block 1 is shown as 1, that of object block 2 as 2, and that of object block 3 as 3. These values do not represent the actual data sizes of the object blocks; they merely describe the size relationship among object block 1, object block 2, and object block 3. Further, as shown in fig. 2, the network allowable time delay of object block 1 is 1 second (s), that of object block 2 is 6s, and that of object block 3 is 6s.
As shown in fig. 2, because the packets of object block 1, object block 2, and object block 3 are buffered in the best effort queue in an interleaved manner, they are also transmitted in an interleaved manner. During the period 0-3s, packets of all three object blocks are transmitted, and transmission of all packets of object block 1 completes at the end of this period. During the period 3-5s, the packets of object block 2 and object block 3 are transmitted in an interleaved manner, and transmission of all packets of object block 2 completes at the end of this period. The remaining packets of object block 3 are transmitted during the period 5-6s.
Thus, the total completion time of object block 1 is 3s, that of object block 2 is 5s, and that of object block 3 is 6s. Clearly, the individual total completion time of each object block is relatively large, and the total completion time of object block 1 (3s) exceeds its network allowed time delay of 1s.
Based on this, the forwarding node may consider that the data packets belonging to the same object block are transmitted together when forwarding the data packets, that is, avoid interleaving transmission between the data packets of different object blocks.
Fig. 3 is a schematic diagram of an object block-based scheduling result for the three object blocks in fig. 2. As shown in fig. 3, if the object block 1 is transmitted first, then the object block 2, and finally the object block 3, the total completion time of the object block 1 is 1s, the total completion time of the object block 2 is 3-1=2s, and the total completion time of the object block 3 is 6-3=3s. Obviously, the individual total completion time of each object block is shorter than the total completion time scheduled based on FIFO technology, and the total completion time of each object block does not exceed the corresponding network allowable delay.
In this scenario, assuming that object block 1, object block 2, and object block 3 reach the forwarding node at the same time, if the waiting time of the remaining object blocks when one object block is transmitted is also counted, the actual completion time of object block 1 is 1s, the actual completion time of object block 2 is 3s, and the actual completion time of object block 3 is 6s. In this case, the total actual completion time of the three object blocks may be expressed as 1+3+6=10s, and the total actual completion time of the three object blocks is 3+5+6=14s in fig. 2. Obviously, the total actual completion time of all object blocks is much less.
As shown in fig. 3, if the object block 1 is transmitted first, then the object block 3, and finally the object block 2, the total completion time of the object block 1 is 1s, the total completion time of the object block 3 is 4-1=3 s, and the total completion time of the object block 2 is 6-4=2 s. Obviously, in this scenario, the total completion time of each object block alone is shorter than the total completion time scheduled based on FIFO technology, and the total completion time of each object block does not exceed the corresponding network allowable delay.
In this scenario as well, assuming that object block 1, object block 2, and object block 3 reach the forwarding node at the same time, if the waiting time of the remaining object blocks when one object block is transmitted is also counted, the actual completion time of object block 1 is 1s, the actual completion time of object block 2 is 6s, and the actual completion time of object block 3 is 4s. In this case, the total actual completion time of the three object blocks may be expressed as 1+4+6=11s, and the total actual completion time of the three object blocks is 3+5+6=14s in fig. 2. Obviously, the total actual completion time of all object blocks is much less.
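The completion-time bookkeeping in Fig. 2 and Fig. 3 can be checked with a short idealized model (a sketch assuming a unit link rate, so each block's size is treated as seconds of link time; not part of the patent):

```python
from itertools import accumulate

# Block sizes from Fig. 2, expressed as seconds of link time at unit rate.
sizes = {1: 1, 2: 2, 3: 3}

def block_finish_times(order):
    # Blocks sent back to back with no interleaving: each block finishes
    # at the cumulative size of all blocks up to and including itself.
    return dict(zip(order, accumulate(sizes[b] for b in order)))

# FIFO interleaving (Fig. 2): a block only finishes once its last packet
# is sent, giving finish times 3s, 5s and 6s, i.e. a total of 14s.
fifo_total = 3 + 5 + 6

finish_123 = block_finish_times([1, 2, 3])  # {1: 1, 2: 3, 3: 6}, total 10
finish_132 = block_finish_times([1, 3, 2])  # {1: 1, 3: 4, 2: 6}, total 11
```

Both block-ordered schedules beat FIFO's total of 14s, matching the comparison in the text.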
At present, forwarding nodes can avoid interleaved transmission among data packets of different object blocks by configuring comparators and matchers. However, this requires comparators and matchers to be deployed on the forwarding node, which results in large hardware overhead. In addition, the algorithm complexity involved is high, which in turn reduces the forwarding efficiency of the forwarding node.
Based on this, the embodiment of the application provides a data packet sending method, in which the frequency of interleaving sending among data packets of different object blocks can be reduced based on a scheduling queue. On the one hand, excessive hardware is not required to be configured, so that the hardware cost is low. On the other hand, the algorithm is simple, so that the efficiency of forwarding the data packet by the forwarding node is improved.
The method for transmitting the data packet provided by the embodiment of the application is explained in detail below.
Fig. 4 is a schematic diagram of a network architecture according to an embodiment of the present application, where the network architecture is used to implement a data packet sending method according to an embodiment of the present application. As shown in fig. 4, the network architecture includes a transmitting end 401, a forwarding node 402, and a receiving end 403. The transmitting end 401 and the forwarding node 402 are connected by a wired or wireless mode to perform communication, and the forwarding node 402 and the receiving end 403 are connected by a wired or wireless mode to perform communication.
When transmitting any object block, the transmitting end 401 divides the object block into a plurality of packets. The plurality of packets includes a first packet (i.e., the first data packet, labeled B in fig. 4), a last packet (i.e., the last data packet, labeled E in fig. 4), and middle packets (labeled M in fig. 4) located between the first packet and the last packet. When the transmitting end transmits the data packets of the object block, it transmits them sequentially in order from the first packet to the last packet.
In addition, in order to facilitate forwarding scheduling by the forwarding node 402 in units of an object block, for each data packet of the object block, the transmitting end marks the identifier of the object block in each data packet, and marks the data packet as a first packet in a first packet of the object block, optionally marks the data packet as a last packet in a last packet of the object block. In other words, for any object block sent by the sending end 401, each data packet of the object block carries an identifier of the object block, and a first packet label is carried in a first packet of the object block, and optionally a last packet label is carried in a last packet of the object block. The first packet label is used for indicating that the corresponding data packet is the first data packet of the object block, and the last packet label is used for indicating that the corresponding data packet is the last data packet of the object block.
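The marking scheme above can be sketched as follows; the field names are illustrative assumptions rather than the patent's wire format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    block_id: int                          # identifier of the object block (every packet)
    is_first: bool = False                 # first packet label ("B" in Fig. 4)
    is_last: bool = False                  # optional last packet label ("E" in Fig. 4)
    block_size: Optional[int] = None       # optional data size of the object block
    allowed_delay: Optional[float] = None  # optional network allowed time delay (DL)

def mark_block(block_id, n_packets, block_size=None, allowed_delay=None):
    # Every packet carries the object block identifier; the first packet
    # additionally carries the first packet label (and, here, the size
    # and DL), and the last packet carries the last packet label.
    pkts = [Packet(block_id) for _ in range(n_packets)]
    pkts[0].is_first = True
    pkts[0].block_size = block_size
    pkts[0].allowed_delay = allowed_delay
    pkts[-1].is_last = True
    return pkts
```

For example, `mark_block(7, 3, block_size=1500, allowed_delay=1.0)` yields three packets of object block 7 where only the first carries the first packet label, size, and DL, and only the third carries the last packet label.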
Based on the configuration of the transmitting end 401, when the forwarding node 402 receives any data packet, the forwarding node 402 can determine the object block to which the data packet belongs, because the data packet carries the identifier of the object block to which the data packet belongs. And if the data packet further carries a header tag, the forwarding node 402 may determine that the data packet is the header of the object block to which the data packet belongs. If the packet further carries a tail packet tag, forwarding node 402 may determine that the packet is a tail packet of the object block to which the packet belongs.
Based on this, the forwarding node 402 can implement scheduling transmission of data packets in units of object blocks, thereby reducing the frequency of interleaving transmission between data packets of different object blocks. After receiving the data packet, the receiving end 403 reorganizes each data packet into an object block based on the identifier of the object block carried by the data packet, and submits the object block to an upper layer application.
In addition, the data packet sent by the sending end 401 may also carry the data size and/or the network allowable delay of the object block, and the function related to the data size and the network allowable delay of the object block will be described in detail in the following embodiments, which are not expanded herein. The data size and/or the network allowable delay of the object block may be carried only in the first packet of the object block, or alternatively, the data size and/or the network allowable delay of the object block may be carried in each data packet of the object block, which is not limited in the embodiment of the present application.
The sending end 401 and the receiving end 403 may each be any data processing device, such as a terminal (for example, a mobile phone or a computer) or a server. The forwarding node 402 may be a forwarding device such as a switch or a router. In addition, the sending end 401, the forwarding node 402, and the receiving end 403 perform clock synchronization during network initialization so as to transmit data later, which is not described in detail in the embodiment of the present application.
Based on the network architecture shown in fig. 4, the embodiment of the application provides a data packet sending method. In order to facilitate the development of the following embodiments, the general idea of the embodiments of the present application will be explained.
In the embodiment of the application, in order to enable the forwarding node to realize scheduling and sending of each data packet by taking the object block as a unit, the forwarding node is provided with a plurality of scheduling queues, and each scheduling queue is used for caching the data packet of at least one object block. For example, each scheduling queue is used for buffering data packets of one object block, or each scheduling queue is used for buffering data packets of two or more object blocks. In other words, packets of the same object block are buffered in the same dispatch queue.
Further, to reduce the algorithmic complexity of the forwarding node, priorities may be configured for each scheduling queue. Therefore, when forwarding the data packet, the forwarding node can quickly select one scheduling queue from a plurality of scheduling queues based on the priority of each scheduling queue, and then send the data packet in the selected scheduling queue.
The priority of each scheduling queue is related to a fixed target attribute of the object blocks cached in that scheduling queue, rather than to specific data packets, so the relative order of the priorities of the scheduling queues basically does not change over a short period of time. While the priorities of the scheduling queues remain unchanged, the forwarding node selects data packets from the same scheduling queue for each transmission during this short period. And because the data packets of the same object block are cached in the same scheduling queue, the probability that the data packets transmitted by the forwarding node during this short period belong to the same object block is very high. As a result, the data packets of the same object block are transmitted together as much as possible, which reduces the frequency of interleaved transmission among data packets of different object blocks.
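A minimal sketch of this selection behavior (data structures and names are assumptions, not the patent's implementation): while priorities stay fixed, successive sends drain the highest-priority non-empty queue, so packets of one object block tend to leave together.

```python
from collections import deque

# Hypothetical state: queue name -> [priority, packets]; a larger
# number means a higher priority here.
queues = {
    "Q1": [10, deque(["blk1-pkt1", "blk1-pkt2"])],
    "Q2": [9, deque(["blk2-pkt1"])],
}

def next_packet(queues):
    # Pick the highest-priority scheduling queue that still caches data
    # and send its head packet; return None if all queues are empty.
    for prio, q in sorted(queues.values(), key=lambda pq: -pq[0]):
        if q:
            return q.popleft()
    return None

# Q1 is drained first, then Q2: block 1's packets are not interleaved
# with block 2's.
sent = [next_packet(queues) for _ in range(3)]
```

Since the queue, not the packet, carries the priority, the per-packet scheduling decision reduces to a cheap lookup, which is the low-complexity property the text emphasizes.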
In some embodiments, for an object block that is sent first by the sender, the forwarding node needs to send the object block preferentially to avoid the long network transmission time of the object block. Based on this, the target attribute may be, for example, a first packet arrival time of the object block. The first packet arrival time specifically refers to the time when the forwarding node receives the first packet.
In other embodiments, if the sender has a special requirement on the network transmission delay of a certain object block, in this scenario, the forwarding node may schedule the object block based on the network transmission delay of each object block. That is, the target property may be, for example, a network-allowed delay for the object block.
It should be noted that, the arrival time of the first packet of the object block and the network transmission delay are two exemplary descriptions of the target attribute. Alternatively, the target attribute may be other attributes of the object block, such as other requirements of the sender on the object block, which is not limited by the embodiment of the present application.
In addition, the target attribute is carried in the data packet of the object block. When a transmitting end transmits a certain object block, the transmitting end needs to carry the target attribute at least in the first packet of the object block so that the forwarding node adjusts the priority of each scheduling queue based on the target attribute.
Based on this, for the transmitting end, the method provided by the embodiment of the application includes that the transmitting end transmits the data packet and carries the information related to the object block in the data packet. For forwarding nodes, the method provided by the embodiment of the application comprises two aspects. One aspect is a process in which a forwarding node receives a data packet and buffers the data packet in a schedule queue in units of object blocks. And on the other hand, the forwarding node transmits the data packet based on the priority of each scheduling queue. This is illustrated in the following three examples.
Fig. 5 is a flowchart of a data packet sending method according to an embodiment of the present application. The method is used for explaining how the sending end sends the data packet. As shown in fig. 5, the method includes the following steps 501 and 502.
Step 501: the transmitting end determines a plurality of data packets of the object block to be transmitted. Each data packet of the plurality of data packets carries an identifier of the object block, and a first packet of the plurality of data packets carries a first packet tag, where the first packet tag is used to indicate that the corresponding data packet is a first data packet of the object block.
The purpose of the first packet carrying the first packet label is to facilitate the forwarding node to identify a new object block and allocate a scheduling queue for the new object block. Each data packet carries the identifier of the object block, so that the forwarding node can cache the data packets of the same object block in the same scheduling queue.
In addition, in step 501, the header packet may further carry the data size of the object block and/or the network allowable delay of the object block. Optionally, each data packet of the object block may carry the data size of the object block and/or the network allowed time delay of the object block.
Wherein the data size of the object block indicates the total size of all data packets comprised by the object block. The purpose of the first packet carrying the data size of the object block is to facilitate the forwarding node to allocate a scheduling queue for the newly received object block based on the capacity of the scheduling queue.
The purpose of the network allowable delay of the object block carried by the first packet is to facilitate the forwarding node to adjust the priority of the scheduling queue based on the network allowable delay of the object block, so that the object block with high requirement on network transmission delay can be sent out preferentially.
In addition, in step 501, a tail packet in the plurality of data packets may further carry a tail packet tag, where the tail packet tag is used to indicate that the corresponding data packet is the last data packet of the object block. The purpose of the tail packet carrying the tail packet label is to facilitate the forwarding node to recognize that all data of the object block are forwarded, and then update the priority of the scheduling queue.
Optionally, in a scenario that the forwarding node can identify whether all data of the object block is forwarded through other ways, the tail packet may not carry a tail packet tag. For example, in the case that the first packet carries the data size of the object block, the forwarding node may determine whether the data of the object block is forwarded based on the amount of data of the object block that has been transmitted.
The specific implementation of the information carried in the data packet will be described in detail later, and will not be expanded here.
Step 502: the transmitting end transmits a plurality of data packets of the object block.
Specifically, the transmitting end sequentially transmits the plurality of data packets in the order from the first packet to the last packet.
Fig. 6 is a flowchart of a data packet sending method according to an embodiment of the present application. The method is used for explaining how the forwarding node receives and caches the data packets. As shown in fig. 6, the method includes the following steps 601 to 603.
Step 601: the forwarding node receives a first data packet, wherein the first data packet carries an identifier of a target object block, and the target object block is an object block to which the first data packet belongs.
Step 602: if the first data packet does not carry a first packet label, the forwarding node adds the first data packet to a second scheduling queue based on the identification of the target object block, wherein the second scheduling queue is a scheduling queue in which the data packets of the target object block are cached in a plurality of scheduling queues, and the first packet label is used for indicating whether the first data packet is the first data packet of the target object block.
If the first data packet does not carry the first packet label, the first data packet is not the first packet of the target object block; in other words, the forwarding node has already received data packets of the target object block before the current time. In this scenario, the first data packet is simply added to the scheduling queue in which the data packets of the target object block were cached before the current time. In this way, the data packets of the same object block can be cached in the same scheduling queue.
In some embodiments, to enable the forwarding node to quickly determine which object blocks' data packets are buffered in each scheduling queue, the forwarding node is configured with a mapping relationship between scheduling queues and object blocks. The mapping relation includes the object block identifier corresponding to each scheduling queue, which marks the object block cached in that scheduling queue.
In this scenario, in step 602, the forwarding node may determine, based on the identification of the target object block, a second scheduling queue from the mapping relationship between the scheduling queue and the object block.
Step 603: if the first data packet also carries a first packet label, the forwarding node selects one scheduling queue from a plurality of scheduling queues as a third scheduling queue, and adds the first data packet to the third scheduling queue.
And if the first data packet carries a first packet label, indicating that the first data packet is the first packet of the target object block. In this scenario, for the forwarding node, the target object block corresponds to a new object block, and a scheduling queue needs to be allocated to the target object block at this time, so that the packets of the subsequent target object block are all buffered in the allocated scheduling queue.
In addition, in some embodiments, after determining the third scheduling queue, the forwarding node may further add the correspondence between the identifier of the target object block and the third scheduling queue to the mapping relationship between the scheduling queue and the object block, so that the subsequent forwarding node caches the data packet of the target object block in the third scheduling queue.
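Steps 601 to 603, together with maintenance of the mapping relation, can be sketched as follows (class and method names are assumptions; the queue-selection policy shown simply takes any empty queue):

```python
from collections import deque

class ForwardingNode:
    def __init__(self, n_queues):
        self.queues = [deque() for _ in range(n_queues)]
        self.block_to_queue = {}  # mapping: object block id -> scheduling queue index

    def receive(self, block_id, payload, is_first):
        if not is_first:
            # Step 602: a non-first packet joins the scheduling queue
            # already recorded for its object block.
            idx = self.block_to_queue[block_id]
        else:
            # Step 603: a first packet announces a new object block, so
            # allocate a scheduling queue and record the correspondence.
            idx = self._select_queue()
            self.block_to_queue[block_id] = idx
        self.queues[idx].append((block_id, payload))

    def _select_queue(self):
        # Illustrative policy: the first scheduling queue caching no data.
        for i, q in enumerate(self.queues):
            if not q:
                return i
        raise RuntimeError("no empty scheduling queue; fall back to best effort")
```

With this structure, two object blocks that arrive interleaved end up in two different scheduling queues, and every later packet of either block lands in its block's queue via the mapping.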
Further, the following three implementations are exemplified in which the forwarding node selects one schedule queue from the plurality of schedule queues as the third schedule queue.
First enqueuing mode: each scheduling queue caches the data packets of at most one object block, and each time a new object block is received, an empty scheduling queue is allocated for the new object block.
An empty scheduling queue is one that does not cache any data. In the first enqueuing mode, since each scheduling queue buffers the data packets of one object block, one scheduling queue may be selected from the remaining empty scheduling queues as the third scheduling queue for the newly received target object block.
That is, in step 603, the implementation manner of selecting, by the forwarding node, one scheduling queue from the plurality of scheduling queues as the third scheduling queue is: the forwarding node selects a scheduling queue of the uncached data packet from the plurality of scheduling queues as a third scheduling queue.
For example, there are 10 scheduling queues at the forwarding node, labeled scheduling queue 1 through scheduling queue 10. When the forwarding node receives the first data packet, data packets are already buffered in scheduling queues 1 to 3 of the 10 scheduling queues. At this time, the forwarding node selects one scheduling queue, such as scheduling queue 4, from scheduling queues 4 to 10, and takes the selected scheduling queue as the third scheduling queue.
Alternatively, if there is no empty dispatch queue at the current time, the forwarding node may add the first packet to the default queue. The default queue may be, for example, a best effort queue.
It should be noted that, in the embodiment of the present application, the priority of the best effort queue is lower than the priorities of the plurality of scheduling queues. Therefore, when the forwarding node receives a data packet, it preferentially adds the data packet to one of the plurality of scheduling queues, and only adds it to the best effort queue when none of the plurality of scheduling queues satisfies the condition.
In addition, in the first enqueuing manner, the target attribute may be the arrival time of the first data packet (i.e., the first packet) of the corresponding object block, or may be a network allowable delay of the corresponding object block.
In some embodiments, in the case where the target attribute is the arrival time of the first packet of the corresponding object block, the forwarding node selects a scheduling queue with no cached data packets from the plurality of scheduling queues as the third scheduling queue. After adding the first data packet to the third scheduling queue, the forwarding node further updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues in which data packets are cached. That is, the earlier the first packet of an object block arrives, the higher the priority of the scheduling queue in which that object block is buffered.
The forwarding node may update the priority of the third scheduling queue in either of two ways.
In the first updating mode, the priorities of the other scheduling queues in which data packets are cached are kept unchanged, and the priority of the third scheduling queue is set one level lower than the lowest priority among the priorities of those other scheduling queues.
For example, in the scenario that the forwarding node selects the dispatch queue 4 from the dispatch queues 4 to 10 as the third dispatch queue, the other dispatch queues having buffered data packets are dispatch queues 1 to 3, and if the priorities of the dispatch queues 1 to 3 are 10, 9, and 8, respectively, then the priority of the dispatch queue 4 may be set to 7 directly.
In the second updating mode, the priority of each of the other scheduling queues in which data packets are cached is raised by one level, and the priority of the third scheduling queue is set one level lower than the lowest of the raised priorities of those other scheduling queues.
For example, in the scenario that the forwarding node selects the schedule queue 4 from the schedule queues 4 to 10 as the third schedule queue, the other schedule queues having buffered data packets are schedule queues 1 to 3, and if the priorities of the schedule queues 1 to 3 are 3, 2, and 1, respectively, then the priorities of the schedule queues 1 to 3 may be directly upgraded to 4, 3, and 2, respectively, and the priority of the schedule queue 4 may be set to 1.
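The second updating mode can be sketched as follows (function and queue names are assumptions; a larger number means a higher priority, as in the example above, and at least one queue is assumed to already cache packets):

```python
def assign_priority_mode2(occupied, new_queue):
    # Second updating mode: raise each occupied queue's priority by one
    # level, then give the newly selected (third) scheduling queue a
    # priority one level below the new lowest.
    for q in occupied:
        occupied[q] += 1
    occupied[new_queue] = min(occupied.values()) - 1
    return occupied

# Scheduling queues 1 to 3 hold priorities 3, 2, 1; queue 4 is the
# third scheduling queue. Queues 1-3 are upgraded to 4, 3, 2 and
# queue 4 is set to 1, matching the example in the text.
prios = assign_priority_mode2({"Q1": 3, "Q2": 2, "Q3": 1}, "Q4")
```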
In other embodiments, in the case where the target attribute is the network allowed delay of the corresponding object block, the first data packet further carries the network allowed delay of the target object block. In this scenario, the forwarding node selects a scheduling queue with no cached data packets from the plurality of scheduling queues as the third scheduling queue. After adding the first data packet to the third scheduling queue, the forwarding node further updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of the fourth scheduling queue and lower than the priority of the fifth scheduling queue. The fourth scheduling queue refers to a scheduling queue, among the plurality of scheduling queues, whose cached object block has a network allowed delay larger than that of the target object block, and the fifth scheduling queue refers to a scheduling queue whose cached object block has a network allowed delay smaller than that of the target object block.
Based on the method, the priority of the scheduling queue where the object block with high network allowable delay requirement is located can be higher, so that the object block with high network allowable delay requirement is guaranteed to be sent out preferentially.
Here as well, the forwarding node may update the priority of the third scheduling queue in either of two ways.
In the first updating mode, the priority of the fifth scheduling queue is kept unchanged, the priority of the fourth scheduling queue is lowered by one level, and the priority of the third scheduling queue is set one level lower than the lowest priority among the priorities of the fifth scheduling queues.
For example, in the scenario where the forwarding node selects the dispatch queue 4 from the dispatch queues 4 to 10 as the third dispatch queue, the other dispatch queues having cached the data packet are the dispatch queues 1 to 3. Assume that the priorities of the dispatch queues 1 to 3 are 10, 9 and 8, respectively, and the network allowable delays of the object blocks cached by the three dispatch queues are 1, 3 and 6, respectively, and the network allowable delay of the target object block is 4. At this time, the priority of the dispatch queue 3 is lowered to 7, and the priority of the dispatch queue 4 is set to 8 which is one step lower than 9.
In the second updating mode, the priority of the fourth scheduling queue is kept unchanged, the priority of the fifth scheduling queue is raised by one level, and the priority of the third scheduling queue is set one level higher than the highest priority among the priorities of the fourth scheduling queues.
For example, in the scenario where the forwarding node selects the dispatch queue 4 from the dispatch queues 4 to 10 as the third dispatch queue, the other dispatch queues having cached the data packet are the dispatch queues 1 to 3. Assume that the priorities of the dispatch queues 1 to 3 are 10, 9 and 8, respectively, and the network allowable delays of the object blocks cached by the three dispatch queues are 1, 3 and 6, respectively, and the network allowable delay of the target object block is 4. At this time, the priorities of the dispatch queue 1 and the dispatch queue 2 are upgraded to 11 and 10, respectively, and the priority of the dispatch queue 4 is set to 9 which is 8 one step higher.
It should be noted that, if the fourth scheduling queue does not exist in the plurality of scheduling queues, the priority of the updated third scheduling queue only needs to be lower than the priority of the fifth scheduling queue. That is, the network allowable delay of the object blocks cached in the plurality of scheduling queues is smaller than the network allowable delay of the target object block, so that the priority of the third scheduling queue is directly set to be the lowest.
Correspondingly, if the fifth scheduling queue does not exist in the plurality of scheduling queues, the priority of the updated third scheduling queue is only required to be higher than that of the fourth scheduling queue. That is, the network allowable delay of the object blocks cached in the plurality of scheduling queues is larger than the network allowable delay of the target object block, so that the priority of the third scheduling queue is directly set to be the highest.
Assuming that the scheduling queues in which data packets are buffered are arranged in order of priority from high to low, the above implementation of updating the priority of the third scheduling queue according to the network allowable delay can be understood as follows: starting from the first scheduling queue in the ordering, the third scheduling queue is inserted in front of the first scheduling queue whose buffered object block has a network allowable delay exceeding that of the target object block. After the third scheduling queue is inserted, the priorities of the scheduling queues still satisfy the ordering relationship.
For example, the scheduling queues in which data packets have been buffered are marked A, B, C, D in order of priority from high to low. The network allowable delay (DL) of the object block buffered in A is 1, the DL of the object blocks buffered in B and C is 3, and the DL of the object block buffered in D is 6. If the DL carried by the first data packet is 4 and the third scheduling queue selected is N, then after the first data packet is added to N and the scheduling queues are reordered by priority from high to low, scheduling queue N should be placed before scheduling queue D and after scheduling queues B and C. That is, the new priority order from high to low is A, B, C, N, D.
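The insertion rule above can be sketched as follows. This is an illustrative sketch only; the queue names and the `insert_by_delay` helper are not taken from the patent, and the real implementation would operate on hardware queue priorities rather than a Python list.

```python
# Insert a new scheduling queue in front of the first queue whose buffered
# object block has a network allowable delay (DL) exceeding the target block's DL.
def insert_by_delay(ordered_queues, new_queue, new_dl):
    """ordered_queues: list of (name, dl) pairs, highest priority first."""
    for i, (_, dl) in enumerate(ordered_queues):
        if dl > new_dl:  # first queue whose DL exceeds the target block's DL
            ordered_queues.insert(i, (new_queue, new_dl))
            return ordered_queues
    # No such queue exists (all DLs are smaller): new queue gets lowest priority.
    ordered_queues.append((new_queue, new_dl))
    return ordered_queues

queues = [("A", 1), ("B", 3), ("C", 3), ("D", 6)]
insert_by_delay(queues, "N", 4)
print([name for name, _ in queues])  # ['A', 'B', 'C', 'N', 'D']
```

The position in the resulting list stands in for the priority value: reproducing the example, N lands after B and C but before D.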
It should be noted that, for scheduling queues in which no data packet is buffered, the priority may be set to a default priority, for example, level 0, which is not limited in the embodiments of the present application.
Further, since the priority of a scheduling queue is determined based on the target attribute of the object block buffered in it, when all the data packets of the object block buffered in a scheduling queue have been sent, the priority of that scheduling queue needs to be updated.
In some embodiments, when all the data packets of the object block cached in a certain scheduling queue are sent, the implementation manner of updating the priority of the scheduling queue by the forwarding node may be: if all the data packets of the object blocks cached in any one of the plurality of scheduling queues are sent, updating the priority of the scheduling queue so that the updated priority of the scheduling queue is lower than the priority of other scheduling queues cached with the data packets.
The priorities of the other scheduling queues in which data packets are buffered may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
For example, the scheduling queues in which data packets are buffered are A, B, C, N, D in order of priority from high to low, and the priorities of the five scheduling queues are 10, 9, 8, 7, and 6, respectively. If at the current time all the data packets of the object block buffered in scheduling queue C have been sent, the priority of scheduling queue C is set to the lowest, while the priorities of the other four scheduling queues A, B, N, D may either be kept unchanged or be reset to 10, 9, 8, and 7, respectively.
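The demotion rule can be sketched as below, again with illustrative names only: the finished queue drops below every queue still buffering packets, and the relative order of the remaining queues is preserved, regardless of whether their numeric priorities are renumbered.

```python
# Demote a scheduling queue whose buffered object block has been fully sent.
def demote_finished(priority_order, finished_queue):
    """priority_order: queue names, highest priority first."""
    rest = [q for q in priority_order if q != finished_queue]
    return rest + [finished_queue]  # finished queue becomes the lowest priority

order = ["A", "B", "C", "N", "D"]
print(demote_finished(order, "C"))  # ['A', 'B', 'N', 'D', 'C']
```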
Alternatively, in other embodiments, the priority update may be triggered only when all the data packets of the object block in the highest-priority scheduling queue have been sent. In this scenario, if all the data packets of the object block buffered in any one of the plurality of scheduling queues have been sent, it is further necessary to determine whether that scheduling queue is the one with the highest priority; if it is not, the priority update is not triggered.
Correspondingly, if that scheduling queue is the one with the highest priority, the priority update is triggered. Specifically, every scheduling queue whose buffered object block has had all its data packets sent by the current time is determined, and the priority of each such scheduling queue is set lower than the priorities of the other scheduling queues in which data packets are still buffered.
Likewise, the priorities of the other scheduling queues in which data packets are buffered may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
In addition, in the scenario where the transmitting end adds a tail packet label to the tail packet of the object block, for any scheduling queue, if the data packet currently transmitted by the forwarding node belongs to that scheduling queue and carries the tail packet label, the forwarding node can determine that all the data packets of the object block buffered in that scheduling queue have been sent.
Optionally, if the transmitting end does not add a tail packet label to the tail packet of the object block, then for any scheduling queue, when the data packet currently transmitted by the forwarding node belongs to that scheduling queue, the forwarding node may determine through other means (such as the data size of the object block) whether all the data packets of the object block buffered in that scheduling queue have been sent, which is not described in detail herein.
Second enqueuing mode: each scheduling queue is used for buffering a plurality of object blocks, and when one scheduling queue is full, a new object block enters the next empty scheduling queue.
Based on the second enqueuing manner, the implementation manner in which the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue in step 603 may be: the forwarding node determines the residual capacity of a sixth scheduling queue, wherein the sixth scheduling queue is the scheduling queue where the last received first packet before the current time is located; if the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, the forwarding node selects a scheduling queue of the uncached data packet from the plurality of scheduling queues as a third scheduling queue. Accordingly, if the remaining capacity of the sixth scheduling queue is sufficient to buffer the target object block, the forwarding node treats the sixth scheduling queue as the third scheduling queue.
That is, when the forwarding node receives a new first packet, it determines whether the scheduling queue to which a first packet was most recently added can still buffer the object block to which the new first packet belongs. If it can, the object block to which the new first packet belongs is buffered in that scheduling queue; if it cannot, the next empty scheduling queue is selected to buffer that object block.
In other words, in the second enqueuing method, the forwarding node sequentially selects each scheduling queue to buffer the target block, and if the remaining capacity of one scheduling queue is insufficient, the forwarding node selects the next empty scheduling queue to buffer the target block.
In some embodiments, the implementation of the forwarding node determining the remaining capacity of the sixth scheduling queue may be: the forwarding node determines the number of buffered object blocks, that is, the number of object blocks to which the data packets buffered in the sixth scheduling queue belong; the forwarding node then determines the difference between a first threshold and the number of buffered object blocks as the remaining capacity of the sixth scheduling queue, where the first threshold is the number of object blocks that the sixth scheduling queue can buffer. In this scenario, the remaining capacity of the sixth scheduling queue being insufficient to buffer the target object block means that the remaining capacity of the sixth scheduling queue is 0.
The first threshold may also be referred to as the upper limit of the number of blocks carried by the scheduling queue, where this upper limit can be understood as the maximum number of object blocks that the scheduling queue can carry.
For example, 4 scheduling queues, labeled scheduling queue 1 to scheduling queue 4, are configured at the forwarding node, and the first threshold of each scheduling queue is 3. Assuming that the first packets of object blocks 1 to 6 arrive in sequence, the first packets of object blocks 1 to 3 are added to scheduling queue 1, and the first packets of object blocks 4 to 6 are added to scheduling queue 2.
In this scenario, after adding the first data packet to the third scheduling queue, the forwarding node may further determine the remaining capacity of the third scheduling queue. Illustratively, the remaining capacity of the third scheduling queue is the first threshold of the third scheduling queue minus 1 (the 1 representing the one object block the third scheduling queue is now used to buffer).
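The count-based variant of the second enqueuing mode can be sketched as follows. This is a minimal illustration under assumed names; the queues here hold object-block identifiers only, standing in for the buffered packets.

```python
FIRST_THRESHOLD = 3  # max object blocks each scheduling queue can carry

def pick_queue(queues, last_index):
    """queues: list of lists of object-block ids; last_index: queue of the
    most recently received first packet. Returns the queue for a new block."""
    if last_index is not None and len(queues[last_index]) < FIRST_THRESHOLD:
        return last_index            # remaining capacity > 0: reuse that queue
    for i, q in enumerate(queues):
        if not q:                    # otherwise take the next empty queue
            return i
    return None                      # no empty queue: fall back to default queue

queues = [[], [], [], []]
last = None
for block in range(1, 7):            # first packets of object blocks 1..6 arrive
    last = pick_queue(queues, last)
    queues[last].append(block)
print(queues[0], queues[1])  # [1, 2, 3] [4, 5, 6]
```

Running this reproduces the example above: blocks 1 to 3 fill scheduling queue 1, and blocks 4 to 6 spill into scheduling queue 2.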
In other embodiments, the implementation manner of determining the remaining capacity of the sixth scheduling queue by the forwarding node may be: the forwarding node determines the total size of the cached data, wherein the total size of the cached data is the total size of the data packet cached in the sixth scheduling queue; the forwarding node determines the difference between the second threshold and the total size of the buffered data as the remaining capacity of the sixth scheduling queue, where the second threshold is the total size of the data packets that can be buffered by the sixth scheduling queue.
In this scenario, the first data packet further carries the data size of the target object block, and the remaining capacity of the sixth scheduling queue being insufficient to buffer the target object block means that the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
The second threshold may also be referred to as a Token (Token) of the scheduling queue, where the Token may be understood as a total data size that can be cached by the cache space of the scheduling queue.
For example, 4 scheduling queues, labeled scheduling queue 1 to scheduling queue 4, are configured at the forwarding node, and the second threshold of each scheduling queue is 10. Assuming that the first packets of object blocks 1 to 3 arrive in sequence and that the data sizes of object block 1 and object block 2 are both 5, the first packets of object block 1 and object block 2 are added to scheduling queue 1. For object block 3, since the remaining capacity of scheduling queue 1 is then 0, the first packet of object block 3 needs to be added to the next empty scheduling queue (i.e., scheduling queue 2).
In this scenario, after adding the first data packet to the third scheduling queue, the forwarding node may further determine the remaining capacity of the third scheduling queue. Illustratively, the remaining capacity of the third scheduling queue is the second threshold of the third scheduling queue minus the data size of the target object block.
In addition, for the second enqueuing mode, when the forwarding node selects a scheduling queue in which no data packet is buffered from the plurality of scheduling queues as the third scheduling queue, if no empty scheduling queue exists at the current time, the forwarding node may add the first data packet to a default queue. The default queue may be, for example, a best-effort queue.
In the second enqueuing mode, since a scheduling queue is cached with a plurality of object blocks, in order to reduce the complexity of the priority algorithm, the target attribute in this scenario may be the arrival time of the first packet of the corresponding object block. At this time, the priority of the schedule queue may be determined based on the arrival time of the first packet added for the first time in the schedule queue. That is, for any scheduling queue, when an object block is added to the scheduling queue for the first time, a priority is configured for the scheduling queue, and when an object block is added to the scheduling queue subsequently, the priority of the scheduling queue is not updated any more.
Based on this, in the second enqueuing method, if the remaining capacity of the sixth scheduling queue is enough to buffer the target object block, the forwarding node takes the sixth scheduling queue as the third scheduling queue, and after adding the first data packet to the third scheduling queue, the forwarding node does not update the priority of the third scheduling queue (i.e., the sixth scheduling queue).
Accordingly, if the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, the forwarding node selects a scheduling queue of the uncached data packet from the plurality of scheduling queues as a third scheduling queue, and after adding the first data packet to the third scheduling queue, the forwarding node updates the priority of the third scheduling queue so that the priority of the updated third scheduling queue is lower than the priorities of other scheduling queues in which the data packet is buffered.
The detailed implementation manner of updating the priority of the third scheduling queue by the forwarding node may refer to two priority updating manners in the first enqueuing manner under the scenario that the target attribute is the arrival time of the first data packet of the corresponding object block, which is not described herein.
In the second enqueuing method, the priority of the scheduling queue is determined based on the target attribute of the object block buffered in the scheduling queue, and one scheduling queue buffers a plurality of object blocks, and the first packets of the plurality of object blocks arrive consecutively.
In some embodiments, in the case that all the data packets of all the object blocks cached in a certain scheduling queue are sent completely, the implementation manner that the forwarding node updates the priority of the scheduling queue may be: if all the data packets of all the object blocks cached in any one of the plurality of scheduling queues are sent completely, the priority of the scheduling queue is updated, so that the updated priority of the scheduling queue is lower than the priority of other scheduling queues cached with the data packets.
The priorities of the other scheduling queues in which data packets are buffered may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
The specific implementation manner may refer to the first enqueuing manner, which is not described herein.
Alternatively, in other embodiments, the priority update may be triggered only when all the data packets of the object blocks in the highest-priority scheduling queue have been sent. In this scenario, if all the data packets of all the object blocks buffered in any one of the plurality of scheduling queues have been sent, it is further necessary to determine whether that scheduling queue is the one with the highest priority; if it is not, the priority update is not triggered.
Correspondingly, if that scheduling queue is the one with the highest priority, the priority update is triggered. Specifically, every scheduling queue whose buffered object blocks have all had their data packets sent by the current time is determined, and the priority of each such scheduling queue is set lower than the priorities of the other scheduling queues in which data packets are still buffered.
Likewise, the priorities of the other scheduling queues in which data packets are buffered may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
In the second enqueuing method, the implementation manner in which the forwarding node determines whether all the packets of the object block are sent is referred to the first enqueuing method, which is not described herein again.
Third enqueuing mode: each scheduling queue is used for buffering a plurality of object blocks, and new object blocks enter the scheduling queues in turn.
Because new object blocks enter the scheduling queues in turn, the plurality of scheduling queues are ordered in advance, so that each time the forwarding node receives a new object block, it can select the next scheduling queue in turn to buffer it.
The specific implementation manner in which the plurality of scheduling queues are arranged in sequence is not limited, and will not be described in detail herein.
Based on this, the implementation of the forwarding node selecting one scheduling queue from the plurality of scheduling queues as the third scheduling queue in step 603 may be: the forwarding node determines a sixth scheduling queue from the plurality of scheduling queues, the sixth scheduling queue being the scheduling queue in which the first packet most recently received before the current time is located; the scheduling queue ordered after the sixth scheduling queue is then determined as the third scheduling queue.
For example, 4 scheduling queues, labeled scheduling queue 1 to scheduling queue 4, are configured at the forwarding node. Assuming that the first packets of object blocks 1 to 6 arrive in sequence, the first packets of object blocks 1 to 4 are added to scheduling queues 1 to 4, respectively, the first packet of object block 5 is added to scheduling queue 1, and the first packet of object block 6 is added to scheduling queue 2.
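The round-robin selection of the third enqueuing mode reduces to a modular step. A minimal sketch, with an assumed zero-based queue indexing:

```python
NUM_QUEUES = 4

def next_queue(sixth_queue_index):
    """sixth_queue_index: queue of the most recently received first packet.
    The third scheduling queue is simply the next one in the fixed order."""
    return (sixth_queue_index + 1) % NUM_QUEUES

placements = []
idx = NUM_QUEUES - 1                 # chosen so the first block lands in queue 0
for block in range(1, 7):            # first packets of object blocks 1..6
    idx = next_queue(idx)
    placements.append(idx)
print(placements)  # [0, 1, 2, 3, 0, 1]
```

This reproduces the example: blocks 1 to 4 take queues 1 to 4 (indices 0 to 3), then blocks 5 and 6 wrap around to queues 1 and 2.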
Similarly, in the third enqueuing manner, since a scheduling queue is cached with a plurality of object blocks, in order to reduce the complexity of the priority algorithm, the target attribute in this scenario may be the arrival time of the first packet of the corresponding object block. At this time, the priority of the schedule queue may be determined based on the arrival time of the first packet added for the first time in the schedule queue. That is, for any scheduling queue, when an object block is added to the scheduling queue for the first time, a priority is configured for the scheduling queue, and when an object block is added to the scheduling queue subsequently, the priority of the scheduling queue is not updated any more.
Based on this, in the third enqueuing method, the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue, and adds the first data packet to the third scheduling queue, and if the third scheduling queue has data packets of other object blocks buffered in addition to the first data packet, the forwarding node does not update the priority of the third scheduling queue. Accordingly, if the third scheduling queue does not buffer the data packets of other object blocks except the first data packet, the forwarding node updates the priority of the third scheduling queue.
The detailed implementation manner of updating the priority of the third scheduling queue by the forwarding node may refer to two priority updating manners in the first enqueuing manner under the scenario that the target attribute is the arrival time of the first data packet of the corresponding object block, which is not described herein.
In addition, in the third enqueuing mode, the priority of a scheduling queue is determined based on the target attribute of the object blocks buffered in it, one scheduling queue buffers a plurality of object blocks, and the first packets of those object blocks do not arrive consecutively; that is, first packets that arrive adjacently are buffered in different scheduling queues. In this scenario, to prevent the transmission time of an earlier-arriving object block from becoming too long, when all the data packets of one of the object blocks buffered in a scheduling queue have been sent, the priority of that scheduling queue needs to be updated.
Thus, in some embodiments, in the case where all the data packets of one of the object blocks buffered in a certain scheduling queue have been sent, the implementation of the forwarding node updating the priority of that scheduling queue may be: if all the data packets of one of the object blocks buffered in any one of the plurality of scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues in which data packets are buffered.
The priorities of the other scheduling queues in which data packets are buffered may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
The specific implementation manner may refer to the first enqueuing manner, which is not described herein.
Alternatively, in other embodiments, the priority update may be triggered only when all the data packets of some object block in the highest-priority scheduling queue have been sent. In this scenario, if all the data packets of one of the object blocks buffered in any one of the plurality of scheduling queues have been sent, it is further necessary to determine whether that scheduling queue is the one with the highest priority; if it is not, the priority update is not triggered.
Correspondingly, if that scheduling queue is the one with the highest priority, the priority update is triggered. Specifically, every scheduling queue in which at least one buffered object block has had all its data packets sent before the current time is determined, and the priority of each such scheduling queue is set lower than the priorities of the other scheduling queues in which data packets are still buffered.
Likewise, the priorities of the other scheduling queues in which data packets are buffered may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
In addition, in the third enqueuing manner, the implementation manner in which the forwarding node determines whether all the data packets of the object block are sent completely may refer to the first enqueuing manner, which is not described herein again.
It should be noted that, the above three enqueuing manners are used for explaining step 603, and embodiments of the present application are not limited to a specific implementation manner in which the forwarding node allocates the third scheduling queue for the newly received object block.
Fig. 7 is a flowchart of a packet transmission method according to an embodiment of the present application, which is used to explain how a forwarding node transmits packets in each scheduling queue. As shown in fig. 7, the method includes the following steps 701 and 702.
Step 701: the forwarding node determines a first dispatch queue from the plurality of dispatch queues based on the priority of each of the plurality of dispatch queues. Each scheduling queue in the plurality of scheduling queues is used for caching data packets of at least one object block, the data packets of the same object block are cached in the same scheduling queue, the priority of each scheduling queue is determined based on the target attribute of the object block to which the data packet cached in the corresponding scheduling queue belongs, and the target attribute is an attribute which remains unchanged in the sending process of the data packet of the corresponding object block.
As can be seen from the embodiment shown in fig. 6, in the embodiment of the present application, in order to enable the forwarding node to schedule and send each data packet in units of object blocks, the forwarding node is configured with a plurality of scheduling queues, each of which is used for buffering the data packet of at least one object block. For example, each scheduling queue is used for buffering data packets of one object block, or each scheduling queue is used for buffering data packets of two or more object blocks. In other words, packets of the same object block are buffered in the same dispatch queue.
Further, to reduce the algorithmic complexity of the forwarding node, priorities may be configured for each scheduling queue. Therefore, when forwarding the data packet, the forwarding node can quickly select one scheduling queue from a plurality of scheduling queues based on the priority of each scheduling queue, and then send the data packet in the selected scheduling queue. Based on this, the following technical effects can be achieved:
Since the priority of each scheduling queue is related to a fixed target attribute of the object block it buffers, rather than to any specific data packet, the relative order of the priorities of the scheduling queues basically does not change over a short period of time. While the priorities of the scheduling queues remain unchanged, within that short period the forwarding node selects packets from the same scheduling queue each time it sends. And because the data packets of the same object block are buffered in the same scheduling queue, the packets the forwarding node sends within that short period very probably belong to the same object block, so the data packets of the same object block are sent together as much as possible, and the frequency of interleaved transmission among the data packets of different object blocks is reduced.
The determining, by the forwarding node, the first scheduling queue in step 701 may specifically be: and selecting a scheduling queue with the highest priority from the plurality of scheduling queues as a first scheduling queue.
Step 702: the forwarding node sends the data packet in the first scheduling queue.
In some embodiments, the implementation of step 702 may be: the forwarding node determines a second data packet from the first scheduling queue, wherein the second data packet is the data packet with the earliest enqueuing time in the first scheduling queue; the forwarding node sends the second data packet. The method is simple to operate and easy to implement.
Alternatively, in other embodiments, the implementation of step 702 may be: the forwarding node determines an object block to which the data packet transmitted last time before the current time belongs, and then selects the data packet belonging to the object block from the first scheduling queue for transmission. In this way, it can be strictly ensured that the data packets of different object blocks are not transmitted in an interleaved manner.
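Steps 701 and 702 can be sketched together, under illustrative assumptions: packets are modeled as strings, each scheduling queue is a FIFO so its head is the earliest-enqueued packet, and the first (simpler) variant of step 702 is used.

```python
from collections import deque

def send_one(queues, priorities):
    """queues: name -> deque of packets (FIFO, head = earliest enqueued).
    priorities: name -> numeric priority (higher = more urgent).
    Picks the first scheduling queue (step 701) and sends one packet (step 702)."""
    busy = [name for name, q in queues.items() if q]  # only queues with packets
    if not busy:
        return None
    first = max(busy, key=lambda name: priorities[name])  # highest priority wins
    return queues[first].popleft()                        # earliest-enqueued packet

queues = {"A": deque(["a1", "a2"]), "B": deque(["b1"])}
priorities = {"A": 10, "B": 9}
print(send_one(queues, priorities))  # 'a1'
```

Because A keeps the highest priority while it still holds packets, repeated calls drain a1 and a2 before b1, which is exactly the de-interleaving behavior the surrounding text describes.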
In order to further understand the embodiments of the present application, the following describes an interaction procedure between a transmitting end, a forwarding node, and a receiving end, taking fig. 8 as an example.
As shown in fig. 8, when an upper-layer application of the transmitting end needs to transmit an object block, the transmitting end packetizes the object block to obtain a plurality of data packets, and sets a first packet and a tail packet among them, that is, the first packet carries a first packet label and the tail packet carries a tail packet label. In addition, each data packet also carries the identity of the object block. The transmitting end then sends the plurality of data packets.
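The sender side of fig. 8 can be sketched as follows. The packet layout (a dict with `is_head`/`is_tail` flags and a `block_id`) is purely illustrative; the patent does not specify a wire format.

```python
def packetize(block_id, data, mtu):
    """Split one object block into packets; the first carries the head label,
    the last carries the tail label, and all carry the block identity."""
    chunks = [data[i:i + mtu] for i in range(0, len(data), mtu)]
    packets = []
    for i, chunk in enumerate(chunks):
        packets.append({
            "block_id": block_id,              # identity of the object block
            "is_head": i == 0,                 # first packet label
            "is_tail": i == len(chunks) - 1,   # tail packet label
            "payload": chunk,
        })
    return packets

pkts = packetize(block_id=1, data=b"x" * 2500, mtu=1000)
print(len(pkts), pkts[0]["is_head"], pkts[-1]["is_tail"])  # 3 True True
```

On the forwarding node, the `is_head` flag would trigger queue selection and the `is_tail` flag would trigger the priority update described earlier.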
When the forwarding node receives the first packet, a scheduling queue is selected, and the first packet is added to the selected scheduling queue (i.e., the first packet is selected for enqueuing). In addition, the forwarding node determines the priority of the scheduling queue based on the arrival order of the object blocks (i.e., the first packet arrival time) or the DL of the object blocks (i.e., the network allowable delay). The forwarding node forwards the data packets in each scheduling queue based on the priority of the scheduling queue. After receiving the data packets, the receiving end splices each data packet into an object block, so that the receiving end receives the data packets based on the object block.
In addition, in the process of transmitting the packet, if all the data packets of a certain object block are transmitted, the forwarding node can trigger the priority update of each scheduling queue.
It should be noted that, for each step in fig. 8, reference may be made to the embodiments of fig. 5 to 7, but the steps are not limited to the embodiments of fig. 5 to 7.
As can be seen from the embodiments shown in fig. 5 to 8, the embodiments of the present application include the following:
1. A queue dedicated to a single object block can be implemented.
The data packets of one object block are buffered in one scheduling queue, so as to avoid interleaved transmission among the data packets of different object blocks, as in the first enqueuing mode in the embodiment of fig. 6. Alternatively, if there are too many object blocks, multiple object blocks may be buffered in one scheduling queue, as in the second and third enqueuing modes in the embodiment of fig. 6.
2. Cyclic scheduling of the scheduling queues can be realized, and each scheduling queue has an opportunity to hold the highest priority.
Based on the three enqueuing modes of fig. 6, object blocks enter the scheduling queues cyclically, so each scheduling queue may become the one with the highest priority, and the highest priority rotates among the different scheduling queues. This cyclic rotation of priority guarantees fairness of forwarding opportunities among the scheduling queues, thereby avoiding starvation of any scheduling queue.
3. The first data packet (head packet) and the last data packet (tail packet) of an object block trigger updates of the priorities of the scheduling queues. The scheduling queue to be joined is selected based on the data size of the object block, or on the first threshold (upper limit of the number of carried blocks) or the second threshold (tokens) of the scheduling queue.
Conventional scheduling directly considers the priority of the data packet and ignores information about the object block; in contrast, the embodiments of the present application schedule according to the data size of the object block and the head-packet and tail-packet information of the object block, thereby achieving a de-interleaving effect among different object blocks.
4. And other information of the object block, such as network allowed time delay, is utilized to realize a better transmission effect.
When some object blocks have network permission delay requirements, the priority of the scheduling queue where the object block is located can be determined according to the network permission delay, so that more urgent objects can be sent out more quickly.
In summary, based on the above, the following technical effects may be achieved in the embodiments of the present application:
1. The embodiment of the application transmits object blocks based on scheduling queues. Because the data packets of the same object block are cached in the same scheduling queue, and the priority of a scheduling queue is related to a fixed target attribute of the object block cached in it rather than to a specific data packet, the relationship between the priorities of the scheduling queues remains basically unchanged within a short period of time. With the priority of each scheduling queue unchanged, the forwarding node selects data packets from the same scheduling queue each time it sends within that period. And because the data packets of the same object block are cached in the same scheduling queue, the data packets transmitted by the forwarding node within that period very probably belong to the same object block, so that the data packets of the same object block are transmitted together as much as possible, and the frequency of interleaved transmission among data packets of different object blocks is reduced.
Compared with reducing the frequency of interleaved transmission among data packets of different object blocks based on a comparator and a matcher, the embodiment of the application can be realized through the scheduling queues, and the hardware cost is low.
2. The embodiment of the application can realize the high-speed forwarding of the data packet by the forwarding node without losing throughput.
If the frequency of interleaved transmission between data packets of different object blocks is reduced by using a comparator and a matcher, the forwarding efficiency of the forwarding node is reduced. The embodiment of the application can be realized by using the scheduling queues, thereby realizing high-speed forwarding of data packets by the forwarding node.
3. The whole process of the embodiment of the application does not need a protocol negotiation process.
Compared with schemes in which each node forwards each object block in a resource reservation manner (for example, reserving a time slice for a certain object block) so as to avoid interleaved transmission among different object blocks, the embodiment of the application requires neither resource reservation at intermediate nodes nor a negotiation process, so that data packets can be transmitted efficiently with low delay.
Fig. 9 is a schematic structural diagram of a forwarding node according to an embodiment of the present application. As shown in fig. 9, the forwarding node 900 includes the following modules 901-903.
The processing module 901 is configured to determine a first scheduling queue from the plurality of scheduling queues based on a priority of each scheduling queue in the plurality of scheduling queues. Specific implementations may refer to step 701 in the embodiment of fig. 7.
Each scheduling queue in the plurality of scheduling queues is used for caching data packets of at least one object block, the data packets of the same object block are cached in the same scheduling queue, the priority of each scheduling queue is determined based on the target attribute of the object block to which the data packet cached in the corresponding scheduling queue belongs, and the target attribute is an attribute which remains unchanged in the sending process of the data packet of the corresponding object block;
a sending module 902, configured to send a data packet in the first scheduling queue. A specific implementation may refer to step 702 in the fig. 7 embodiment.
Optionally, the forwarding node further includes:
the receiving module 903 is configured to receive a first data packet, where the first data packet carries an identifier of a target object block, and the target object block is an object block to which the first data packet belongs. A specific implementation may refer to step 601 in the embodiment of fig. 6.
The processing module 901 is further configured to, if the first data packet does not carry a first packet tag, add the first data packet to a second scheduling queue based on the identifier of the target object block, where the second scheduling queue is a scheduling queue of the plurality of scheduling queues in which the data packets of the target object block are cached, and the first packet tag is used to indicate whether the first data packet is the first data packet of the target object block. Specific implementations may refer to step 602 in the embodiment of fig. 6.
Optionally, the processing module 901 is further configured to: and if the first data packet also carries a first packet label, selecting one scheduling queue from a plurality of scheduling queues as a third scheduling queue, and adding the first data packet to the third scheduling queue. Specific implementations may refer to step 603 in the embodiment of fig. 6.
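The head-packet branching handled by the processing module can be sketched as follows (hypothetical field names `head` and `block_id`; the sketch assumes the head packet of a block arrives before the block's other packets): a packet carrying the head tag opens a new queue, while a packet without it follows its object block's existing queue.

```python
def route_packet(packet, block_to_queue, free_queues):
    """Sketch of the enqueue decision: a head packet is assigned a queue
    with no cached packets; a non-head packet is routed by the identifier
    of the object block it belongs to."""
    block_id = packet["block_id"]
    if packet.get("head"):              # first data packet of a new object block
        queue = free_queues.pop(0)      # select a queue caching no data packets
        block_to_queue[block_id] = queue
    # Non-head packets rely on the mapping built when the head packet arrived.
    return block_to_queue[block_id]
```

This mirrors steps 602 and 603 of the fig. 6 embodiment: the mapping from object-block identifier to queue keeps all packets of one block together.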
Optionally, the processing module is configured to:
and selecting a scheduling queue of the uncached data packet from the plurality of scheduling queues as a third scheduling queue.
Optionally, each scheduling queue of the plurality of scheduling queues buffers data packets of at most one object block.
Optionally, the target attribute is a network allowable delay of the corresponding object block, and the first data packet further carries the network allowable delay of the target object block;
the processing module is also used for:
and updating the priority of the third scheduling queue so that the priority of the updated third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue, wherein the fourth scheduling queue is a scheduling queue in which the network allowable delay of the object block cached in the plurality of scheduling queues is greater than the network allowable delay of the target object block, and the fifth scheduling queue is a scheduling queue in which the network allowable delay of the object block cached in the plurality of scheduling queues is less than the network allowable delay of the target object block.
Optionally, the processing module is configured to:
determining the residual capacity of a sixth scheduling queue, wherein the sixth scheduling queue is a scheduling queue in which a first packet received last time before the current time is located;
and if the residual capacity of the sixth scheduling queue is insufficient to buffer the target object block, selecting the scheduling queue of the uncached data packet from the plurality of scheduling queues as a third scheduling queue.
Optionally, the processing module is further configured to:
and if the residual capacity of the sixth scheduling queue is enough to buffer the target object block, taking the sixth scheduling queue as a third scheduling queue.
Optionally, the processing module is configured to:
determining the number of cached object blocks, wherein the number of cached object blocks is the number of object blocks to which the data packets cached in the sixth scheduling queue belong;
determining a difference value between a first threshold and the number of cached object blocks as the residual capacity of a sixth scheduling queue, wherein the first threshold is the number of object blocks which can be cached by the sixth scheduling queue;
the remaining capacity of the sixth scheduling queue being insufficient to buffer the data packets of the target object block means that: the remaining capacity of the sixth scheduling queue is 0.
Optionally, the processing module is configured to:
determining the total size of the cached data, wherein the total size of the cached data is the total size of the cached data packet of the sixth scheduling queue;
Determining a difference value between a second threshold value and the total size of the cached data as the residual capacity of the sixth scheduling queue, wherein the second threshold value is the total size of the data packet which can be cached by the sixth scheduling queue;
the first data packet further carries the data size of the target object block, and the remaining capacity of the sixth scheduling queue being insufficient to buffer the data packets of the target object block means that: the remaining capacity of the sixth scheduling queue is less than the data size of the target object block.
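The two capacity notions above — the first threshold counting cached object blocks, and the second threshold counting total buffered data — can be sketched together (illustrative only; all names are assumptions):

```python
def remaining_by_block_count(cached_blocks, first_threshold):
    """First threshold: upper limit on the number of object blocks
    the queue may cache; remaining capacity is the count still free."""
    return first_threshold - len(cached_blocks)

def remaining_by_bytes(cached_packet_sizes, second_threshold):
    """Second threshold: upper limit on the total size of cached data
    (token-style); remaining capacity is the unused byte budget."""
    return second_threshold - sum(cached_packet_sizes)

def fits(cached_blocks, cached_packet_sizes,
         first_threshold, second_threshold, block_size):
    # By block count: a remaining capacity of 0 means the new block
    # cannot be buffered at all.
    if remaining_by_block_count(cached_blocks, first_threshold) == 0:
        return False
    # By bytes: the remaining capacity must cover the data size carried
    # in the head packet of the target object block.
    return remaining_by_bytes(cached_packet_sizes, second_threshold) >= block_size
```

When either check fails, the forwarding node falls back to selecting a queue caching no data packets, as the surrounding paragraphs describe.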
Optionally, the target attribute is the arrival time of the first packet of the corresponding object block;
the processing module is also used for:
and updating the priority of the third scheduling queue so that the priority of the updated third scheduling queue is lower than the priority of other scheduling queues in which the data packets are cached.
Optionally, the processing module is further configured to:
if all the data packets of all the object blocks cached in any one of the plurality of scheduling queues are sent completely, updating the priority of any one scheduling queue so that the updated priority of any one scheduling queue is lower than the priorities of other scheduling queues cached with the data packets.
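A minimal sketch of this demotion rule (assuming, for illustration, that the front of a `priority_order` list is the highest priority): once a queue is fully drained, it is moved behind every queue that still buffers packets.

```python
def demote_if_drained(priority_order, queue, queues):
    """Sketch: after every packet of every object block cached in `queue`
    has been sent, lower its priority below all queues still holding
    data packets."""
    if not queues[queue]:                      # queue fully drained
        priority_order.remove(queue)
        # Find the last position occupied by a queue that still buffers
        # packets, and reinsert the drained queue just after it.
        last = max((i for i, q in enumerate(priority_order) if queues[q]),
                   default=-1)
        priority_order.insert(last + 1, queue)
```

A queue that still holds packets is left untouched, which matches the condition "all the data packets ... are sent completely".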
Optionally, the plurality of scheduling queues are sequentially arranged in order;
the processing module is used for:
determining a sixth scheduling queue from the plurality of scheduling queues, wherein the sixth scheduling queue is the scheduling queue in which the first packet received last time before the current time is located;
and determine one scheduling queue ordered after the sixth scheduling queue as the third scheduling queue.
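This ordered selection amounts to picking the next queue in a fixed circular order; the sketch below assumes (as an illustration — the text itself only says "ordered after") that the selection wraps around to the first queue.

```python
def next_queue(ordered_queues, sixth_queue):
    """Sketch: pick the scheduling queue ordered immediately after the
    queue that received the most recent head packet, wrapping around."""
    i = ordered_queues.index(sixth_queue)
    return ordered_queues[(i + 1) % len(ordered_queues)]
```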
Optionally, the target attribute is the arrival time of the first packet of the corresponding object block;
the processing module is also used for:
if the third scheduling queue does not cache the data packets of other object blocks except the first data packet, updating the priority of the third scheduling queue so that the priority of the updated third scheduling queue is lower than the priority of other scheduling queues cached with the data packets.
Optionally, the processing module is further configured to:
if all the data packets of one object block cached in any one of the plurality of scheduling queues are sent completely, updating the priorities of the plurality of scheduling queues so that the updated priority of that scheduling queue is lower than the updated priorities of the other scheduling queues cached with data packets.
Optionally, the processing module is configured to:
and if the priority of any scheduling queue is the highest priority, updating the priority of any scheduling queue.
In the embodiment of the application, because the data packets of the same object block are cached in the same scheduling queue, and the priority of a scheduling queue is related to a fixed target attribute of the object block cached in it rather than to a specific data packet, the relationship between the priorities of the scheduling queues remains basically unchanged within a short period of time. With the priority of each scheduling queue unchanged, the forwarding node selects data packets from the same scheduling queue each time it sends within that period. And because the data packets of the same object block are cached in the same scheduling queue, the data packets transmitted by the forwarding node within that period very probably belong to the same object block, so that the data packets of the same object block are transmitted together as much as possible, and the frequency of interleaved transmission among data packets of different object blocks is reduced.
Compared with reducing the frequency of interleaved transmission among data packets of different object blocks based on a comparator and a matcher, the embodiment of the application can forward the data packets of the object blocks through the scheduling queues and their corresponding priorities, with low hardware cost and high forwarding efficiency.
It should be noted that: when the forwarding node provided in the foregoing embodiment forwards a data packet, the division into the foregoing functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the forwarding node provided in the foregoing embodiment and the method embodiment for sending a data packet belong to the same concept; for its specific implementation process, refer to the method embodiment, and details are not repeated herein.
Fig. 10 is a schematic structural diagram of a transmitting end according to an embodiment of the present application. As shown in fig. 10, the transmitting end 1000 includes the following modules 1001-1002.
The processing module 1001 is configured to determine a plurality of data packets of the object block to be sent, where each data packet of the plurality of data packets carries an identifier of the object block, and a first packet of the plurality of data packets carries a first packet tag, where the first packet tag is configured to indicate that the corresponding data packet is the first data packet of the object block to be sent. A specific implementation may refer to step 501 in the embodiment of fig. 5.
A sending module 1002, configured to send a plurality of data packets. A specific implementation may refer to step 502 in the embodiment of fig. 5.
Optionally, the first packet also carries a network allowed delay of the object block to be sent.
Optionally, the header packet also carries the data size of the object block to be transmitted.
Optionally, the tail packet of the plurality of data packets carries a tail packet tag, and the tail packet tag is used for indicating that the corresponding data packet is the last data packet of the object block to be sent.
The purpose of the first packet carrying the first packet label is to facilitate the forwarding node to identify a new object block and allocate a scheduling queue for the new object block. Each data packet carries the identifier of the object block, so that the forwarding node can cache the data packets of the same object block in the same scheduling queue. Based on the configuration of the transmitting end, the forwarding node may implement scheduling of the data packet in units of object blocks through the embodiments shown in fig. 6 and 7.
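The transmitting-end behavior summarized above can be sketched as follows (hypothetical dictionary-based packet format; real packets would carry these fields in a protocol header): every packet carries the object-block identifier, the head packet carries the head tag together with the optional block size and network allowed delay, and the tail packet carries the tail tag.

```python
def packetize(block_id, payload_chunks, allowed_delay=None):
    """Sketch of the transmitting end: build the data packets of one
    object block with head/tail tags and per-packet block identifier."""
    total = sum(len(c) for c in payload_chunks)
    packets = []
    for i, chunk in enumerate(payload_chunks):
        pkt = {"block_id": block_id, "data": chunk}
        if i == 0:                            # head packet
            pkt["head"] = True
            pkt["block_size"] = total         # optional: data size of the block
            if allowed_delay is not None:
                pkt["allowed_delay"] = allowed_delay  # optional field
        if i == len(payload_chunks) - 1:      # tail packet
            pkt["tail"] = True
        packets.append(pkt)
    return packets
```

With such packets, the forwarding node can detect a new object block from the head tag, keep the block's packets in one scheduling queue via `block_id`, and update priorities when the tail tag arrives.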
It should be noted that: when the transmitting end provided in the foregoing embodiment sends a data packet, the division into the foregoing functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the transmitting end provided in the foregoing embodiment and the method embodiment for sending a data packet belong to the same concept; for its specific implementation process, refer to the method embodiment, and details are not repeated herein.
Fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application. The forwarding node, the transmitting end, the receiving end, and the like in the foregoing embodiments may be implemented by the computer device shown in fig. 11. With reference to FIG. 11, the computer device includes a processor 1101, a communication bus 1102, a memory 1103 and at least one communication interface 1104.
The processor 1101 may be a general purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present application.
Communication bus 1102 is used to transfer information between the aforementioned components.
The memory 1103 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), a magnetic disk or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1103 may be separate and coupled to the processor 1101 through the communication bus 1102. The memory 1103 may also be integrated with the processor 1101.
The memory 1103 is used for storing the program code for executing the scheme of the present application, and the processor 1101 controls the execution. The processor 1101 is configured to execute the program code stored in the memory 1103. The program code may include one or more software modules. Both the forwarding node and the transmitting end may implement their functions through the processor 1101 and one or more software modules in the program code in the memory 1103.
Communication interface 1104 uses any transceiver-like device for communicating with other devices or communication networks, which may be ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), etc.
In a particular implementation, as one embodiment, a computer device may include multiple processors, such as processor 1101 and processor 1105 shown in FIG. 11. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The computer device may be a general purpose computer device or a special purpose computer device. In a specific implementation, the computer device may be a desktop, a portable computer, a network server, a palm computer (personal digital assistant, PDA), a mobile phone, a tablet computer, a wireless terminal device, a router, or an embedded device. Embodiments of the application are not limited to the type of computer device.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, data subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disk (digital versatile disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The embodiments of the present application are not limited to the above embodiments, but any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the embodiments of the present application should be included in the protection scope of the embodiments of the present application.

Claims (36)

1. A method for transmitting a data packet, the method comprising:
the forwarding node determines a first scheduling queue from a plurality of scheduling queues based on the priority of each scheduling queue in the plurality of scheduling queues;
each scheduling queue of the plurality of scheduling queues is used for caching data packets of at least one object block, the data packets of the same object block are cached in the same scheduling queue, the priority of each scheduling queue is determined based on a target attribute of the object block to which the data packet cached by the corresponding scheduling queue belongs, and the target attribute is an attribute which remains unchanged in the sending process of the data packet of the corresponding object block;
And the forwarding node sends the data packet in the first scheduling queue.
2. The method of claim 1, wherein the method further comprises:
the forwarding node receives a first data packet, wherein the first data packet carries an identifier of a target object block, and the target object block is an object block to which the first data packet belongs;
and if the first data packet does not carry a first packet label, the forwarding node adds the first data packet to a second scheduling queue based on the identification of the target object block, wherein the second scheduling queue is a scheduling queue in which the data packets of the target object block are cached in the plurality of scheduling queues, and the first packet label is used for indicating whether the first data packet is the first data packet of the target object block.
3. The method of claim 2, wherein after the forwarding node receives the first data packet, the method further comprises:
and if the first data packet also carries the first packet label, the forwarding node selects one scheduling queue from the plurality of scheduling queues as a third scheduling queue, and adds the first data packet to the third scheduling queue.
4. The method of claim 3, wherein the forwarding node selecting one scheduling queue from the plurality of scheduling queues as the third scheduling queue comprises:
and the forwarding node selects a scheduling queue of the uncached data packet from the plurality of scheduling queues as the third scheduling queue.
5. The method of claim 4, wherein each scheduling queue of the plurality of scheduling queues buffers data packets of at most one object block.
6. The method of claim 5, wherein the target attribute is a network allowed time delay for the corresponding object block, the first data packet further carrying the network allowed time delay for the target object block;
after the forwarding node selects a scheduling queue of the uncached data packet from the plurality of scheduling queues as the third scheduling queue, the method further includes:
the forwarding node updates the priority of the third scheduling queue so that the priority of the updated third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue, wherein the fourth scheduling queue is a scheduling queue in which the network allowable time delay of the object block cached in the plurality of scheduling queues is greater than the network allowable time delay of the target object block, and the fifth scheduling queue is a scheduling queue in which the network allowable time delay of the object block cached in the plurality of scheduling queues is less than the network allowable time delay of the target object block.
7. The method of claim 3, wherein the forwarding node selecting one of the plurality of dispatch queues as the third dispatch queue comprises:
the forwarding node determines the residual capacity of a sixth scheduling queue, wherein the sixth scheduling queue is a scheduling queue where a first packet received last time before the current time is located;
and if the residual capacity of the sixth scheduling queue is insufficient to buffer the target object block, the forwarding node selects a scheduling queue of the uncached data packet from the plurality of scheduling queues as the third scheduling queue.
8. The method of claim 7, wherein after the forwarding node determines the remaining capacity of the sixth dispatch queue, the method further comprises:
and if the residual capacity of the sixth scheduling queue is enough to buffer the target object block, the forwarding node takes the sixth scheduling queue as the third scheduling queue.
9. The method of claim 7 or 8, wherein the forwarding node determining a remaining capacity of a sixth dispatch queue comprises:
the forwarding node determines the number of cached object blocks, wherein the number of cached object blocks is the number of object blocks to which the data packet cached in the sixth scheduling queue belongs;
The forwarding node determines the difference value between a first threshold value and the number of cached object blocks as the residual capacity of the sixth scheduling queue, wherein the first threshold value is the number of object blocks which can be cached by the sixth scheduling queue;
the remaining capacity of the sixth scheduling queue being insufficient to buffer the data packets of the target object block means that: the remaining capacity of the sixth scheduling queue is 0.
10. The method of claim 7 or 8, wherein the forwarding node determining a remaining capacity of a sixth dispatch queue comprises:
the forwarding node determines the total size of cached data, wherein the total size of the cached data is the total size of a data packet cached in the sixth scheduling queue;
the forwarding node determines the difference value between a second threshold value and the total size of the cached data as the residual capacity of the sixth scheduling queue, wherein the second threshold value is the total size of the data packets which can be cached by the sixth scheduling queue;
the first data packet further carries the data size of the target object block, and the remaining capacity of the sixth scheduling queue is insufficient to buffer the data packet of the target object block means that: the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
11. The method of claim 4 or 7, wherein the target attribute is an arrival time of a first packet of the corresponding object block;
after the forwarding node selects a scheduling queue of the uncached data packet from the plurality of scheduling queues as the third scheduling queue, the method further includes:
and the forwarding node updates the priority of the third scheduling queue so that the priority of the updated third scheduling queue is lower than the priority of other scheduling queues in which data packets are cached.
12. The method of any one of claims 4-11, wherein the method further comprises:
and if all the data packets of all the object blocks cached in any one of the plurality of scheduling queues are sent completely, updating the priority of any one scheduling queue so that the updated priority of any scheduling queue is lower than the priorities of other scheduling queues cached with the data packets.
13. The method of claim 3, wherein the plurality of scheduling queues are ordered in sequence;
the forwarding node selecting one scheduling queue from the plurality of scheduling queues as the third scheduling queue, including:
The forwarding node determines a sixth scheduling queue from the plurality of scheduling queues, wherein the sixth scheduling queue is a scheduling queue where a first packet received last time before the current time is located;
and determining one scheduling queue ordered after the sixth scheduling queue as the third scheduling queue.
14. The method of claim 13, wherein the target attribute is a time of arrival of a first packet of a corresponding object block;
after the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue, the method further includes:
and if the third scheduling queue does not cache the data packets of other object blocks except the first data packet, the forwarding node updates the priority of the third scheduling queue so that the priority of the updated third scheduling queue is lower than the priority of other scheduling queues cached with the data packets.
15. The method of claim 13 or 14, wherein the method further comprises:
and if all the data packets of one of the object blocks cached in any one of the plurality of scheduling queues are sent completely, updating the priorities of the plurality of scheduling queues so that the updated priority of any one scheduling queue is lower than the updated priorities of other scheduling queues cached with the data packets.
16. The method of claim 12 or 15, wherein said updating the priority of said any one of the dispatch queues comprises:
and if the priority of any scheduling queue is the highest priority, updating the priority of any scheduling queue.
17. A method for transmitting a data packet, the method comprising:
a transmitting end determines a plurality of data packets of an object block to be transmitted, each data packet in the plurality of data packets carries an identifier of the object block, and a first packet in the plurality of data packets carries a first packet tag, wherein the first packet tag is used for indicating that the corresponding data packet is the first data packet of the object block to be transmitted;
the transmitting end transmits the data packets.
18. The method of claim 17, wherein the header packet also carries a network allowed time delay for the object block to be transmitted.
19. The method according to claim 17 or 18, wherein the header packet also carries the data size of the object block to be transmitted.
20. The method according to any of claims 17-19, wherein a tail packet of the plurality of data packets carries a tail packet tag, the tail packet tag being used to indicate that the corresponding data packet is the last data packet of the object block to be transmitted.
21. A forwarding node, the forwarding node comprising:
a processing module configured to determine a first scheduling queue from a plurality of scheduling queues based on a priority of each scheduling queue in the plurality of scheduling queues,
wherein each scheduling queue of the plurality of scheduling queues is configured to cache data packets of at least one object block, data packets of the same object block are cached in the same scheduling queue, the priority of each scheduling queue is determined based on a target attribute of the object block to which the data packets cached by the corresponding scheduling queue belong, and the target attribute is an attribute that remains unchanged during transmission of the data packets of the corresponding object block; and
a sending module configured to send the data packets in the first scheduling queue.
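The scheduling decision of claim 21 amounts to picking the highest-priority queue that currently holds packets and sending from it. A minimal sketch, assuming queues are a dict of packet lists and priorities are integers where higher wins (both representations are illustrative assumptions):

```python
def select_first_queue(queues, priority):
    """queues: queue id -> list of packets; priority: queue id -> int."""
    candidates = [q for q, pkts in queues.items() if pkts]  # non-empty only
    if not candidates:
        return None
    # The "first scheduling queue" is the highest-priority non-empty queue.
    return max(candidates, key=lambda q: priority[q])

queues = {"q0": ["p1", "p2"], "q1": [], "q2": ["p3"]}
priority = {"q0": 1, "q1": 3, "q2": 2}
first = select_first_queue(queues, priority)  # q1 is empty, so q2 wins
```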
22. The forwarding node of claim 21, wherein the forwarding node further comprises:
a receiving module configured to receive a first data packet, wherein the first data packet carries an identifier of a target object block, and the target object block is the object block to which the first data packet belongs; and
the processing module is further configured to, if the first data packet does not carry a first packet tag, add the first data packet to a second scheduling queue based on the identifier of the target object block, wherein the second scheduling queue is the scheduling queue, among the plurality of scheduling queues, in which data packets of the target object block are cached, and the first packet tag is used to indicate whether the first data packet is the first data packet of the target object block.
23. The forwarding node of claim 22, wherein the processing module is further configured to:
if the first data packet carries the first packet tag, select one scheduling queue from the plurality of scheduling queues as a third scheduling queue, and add the first data packet to the third scheduling queue.
24. The forwarding node of claim 23, wherein the processing module is configured to:
select, from the plurality of scheduling queues, a scheduling queue in which no data packet is cached as the third scheduling queue.
25. The forwarding node of claim 24, wherein each of the plurality of scheduling queues caches data packets of at most one object block.
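Claims 22-25 together describe the enqueue rule at the forwarding node: a packet without the first packet tag follows its block identifier to the queue already caching that block, while a first packet claims a queue with no cached packets. A sketch under the claim-25 assumption that each queue holds at most one block (the `owner` map and function names are illustrative):

```python
def enqueue(queues, owner, pkt):
    """queues: queue id -> packet list; owner: queue id -> block id or None."""
    if not pkt["head"]:
        # Claim 22: route by block identifier to the existing queue.
        qid = next(q for q, b in owner.items() if b == pkt["block_id"])
    else:
        # Claims 23-24: a first packet takes a queue with no cached packets.
        qid = next(q for q, b in owner.items() if b is None)
        owner[qid] = pkt["block_id"]  # claim 25: one block per queue
    queues[qid].append(pkt)
    return qid

queues = {"q0": [], "q1": []}
owner = {"q0": None, "q1": None}
qa = enqueue(queues, owner, {"head": True, "block_id": "A"})
qb = enqueue(queues, owner, {"head": False, "block_id": "A"})
```

Both packets of block "A" land in the same queue, leaving the other queue free for the next block's first packet.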
26. The forwarding node of claim 25, wherein the target attribute is a network allowed delay of the corresponding object block, and the first data packet further carries the network allowed delay of the target object block;
the processing module is further configured to:
update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue, wherein the fourth scheduling queue is a scheduling queue, among the plurality of scheduling queues, whose cached object block has a network allowed delay greater than the network allowed delay of the target object block, and the fifth scheduling queue is a scheduling queue, among the plurality of scheduling queues, whose cached object block has a network allowed delay less than the network allowed delay of the target object block.
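The ordering condition in claim 26 (above the queues holding blocks with larger allowed delay, below those holding blocks with smaller allowed delay) is equivalent to keeping queue priorities sorted by ascending delay. A minimal sketch, where the numeric priority values are an illustrative choice rather than anything specified by the claim:

```python
def update_priorities_by_delay(delay):
    """delay: queue id -> network allowed delay of its cached block (ms).
    Returns queue id -> priority, where a higher number means higher priority.
    """
    ordered = sorted(delay, key=lambda q: delay[q])  # smallest delay first
    # The queue with the smallest allowed delay gets the top rank, so each
    # queue sits above queues with larger delay and below those with smaller.
    return {q: len(ordered) - i for i, q in enumerate(ordered)}

prio = update_priorities_by_delay({"q0": 80, "q1": 20, "q2": 50})
```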
27. The forwarding node of claim 23, wherein the processing module is configured to:
determine a remaining capacity of a sixth scheduling queue, wherein the sixth scheduling queue is the scheduling queue in which the most recently received first packet before the current time is located; and
if the remaining capacity of the sixth scheduling queue is insufficient to cache the target object block, select, from the plurality of scheduling queues, a scheduling queue in which no data packet is cached as the third scheduling queue.
28. The forwarding node of claim 24 or 27, wherein the target attribute is the arrival time of the first packet of the corresponding object block;
the processing module is further configured to:
update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues in which data packets are cached.
29. The forwarding node of any one of claims 24-28, wherein the processing module is further configured to:
if all data packets of all object blocks cached in any one of the plurality of scheduling queues have been sent, update the priority of that scheduling queue so that its updated priority is lower than the priorities of the other scheduling queues in which data packets are cached.
30. The forwarding node of claim 23, wherein the plurality of scheduling queues are arranged in sequence;
the processing module is configured to:
determine a sixth scheduling queue from the plurality of scheduling queues, wherein the sixth scheduling queue is the scheduling queue in which the most recently received first packet before the current time is located; and
determine a scheduling queue ordered after the sixth scheduling queue as the third scheduling queue.
31. The forwarding node of claim 30, wherein the target attribute is the arrival time of the first packet of the corresponding object block;
the processing module is further configured to:
if the third scheduling queue caches no data packets of object blocks other than the first data packet, update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues in which data packets are cached.
32. The forwarding node of claim 30 or 31, wherein the processing module is further configured to:
if all data packets of one of the object blocks cached in any one of the plurality of scheduling queues have been sent, update the priorities of the plurality of scheduling queues so that the updated priority of that scheduling queue is lower than the updated priorities of the other scheduling queues in which data packets are cached.
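Claims 30-32 describe a circular arrangement: each new first packet goes to the queue ordered after the one that took the previous first packet, and a queue whose block has drained is demoted below every queue still holding data. A sketch of this rotation; the class shape and the deque-based priority list are illustrative implementation choices, not the patent's structure:

```python
from collections import deque

class RingScheduler:
    def __init__(self, n):
        self.order = [f"q{i}" for i in range(n)]  # fixed sequence (claim 30)
        self.last = -1                            # index of the "sixth" queue
        self.priority = deque(self.order)         # front = highest priority

    def next_queue_for_head(self):
        # Claim 30: the third queue is the one ordered after the queue that
        # received the most recent first packet.
        self.last = (self.last + 1) % len(self.order)
        return self.order[self.last]

    def on_block_drained(self, qid):
        # Claim 32: a drained queue drops below all queues still caching data.
        self.priority.remove(qid)
        self.priority.append(qid)

s = RingScheduler(3)
first_q = s.next_queue_for_head()
second_q = s.next_queue_for_head()
s.on_block_drained(first_q)
lowest = s.priority[-1]
```

When the target attribute is first-packet arrival time (claim 31), this rotation naturally yields FIFO-like service across blocks: newly claimed queues enter at the lowest priority.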
33. A transmitting terminal, wherein the transmitting terminal comprises:
a processing module configured to determine a plurality of data packets of an object block to be sent, wherein each of the plurality of data packets carries an identifier of the object block, and a first packet among the plurality of data packets carries a first packet tag, the first packet tag being used to indicate that the corresponding data packet is the first data packet of the object block to be sent; and
a sending module configured to send the plurality of data packets.
34. A forwarding node, wherein the forwarding node comprises a memory and a processor;
the memory is configured to store a program that supports the forwarding node in performing the method of any one of claims 1-16, and to store data involved in implementing the method of any one of claims 1-16; and
the processor is configured to execute the program stored in the memory.
35. A transmitting terminal, wherein the transmitting terminal comprises a memory and a processor;
the memory is configured to store a program that supports the transmitting terminal in performing the method of any one of claims 17-20, and to store data involved in implementing the method of any one of claims 17-20; and
the processor is configured to execute the program stored in the memory.
36. A computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any one of claims 1-16 or 17-20.
CN202210575912.6A 2022-05-24 2022-05-24 Data packet transmitting method, forwarding node, transmitting terminal and storage medium Pending CN117155874A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210575912.6A CN117155874A (en) 2022-05-24 2022-05-24 Data packet transmitting method, forwarding node, transmitting terminal and storage medium
PCT/CN2023/092332 WO2023226716A1 (en) 2022-05-24 2023-05-05 Packet transmission method, forwarding node, transmission end and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210575912.6A CN117155874A (en) 2022-05-24 2022-05-24 Data packet transmitting method, forwarding node, transmitting terminal and storage medium

Publications (1)

Publication Number Publication Date
CN117155874A true CN117155874A (en) 2023-12-01

Family

ID=88906821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210575912.6A Pending CN117155874A (en) 2022-05-24 2022-05-24 Data packet transmitting method, forwarding node, transmitting terminal and storage medium

Country Status (2)

Country Link
CN (1) CN117155874A (en)
WO (1) WO2023226716A1 (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110413210B (en) * 2018-04-28 2023-05-30 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for processing data
CN112311693B (en) * 2019-07-26 2022-08-26 华为技术有限公司 Service data transmission method and device
CN114448903A (en) * 2020-10-20 2022-05-06 华为技术有限公司 Message processing method, device and communication equipment
CN113067778B (en) * 2021-06-04 2021-09-17 新华三半导体技术有限公司 Flow management method and flow management chip
CN113327053A (en) * 2021-06-21 2021-08-31 中国农业银行股份有限公司 Task processing method and device
CN114153581A (en) * 2021-11-29 2022-03-08 北京金山云网络技术有限公司 Data processing method, data processing device, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579705A (en) * 2024-01-16 2024-02-20 四川并济科技有限公司 System and method for dynamically scheduling servers based on batch data requests
CN117579705B (en) * 2024-01-16 2024-04-02 四川并济科技有限公司 System and method for dynamically scheduling servers based on batch data requests

Also Published As

Publication number Publication date
WO2023226716A1 (en) 2023-11-30

Similar Documents

Publication Publication Date Title
US10341260B2 (en) Early queueing network device
US8234435B2 (en) Relay device
RU2419226C2 (en) Memory control for high-speed control of access to environment
JP5640234B2 (en) Layer 2 packet aggregation and fragmentation in managed networks
JP5628211B2 (en) Flexible reservation request and scheduling mechanism within a managed shared network with quality of service
JP7433479B2 (en) Packet transfer methods, equipment and electronic devices
US8121120B2 (en) Packet relay apparatus
US9276873B2 (en) Time-based QoS scheduling of network traffic
US8848532B2 (en) Method and system for processing data
JP2004200905A (en) Router apparatus, output port circuit therefor, and control method thereof
US20150131443A1 (en) Method For Transmitting Data In A Packet-Oriented Communications Network And Correspondingly Configured User Terminal In Said Communications Network
US20190166058A1 (en) Packet processing method and router
WO2023226716A1 (en) Packet transmission method, forwarding node, transmission end and storage medium
CN113141320B (en) System, method and application for rate-limited service planning and scheduling
CN111817985A (en) Service processing method and device
KR100655290B1 (en) Methods and Apparatus for Transmission Queue in Communication Systems
CN112838992A (en) Message scheduling method and network equipment
CN114401235B (en) Method, system, medium, equipment and application for processing heavy load in queue management
CN111756557A (en) Data transmission method and device
CA3119033C (en) Method and apparatus for dynamic track allocation in a network
EP3902215A1 (en) Method for transmitting data and network device
JP2002368799A (en) Band controller
JP2006020069A (en) Method and apparatus of radio communicating and program
CN117857466A (en) Method, device, equipment and storage medium for processing deterministic arrangement of network traffic
CN117793583A (en) Message forwarding method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication