CN113810309A - Congestion processing method, network device and storage medium - Google Patents


Info

Publication number
CN113810309A
CN113810309A (application CN202010547269.7A)
Authority
CN
China
Prior art keywords
queue
buffer space
message queue
parameter
delay
Prior art date
Legal status
Pending
Application number
CN202010547269.7A
Other languages
Chinese (zh)
Inventor
王云波
高翔
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN202010547269.7A priority Critical patent/CN113810309A/en
Publication of CN113810309A publication Critical patent/CN113810309A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a congestion processing method, a network device, and a storage medium. The congestion processing method obtains a buffer space parameter and, according to that parameter, increases the buffer space allocated to a higher-priority first packet queue, so that loss of packets from the first packet queue is avoided, dynamic adjustment is realized, and the flexibility of congestion control is improved.

Description

Congestion processing method, network device and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a congestion processing method, a network device, and a storage medium.
Background
In an existing data center network, different types of traffic are usually mapped to different priorities. The congestion control mechanism of low-priority traffic (e.g., TCP traffic) is generally inefficient; meanwhile, after a network device (e.g., a switch) sets aside the fixed static buffer space and the headroom buffer space, the remaining shared buffer space is limited.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
Embodiments of the present invention provide a congestion processing method, a network device, and a storage medium, which can improve network transmission efficiency of a high-priority service in the case of network congestion.
In a first aspect, an embodiment of the present invention provides a congestion processing method, which is applied to a network device, where the network device at least forwards a first packet queue and a second packet queue, and a priority of the first packet queue is higher than a priority of the second packet queue, and the method includes:
obtaining a cache space parameter of the network equipment, and increasing the cache space allocated to the first message queue according to the cache space parameter;
and acquiring queuing time delay of the first message queue after the buffer space is increased, and adjusting an explicit congestion notification ECN parameter according to the queuing time delay, wherein the ECN parameter is used for triggering congestion control.
In a second aspect, an embodiment of the present invention further provides a network device, including at least one processor and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the congestion handling method according to the first aspect.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium, where computer-executable instructions are stored, and the computer-executable instructions are configured to cause a computer to execute the congestion processing method according to the first aspect.
The embodiment of the invention comprises the following steps: obtaining a buffer space parameter, increasing the buffer space allocated to the first packet queue according to the buffer space parameter, obtaining the queuing delay of the first packet queue after the buffer space is increased, and adjusting the ECN parameter according to the queuing delay. By obtaining the buffer space parameter and increasing, according to it, the buffer space allocated to the higher-priority first packet queue, loss of packets from the first packet queue is avoided, dynamic adjustment is realized, and the flexibility of congestion control is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it.
Fig. 1 is a schematic diagram of a transmission architecture of a switch in a mixed-running scenario of a data center network according to an embodiment of the present invention;
fig. 2 is a flowchart of a congestion processing method provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of a buffer message queue according to an embodiment of the present invention;
fig. 4 is a flowchart of increasing a buffer space allocated to a first packet queue according to a buffer space parameter according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a queuing delay of a first message queue after increasing a buffer space according to an embodiment of the present invention;
fig. 6 is a flowchart for adjusting an explicit congestion notification ECN parameter according to queuing delay according to an embodiment of the present invention;
fig. 7 is a flowchart of adjusting an ECN parameter according to a size relationship between a first queuing delay and a second queuing delay according to an embodiment of the present invention;
fig. 8 is a flowchart of a congestion handling method according to another embodiment of the present invention;
FIG. 9 is a diagram illustrating allocation of cache space according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a network device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be understood that in the description of the embodiments of the present invention, "a plurality" means two or more; terms such as "greater than", "less than", and "exceeding" are understood as excluding the stated number, while terms such as "above" and "below" are understood as including it. Descriptions such as "first" and "second" are used only to distinguish technical features and are not intended to indicate or imply relative importance, the number of the indicated technical features, or their order.
Embodiments of the present invention provide a congestion processing method, a network device, and a storage medium, which have high flexibility and can improve network transmission efficiency of a high-priority service.
As described in the background, in an existing data center network different types of traffic are usually mapped to different priorities, the congestion control mechanism of low-priority traffic (e.g., TCP traffic) is generally inefficient, and after a network device (e.g., a switch) sets aside the fixed static buffer space and the headroom buffer space, the remaining shared buffer space is limited.
An embodiment of the present invention provides a congestion processing method, which is applied to a network device, where the network device may be a switch, a router, and the like.
A data center is a globally collaborative network of devices built on the Internet infrastructure to communicate, accelerate, present, compute, and store data information. A data center network carries many types of data; the embodiments of the present invention are explained using a mixed-traffic scenario of TCP (Transmission Control Protocol) and RDMA (Remote Direct Memory Access) data, where the priority of the TCP packet queue is lower than that of the RDMA packet queue. Referring to fig. 1, a schematic diagram of the transmission architecture of a switch in a mixed-traffic data center network scenario according to an embodiment of the present invention, a plurality of packet queues enter the switch through ingress ports A1 to An and are then transmitted from egress B of the switch to the next node.
When the data flow transmitted through the data center increases, egress B of the switch is prone to congestion. Because TCP performs congestion control through congestion-window adjustment and packet-loss retransmission, its efficiency is low, and since the shared memory space of the switch is limited, the TCP packet queue easily occupies a large amount of the switch's shared cache. The RDMA packet queue is thereby restricted from entering the cache space, so when egress B is congested the RDMA packet queue quickly reaches its cache stop bit and packet loss easily occurs.
Based on this, referring to fig. 2, the congestion handling method provided in the embodiment of the present application includes, but is not limited to, the following steps 201 to 203:
step 201: obtaining cache space parameters;
in step 201, the buffer space parameters include a first buffer space parameter corresponding to the first packet queue and a second buffer space parameter corresponding to the second packet queue, and the priority of the first packet queue is higher than that of the second packet queue; illustratively, the first packet queue is an RDMA packet queue, and the second packet queue may be a TCP packet queue; it will be appreciated by those skilled in the art that the first and second packet queues may also be other types of packet queues.
Step 202: increasing the buffer space allocated to the first message queue according to the first buffer space parameter and the second buffer space parameter;
in step 202, increasing the buffer space allocated to the first packet queue may allocate most of the remaining buffer space to the first packet queue, or allocate part of the remaining buffer space to the first packet queue, which may be determined according to the actual situation.
In an embodiment, the first buffer space parameter may be a first queue depth of the first packet queue, and the second buffer space parameter may be a second queue depth of the second packet queue. The buffer space occupation conditions of the single first message queue and the single second message queue can be obtained through the depth of the first queue and the depth of the second queue.
Referring to fig. 3, taking the second queue depth as an example: when the second queue depth reaches the cache stop bit, this indicates that the TCP packet traffic is heavy and congestion is likely to occur.
Step 203: and acquiring the queuing time delay of the first message queue after the buffer space is increased, and adjusting the explicit congestion notification ECN parameter according to the queuing time delay.
In step 203, the ECN parameter is used to control whether congestion control is triggered, and by obtaining the queuing delay of the first packet queue after the buffer space is increased in step 202, the explicit congestion notification ECN parameter is adjusted according to the queuing delay, thereby avoiding the delay performance degradation caused by the increase of the buffer space.
In the above steps 201 to 203, by obtaining the buffer space parameter, the buffer space allocated to the first message queue is increased according to the buffer space parameter, the queuing delay of the first message queue after the buffer space is increased is obtained, and the explicit congestion notification ECN parameter is adjusted according to the queuing delay. By obtaining the buffer space parameter, the buffer space allocated to the first message queue with higher priority is improved according to the buffer space parameter, thereby avoiding the situation that the messages of the first message queue are lost, realizing dynamic adjustment, and improving the flexibility of congestion control.
Referring to fig. 4, in an embodiment, in the step 202, increasing the buffer space allocated to the first packet queue according to the first buffer space parameter and the second buffer space parameter may specifically include the following steps 401 to 402:
step 401: comparing the first queue depth and the second queue depth;
in step 401, the flow statistical sampling technique of the switch may be utilized, and the packet queues in which flows with different priorities are located and the queue depths of the respective packet queues are analyzed and identified according to the first-class fields of the two-layer field and the third-layer field by sending the sampled flows to the CPU, and then the queue depths of the different packet queues are compared.
Step 402: judging whether the first queue depth is smaller than the second queue depth, and if the first queue depth is smaller than the second queue depth, jumping to step 403;
step 403: and increasing the buffer space allocated to the first message queue.
In steps 402 to 403, when the depth of the first queue is smaller than the depth of the second queue, that is, the number of packets in the second packet queue is large, there is a risk of packet loss in the first packet queue, so that the buffer space allocated to the first packet queue is increased, thereby avoiding the situation that the packets in the first packet queue are lost, implementing dynamic adjustment, and improving flexibility of congestion control.
In an embodiment, besides the first queue depth and the second queue depth, the cache space parameter may further include the cache utilization rate of the switch. When the cache utilization rate of the switch exceeds a preset first threshold, the cache space may be exhausted at any moment and packet loss is likely to occur. Illustratively, when the first queue depth is smaller than the second queue depth but the cache utilization rate of the switch is not high, for example about 40%, the cache space allocated to the first packet queue may be left unchanged, so that both the first packet queue and the second packet queue can be transmitted efficiently and the overall performance of the network is ensured.
It will be appreciated that the first threshold may be preset according to the actual situation, for example, may be set to 90%.
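The decision described above can be sketched in a few lines. This is an illustrative reading of steps 401 to 403 combined with the first-threshold check; the function name and the 90% default are assumptions, not values fixed by the patent.

```python
def should_increase_buffer(first_depth: int, second_depth: int,
                           cache_utilization: float,
                           first_threshold: float = 0.90) -> bool:
    """Decide whether to grow the high-priority (first) queue's buffer.

    Grow the buffer only when the high-priority queue is shorter than
    the low-priority one AND overall cache utilization is critical.
    """
    return first_depth < second_depth and cache_utilization > first_threshold
```

For example, a short RDMA queue next to a long TCP queue triggers the increase only when the shared cache is nearly full; at 40% utilization the allocation is left alone, matching the example in the text.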
In an embodiment, in the step 202, the step of increasing the buffer space allocated to the first packet queue according to the first buffer space parameter and the second buffer space parameter may specifically be:
and increasing the cache stop bit of the first message queue according to the first cache space parameter and the second cache space parameter. Referring to fig. 3, increasing the cache stop bit of the first packet queue may increase the maximum value of the first queue depth, so that the switch may cache more packets in the first packet queue, thereby avoiding a situation that the packets in the first packet queue are lost.
The step of increasing the buffer stop bit of the first packet queue may be to allocate all the remaining buffer space to the first packet queue. Illustratively, when the depth of the first queue is smaller than the depth of the second queue and the cache utilization rate of the switch exceeds 90%, the remaining 10% of the cache space is completely allocated to the first packet queue, and at this time, the packets in the second packet queue are not cached any more, that is, the packets in the second packet queue are directly forwarded. Or distributing most of the remaining buffer space to the first message queue. Illustratively, when the first queue depth is less than the second queue depth and the cache utilization of the switch exceeds 90%, the remaining 8% of the cache space is allocated to the first packet queue and the remaining 2% is allocated to the second packet queue.
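The two allocation options above (all remaining cache to the first queue, or a majority share) can be expressed as a single split function. This is a sketch; the names and the fractional representation of cache space are assumptions for illustration.

```python
def split_remaining_cache(remaining: float, high_share: float = 1.0):
    """Split the switch's remaining cache fraction between the two queues.

    high_share=1.0 gives everything to the high-priority queue (the
    10%/0% example in the text); high_share=0.8 reproduces the 8%/2% split.
    """
    high = remaining * high_share   # portion for the first (e.g. RDMA) queue
    low = remaining - high          # leftover for the second (e.g. TCP) queue
    return high, low
```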
In an embodiment, before the buffer space of the first packet queue is adjusted, a preceding determination condition may be added to improve the rationality of the adjustment. Specifically, a second threshold, a third threshold and a fourth threshold are preset, where the second threshold is a bandwidth utilization rate when congestion occurs, the third threshold is a transmission delay of the switch, and the fourth threshold is a PFC (Priority-based Flow Control) packet sending rate.
A first bandwidth utilization rate is obtained before the cache space of the first packet queue is adjusted. When the first bandwidth utilization rate exceeds the second threshold, the current bandwidth utilization is judged to be too high and the congestion of the switch may worsen, so the cache space parameters of the switch are obtained and the cache spaces of the first packet queue and the second packet queue are adjusted. Illustratively, the second threshold may be set to 98%.
A first transmission delay of the switch is obtained. When the first transmission delay exceeds the third threshold, the current transmission delay of the switch is judged to be too high and its congestion may worsen, so the cache space parameters of the switch are obtained and the cache spaces of the first packet queue and the second packet queue are adjusted. Illustratively, the third threshold may be set to 50 microseconds.
A sending rate of Priority-based Flow Control (PFC) packets is obtained. When this rate exceeds the fourth threshold, too many PFC frames are easily triggered, increasing the risks of deadlock and packet loss, so the cache space parameters of the switch are obtained and the cache spaces of the first packet queue and the second packet queue are adjusted. Illustratively, the fourth threshold may be set to 10 PFC packets per second.
It is understood that the foregoing pre-determination process based on the second threshold, the third threshold and the fourth threshold may be set alternatively or collectively, depending on the specific network requirements.
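The preceding determination can be sketched as one predicate. Combining the three checks with OR is one possible choice; the text also allows using them individually. Default thresholds follow the examples in the text (98%, 50 microseconds, 10 PFC packets per second); all names are illustrative.

```python
def precheck_should_adjust(bandwidth_util: float, delay_us: float,
                           pfc_rate: float,
                           second_threshold: float = 0.98,
                           third_threshold_us: float = 50.0,
                           fourth_threshold: float = 10.0) -> bool:
    """Preceding determination before touching the buffer allocation:
    any one exceeded threshold is enough to trigger adjustment."""
    return (bandwidth_util > second_threshold
            or delay_us > third_threshold_us
            or pfc_rate > fourth_threshold)
```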
Referring to fig. 5, in an embodiment, in the step 203, obtaining the queuing delay of the first packet queue after the buffer space is increased may specifically include the following steps 501 to 502:
step 501: acquiring the queue length of a first message queue and the transmission rate of the first message queue in unit time;
in step 501, the unit time may be freely set according to the actual situation, referring to fig. 3, the unit time may be implemented by using a timestamp marking method, for example, the unit time may be 10 microseconds, the queue length of the first packet queue may reflect the number of packets in the first packet queue in the unit time, the number of packets may be read by using a self-contained function of the switch, the transmission rate of the first packet queue may be an instantaneous rate of the queue exit at the end of the queue, where the instantaneous rate may be obtained by dividing the total number of buffered packets in the unit time by the unit time.
Step 502: and obtaining the queuing time delay of the first message queue according to the queue length and the transmission rate.
In step 502, the queue length of the first packet queue is divided by the transmission rate to obtain the queuing delay of the first packet queue.
In an embodiment, the queuing delay of the first packet queue may be averaged to obtain an average queuing delay, and the average queuing delay is used as a basis for judgment, which is beneficial to improving the accuracy of the judgment.
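Steps 501 to 502 and the averaging embodiment reduce to two short helpers. The packet-based units are an assumption of this sketch; the same formula works for bytes and bytes per second.

```python
def queuing_delay(queue_length_pkts: float, egress_rate_pps: float) -> float:
    """Steps 501-502: queuing delay = queue length / transmission rate."""
    return queue_length_pkts / egress_rate_pps

def average_queuing_delay(samples):
    # Averaging several per-unit-time samples, as the text suggests,
    # smooths out momentary bursts before the delay is used for judgment.
    return sum(samples) / len(samples)
```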
Referring to fig. 6, in an embodiment, in the step 203, adjusting the explicit congestion notification ECN parameter according to the queuing delay may specifically include the following steps 601 to 602:
step 601: acquiring a current first queuing time delay of a first message queue and an initial second queuing time delay of the first message queue;
in step 601, continuously acquiring a current first queuing delay of the first message queue and an initial second queuing delay of the first message queue, wherein the queuing delay of the first message queue is the second queuing delay when a buffer space of the first message queue is not increased during first acquisition, and the queuing delay of the first message queue is the first queuing delay after the buffer space of the first message queue is increased; and when the next round of the packet data is acquired, the first queuing delay acquired in the previous round is the second queuing delay of the current round, the queuing delay of the first packet queue acquired again in the current round is the new first queuing delay, and the like. Certainly, the first queuing delay in the first acquisition may also be the queuing delay of the first packet queue after the buffer space of the first packet queue is increased, and in brief, the first queuing delay is the queuing delay acquired in the current round, and the second queuing delay is the queuing delay acquired in the previous round, depending on the acquisition time.
Step 602: and adjusting the ECN parameters according to the size relation of the first queuing delay and the second queuing delay.
In step 602, when the first queuing delay is greater than the second queuing delay, it represents that the congestion is increased; and when the first queuing delay is smaller than the second queuing delay, the congestion is relieved, and the ECN parameters are adjusted according to the congestion condition.
Referring to fig. 7, in an embodiment, in the step 602, adjusting the ECN parameter according to a size relationship between the first queuing delay and the second queuing delay may specifically include the following steps 701 to 703:
step 701: when the first queuing delay is larger than the second queuing delay, acquiring a difference value between the first queuing delay and the second queuing delay;
In step 701, denoting the first queuing delay by T1 and the second queuing delay by T2, the difference β can be expressed as:
β = T1 - T2
When β is less than 0, congestion is easing; when β is greater than 0, congestion is increasing.
Step 702: obtaining a threshold adjusting coefficient according to the difference value;
in step 702, an adjustable parameter α is introduced, and a threshold adjustment coefficient F is obtained according to the difference, that is:
Fnew=(1-αβ)*Foldwherein F isnewFor the threshold adjustment coefficient of the current round, FoldAdjusting coefficient for threshold of last round, wherein 0<α<1 for fine tuning β.
Step 703: and reducing the ECN threshold value and/or reducing the ECN marking probability by using the threshold adjusting coefficient.
In step 703, the ECN parameter K is adjusted by the threshold adjustment coefficient, that is, K_new = K_old * F_new, where K_new is the ECN threshold parameter of the current round and K_old is the ECN threshold parameter of the previous round. The initial ECN threshold parameter, used before the buffer space of the first packet queue is adjusted, may be obtained with the DCQCN (Data Center Quantized Congestion Notification) algorithm.
In an embodiment, the ECN parameter may include an ECN threshold and an ECN marking probability, so in step 703, the ECN threshold may be reduced by using a threshold adjustment coefficient, or the ECN marking probability may be reduced by using a threshold adjustment coefficient, where reducing the ECN threshold may facilitate triggering the ECN marking in time, so as to perform congestion control, and reducing the ECN marking probability may ensure throughput of a data packet with a larger traffic.
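One round of steps 701 to 703 can be sketched directly from the formulas above. Treating the delays as already normalized so that αβ stays small is an assumption of this sketch, as are the argument names.

```python
def adjust_ecn(t1: float, t2: float, k_old: float, f_old: float,
               alpha: float = 0.1):
    """One round of the update:  beta = T1 - T2,
    F_new = (1 - alpha*beta) * F_old,  K_new = K_old * F_new."""
    beta = t1 - t2                      # > 0 means congestion increased
    f_new = (1 - alpha * beta) * f_old
    k_new = k_old * f_new               # smaller K triggers ECN marking earlier
    return k_new, f_new
```

With T1 = 3, T2 = 1, α = 0.1 and a previous coefficient of 1, congestion grew, so the coefficient drops to 0.8 and the ECN threshold shrinks accordingly.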
In an embodiment, after the ECN parameter is adjusted, a verification step may be further performed, which specifically may be:
and obtaining a second bandwidth utilization rate, and when the second bandwidth utilization rate is lower than a fifth threshold, restoring the buffer spaces allocated to the first message queue and the second message queue to the initial state, wherein the second bandwidth utilization rate is the bandwidth utilization rate of the switch after the ECN parameter is adjusted, and correspondingly, the fifth threshold may be 70%.
And acquiring a second transmission delay, and when the second transmission delay is lower than a sixth threshold, restoring the buffer space allocated to the first message queue and the second message queue to the initial state, wherein the second transmission delay is the transmission delay of the switch after the ECN parameter is adjusted, and correspondingly, the sixth threshold may be 40 microseconds.
Restoring the buffer spaces allocated to the first packet queue and the second packet queue to the initial state means returning to the buffer allocation the switch used before the buffer space of the first packet queue was increased and the ECN parameter was adjusted in the above embodiment; the initial allocation is determined by the specific network requirements and is not enumerated here.
The buffer space is restored to the initial distribution state, and the effective transmission of messages with various priorities can be ensured.
It is to be understood that the above-mentioned determination process based on the fifth threshold and the sixth threshold may be set alternatively or in combination, depending on the specific network requirements.
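The verification step can likewise be sketched as a predicate. The text allows the fifth-threshold and sixth-threshold checks to be used alternatively or in combination; combining them with OR here is one possible choice, not mandated by the patent, and the defaults follow the examples (70%, 40 microseconds).

```python
def should_restore_allocation(bandwidth_util: float, delay_us: float,
                              fifth_threshold: float = 0.70,
                              sixth_threshold_us: float = 40.0) -> bool:
    """After the ECN adjustment: restore the initial buffer allocation
    once the measured load has dropped below the verification thresholds."""
    return bandwidth_util < fifth_threshold or delay_us < sixth_threshold_us
```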
The following describes the congestion handling method of the present application in detail by using a practical example.
Referring to fig. 8, an embodiment of the present invention further provides a congestion handling method, including, but not limited to, the following steps 801 to 811:
step 801: judging whether the cache space of the switch is occupied, if so, skipping to the step 802, otherwise, ending the process;
step 802: acquiring a bandwidth utilization rate and transmission delay preset by a switch;
step 803: judging whether the bandwidth utilization rate exceeds a preset value, if so, skipping to step 805, otherwise, skipping to step 804;
step 804: judging whether the transmission delay exceeds a preset value, if so, skipping to step 805, otherwise, ending the process;
step 805: acquiring queue depths of message queues with different priorities and cache utilization rate of a switch;
step 806: judging whether the queue depth of the low-priority message queue is greater than the queue depth of the high-priority message queue and whether the cache utilization rate of the switch exceeds a preset value, if so, skipping to the step 807, otherwise, ending the flow;
step 807: distributing all the residual buffer space of the switch to a high-priority message queue, and directly forwarding a low-priority message queue;
step 808: counting the buffer number and queue length of a high-priority message queue in unit time in a timestamp marking mode to obtain the queuing time delay of the high-priority message queue, determining a threshold adjustment coefficient, and dynamically adjusting an ECN parameter;
step 809: acquiring the bandwidth utilization rate and transmission delay of the switch after the ECN parameters are adjusted;
step 810: if the bandwidth utilization rate of the switch is lower than the preset value after the ECN parameters are adjusted, skipping to the step 811; if the bandwidth utilization rate of the switch is higher than the preset value after the ECN parameters are adjusted, skipping to step 801;
step 811: if the transmission delay of the switch is lower than the preset value after the ECN parameters are adjusted, the process is ended, otherwise, the step 801 is skipped.
In the above steps 801 to 811, whether the switch is congested is first determined by checking whether its buffer space is occupied; if the buffer space is unoccupied, the switch is not congested and no processing is needed. When the switch is congested, it is determined whether the bandwidth utilization rate and the transmission delay of the switch exceed their preset values; if not, the network condition is good and no processing is needed. When they exceed the preset values, network congestion is serious. The queue depths of the packet queues of different priorities and the cache utilization rate of the switch are then obtained, and, according to the relationship between those queue depths and the cache utilization rate, the buffer space allocated to the higher-priority packet queue is increased. This avoids loss of high-priority packets, realizes dynamic adjustment, and improves the flexibility of congestion control. On this basis, the queuing delay of the high-priority packet queue after the buffer space is increased is obtained, a threshold adjustment coefficient is determined from the queuing delay, and the ECN parameters are dynamically adjusted, avoiding the delay degradation that the larger buffer would otherwise cause. The network transmission efficiency of high-priority services is thus improved under network congestion, and their transmission performance is guaranteed.
After the ECN parameters are adjusted, the bandwidth utilization and transmission delay of the switch are obtained again to determine whether the steps of increasing the buffer space of the high-priority packet queue and adjusting the ECN parameters need to be executed once more; that is, steps 801 to 811 are executed in a loop until the bandwidth utilization and transmission delay of the switch meet the requirements.
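The control loop of steps 801 to 811 can be sketched in Python as follows; the state keys, the preset thresholds, and the exact form of the threshold adjustment coefficient are illustrative assumptions, since the patent does not fix concrete values or formulas:

```python
def adjust_once(state):
    """One iteration of the loop in steps 801-811 (illustrative sketch).

    `state` is a dict with keys invented for this example. Returns True
    when the loop may stop (steps 810/811 satisfied) and False when the
    cycle must run again from step 801.
    """
    BW_LIMIT, DELAY_LIMIT, UTIL_LIMIT = 0.8, 10.0, 0.9  # assumed presets

    if not state["buffer_used"]:                 # step 801: no congestion
        return True
    if state["bw_util"] <= BW_LIMIT and state["delay"] <= DELAY_LIMIT:
        return True                              # network condition acceptable

    # Congestion is severe: if the high-priority queue is shallower than the
    # low-priority one and the overall buffer is nearly full, grant it all
    # of the remaining (unallocated) buffer space.
    if (state["depth_high"] < state["depth_low"]
            and state["buffer_util"] > UTIL_LIMIT):
        state["buf_high"] += state["buf_free"]
        state["buf_free"] = 0

    # A larger buffer can raise queuing delay, so tighten ECN to compensate:
    # shrink the threshold and marking probability by a coefficient k < 1
    # derived from the delay increase (assumed form).
    excess = state["queue_delay_high"] - state["queue_delay_init"]
    if excess > 0:
        k = 1.0 / (1.0 + excess / state["queue_delay_init"])
        state["ecn_threshold"] *= k
        state["ecn_mark_prob"] *= k

    # Steps 810-811: stop only when both metrics are back under the presets.
    return state["bw_util"] <= BW_LIMIT and state["delay"] <= DELAY_LIMIT
```

Calling `adjust_once` repeatedly until it returns True mirrors the jump back to step 801 whenever bandwidth utilization or transmission delay is still above its preset value.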
Illustratively, taking an RDMA packet queue as the high-priority packet queue and a TCP packet queue as the low-priority packet queue, after all the remaining buffer space of the switch is allocated to the high-priority packet queue, the buffer space allocation of the switch is as shown in fig. 9.
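In that RDMA/TCP example, the reallocation shown in fig. 9 amounts to moving the entire free region of the buffer to the RDMA queue; a minimal sketch, with invented partition sizes:

```python
# Buffer partition before the reallocation (sizes are made-up, e.g. in KB).
buffers = {"rdma": 40, "tcp": 40, "free": 20}

def grant_free_to_high_priority(buffers, high="rdma"):
    """Give every remaining free buffer unit to the high-priority queue."""
    buffers[high] += buffers["free"]
    buffers["free"] = 0
    return buffers

grant_free_to_high_priority(buffers)
# buffers is now {"rdma": 60, "tcp": 40, "free": 0}
```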
It should also be appreciated that the various implementations provided by the embodiments of the present invention can be combined arbitrarily to achieve different technical effects.
Fig. 10 illustrates a network device 1000 according to an embodiment of the present invention. The network device 1000 includes: a memory 1001, a processor 1002 and a computer program stored on the memory 1001 and executable on the processor 1002, the computer program being operable to perform the congestion handling method described above.
The processor 1002 and the memory 1001 may be connected by a bus or other means.
The memory 1001, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs and non-transitory computer-executable programs, such as a program implementing the congestion handling method described in the embodiments of the present invention. The processor 1002 implements the congestion handling method described above by running the non-transitory software programs and instructions stored in the memory 1001.
The memory 1001 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data used in performing the congestion handling method described above. Further, the memory 1001 may include a high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 1001 may optionally include memory located remotely from the processor 1002; such remote memory may be connected to the network device 1000 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Non-transitory software programs and instructions needed to implement the congestion handling method described above are stored in the memory 1001 and, when executed by the one or more processors 1002, perform the congestion handling method described above, e.g., performing method steps 401 to 402 in fig. 4, method steps 601 to 602 in fig. 6, method steps 701 to 703 in fig. 7, and method steps 801 to 811 in fig. 8.
The embodiment of the invention also provides a computer-readable storage medium, which stores computer-executable instructions, and the computer-executable instructions are used for executing the congestion processing method.
In one embodiment, the computer-readable storage medium stores computer-executable instructions that, when executed by one or more control processors 1002, for example, by one of the processors 1002 in the network device 1000, cause the one or more processors 1002 to perform the congestion handling method described above, for example, to perform method steps 401 to 402 in fig. 4, method steps 601 to 602 in fig. 6, method steps 701 to 703 in fig. 7, and method steps 801 to 811 in fig. 8.
The above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media as known to those skilled in the art.
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims (13)

1. A congestion handling method, comprising:
obtaining buffer space parameters, wherein the buffer space parameters comprise a first buffer space parameter corresponding to a first message queue and a second buffer space parameter corresponding to a second message queue, and the priority of the first message queue is higher than that of the second message queue;
increasing the buffer space allocated to the first message queue according to the first buffer space parameter and the second buffer space parameter;
and obtaining a queuing delay of the first message queue after the buffer space is increased, and adjusting an explicit congestion notification (ECN) parameter according to the queuing delay, wherein the ECN parameter is used for triggering congestion control.
2. The congestion handling method according to claim 1, wherein:
the first buffer space parameter comprises a first queue depth of the first message queue;
the second buffer space parameter includes a second queue depth of the second packet queue.
3. The congestion handling method according to claim 2, wherein the increasing the buffer space allocated to the first packet queue according to the first buffer space parameter and the second buffer space parameter comprises:
comparing the first queue depth with the second queue depth;
and when the first queue depth is smaller than the second queue depth, increasing the buffer space allocated to the first message queue.
4. The congestion handling method according to claim 3, wherein the buffer space parameter further includes a buffer utilization, and the increasing the buffer space allocated to the first packet queue when the first queue depth is smaller than the second queue depth includes:
and when the first queue depth is smaller than the second queue depth and the buffer utilization rate exceeds a first threshold, increasing the buffer space allocated to the first message queue.
5. The method according to claim 3 or 4, wherein the increasing the buffer space allocated to the first packet queue comprises:
raising the buffer stop position of the first message queue.
6. The method according to claim 5, wherein the raising the buffer stop position of the first packet queue comprises:
allocating all the remaining buffer space to the first message queue.
7. The method according to claim 1, wherein the obtaining of the queuing delay of the first packet queue after increasing the buffer space comprises:
obtaining the queue length of the first message queue and the transmission rate of the first message queue;
and obtaining the queuing delay of the first message queue according to the queue length and the transmission rate.
8. The congestion handling method according to claim 1, wherein said adjusting the explicit congestion notification ECN parameter according to the queuing delay comprises:
obtaining a current first queuing delay of the first message queue and an initial second queuing delay of the first message queue;
and adjusting the ECN parameter according to the magnitude relationship between the first queuing delay and the second queuing delay.
9. The congestion handling method according to claim 8, wherein said adjusting the ECN parameter according to the magnitude relationship between the first queuing delay and the second queuing delay comprises:
when the first queuing delay is greater than the second queuing delay, obtaining a difference between the first queuing delay and the second queuing delay;
obtaining a threshold adjustment coefficient according to the difference;
and reducing the ECN threshold and/or the ECN marking probability by using the threshold adjustment coefficient.
10. The method according to claim 1, wherein the obtaining the buffer space parameter comprises at least one of:
obtaining a first bandwidth utilization rate, and obtaining the buffer space parameter when the first bandwidth utilization rate exceeds a second threshold;
obtaining a first transmission delay, and obtaining the buffer space parameter when the first transmission delay exceeds a third threshold;
and obtaining a packet sending rate of priority-based flow control (PFC), and obtaining the buffer space parameter when the packet sending rate exceeds a fourth threshold.
11. The method of claim 1, further comprising at least one of:
obtaining a second bandwidth utilization rate, and when the second bandwidth utilization rate is lower than a fifth threshold, restoring the buffer space allocated to the first message queue and the second message queue to an initial state;
and obtaining a second transmission delay, and when the second transmission delay is lower than a sixth threshold, restoring the buffer space allocated to the first message queue and the second message queue to the initial state.
12. A network device, comprising:
at least one processor, and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the congestion handling method of any one of claims 1 to 11.
13. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the congestion processing method according to any one of claims 1 to 11.
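The queuing-delay computation of claim 7 and the threshold scaling of claims 8 and 9 can be sketched as follows; the function names and the mapping from the delay difference to the coefficient `k` are assumptions, as the claims do not specify a concrete formula:

```python
def queuing_delay(queue_length_bits, tx_rate_bps):
    """Claim 7: queuing delay of a queue from its length and drain rate."""
    return queue_length_bits / tx_rate_bps

def adjust_ecn(ecn_threshold, ecn_mark_prob, delay_now, delay_init):
    """Claims 8-9: when the current queuing delay exceeds the initial one,
    derive a threshold adjustment coefficient from the difference and use
    it to reduce the ECN threshold and marking probability. The exact form
    of k below is an assumed example, chosen only so that 0 < k < 1."""
    if delay_now <= delay_init:
        return ecn_threshold, ecn_mark_prob      # nothing to tighten
    diff = delay_now - delay_init
    k = delay_init / (delay_init + diff)
    return ecn_threshold * k, ecn_mark_prob * k
```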
CN202010547269.7A 2020-06-16 2020-06-16 Congestion processing method, network device and storage medium Pending CN113810309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010547269.7A CN113810309A (en) 2020-06-16 2020-06-16 Congestion processing method, network device and storage medium


Publications (1)

Publication Number Publication Date
CN113810309A true CN113810309A (en) 2021-12-17

Family

ID=78892527




Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104426790A (en) * 2013-08-26 2015-03-18 中兴通讯股份有限公司 Method and device for carrying out distribution control on cache space with multiple queues
CN104661260A (en) * 2015-01-20 2015-05-27 中南大学 Wireless Mesh intelligent power grid routing mechanism with QoS perceiving and loading balancing
US20200004692A1 (en) * 2018-07-02 2020-01-02 Beijing Boe Optoelectronics Technology Co., Ltd. Cache replacing method and apparatus, heterogeneous multi-core system and cache managing method


Non-Patent Citations (1)

Title
Li Xinguo; Hu Enbo: "A Comparative Study of Router Buffer Management Algorithms" (路由器缓存管理算法之比较研究), Application Research of Computers (计算机应用研究), no. 04, 30 April 2007 (2007-04-30) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449016A (en) * 2022-01-25 2022-05-06 南京奥拓电子科技有限公司 Method, device, equipment and storage medium for controlling equipment of Internet of things
CN114449016B (en) * 2022-01-25 2024-04-02 南京奥拓电子科技有限公司 Method, device, equipment and storage medium for controlling equipment of Internet of things
CN114584517A (en) * 2022-02-25 2022-06-03 百果园技术(新加坡)有限公司 Congestion processing method, system, equipment and storage medium based on cache state
CN114640635A (en) * 2022-03-17 2022-06-17 新华三技术有限公司合肥分公司 Method and device for processing PFC deadlock
CN114640635B (en) * 2022-03-17 2024-02-09 新华三技术有限公司合肥分公司 PFC deadlock processing method and device
CN114760252A (en) * 2022-03-24 2022-07-15 北京邮电大学 Data center network congestion control method and system
CN114760252B (en) * 2022-03-24 2024-06-07 北京邮电大学 Data center network congestion control method and system
CN114598653A (en) * 2022-05-09 2022-06-07 上海飞旗网络技术股份有限公司 Data stream acceleration method based on time delay management model
CN115022227A (en) * 2022-06-12 2022-09-06 长沙理工大学 Data transmission method and system based on circulation or rerouting in data center network
CN114938354A (en) * 2022-06-24 2022-08-23 北京有竹居网络技术有限公司 Congestion control method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination