CN114237546A - Management method of cache queue, electronic device and storage medium - Google Patents

Management method of cache queue, electronic device and storage medium

Info

Publication number
CN114237546A
CN114237546A
Authority
CN
China
Prior art keywords
data
cached
queue
cache
buffer
Prior art date
Legal status
Pending
Application number
CN202111284931.5A
Other languages
Chinese (zh)
Inventor
Chen Xiaobin (陈晓彬)
Current Assignee
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd filed Critical Wangsu Science and Technology Co Ltd
Priority to CN202111284931.5A
Publication of CN114237546A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00: Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06: Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/08: Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor having a sequence of storage locations, the intermediate ones not being accessible for either enqueue or dequeue operations, e.g. using a shift register

Abstract

An embodiment of the invention relates to the field of communication technology and discloses a management method for a cache queue, an electronic device, and a storage medium. The management method comprises the following steps: acquiring the data volume of the cached data in each cache queue whose caching condition has changed; detecting whether the data volume of the cached data exceeds a first preset threshold; if the data volume exceeds the first preset threshold, acquiring the enqueue rate of the cached data in the cache queue and determining the dequeue weight of the cache queue according to both the data volume and the enqueue rate; and if the data volume does not exceed the first preset threshold, determining the dequeue weight of the cache queue according to the data volume alone. In this way, delay and packet loss can be reduced when traffic shaping is performed based on the cache queues.

Description

Management method of cache queue, electronic device and storage medium
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to a management method for a cache queue, an electronic device, and a storage medium.
Background
Traffic shaping generally stores a data stream in a buffer queue and then manages and schedules the buffer queue so that data is sent out according to a given pattern. A common management scheme for buffer queues is stochastic fairness queuing (SFQ): several buffer queues are set up and scheduled in turn by polling, each sending out a preset amount of data; when one queue finishes sending, the next queue starts, and so on.
However, because SFQ binds the transmitted amount to the queue, every queue sends out the same preset amount of data regardless of how much data it holds. When a queue holds a large amount of data, it may have to wait many times for the other buffer queues to send, causing large delay; the delay may even grow so long, or the buffered data may so exceed the maximum queue length of the buffer queue (i.e., the maximum buffered data amount), that a packet-loss mechanism is triggered, causing excessive packet loss.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a management method for a cache queue, an electronic device, and a storage medium, so that delay and packet loss can be reduced when traffic shaping is performed based on cache queues.
To achieve the above object, an embodiment of the present invention provides a management method for a cache queue, comprising the following steps: acquiring the data volume of the cached data in each cache queue whose caching condition has changed; detecting whether the data volume of the cached data exceeds a first preset threshold; if the data volume exceeds the first preset threshold, acquiring the enqueue rate of the cached data in the cache queue and determining the dequeue weight of the cache queue according to both the data volume and the enqueue rate; and if the data volume does not exceed the first preset threshold, determining the dequeue weight of the cache queue according to the data volume alone.
To achieve the above object, an embodiment of the present invention further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of cache queue management as described above.
To achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the method for managing a cache queue as described above.
The management method provided in the embodiment of the present invention obtains the data amount of the buffered data in each buffer queue whose buffering condition has changed, and then decides whether the enqueue rate must be factored into the dequeue weight by checking whether that data amount exceeds a first preset threshold. This can be understood as follows. When the data amount does not exceed the first preset threshold, i.e., the buffer queue still has ample space to store more data, the queue will essentially never reach its maximum queue length as long as its buffered data keeps being dequeued, even at a high enqueue rate, so packet loss is already well avoided. In that case only the data amount need drive the dequeue weight: the weight is adapted to the data amount so that the buffered data can be sent out in fewer dequeue rounds, reducing the number of times the queue must wait for other buffer queues to send and ultimately reducing delay, while also avoiding an oversized dequeue weight that would make a single dequeue too large and force the other buffer queues to wait too long. When the data amount exceeds the first preset threshold, i.e., little storage space remains, the enqueue rate strongly influences whether the queue reaches its maximum length and must be taken into account; otherwise a high enqueue rate combined with a small dequeue weight, and hence a low dequeue rate, would cause data to back up in the queue, producing packet loss and excessive delay. Considering the data amount and the enqueue rate together therefore avoids the backlog, packet loss, and large delay that result from a dequeue weight determined by the queue length alone. In other words, the polling dequeue weight can be configured flexibly according to the actual state of the cached data in each buffer queue, so that the amount dequeued in each polling round matches the buffered data: the scheme neither slows the dequeuing of other buffer queues, which would increase their delay, nor lets a queue dequeue too little, which would cause a data backlog, rapid queue growth, and finally excessive transmission delay and serious packet loss.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
Fig. 1 is a flowchart of a method for managing a cache queue according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device provided in another embodiment of the present invention.
Detailed Description
As noted in the Background, managing buffer queues with the current SFQ method during traffic shaping leads to large delay and serious packet loss.
To solve the above problem, an embodiment of the present invention provides a management method for a cache queue, comprising: acquiring the data volume of the cached data in each cache queue whose caching condition has changed; detecting whether the data volume of the cached data exceeds a first preset threshold; if the data volume exceeds the first preset threshold, acquiring the enqueue rate of the cached data in the cache queue and determining the dequeue weight of the cache queue according to both the data volume and the enqueue rate; and if the data volume does not exceed the first preset threshold, determining the dequeue weight of the cache queue according to the data volume alone.
The management method provided in the embodiment of the present invention obtains the data amount of the buffered data in each buffer queue whose buffering condition has changed, and then decides whether the enqueue rate must be factored into the dequeue weight by checking whether that data amount exceeds a first preset threshold. This can be understood as follows. When the data amount does not exceed the first preset threshold, i.e., the buffer queue still has ample space to store more data, the queue will essentially never reach its maximum queue length as long as its buffered data keeps being dequeued, even at a high enqueue rate, so packet loss is already well avoided. In that case only the data amount need drive the dequeue weight: the weight is adapted to the data amount so that the buffered data can be sent out in fewer dequeue rounds, reducing the number of times the queue must wait for other buffer queues to send and ultimately reducing delay, while also avoiding an oversized dequeue weight that would make a single dequeue too large and force the other buffer queues to wait too long. When the data amount exceeds the first preset threshold, i.e., little storage space remains, the enqueue rate strongly influences whether the queue reaches its maximum length and must be taken into account; otherwise a high enqueue rate combined with a small dequeue weight, and hence a low dequeue rate, would cause data to back up in the queue, producing packet loss and excessive delay. Considering the data amount and the enqueue rate together therefore avoids the backlog, packet loss, and large delay that result from a dequeue weight determined by the queue length alone. In other words, the polling dequeue weight can be configured flexibly according to the actual state of the cached data in each buffer queue, so that the amount dequeued in each polling round matches the buffered data: the scheme neither slows the dequeuing of other buffer queues, which would increase their delay, nor lets a queue dequeue too little, which would cause a data backlog, rapid queue growth, and finally excessive transmission delay and serious packet loss.
To make the objects, technical solutions, and advantages of the embodiments clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to aid understanding; the claimed invention may nevertheless be practiced without these specific details, or with various changes and modifications based on the following embodiments.
The following embodiments are divided for convenience of description, and should not constitute any limitation to the specific implementation manner of the present invention, and the embodiments may be mutually incorporated and referred to without contradiction.
One aspect of the embodiments of the invention provides a management method for a cache queue, applied to electronic devices such as computers, mobile phones, and servers. The flow of the method is shown in Fig. 1.
Step 101, obtaining the data amount of the buffered data in each buffer queue with the changed buffer condition.
In this embodiment, the buffer queue is mainly a buffer queue for traffic shaping in an electronic device such as a computer, and particularly, the buffer queue may be a buffer queue of a communication output interface of the electronic device such as a computer.
In this embodiment, the changing of the buffering condition of the buffer queue may include: the buffered data in the buffer queue is dequeued, data is enqueued in the buffer queue, and packet loss processing is performed on the buffered data in the buffer queue according to a certain policy, and the like.
In particular, the data amount of the buffered data may be represented in various ways, such as frames, messages, bits, and the like.
In one example, the data amount of the buffered data is expressed as a number of messages. Suppose the buffer queues are queue 1 through queue 10, the data amount in queue 1 drops from 100 messages to 78 because buffered data was dequeued, and the data amount in queue 5 rises from 35 messages to 66 because new data was enqueued. The data amount is then obtained for each queue whose buffering condition changed: 78 messages for queue 1 and 66 messages for queue 5.
In another example, the amount of buffered data is expressed in bytes. Suppose the buffer queues are queue 1 through queue 100, the data amount in queue 65 drops from 456 bytes to 244 bytes because buffered data was dequeued, the data amount in queue 87 rises from 546 bytes to 658 bytes because new data was enqueued, and queue 100 times out while waiting, triggering the packet-loss mechanism so that all 33 bytes of its buffered data are discarded. The data amount is then obtained for each queue whose buffering condition changed: 244 bytes for queue 65, 658 bytes for queue 87, and 0 bytes for queue 100.
It is worth mentioning that when the data amount is expressed as a number of messages, messages are of variable length, so a message count does not uniquely determine a data size: two buffers holding the same number of messages may hold different amounts of data and therefore take different times to send, and the waiting time before any given message is sent depends on the sizes of the messages ahead of it, making the delay uncontrollable. For example, with a shaping bandwidth of 100M on an electronic device such as a computer and 1000 messages buffered across all cache queues, sending a 600-byte message clearly takes twice as long as sending a 300-byte message; hence for the message that is 100th in line, the sending delay when every message is 600 bytes long is twice the delay when every message is 300 bytes long. Expressing the data amount as a byte count instead of a message count determines the data size exactly, avoids these problems, and allows the delay of the cache queue to be controlled precisely; in particular, when the data amount is a byte count, the queue length can be configured from a delay parameter.
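The bandwidth arithmetic in the example above can be checked directly. The sketch below is illustrative only; the function name is ours, and 100 Mbit/s stands in for the "100M" shaping bandwidth of the example:

```python
def send_time_seconds(message_bytes: int, bandwidth_bps: int) -> float:
    """Serialization time of one message at the given shaping bandwidth."""
    return message_bytes * 8 / bandwidth_bps

BANDWIDTH = 100_000_000  # 100 Mbit/s, as in the example above

t_600 = send_time_seconds(600, BANDWIDTH)
t_300 = send_time_seconds(300, BANDWIDTH)

# A 600-byte message takes exactly twice as long to send as a 300-byte one,
# so the message that is 100th in line also waits twice as long.
assert t_600 == 2 * t_300
wait_100th_600 = 99 * t_600  # delay before the 100th message starts sending
wait_100th_300 = 99 * t_300
assert wait_100th_600 == 2 * wait_100th_300
```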
It should be noted that, the above is only exemplified by two cases of the number of messages and the number of bytes, actually, the number of messages may be replaced by the number of frames, etc., and the number of bytes may be replaced by the number of words, the number of bits, etc., which can uniquely represent the size of the data, and thus, the description is omitted here.
In order to facilitate those skilled in the art to better understand the above scenario in which the buffering condition changes, the following description will take data enqueuing as an example.
Before obtaining the data amount of the buffered data in each buffer queue with the changed buffer condition, the method for managing the buffer queues further includes: determining a cache queue corresponding to data to be cached; detecting whether a cache queue corresponding to the data to be cached can cache the data to be cached or not according to the data quantity of the data to be cached; discarding the data to be cached under the condition that the cache queue corresponding to the data to be cached cannot cache the data to be cached; and under the condition that the cache queue corresponding to the data to be cached can cache the data to be cached, caching the data to be cached in the corresponding cache queue and determining that the cache condition of the corresponding cache queue changes.
It should be noted that enqueuing the data to be cached takes a certain time, and while it is enqueuing, the buffer queue it enters may itself come up for sending in the polling rotation. To ensure the dequeue weights track the actual situation in real time, further reducing delay and alleviating packet loss, steps 101-104 may be executed once for every packet or data frame enqueued, updating the dequeue weight of each buffer queue in real time rather than waiting until the data to be cached has finished enqueuing. Alternatively, the buffer queues may be monitored while the data is enqueuing: for example, detecting whether the queue immediately ahead of the target queue in the sending order is currently sending, predicting the waiting time before sending, or detecting whether the waiting time until the target queue sends is less than or equal to a preset duration. When the buffered data in the target queue is about to be sent, its dequeue weight is updated in real time, again by executing steps 101-104 once per enqueued packet or data frame, before the target queue begins sending its data.
In one example, determining the buffer queue corresponding to the data to be cached may be implemented as follows. First, determine the flow identification information of the data to be cached, which may be information such as the five-tuple or three-tuple of the data flow to which it belongs; for example, for messages that carry port numbers and a protocol number, such as Transmission Control Protocol (TCP) messages, User Datagram Protocol (UDP) messages, and Internet Control Message Protocol (ICMP) messages, the flow identification information may be the source Internet Protocol (IP) address, destination IP address, source port number, destination port number, and protocol number. Then, determine the buffer queue uniquely corresponding to the data according to the flow identification information, the total number of buffer queues, and a hash algorithm. When the flow identification information is a five-tuple, the hash algorithm may XOR the five fields of the tuple to obtain a hash value and then take that value modulo the total number of buffer queues to obtain the queue number of the queue the data is enqueued to; for example, with 1000 buffer queues, a hash value of 5 selects the queue with queue number 5, and a hash value of 1027 selects the queue with queue number 27. Of course, when the flow identification information is the source IP address and destination IP address, the hash algorithm may instead sum the two addresses and take the sum modulo the total number of buffer queues; the resulting hash value is the queue number of the queue the data is enqueued to. Further details are omitted here.
It should be noted that determining the queue number as described above stores data to be cached that shares the same flow identification information, i.e., data of the same data flow, in the same buffer queue, while data from different data flows is stored in different buffer queues.
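The queue selection described above (XOR the five-tuple fields, then take the result modulo the total number of queues) can be sketched as follows; the function name and sample addresses are illustrative, not taken from the patent:

```python
import ipaddress

def queue_number(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                 protocol: int, total_queues: int) -> int:
    """XOR the five-tuple fields, then take the hash modulo the queue count."""
    h = (int(ipaddress.ip_address(src_ip))
         ^ int(ipaddress.ip_address(dst_ip))
         ^ src_port ^ dst_port ^ protocol)
    return h % total_queues

# Packets of the same flow always map to the same queue number:
a = queue_number("10.0.0.1", "10.0.0.2", 1234, 80, 6, 1000)
b = queue_number("10.0.0.1", "10.0.0.2", 1234, 80, 6, 1000)
assert a == b
# As in the text's example, a hash value of 1027 with 1000 queues selects queue 27:
assert 1027 % 1000 == 27
```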
It should be further noted that the above description assumes all buffer queues have already been created, so enqueuing completes once the target queue is determined. In practice, only part of the buffer queues may be created: when the determined queue number does not belong to any currently created queue, a new buffer queue is created and its queue number is set to the number determined by the hash algorithm. In particular, the step of determining the buffer queue uniquely corresponding to the data to be cached according to the flow identification information, the total number of buffer queues, and the hash algorithm may be replaced by: determining it according to the flow identification information, a preset threshold on the number of buffer queues, and the hash algorithm. The total number of buffer queues is then fixed in advance, which prevents an excessive number of flows in the system from creating too many queues, consuming system memory resources and affecting system operation.
In another example, detecting, according to the data amount of the data to be cached, whether the corresponding buffer queue can cache it may be implemented as follows: detect whether the data amount of the data to be cached exceeds the second preset threshold corresponding to that buffer queue; if it exceeds the second preset threshold, determine that the buffer queue cannot cache the data; if it does not, detect whether the sum of the data amount of the data to be cached and the buffered data amounts of all current buffer queues exceeds a third preset threshold; if that sum exceeds the third preset threshold, determine that the buffer queue cannot cache the data; otherwise, determine that it can.
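The two-threshold admission test enumerated above can be sketched as follows; the function name, argument names, and the threshold values in the usage lines are all illustrative:

```python
def can_enqueue(pkt_bytes: int, per_queue_limit: int,
                total_buffered_bytes: int, global_limit: int) -> bool:
    """Two-stage admission test: first the per-queue second preset threshold,
    then the third preset threshold over the buffered data of all queues.
    Threshold values here are illustrative stand-ins."""
    if pkt_bytes > per_queue_limit:  # second preset threshold
        return False
    if pkt_bytes + total_buffered_bytes > global_limit:  # third preset threshold
        return False
    return True

assert can_enqueue(100, 1500, 0, 10_000)          # admitted
assert not can_enqueue(2000, 1500, 0, 10_000)     # fails per-queue check
assert not can_enqueue(100, 1500, 9_950, 10_000)  # fails global check
```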
It is worth mentioning that the comparison against the second and third preset thresholds adds, on top of deciding admission from whether the queue's remaining storage space can hold the data to be cached, a judgment on the cacheable data amount across all buffer queues. This avoids excessive buffered data, preventing electronic devices such as computers from devoting too many system resources to storing or processing it and thereby affecting the working efficiency of the system.
Step 102, detecting whether the data amount of the cached data exceeds a first preset threshold, if so, executing step 103, and if not, executing step 104.
In this embodiment, the data amount of the cached data and the first preset threshold are represented in the same manner, for example, when the data amount of the cached data is represented by a packet number, the first preset threshold is a packet number threshold, and when the data amount of the cached data is represented by a byte number, the first preset threshold is a byte number threshold.
It should be noted that the first preset threshold may be a fixed value set according to experience, or may also be a dynamic value related to a change situation of system resources, an operating state, and the like of an electronic device such as a computer, a server, and the like, and of course, the first preset threshold may also be a dynamic value related to a change situation of an enqueue situation and a dequeue situation of cached data, which is not described herein any more.
It should be further noted that the first preset threshold of each buffer queue may be the same or different, and is mainly determined according to actual situations, for example, when the enqueue rate of the buffered data is higher, the first preset threshold corresponding to the buffered data may be set to a smaller value, so that the dequeue weight corresponding to the buffer queue where the buffered data is located is determined as much as possible according to the enqueue rate and the data amount of the buffered data, so as to avoid the problem that packet loss is likely to occur in the buffer queue due to an excessively high enqueue rate; similarly, when the enqueue rate of the buffered data is low, the first preset threshold corresponding to the buffered data may be set to a large value.
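One hypothetical policy for such a dynamic first preset threshold, purely for illustration (the patent does not specify a formula; every name and constant below is an assumption), shrinks the threshold as the enqueue rate rises so that fast queues switch to rate-aware weighting earlier:

```python
def first_threshold(enqueue_rate_bps: float,
                    base_threshold: int = 8_000,
                    min_threshold: int = 2_000,
                    reference_rate: float = 1_000_000.0) -> int:
    """Hypothetical policy: the higher the enqueue rate, the smaller the
    first preset threshold, down to a floor.  All constants are illustrative."""
    scale = reference_rate / (reference_rate + enqueue_rate_bps)
    return max(min_threshold, int(base_threshold * scale))

assert first_threshold(0) == 8_000                       # idle queue keeps the base value
assert first_threshold(10_000_000) == 2_000              # fast queue hits the floor
assert first_threshold(10_000_000) < first_threshold(0)  # monotone in the rate
```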
Step 103, obtaining the enqueue rate of the buffered data in the buffer queue and determining the dequeue weight of the corresponding buffer queue according to the data amount and the enqueue rate of the buffered data.
In this embodiment, the dequeue weight is positively correlated to the enqueue rate, and the dequeue weight is positively correlated to the data size of the cached data.
In one example, the data amount of the buffered data and the enqueue rate each contribute a proportional term to the dequeue weight; for example, the dequeue weight K = K1·s1 + K2·v1 + c1, where s1 is the data amount of the buffered data, v1 is the enqueue rate of the buffered data, and K1, K2, and c1 are all preset values. Alternatively, the dequeue weight may be given by a second expression (shown in the original as a formula image, Figure BDA0003332653270000071, not recoverable here), in which s2 is the data amount of the buffered data, v2 is the enqueue rate of the buffered data, and K3, K4, K5, K6, K7, K8, K9, K10, c2, c3, c4, and c5 are all preset values.
Of course, the above is only a specific example, in other examples, the specific relationship between the dequeue weight and the data amount of the buffered data and the enqueue rate may also be a non-linear positive correlation, and details are not repeated here.
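For concreteness, the linear weight K = K1·s1 + K2·v1 + c1 combined with the threshold test of steps 102-104 can be sketched as follows; the constants are illustrative stand-ins for the preset values, and the below-threshold branch (weight from the data amount alone) is our assumption of one simple form it could take:

```python
def dequeue_weight(buffered_bytes: int, enqueue_rate: float,
                   first_threshold: int,
                   k1: float = 0.5, k2: float = 0.001, c1: float = 1.0) -> float:
    """Above the first preset threshold the enqueue rate contributes
    (K = k1*s1 + k2*v1 + c1); at or below it, only the buffered amount does.
    k1, k2, c1 stand in for the preset values K1, K2, c1."""
    if buffered_bytes > first_threshold:
        return k1 * buffered_bytes + k2 * enqueue_rate + c1
    return k1 * buffered_bytes + c1

# A queue holding 100 bytes (below a 1000-byte threshold) ignores its rate;
# a queue holding 2000 bytes factors the rate in.
assert dequeue_weight(100, 5000.0, 1000) == 51.0
assert dequeue_weight(2000, 5000.0, 1000) == 1006.0
```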
In this embodiment, before obtaining the enqueue rate of the buffered data in the buffer queue, the management method further includes: periodically obtaining, with a preset period, the cumulative amount of data actually enqueued to each buffer queue. The preset period may be set according to actual requirements; for example, when the enqueue rate of the data to be cached is large, the period may be as short as 100 milliseconds, and when it is small, the period may be a relatively long time such as 200 milliseconds. Obtaining the enqueue rate of the buffered data can then be implemented as follows: determine the enqueue rate of the corresponding buffer queue from the preset period and the historically recorded cumulative amounts. For example, if the cumulative amount recorded the 3rd time is M1 bytes, the cumulative amount recorded the 4th time is M2 bytes, and the period is T, then the enqueue rate v = (M2 - M1)/T. If the cumulative amounts recorded the 7th through 12th times are M3 through M8 bytes respectively and the period is T', then the enqueue rate v = [(M8 - M5)/3T' + (M7 - M4)/3T' + (M6 - M3)/3T']/3 = (M8 + M7 + M6 - M5 - M4 - M3)/9T'.
It can be understood that, when an electronic device such as a computer collects the accumulated data volume statistics, the actual statistical time may deviate from the nominal period. Therefore, to further improve the accuracy of the obtained enqueue rate, the current time may also be recorded whenever the accumulated data volume is recorded, so that the enqueue rate is calculated from the recorded times and accumulated data volumes; this is not detailed further here.
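The periodic sampling with recorded timestamps described above can be sketched as follows. The window size of 6 samples mirrors the example with records M3 to M8 (an illustrative choice, not a value mandated by the patent):

```python
import time
from collections import deque

class EnqueueRateEstimator:
    """Periodically record (timestamp, cumulative bytes) pairs and
    estimate the enqueue rate from the recorded history."""

    def __init__(self, window=6):
        # Each sample is (timestamp_seconds, cumulative_enqueued_bytes).
        self.samples = deque(maxlen=window)

    def record(self, cumulative_bytes, now=None):
        # Recording the actual time alongside the byte count compensates
        # for jitter in when the periodic statistics job really ran.
        if now is None:
            now = time.monotonic()
        self.samples.append((now, cumulative_bytes))

    def rate(self):
        # Rate over the full window: (latest - oldest) / elapsed time.
        if len(self.samples) < 2:
            return 0.0
        t0, m0 = self.samples[0]
        t1, m1 = self.samples[-1]
        return (m1 - m0) / (t1 - t0) if t1 > t0 else 0.0
```

For example, recording 0 bytes at t = 0.0 s and 1000 bytes at t = 0.1 s yields a rate of 10000 bytes/s.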
It can be further understood that, by executing steps 101 to 104, the dequeue weights of the buffer queues whose buffering conditions have changed are updated, while the buffer queues whose buffering conditions have not changed keep the most recently determined dequeue weights; that is, the dequeue weights of all buffer queues are determined. Data can then be sent outwards according to the current dequeue weight of each buffer queue to implement traffic shaping. In other words, after the dequeue weight of the corresponding buffer queue is determined, the method for managing the buffer queues further includes: determining the dequeue data volume corresponding to each buffer queue according to the dequeue weight corresponding to each buffer queue; and sending the buffered data outwards from each buffer queue with the corresponding dequeue bandwidth. That is, the buffer queues are selected one by one in a certain order to send data outwards, each buffer queue sending its corresponding dequeue data volume; when one buffer queue finishes sending, the next buffer queue starts, so that data is sent cyclically. Preferably, the dequeue order may also be determined by the weights, with higher-weight queues sent first, so that buffer queues under higher buffering pressure are emptied preferentially, reducing the impact on the enqueue side and the probability of packet loss.
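The weight-proportional bandwidth split and the weight-ordered sending described above can be sketched as follows (a simplified illustration; queue names and the bandwidth figure are hypothetical):

```python
def dequeue_amounts(weights, total_bandwidth):
    """Split the total dequeue bandwidth among the buffer queues in
    proportion to their dequeue weights, and return the sending order
    sorted by descending weight so that queues under higher buffering
    pressure are emptied first."""
    total_w = sum(weights.values())
    amounts = {q: total_bandwidth * w / total_w for q, w in weights.items()}
    order = sorted(weights, key=weights.get, reverse=True)
    return amounts, order
```

For example, with weights `{"q1": 3.0, "q2": 1.0}` and a total bandwidth of 400, q1 is allocated 300 and q2 is allocated 100, and q1 is sent first.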
Further, after the buffered data is sent outwards from each buffer queue with the corresponding dequeue bandwidth, the method for managing the buffer queues further includes: in a case where the buffered data in at least one buffer queue has been sent, determining that the buffering condition of the corresponding buffer queue has changed, thereby triggering steps 101 to 104 to be executed again and updating the dequeue weight of the buffer queue whose buffered data was sent. In particular, similarly to the foregoing data enqueue process, determining that the buffering condition of the corresponding buffer queue has changed may mean updating the dequeue weight each time a data packet or data frame is sent outwards, and so on; this is not detailed further here.
Step 104: determining the dequeue weight of the corresponding buffer queue according to the data volume of the buffered data.
In this embodiment, the dequeue weight is positively correlated with the data volume of the buffered data, substantially as described for step 103; this is not repeated here.
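Putting steps 102 to 104 together: when the buffered data volume exceeds the first preset threshold, both the data volume and the enqueue rate determine the weight; otherwise the data volume alone does. A minimal sketch, with coefficient values and the threshold as illustrative placeholders:

```python
def update_dequeue_weight(data_volume, enqueue_rate, threshold,
                          k1=0.5, k2=0.2, c=1.0):
    """Steps 102-104: above the first preset threshold the enqueue rate
    is factored in as well; below it, the weight depends on the data
    volume alone. Coefficients are illustrative placeholders."""
    if data_volume > threshold:
        return k1 * data_volume + k2 * enqueue_rate + c
    return k1 * data_volume + c
```

With the placeholder defaults, a queue holding 100 bytes arriving at 50 bytes/s gets a larger weight when the threshold (10) is exceeded than when it is not (threshold 200), since the enqueue rate term is only added in the first case.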
The steps of the above method are divided only for clarity of description. In implementation, they may be combined into a single step, or a single step may be split into several steps, as long as the same logical relationship is included; all such variants fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes that do not alter the core design of the algorithm or process, also falls within the protection scope of this patent.
Another aspect of the embodiments of the present invention further provides an electronic device, as shown in fig. 2, including: at least one processor 201; and a memory 202 communicatively coupled to the at least one processor 201; the memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201, so that the at least one processor 201 can execute the method for managing the cache queue described in any one of the method embodiments.
The memory 202 and the processor 201 are connected by a bus, which may comprise any number of interconnected buses and bridges linking together various circuits of the one or more processors 201 and the memory 202. The bus may also connect various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and are therefore not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the processor 201 is transmitted over a wireless medium via an antenna, which also receives incoming data and passes it to the processor 201.
The processor 201 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory 202 may be used to store data used by the processor 201 in performing operations.
In another aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the method for managing a buffer queue described in any of the above method embodiments.
That is, those skilled in the art can understand that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific embodiments for practicing the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (11)

1. A method for managing a cache queue, comprising:
acquiring the data volume of the cached data in each cache queue with the changed cache condition;
detecting whether the data volume of the cached data exceeds a first preset threshold value;
under the condition that the data volume of the cached data exceeds the first preset threshold value, acquiring the enqueue rate of the cached data in the cache queue and determining the dequeue weight corresponding to the cache queue according to the data volume of the cached data and the enqueue rate;
and under the condition that the data volume of the cached data does not exceed the first preset threshold, determining the dequeuing weight corresponding to the cache queue according to the data volume of the cached data.
2. The method for managing the buffer queue according to claim 1, wherein the data amount of the buffered data is a byte number.
3. The method for managing the buffer queue according to claim 1, wherein the dequeue weight is positively correlated to the enqueue rate, and the dequeue weight is positively correlated to the data amount of the buffered data.
4. The method for managing the buffer queue according to claim 1, wherein before the obtaining the data amount of the buffered data in each buffer queue with the changed buffer condition, the method further comprises:
determining the cache queue corresponding to the data to be cached;
detecting whether the cache queue corresponding to the data to be cached can cache the data to be cached or not according to the data quantity of the data to be cached;
discarding the data to be cached under the condition that the cache queue corresponding to the data to be cached cannot cache the data to be cached;
and under the condition that the cache queue corresponding to the data to be cached can cache the data to be cached, caching the data to be cached in the corresponding cache queue and determining that the cache condition of the corresponding cache queue changes.
5. The method for managing the buffer queue according to claim 4, wherein the determining the buffer queue corresponding to the data to be buffered includes:
determining the flow identification information of the data to be cached;
and determining the buffer queue uniquely corresponding to the data to be buffered according to the flow identification information, the total number of the buffer queues and a Hash algorithm.
6. The method for managing the buffer queue according to claim 4, wherein the detecting whether the buffer queue corresponding to the data to be buffered can buffer the data to be buffered according to the data amount of the data to be buffered comprises:
detecting whether the data volume of the data to be cached exceeds a second preset threshold corresponding to the cache queue corresponding to the data to be cached;
determining that the cache queue corresponding to the data to be cached cannot cache the data to be cached under the condition that the data amount of the data to be cached exceeds the corresponding second preset threshold;
under the condition that the data volume of the data to be cached does not exceed the corresponding second preset threshold, detecting whether the sum of the data volume of the data to be cached and the data volumes of the cached data of all the current cache queues exceeds a third preset threshold or not;
determining that the cache queue corresponding to the data to be cached cannot cache the data to be cached under the condition that the third preset threshold is exceeded;
and under the condition that the third preset threshold value is not exceeded, determining that the cache queue corresponding to the data to be cached can cache the data to be cached.
7. The method for managing the buffer queue according to claim 1, wherein before the obtaining the enqueue rate of the buffered data in the buffer queue, the method further comprises:
according to a preset period, periodically acquiring the accumulated data quantity of the actual data to be cached in each cache queue;
the obtaining the enqueue rate of the buffered data in the buffer queue includes:
and determining the enqueuing rate of the corresponding cache queue according to the preset period and the accumulated data amount obtained in history.
8. The method for managing the buffer queue according to claim 1, wherein after determining the dequeuing weight corresponding to the buffer queue, the method further comprises:
determining dequeue data volume corresponding to each cache queue according to the dequeue weight corresponding to each cache queue;
and sending the buffered data of the corresponding dequeue data amount outwards from each buffer queue by using the corresponding dequeue bandwidth.
9. The method for managing the buffer queues according to claim 8, wherein after the buffered data is sent out from each of the buffer queues with the corresponding dequeue bandwidth, the method further comprises:
and determining that the buffering condition of the corresponding buffer queue changes under the condition that the buffered data in at least one buffer queue is sent.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of managing a cache queue as claimed in any one of claims 1 to 9.
11. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the method for managing a buffer queue according to any one of claims 1 to 9.
CN202111284931.5A 2021-11-01 2021-11-01 Management method of cache queue, electronic device and storage medium Pending CN114237546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111284931.5A CN114237546A (en) 2021-11-01 2021-11-01 Management method of cache queue, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN114237546A true CN114237546A (en) 2022-03-25

Family

ID=80743543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111284931.5A Pending CN114237546A (en) 2021-11-01 2021-11-01 Management method of cache queue, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114237546A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination