CN111355673A - Data processing method, device, equipment and storage medium - Google Patents
- Publication number
- CN111355673A (application number CN201811585237.5A)
- Authority
- CN
- China
- Prior art keywords
- cached
- queue
- priority
- buffered
- data packet
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/61—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
Abstract
Embodiments of the present application provide a data processing method, apparatus, device and storage medium. The method includes: a server determines a set of queues to be cached according to the priority of a data packet to be cached sent by a terminal; determines a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.
Description
Technical Field
Embodiments of the present application relate to the technical field of the Internet, and in particular, but not exclusively, to a data processing method, apparatus, device and storage medium.
Background
In current network communication, traffic of various service classes often coexists. To guarantee communication with high timeliness requirements, priority queues are generally used to handle the storage and forwarding of messages.
In these forwarding schemes, a buffer space is provided for each service priority. When a message arrives, its service class is first identified by parsing its characteristic fields and it is mapped to the corresponding service priority; the buffer occupancy of the queue of that priority is then queried. If the occupancy of the corresponding priority buffer is too high, the message is discarded; otherwise, it is written into the corresponding priority queue buffer. When allocating buffers among the priority queues, each queue is typically given its own independent buffer, sized according to the relationship between the sum of the per-queue buffer depths and the total amount of buffer actually available.
However, such schemes suffer from two problems: buffers sitting idle, and high-priority buffers being occupied by lower-priority traffic.
Disclosure of Invention
In view of this, embodiments of the present application provide a data processing method, apparatus, device, and storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a data processing method, where the method includes:
the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal;
determining a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached;
and caching the data packet to be cached into the target cache queue.
In other embodiments, each of the queues to be buffered has a preset priority; correspondingly, the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal, and the method comprises the following steps:
the server compares the priority of each queue to be cached with the priority of the data packet to be cached;
and determining the queue to be cached with the priority greater than or equal to that of the data packet to be cached as the queue set to be cached.
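As a minimal sketch (names and data structures are illustrative, not taken from the patent), the comparison and selection in the two steps above can be expressed as a filter over the queues, using the convention from the description that level 0 is the highest priority:

```python
# Hypothetical sketch of the two steps above (names are illustrative).
# Priority level 0 is the highest, so "priority greater than or equal to
# that of the packet" means a level number less than or equal to the
# packet's level number.
def queues_to_consider(queues, packet_level):
    """queues: dict mapping queue name -> priority level (0 = highest)."""
    return {name for name, level in queues.items() if level <= packet_level}

queues = {"q0": 0, "q1": 1, "q2": 2, "q3": 3}
print(sorted(queues_to_consider(queues, 2)))  # prints ['q0', 'q1', 'q2']
```

A packet of level 2 thus yields a candidate set containing its own queue plus every higher-priority queue, which is what allows the later admission check to protect high-priority cache.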
In other embodiments, the determining a target buffer queue from the set of queues to be buffered according to the attribute information of each queue to be buffered in the set of queues to be buffered and the priority of the packet to be buffered includes:
for each queue to be cached in the queue set to be cached, determining whether to allow the data packet to be cached in the corresponding queue to be cached according to the attribute information of each queue to be cached and a preset threshold corresponding to the attribute information;
and for each queue to be cached in the queue set to be cached, if the determination result is allowable, determining the queue to be cached with the same priority as the data packet to be cached from the queue set to be cached as the target cache queue.
In other embodiments, the attribute information of the queue to be cached includes at least one of the following: the number of cached data packets and the memory space occupied by the cached data packets; correspondingly, the preset threshold corresponding to the attribute information includes at least one of the following: a quantity threshold and a memory space threshold;
correspondingly, the determining, for each queue to be cached in the set of queues to be cached, whether to allow the data packet to be cached in the corresponding queue to be cached according to the attribute information of each queue to be cached and the preset threshold corresponding to the attribute information includes:
for each queue to be cached in the set of queues to be cached, if the number of the cached data packets in each queue to be cached is less than or equal to the number threshold of the corresponding queue to be cached, allowing the data packets to be cached in the corresponding queue to be cached;
and/or,
for each queue to be cached in the set of queues to be cached, if the memory space of the cached data packet in each queue to be cached is less than or equal to the memory space threshold of the corresponding queue to be cached, allowing the data packet to be cached in the corresponding queue to be cached;
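The admission rule above can be sketched as follows; the field names, and the reading that every candidate queue must be within both of its thresholds for buffering to be allowed, are assumptions made for illustration:

```python
# Minimal sketch of the admission check above; field names and the rule
# that every candidate queue must be within both thresholds are
# illustrative assumptions, not the patent's literal data layout.
def admission_allowed(candidates):
    """candidates: iterable of dicts with keys 'count', 'count_max',
    'bytes', 'bytes_max' describing each candidate queue's occupancy."""
    return all(
        q["count"] <= q["count_max"] and q["bytes"] <= q["bytes_max"]
        for q in candidates
    )
```

If any candidate queue is over either threshold, the check fails, which corresponds to the discard branch described in the later embodiments.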
in other embodiments, the method further comprises:
when the cached data packet flows out of the queue to be cached, determining the priority of the queue to be cached for caching the cached data packet as a caching priority;
determining the attribute information of the queue to be cached with the priority greater than the caching priority as target attribute information;
and updating the target attribute information.
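A hedged sketch of this bookkeeping, under the assumption that each queue's attribute information is kept as a counter that also covers lower-priority traffic (consistent with the nested cache spaces of FIG. 5); the structure and names are illustrative:

```python
# Hedged sketch of the dequeue bookkeeping above. Assumes each queue's
# attribute information is a cumulative counter that also covers
# lower-priority traffic; when a packet leaves a queue of priority level
# cache_level, every queue with strictly greater priority (numerically
# smaller level) has its counters updated, as the text describes.
def on_dequeue(queues, cache_level, size):
    """queues: dict mapping level -> {'count': int, 'bytes': int}."""
    for level, attrs in queues.items():
        if level < cache_level:  # strictly higher priority
            attrs["count"] -= 1
            attrs["bytes"] -= size
```

The dequeuing queue's own counters would presumably be updated as part of the normal dequeue itself; the patent text only specifies the update for higher-priority queues.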
In other embodiments, the method further comprises:
if the number of the cached data packets of at least one queue to be cached is larger than the number threshold of the corresponding queue to be cached in the queue set to be cached, discarding the data packets to be cached;
and/or,
if the memory space of the cached data packet of at least one queue to be cached is larger than the memory space threshold value of the corresponding queue to be cached in the queue set to be cached, discarding the data packet to be cached.
In a second aspect, an embodiment of the present application provides a data processing apparatus, where the apparatus includes:
the first determining unit is used for determining a queue set to be cached according to the priority of a data packet to be cached sent by a terminal;
a second determining unit, configured to determine a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the packet to be cached;
and the buffer unit is used for buffering the data packet to be buffered into the target buffer queue.
In a third aspect, an embodiment of the present application provides a data processing apparatus, where the apparatus at least includes: a processor and a storage medium configured to store executable instructions, wherein: the processor is configured to execute stored executable instructions;
the executable instructions are configured to perform the data processing method described above.
In a fourth aspect, an embodiment of the present application provides a storage medium, where computer-executable instructions are stored in the storage medium, and the computer-executable instructions are configured to execute the data processing method.
The embodiment of the application provides a data processing method, a device, equipment and a storage medium, wherein the method comprises the following steps: the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal; determining a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached; and caching the data packet to be cached into the target cache queue. In this way, the target cache queue is determined according to the attribute information of each queue to be cached and the priority of the data packet to be cached, so that the purpose of preempting the cache queues by priority can be realized, and the problems of idle cache and occupied high-priority cache in the queue cache allocation can be solved.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having different letter suffixes may represent different examples of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed herein.
Fig. 1 is a schematic flow chart illustrating an implementation of a data processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a data processing method according to a second embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of a data processing method according to a third embodiment of the present application;
fig. 4 is a schematic flow chart illustrating an implementation of a data processing method according to a fourth embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a relationship between cache spaces that can be occupied by priorities according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic flow chart illustrating an implementation of a data processing method according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating an implementation flow of a message reading process in the data processing method according to the embodiment of the present application;
fig. 9 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the following describes specific technical solutions of the present invention in detail with reference to the accompanying drawings in the embodiments of the present invention. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only for convenience of description of the present application and have no specific meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.
In current network communication, communication requirements of various service classes often coexist, such as voice calls, video calls, web browsing and file transfer. Some of these, such as video calls, place high timeliness requirements on data packet and message forwarding; others, such as web browsing, place lower ones. From the perspective of routing and forwarding, however, all communication messages enter the router mixed together and wait to be processed. To guarantee the communication requirements with high timeliness, priority queues are therefore generally used to handle the storage and forwarding of messages.
In these forwarding schemes, a buffer space is provided for each service priority. When a message arrives, its service class is first identified by parsing its characteristic fields and it is mapped to the corresponding service priority; the buffer occupancy of the queue of that priority is then queried. If the occupancy of the corresponding priority buffer is too high, the message is discarded; otherwise, it is written into the corresponding priority queue buffer. When the output port becomes idle, the highest-priority non-empty queue is selected according to the service classes of the priority queues and dequeued first. As long as the enqueue bandwidth of a high-priority queue is smaller than the port bandwidth, that queue is served in the most timely fashion, guaranteeing its low-delay requirement.
When allocating buffers to the priority queues, each queue is usually given its own independent buffer; that is, the sum of the maximum buffer depths the priority queues may occupy does not exceed the total system buffer. Thus, even when a low-priority queue obtains no output bandwidth and its buffer fills up, the enqueuing and dequeuing of messages in the other priority queues are unaffected.
However, when no high-priority packets enter the router, part of the cache capacity is wasted. For example, in the exclusive configuration mode of cache allocation, a router configures 4 priority queues and sets a maximum cache depth T for each, where T × 4 < total cache AT. If, during a certain period, only two lower-priority traffic streams enter the router, about half of the cache depth sits idle and is wasted. Since the incoming traffic mix changes over time, this problem cannot be solved by adjusting the preset maximum cache depths.
If the per-queue depths are instead allowed to sum to more than the total cache depth, excessive idle cache can be effectively avoided. For example, in the preemption configuration mode of cache allocation, a router configures 4 priority queues and sets a maximum cache depth T for each, where T < total cache AT but T × 4 > AT. When only two lower-priority traffic streams enter the router during a certain period, less than half of the depth is wasted. However, if 2T > AT and the input bandwidth exceeds the output bandwidth at that time, the two lower-priority streams will exhaust the entire cache. If high-priority traffic then resumes, the high-priority messages cannot obtain any cache and are discarded.
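The arithmetic behind these two configurations can be checked with illustrative numbers (the values of T and AT below are examples, not taken from the patent):

```python
# Illustrative numbers for the two configurations (example values only).
AT = 100           # total cache depth
T_exclusive = 20   # exclusive mode: per-queue depth, 4 * T_exclusive < AT

# Exclusive mode, only two lower-priority queues active: at most 2*T
# of the cache can ever be used, so the remainder sits idle.
idle_exclusive = AT - 2 * T_exclusive    # 60 units of cache idle

T_preempt = 60     # preemption mode: T_preempt < AT but 4 * T_preempt > AT

# Preemption mode, same two queues: since 2 * T_preempt > AT, the pair
# can fill the entire cache, starving high-priority traffic that
# arrives later.
usable_preempt = min(2 * T_preempt, AT)  # 100: the whole cache
```

The first case shows the idle-cache problem of the exclusive mode; the second shows how the preemption mode trades that waste for the risk of the shared cache being exhausted by low-priority traffic.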
Therefore, in the related art, the mutual-exclusion configuration mode suffers from idle cache, while the preemption configuration mode suffers from high-priority cache being occupied.
Based on the above problems in the related art, embodiments of the present application provide a data processing method that, through priority preemption, reserves cache for high-priority queues while giving low-priority traffic a larger opportunity to occupy cache, thereby solving both the idle-cache problem of the mutual-exclusion configuration and the occupied high-priority-cache problem of the preemption configuration in conventional queue cache allocation. With this data processing method, the internal cache of the device is used more effectively and priority-based service is achieved.
Fig. 1 is a schematic flow chart of an implementation of a data processing method according to an embodiment of the present application, as shown in fig. 1, the method includes:
step S101, the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal.
Here, the server may be a network server in a switching network system or a routing system, and may receive, through a router, the data packet to be cached sent by a terminal. The data packet to be cached may be sent by the terminal in the form of a data packet, or in the form of a message.
And after the server receives the data packet to be cached, performing characteristic analysis on the data packet to be cached, and determining the priority corresponding to the characteristics of the data packet to be cached. For example, the priority level of the video call is higher than the priority level of the web browsing, and if the highest priority corresponds to the priority level of 0, the priority level of the video call may be set to 0. Therefore, when the server receives the data packet of the video call, the priority level corresponding to the video call is 0 after the characteristic analysis is performed.
It should be noted that, in this embodiment, priority level 0 may be considered to be greater than (i.e., higher than) priority level 1, priority level 1 greater than priority level 2, priority level 2 greater than priority level 3, and so on; that is, a smaller level number denotes a higher priority.
And after the server determines the priority of the data packet to be cached, the server determines a queue set to be cached according to the priority of the data packet to be cached.
In this embodiment, the set of queues to be buffered includes at least one queue to be buffered, where the queue to be buffered is used to buffer a data packet, for example, the data packet to be buffered. The queue to be buffered may already have a data packet buffered therein, or may be an empty queue in which no data packet is buffered.
In an embodiment of the present application, a server divides a cache space of a data processing system into a preset number of queues to be cached in advance, and each queue to be cached is used for caching a data packet meeting a certain condition. For example, the buffer space may be divided into queues to be buffered, in which preset data is buffered for data packets with different priorities, that is, a video call with a priority level of 0 corresponds to one queue to be buffered; the web browsing with the priority level of 4 corresponds to another queue to be cached.
In this embodiment, after determining the priority of the data packet to be buffered, at least one queue to be buffered that meets a preset condition may be determined from all queues to be buffered in the buffer space, so as to form the queue set to be buffered. The preset condition may be that the priority is greater than or equal to the priority of the data packet to be cached.
Step S102, determining a target buffer queue from the set of queues to be buffered according to the attribute information of each queue to be buffered in the set of queues to be buffered and the priority of the data packet to be buffered.
Here, the attribute information of the queue to be cached includes at least one of: the number of data packets buffered in the queue to be cached, and the memory space occupied by the data packets buffered in the queue to be cached. The number of buffered data packets may be counted in units of packet fragments or of whole packets; the memory space of the buffered data packets may be counted in bits, bytes, and so on.
In this embodiment, before determining the target buffer queue, it is required to determine whether the data packet to be buffered is allowed to be buffered in a certain queue to be buffered in the queue set to be buffered according to the attribute information of each queue to be buffered and the priority of the data packet to be buffered. And if the data packet to be cached is allowed, caching the data packet to be cached into a target cache queue. The target buffer queue is a queue to be buffered selected from the set of queues to be buffered. Therefore, for the data packet to be cached, the data packet can be cached to the target cache queue only if the conditions are met, and therefore the high-priority cache can be prevented from being occupied.
Step S103, buffering the data packet to be buffered into the target buffer queue.
Here, when it is determined that the data packet to be buffered is allowed to be buffered in a target buffer queue in the set of queues to be buffered, the data packet to be buffered is buffered in the target buffer queue, so as to ensure that the data packet to be buffered can be effectively transmitted.
According to the data processing method provided by the embodiment of the application, a server determines a queue set to be cached according to the priority of a data packet to be cached sent by a terminal; determining a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached; and caching the data packet to be cached into the target cache queue. In this way, the target cache queue is determined according to the attribute information of each queue to be cached and the priority of the data packet to be cached, so that the purpose of preempting the cache queues by priority can be realized, and the problems of idle cache and occupied high-priority cache in the queue cache allocation can be solved.
Fig. 2 is a schematic flow chart of an implementation of a data processing method according to a second embodiment of the present application, and as shown in fig. 2, the method includes:
in step S201, the server compares the priority of each queue to be cached with the priority of the data packet to be cached.
Here, each of the queues to be buffered has a preset priority. For example, the buffer space may be divided into a plurality of queues to be buffered, each queue to be buffered being used for buffering different data packets, such as video call data packets, file transfer data packets, web browsing data packets, and the like. The queue to be cached for caching the video call data packet has a higher priority than the queue to be cached for caching the network browsing data packet. The priority of each queue to be buffered is fixed.
In this embodiment, after receiving the data packet to be cached, the server compares the priority of the data packet to be cached with the priorities of all queues to be cached in the system.
Step S202, determining the queue to be buffered with the priority greater than or equal to the priority of the data packet to be buffered as the queue set to be buffered.
After comparing the priority of the data packet to be buffered with the priorities of all queues to be buffered in the system, selecting a queue to be buffered, whose priority is greater than or equal to the priority of the data packet to be buffered, to form the set of queues to be buffered, that is, in the set of queues to be buffered, at least one queue to be buffered is included, and the priorities of all queues to be buffered in the set of queues to be buffered are greater than or equal to the priority of the data packet to be buffered.
Step S203, determining a target buffer queue from the set of queues to be buffered according to the attribute information of each queue to be buffered in the set of queues to be buffered and the priority of the data packet to be buffered.
Before determining the target buffer queue, it needs to determine whether the data packet to be buffered is allowed to be buffered in a certain queue to be buffered in the queue set to be buffered according to the attribute information of each queue to be buffered and the priority of the data packet to be buffered. And if the data packet to be cached is allowed, caching the data packet to be cached into a target cache queue. The target buffer queue is a queue to be buffered selected from the set of queues to be buffered. Therefore, for the data packet to be cached, the data packet can be cached to the target cache queue only if the conditions are met, and therefore the high-priority cache can be prevented from being occupied.
Step S204, the data packet to be buffered is buffered in the target buffer queue.
Here, when it is determined that the data packet to be buffered is allowed to be buffered in a target buffer queue in the set of queues to be buffered, the data packet to be buffered is buffered in the target buffer queue, so as to ensure that the data packet to be buffered can be effectively transmitted.
According to the data processing method provided by the embodiment of the application, the server compares the priority of each queue to be cached with the priority of the data packet to be cached; determining a queue to be cached with a priority greater than or equal to that of the data packet to be cached as the queue set to be cached; determining a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached; and caching the data packet to be cached into the target cache queue. In this way, the target cache queue is determined according to the attribute information of each queue to be cached and the priority of the data packet to be cached, so that the purpose of preempting the cache queues by priority can be realized, and the problems of idle cache and occupied high-priority cache in the queue cache allocation can be solved.
Fig. 3 is a schematic flow chart of an implementation of a data processing method according to a third embodiment of the present application, and as shown in fig. 3, the method includes:
step S301, the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal.
Here, the queue set to be buffered has at least one queue to be buffered, and each queue to be buffered has a preset priority. For example, the buffer space may be divided into a plurality of queues to be buffered, each queue to be buffered being used for buffering different data packets.
In this embodiment, the priority of the data packet to be buffered may be compared with the priorities of all queues to be buffered in the system, and then the queue to be buffered having the priority greater than or equal to the priority of the data packet to be buffered is selected to form the queue set to be buffered.
Step S302, determining, for each queue to be cached in the set of queues to be cached, whether to allow the data packet to be cached in the corresponding queue to be cached according to the attribute information of each queue to be cached and a preset threshold corresponding to the attribute information.
Here, the attribute information of the queue to be cached includes at least one of: the number of the buffered data packets in the queue to be buffered, and the memory space of the buffered data packets in the queue to be buffered. Correspondingly, the preset threshold corresponding to the attribute information includes at least one of: a quantity threshold, a memory space threshold.
Then, the process of step S302 in this embodiment includes the following three implementation manners:
Manner one: when the attribute information of the queue to be cached is the number of cached data packets in the queue to be cached, and the preset threshold corresponding to the attribute information is a number threshold, the determining, for each queue to be cached in the set of queues to be cached, whether to allow the data packet to be cached in the corresponding queue to be cached according to the attribute information of each queue to be cached and the preset threshold corresponding to the attribute information includes: for each queue to be cached in the queue set to be cached, if the number of cached data packets in the queue is less than or equal to the number threshold of that queue, allowing the data packet to be cached into the corresponding queue to be cached.
Manner two: when the attribute information of the queue to be cached is the memory space of the cached data packets in the queue to be cached, and the preset threshold corresponding to the attribute information is a memory space threshold, the determining includes: for each queue to be cached in the queue set to be cached, if the memory space of the cached data packets in the queue is less than or equal to the memory space threshold of that queue, allowing the data packet to be cached into the corresponding queue to be cached.
Manner three: when the attribute information of the queue to be cached is both the number of cached data packets and the memory space of the cached data packets in the queue to be cached, and the preset thresholds corresponding to the attribute information are a number threshold and a memory space threshold, the determining includes: for each queue to be cached in the queue set to be cached, if both the number of cached data packets and the memory space of the cached data packets in the queue are less than or equal to the number threshold and the memory space threshold of that queue, respectively, allowing the data packet to be cached into the corresponding queue to be cached.
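The three admission manners above can be sketched in a few lines. This is an illustrative sketch only: the field names ("count", "mem_bytes") standing in for the attribute information and thresholds are assumptions, not from the patent.

```python
# Hypothetical sketch of the three admission manners: count only, memory
# space only, or both attributes checked against their preset thresholds.
def allow_enqueue(queue, check_count=True, check_memory=True):
    """Return True when every checked attribute is <= its threshold."""
    if check_count and not queue["count"] <= queue["count_threshold"]:
        return False
    if check_memory and not queue["mem_bytes"] <= queue["mem_threshold"]:
        return False
    return True

q = {"count": 3, "count_threshold": 4, "mem_bytes": 900, "mem_threshold": 800}
# Manner one admits (3 <= 4); manner two rejects (900 > 800); so manner
# three, which requires both, also rejects.
```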
In this embodiment, the number of cached data packets in the attribute information of the queue to be cached may be counted with the number of data packet fragments or the number of whole data packets as the unit of measurement; the memory space of the cached data packets may be counted in units such as bits or bytes.
In this embodiment, the preset threshold corresponding to the attribute information is determined according to the priority of the queue to be buffered, that is, each queue to be buffered corresponds to one preset threshold, and the preset thresholds of the queues to be buffered are different. When the priority of a certain queue to be cached is higher than the priorities of other queues to be cached, the preset threshold of the queue to be cached is also higher than the preset thresholds of the other queues to be cached.
For example, a preset threshold corresponding to an Nth queue to be buffered in the set of queues to be buffered may be determined according to the preset priority of the Nth queue to be buffered; if the preset priority of the Nth queue to be buffered is higher than the preset priority of the Mth queue to be buffered, the preset threshold corresponding to the Nth queue to be buffered is larger than the preset threshold corresponding to the Mth queue to be buffered.
It should be noted that, in this embodiment, the attribute information of every queue to be cached in the queue set to be cached is compared with its corresponding preset threshold, and only when the condition of the adopted manner (any one of the three manners above) is satisfied for all queues to be cached in the set can it be determined that the data packet to be cached is allowed to be cached into the corresponding queue to be cached; otherwise, the data packet to be cached is prohibited from being cached into the corresponding queue to be cached.
Step S303, for each queue to be buffered in the queue set to be buffered, if the determination result is allowable, determining a queue to be buffered, which has the same priority as the data packet to be buffered, from the queue set to be buffered as the target buffer queue.
Here, the priority of the target buffer queue is the same as the priority of the data packet to be buffered. The queue set to be buffered is determined according to the queue to be buffered with the priority greater than or equal to the priority of the data packet to be buffered. That is to say, the target buffer queue with the same priority as the data packet to be buffered is the queue to be buffered with the lowest priority in the set of queues to be buffered.
Therefore, step S303 can also be realized by:
step S3031, for each queue to be buffered in the queue set to be buffered, if the determination result is allowable, determining the queue to be buffered having the lowest priority from the queue set to be buffered as the target buffer queue.
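Step S3031 can be sketched directly. As an illustrative assumption (not from the patent), a larger numeric value again means a higher priority, so the lowest-priority queue in the set is the one with the smallest priority value, which by construction of the set equals the packet's own priority.

```python
# Hypothetical sketch of step S3031: pick the lowest-priority queue in the
# candidate set as the target buffer queue.
def pick_target_queue(candidates):
    return min(candidates, key=lambda q: q["priority"])

candidates = [{"id": 0, "priority": 3}, {"id": 1, "priority": 2}]
target = pick_target_queue(candidates)  # queue 1, the lowest priority present
```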
In other embodiments, the method further comprises the steps of:
step S310, if there is at least one queue to be buffered in the queue set to be buffered whose number of buffered data packets is greater than the number threshold of the corresponding queue to be buffered, discarding the data packet; and/or, if there is at least one queue to be buffered in the queue set to be buffered whose memory space of buffered data packets is greater than the memory space threshold of the corresponding queue to be buffered, discarding the data packet.
Here, the case of determining to discard the data packet to be buffered includes the following three cases:
First, it is determined that there is at least one queue to be buffered in the queue set to be buffered whose number of buffered data packets is greater than the number threshold of the corresponding queue to be buffered.
Second, it is determined that there is at least one queue to be buffered in the queue set to be buffered whose memory space of buffered data packets is greater than the memory space threshold of the corresponding queue to be buffered.
Third, it is determined that both of the above hold: at least one queue to be buffered in the set has a number of buffered data packets greater than the number threshold of the corresponding queue, and at least one queue to be buffered in the set has a memory space of buffered data packets greater than the memory space threshold of the corresponding queue.
Step S304, buffering the data packet to be buffered into the target buffer queue.
Here, when it is determined that the data packet to be buffered is allowed to be buffered in a target buffer queue in the set of queues to be buffered, the data packet to be buffered is buffered in the target buffer queue, so as to ensure that the data packet to be buffered can be effectively transmitted.
According to the data processing method provided by the embodiment of the application, for each queue to be cached in the queue set to be cached, whether the data packet to be cached is allowed into the corresponding queue to be cached is determined according to the attribute information of that queue and the preset threshold corresponding to the attribute information. In this way, admission into a queue is decided by comparing each queue's attribute information against its preset threshold, and the target cache queue is determined and the data packet cached only if admission is allowed. This achieves the purpose of caching by priority and solves the problems of idle cache and occupied high-priority cache in queue cache allocation.
Fig. 4 is a schematic flow chart of an implementation of a data processing method according to a fourth embodiment of the present application, and as shown in fig. 4, the method includes:
step S401, the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal.
Step S402, according to the attribute information of each queue to be cached in the queue set to be cached and the priority of the data packet to be cached, determining a target cache queue from the queue set to be cached.
Step S403, buffer the data packet to be buffered into the target buffer queue.
It should be noted that steps S401 to S403 are the same as steps S101 to S103, and their description is omitted in this embodiment.
Step S404, when the buffered data packet flows out of the queue to be buffered, determining that the priority of the queue to be buffered, in which the buffered data packet is buffered, is a buffer priority.
Here, when a buffered data packet flows out of a queue to be buffered, it may flow out of any queue to be buffered in the system; that is, the queue may or may not belong to the queue set to be buffered. As long as a buffered data packet flows out of any queue to be buffered in the current data processing system, the operation of step S404 is executed. That is, the priority of the queue to be buffered from which the buffered data packet flowed out is determined, and this priority is taken as the buffering priority.
Step S405, determining the attribute information of the queue to be cached with the priority greater than the caching priority as target attribute information.
Here, first, the priority of all queues to be buffered in the system is compared with the size of the buffering priority; and then, determining the attribute information of the queue to be cached with the priority greater than the caching priority as target attribute information.
Step S406, updating the target attribute information.
In this embodiment, updating the attribute information of the queue to be cached whose priority is greater than the cache priority includes the following three cases:
Case one: when the cached data packets flowing out of the queue to be cached are counted on the basis of number, the number of cached data packets in the queue to be cached is updated.
Case two: when the cached data packets flowing out of the queue to be cached are counted on the basis of memory space, the memory space of the cached data packets in the queue to be cached is updated.
Case three: when the cached data packets flowing out of the queue to be cached are counted on the basis of both number and memory space, both the number and the memory space of the cached data packets in the queue to be cached are updated.
It should be noted that, in this embodiment, when there is a buffered packet flowing out of the queue to be buffered, attribute information of all queues to be buffered, which have higher priority than the flowing out queue to be buffered, is updated.
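The update rule of steps S404 to S406 can be sketched as follows. The field names and the convention that a larger value means a higher priority are illustrative assumptions; following the text here, only queues with priority strictly greater than the caching priority are refreshed (the concrete counter scheme later in the document also decrements the outflow queue's own counter).

```python
# Hypothetical sketch of updating target attribute information when a
# buffered packet flows out of a queue with priority `cache_priority`.
def update_on_outflow(queues, cache_priority, pkt_bytes,
                      by_count=True, by_memory=True):
    for q in queues:
        if q["priority"] > cache_priority:   # target attribute information
            if by_count:                     # case one / case three
                q["count"] -= 1
            if by_memory:                    # case two / case three
                q["mem_bytes"] -= pkt_bytes

queues = [
    {"priority": 3, "count": 5, "mem_bytes": 640},
    {"priority": 2, "count": 4, "mem_bytes": 512},
    {"priority": 1, "count": 2, "mem_bytes": 128},
]
update_on_outflow(queues, cache_priority=1, pkt_bytes=64)
# the two higher-priority queues are refreshed; the priority-1 queue is not
```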
In the data processing method provided in the embodiment of the present application, when the cached data packet flows out of the queue to be cached, the priority of the queue to be cached, in which the cached data packet is cached, is determined as the caching priority, the attribute information of the queue to be cached, in which the priority is greater than the caching priority, is determined as the target attribute information, and the target attribute information is updated. Therefore, when a data packet flows out of the queue to be cached, the attribute information of the queue to be cached can be timely updated, and the subsequent data packet to be cached entering the queue can be effectively cached.
Based on the above embodiments, the embodiments of the present application further provide a data processing method, in which priority-based preemption is used to reserve cache for high-priority queues while giving low-priority queues a larger chance of occupying the cache, solving the problems in conventional queue cache allocation of cache idleness under the mutual-exclusion configuration manner and occupation of the high-priority cache under the preemption configuration manner. By this method, the internal cache of the device can be utilized more effectively, achieving the effect of serving by priority.
The method of this example is explained in detail below.
In this embodiment, the n queues (corresponding to the queues to be buffered) having priority relationships are numbered 0 to n-1, where the highest-priority queue is numbered 0, the next-highest is numbered 1, the one after that is numbered 2, and so on, with the lowest-priority queue numbered n-1.
A counter cnt_x (x = 0 to n-1) is set for each queue to record the buffer depth occupied by that queue (corresponding to the attribute information). The counter may use any of various buffer measurement units, such as bits, bytes, number of fragments, or number of packets.
A discard threshold T_x (x = 0 to n-1) is set for each queue; the setting of the discard threshold (corresponding to the preset threshold) follows the relationship T_0 > T_1 > … > T_{n-1}. That is, the discard threshold of a high-priority queue must be greater than the discard threshold of a low-priority queue.
When a message (corresponding to the data packet to be cached) arrives, if the priority of the message is i, all counters and discard thresholds with x ≤ i (x = 0 to n-1) are checked; if cnt_x < T_x holds for every such x, the message may enter the cache; otherwise, the message is discarded.
It should be noted that the less-than sign in the above expression may be changed to a less-than-or-equal sign; that is, in other embodiments, cnt_x < T_x may also be cnt_x ≤ T_x.
If the judgment result indicates that the message can enter the queue buffer, all counters with x ≤ i (x = 0 to n-1) are increased correspondingly according to the counter unit and the message length. For example, if the counter is in bits, it is increased by the number of bits of the message; if in bytes, by the number of bytes of the message.
When a certain message (assume its priority is j) is read out of the cache and the cache is released, all counters with x ≤ j (x = 0 to n-1) are decreased correspondingly according to the counter unit and the message length, following the same rule as entering the cache.
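The counting scheme above can be sketched compactly in packet-count units. The class name is illustrative; queue 0 is the highest priority, cnt_x tracks occupied depth, and the thresholds satisfy T_0 > T_1 > … > T_{n-1}.

```python
# Sketch of the priority-preemption buffer scheme, in packet-count units.
class PriorityBuffer:
    def __init__(self, thresholds):
        self.T = list(thresholds)      # discard thresholds, T_0 > ... > T_{n-1}
        self.cnt = [0] * len(self.T)   # occupied depth per priority

    def enqueue(self, i):
        """Admit a priority-i message only if cnt_x < T_x for every x <= i."""
        if all(self.cnt[x] < self.T[x] for x in range(i + 1)):
            for x in range(i + 1):
                self.cnt[x] += 1
            return True
        return False                   # message is discarded

    def dequeue(self, j):
        """Release a priority-j message: decrease cnt_x for every x <= j."""
        for x in range(j + 1):
            self.cnt[x] -= 1

buf = PriorityBuffer([4, 2, 1])        # n = 3 queues, illustrative thresholds
buf.enqueue(2)                         # accepted; cnt becomes [1, 1, 1]
dropped = not buf.enqueue(2)           # cnt_2 == T_2, so this one is dropped
buf.enqueue(0)                         # high priority still admitted
```

Note how a high-priority arrival consumes only the counters x ≤ i, so low-priority congestion never blocks it.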
The above scheme enables a high-priority message to occupy the cache space of low-priority messages, while a low-priority message cannot occupy the cache space of high-priority messages. The cache space that each priority can occupy under this scheme is shown in the schematic diagram of fig. 5.
As shown in fig. 5, a message of each priority can occupy at most T_x (x = 0 to n-1) of cache space, and any priority queue can occupy more cache space when traffic of some other priorities is absent. Meanwhile, when lower-priority queues are congested, the high-priority queue still has space available to it, and the situation where high-priority messages are lost because low-priority queues have exhausted the buffer does not occur.
Fig. 6 is a schematic diagram of a composition structure of a data processing apparatus according to an embodiment of the present application, and as shown in fig. 6, the data processing apparatus includes:
a queue depth counter module 601, configured to maintain a corresponding buffer occupancy depth counter for each queue priority.
An enqueue determining module 602, configured to, when a message of priority i arrives, check all counters and discard thresholds with x ≤ i (x = 0 to n-1); if cnt_x < T_x holds for every such x, the message may enter the cache; otherwise, the message is discarded. According to design requirements, the less-than sign in the above expression may be changed to a less-than-or-equal sign.
And an enqueue depth calculating module 603, configured to correspondingly increase all counters with x being less than or equal to i (x being 0 to n-1) according to a counter unit and a packet length if the packet is determined to enter the cache.
The dequeue depth calculating module 604 is configured to, if the packet with the priority j needs to read out the cache, correspondingly decrease all counters with x being equal to or less than j (x being 0 to n-1) according to the counter unit and the packet length.
An embodiment of the present application provides a data processing method, which may be operated by using a queue polling device implemented by a sliding window, fig. 7 is a schematic diagram of an implementation flow of the data processing method provided in the embodiment of the present application, and as shown in fig. 7, the method includes the following steps:
step S701, when the system is powered on, initializing each queue depth counter to 0.
Step S702, detecting that a message enters.
Step S703, when the packet enters the system, determining whether all counters and discard thresholds with x being less than or equal to i (x being 0 to n-1) satisfy the expression cnt according to the priority i of the packetx<Tx(x=0~n-1,x≤i)。
The smaller than the number in the above expression may be changed to be smaller than or equal to the number according to design requirements.
Step S704, if the judgment result of step S703 is yes, the message is allowed to enqueue; if the message is enqueued, all counters with x ≤ i (x = 0 to n-1) are correspondingly increased according to the counter unit and the message length.
Step S705, if the judgment result of step S703 is no, the message is discarded.
Fig. 8 is a schematic flow chart illustrating an implementation process of a packet reading process in the data processing method according to the embodiment of the present application, and as shown in fig. 8, the method includes the following steps:
step S801, it is checked that there is a message read.
Step S802, if a message of priority j is dequeued, all counters with x ≤ j (x = 0 to n-1) are correspondingly decreased according to the counter unit and the message length.
The implementation of the technical solution of the embodiment of the present application is described in further detail below:
Assume there are n priority queues; take n = 5 as an example for detailed description. From high to low, the queue priorities are Q_0, Q_1, Q_2, Q_3, Q_4. Five cache depth counters cnt_0, cnt_1, cnt_2, cnt_3, cnt_4 are provided for the five priorities. Taking the number of packets as the unit of the depth counter, the discard thresholds of the five priorities are T_0, T_1, T_2, T_3, T_4, satisfying T_0 > T_1 > T_2 > T_3 > T_4.
When a message with priority 4 enters the system, the following expressions are compared:
cnt_0 < T_0; cnt_1 < T_1; cnt_2 < T_2; cnt_3 < T_3; cnt_4 < T_4.
When all the above conditions are satisfied, the message is judged to be enqueued, and cnt_0, cnt_1, cnt_2, cnt_3, cnt_4 are each increased by one; otherwise, the message is discarded and the counters are unchanged.
When a message with priority 3 enters the system, the following expressions are compared:
cnt_0 < T_0; cnt_1 < T_1; cnt_2 < T_2; cnt_3 < T_3.
When all the above conditions are satisfied, the message is judged to be enqueued, and cnt_0, cnt_1, cnt_2, cnt_3 are each increased by one; otherwise, the message is discarded and the counters are unchanged.
When a message with priority 2 enters the system, the following expressions are compared:
cnt_0 < T_0; cnt_1 < T_1; cnt_2 < T_2.
When all the above conditions are satisfied, the message is judged to be enqueued, and cnt_0, cnt_1, cnt_2 are each increased by one; otherwise, the message is discarded and the counters are unchanged.
When a message with priority 1 enters the system, the following expressions are compared:
cnt_0 < T_0; cnt_1 < T_1.
When all the above conditions are satisfied, the message is judged to be enqueued, and cnt_0, cnt_1 are each increased by one; otherwise, the message is discarded and the counters are unchanged.
When a message with priority 0 enters the system, the following expression is compared: cnt_0 < T_0.
When the above condition is satisfied, the message is judged to be enqueued and cnt_0 is increased by one; otherwise, the message is discarded and the counter is unchanged.
When messages in the system need to be dequeued:
when the message with the priority of 4 needs to be dequeued, cnt0、cnt1、cnt2、cnt3、cnt4Each reduced by one.
When the message with the priority of 3 needs to be dequeued, cnt0、cnt1、cnt2、cnt3Each reduced by one.
When the message with the priority of 2 needs to be dequeued, cnt0、cnt1、cnt2Each reduced by one.
When the message with the priority level of 1 needs to be dequeued, cnt0、cnt1Each reduced by one.
When the message with the priority of 0 needs to be dequeued, cnt0And subtracting one.
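The five-priority trace above can be reproduced in a short sketch. The threshold values are illustrative (chosen so that T_4 = 1, making a second priority-4 arrival drop immediately); the rest follows the enqueue/dequeue rules described above, in packet-count units.

```python
# Trace of the n = 5 example in packet-count units, illustrative thresholds.
T = [5, 4, 3, 2, 1]        # T_0 > T_1 > T_2 > T_3 > T_4
cnt = [0, 0, 0, 0, 0]      # cnt_0 .. cnt_4

def enqueue(i):
    """Admit a priority-i message only if cnt_x < T_x for every x <= i."""
    if all(cnt[x] < T[x] for x in range(i + 1)):
        for x in range(i + 1):
            cnt[x] += 1
        return True
    return False

def dequeue(j):
    """Release a priority-j message: decrease cnt_x for every x <= j."""
    for x in range(j + 1):
        cnt[x] -= 1

enqueue(4)             # accepted: cnt becomes [1, 1, 1, 1, 1]
second = enqueue(4)    # cnt_4 == T_4, so the second priority-4 packet drops
enqueue(0)             # priority 0 only needs cnt_0 < T_0, still admitted
dequeue(4)             # releasing the priority-4 packet decrements all five
```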
According to the data processing method provided by the embodiment of the application, by means of priority preemption, cache is reserved for the high-priority queues while a larger cache occupation opportunity is provided for the low priorities, solving the problems in conventional queue cache allocation of cache idleness under the exclusive configuration manner and occupation of the high-priority cache under the preemption configuration manner. According to the method of this embodiment, a message of each priority can occupy at most T_x (x = 0 to n-1) of cache space, and any priority queue can occupy more cache space when traffic of some other priorities is absent. Meanwhile, when lower-priority queues are congested, the high-priority queue still has space available to it, and the situation where high-priority messages are lost because low-priority queues have exhausted the buffer does not occur.
Based on the foregoing embodiments, the present application provides a data processing apparatus, where the apparatus includes units and modules included in the units, and may be implemented by a processor in a data processing device; of course, it may also be implemented by logic circuitry; in implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 9 is a schematic diagram of a configuration of a data processing apparatus according to an embodiment of the present application, and as shown in fig. 9, the data processing apparatus 900 includes:
a first determining unit 901, configured to determine a queue set to be cached according to a priority of a data packet to be cached sent by a terminal;
a second determining unit 902, configured to determine a target buffer queue from the set of queues to be buffered according to the attribute information of each queue to be buffered in the set of queues to be buffered and the priority of the packet to be buffered;
a buffering unit 903, configured to buffer the data packet to be buffered into the target buffer queue.
In other embodiments, each of the queues to be buffered has a preset priority; correspondingly, the first determining unit comprises:
the comparison module is used for comparing the priority of each queue to be cached with the priority of the data packet to be cached;
the first determining module is configured to determine a queue to be buffered, of which the priority is greater than or equal to that of the data packet to be buffered, as the queue set to be buffered.
In other embodiments, the second determination unit includes:
a second determining module, configured to determine, for each queue to be cached in the set of queues to be cached, whether to allow the data packet to be cached in the corresponding queue to be cached according to attribute information of each queue to be cached and a preset threshold corresponding to the attribute information;
and a third determining module, configured to determine, for each queue to be buffered in the set of queues to be buffered, a queue to be buffered, which has the same priority as the packet to be buffered, from the set of queues to be buffered as the target buffer queue if the determination result is allowable.
In other embodiments, the attribute information of the queue to be cached includes at least one of: the number of cached data packets in the queue to be cached and the memory space of the cached data packets; correspondingly, the preset threshold corresponding to the attribute information includes at least one of the following: a quantity threshold and a memory space threshold;
correspondingly, the second determining module comprises:
a first control module, configured to, for each queue to be cached in the set of queues to be cached, allow the packet to be cached to a corresponding queue to be cached if the number of cached packets in each queue to be cached is less than or equal to the number threshold of the corresponding queue to be cached; and/or the second control module is configured to, for each queue to be cached in the set of queues to be cached, allow the packet to be cached in the corresponding queue to be cached if the memory space of the cached packet in each queue to be cached is less than or equal to the memory space threshold of the corresponding queue to be cached;
in other embodiments, the apparatus further comprises:
a third determining unit, configured to determine, when the buffered data packet flows out of the queue to be buffered, that a priority of the queue to be buffered, in which the buffered data packet is buffered, is a buffering priority;
a fourth determining unit, configured to determine, as target attribute information, attribute information of a queue to be cached whose priority is greater than the cache priority;
and the updating unit is used for updating the target attribute information.
In other embodiments, the apparatus further comprises:
a first discarding unit, configured to discard the data packet to be buffered if the number of buffered data packets of at least one queue to be buffered in the queue set to be buffered is greater than the number threshold of the corresponding queue to be buffered; and/or,
a second discarding unit, configured to discard the to-be-buffered data packet if a memory space of the buffered data packet of at least one to-be-buffered queue is greater than the memory space threshold of the corresponding to-be-buffered queue in the to-be-buffered queue set.
It should be noted that, in the embodiment of the present application, if the data processing method is implemented in the form of a software functional module and sold or used as a standalone product, the data processing method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a terminal to execute all or part of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides a data processing apparatus, fig. 10 is a schematic diagram of a composition structure of the data processing apparatus provided in the embodiment of the present application, and as shown in fig. 10, the data processing apparatus 1000 at least includes: a processor 1001, a communication interface 1002, and a storage medium 1003 configured to store executable instructions, wherein:
the processor 1001 generally controls the overall operation of the data processing device 1000.
The communication interface 1002 may enable the data processing apparatus to communicate with other terminals or servers via a network.
The storage medium 1003 is configured to store instructions and applications executable by the processor 1001, and may also cache data to be processed or already processed by each module in the data processing apparatus 1000 and the processor 1001, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be carried out by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk. Alternatively, if the integrated unit of the present invention is implemented in the form of a software functional module and sold or used as a separate product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a terminal to execute all or part of the methods of the embodiments of the present invention. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic or optical disk, or other media that can store program code.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. A method of data processing, the method comprising:
the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal;
determining a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached;
and caching the data packet to be cached into the target cache queue.
2. The method according to claim 1, wherein each of the queues to be buffered has a preset priority; correspondingly, the server determines a queue set to be cached according to the priority of the data packet to be cached sent by the terminal, and the method comprises the following steps:
the server compares the priority of each queue to be cached with the priority of the data packet to be cached;
and determining the queue to be cached with the priority greater than or equal to that of the data packet to be cached as the queue set to be cached.
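Claims 1 and 2 above can be sketched as a simple priority filter. The following is an illustrative sketch only, not the patent's implementation; the dictionary representation of a queue and the convention that a larger number means a higher priority are assumptions:

```python
def candidate_queue_set(queues, packet_priority):
    """Claim 2: keep every queue to be buffered whose preset priority is
    greater than or equal to the priority of the to-be-buffered packet."""
    return [q for q in queues if q["priority"] >= packet_priority]
```

Under this convention, a packet of priority 1 may be considered for the priority-1 and priority-2 queues, but never for the priority-0 queue.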
3. The method according to claim 1, wherein the determining a target buffer queue from the set of queues to be buffered according to the attribute information of each queue to be buffered in the set of queues to be buffered and the priority of the packet to be buffered comprises:
for each queue to be cached in the queue set to be cached, determining whether to allow the data packet to be cached in the corresponding queue to be cached according to the attribute information of each queue to be cached and a preset threshold corresponding to the attribute information;
and for each queue to be cached in the queue set to be cached, if the determination result is allowable, determining the queue to be cached with the same priority as the data packet to be cached from the queue set to be cached as the target cache queue.
4. The method according to claim 3, wherein the attribute information of the queue to be buffered comprises at least one of the following: the number of buffered data packets in the queue and the memory space occupied by the buffered data packets; and correspondingly, the preset threshold corresponding to the attribute information comprises at least one of the following: a quantity threshold and a memory space threshold;
correspondingly, the determining, for each queue to be cached in the set of queues to be cached, whether to allow the data packet to be cached in the corresponding queue to be cached according to the attribute information of each queue to be cached and the preset threshold corresponding to the attribute information includes:
for each queue to be cached in the set of queues to be cached, if the number of the cached data packets in each queue to be cached is less than or equal to the number threshold of the corresponding queue to be cached, allowing the data packets to be cached in the corresponding queue to be cached;
and/or,
and for each queue to be cached in the queue set to be cached, if the memory space of the cached data packet in each queue to be cached is less than or equal to the memory space threshold of the corresponding queue to be cached, allowing the data packet to be cached in the corresponding queue to be cached.
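Claims 3 and 4 can be sketched as an admission check followed by a priority match. This is a hedged illustration; the field names (`packets`, `memory`, `count_threshold`, `mem_threshold`) are hypothetical, not taken from the patent:

```python
def select_target_queue(candidates, packet_priority):
    """Claims 3-4: a candidate admits the packet only while its packet
    count and occupied memory are within that queue's own thresholds;
    the target is the admitted candidate whose priority equals the
    packet's priority (None if there is no such queue)."""
    for q in candidates:
        admitted = (len(q["packets"]) <= q["count_threshold"]
                    and q["memory"] <= q["mem_threshold"])
        if admitted and q["priority"] == packet_priority:
            return q
    return None
```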
5. The method of claim 4, further comprising:
when a buffered data packet flows out of a queue to be buffered, determining the priority of the queue that buffered the data packet as a caching priority;
determining the attribute information of the queue to be cached with the priority greater than the caching priority as target attribute information;
and updating the target attribute information.
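Claim 5 leaves open what "updating" the target attribute information means concretely. One literal reading, sketched below with hypothetical names, is that the attribute information is a tracked packet count that gets re-synchronized for every queue whose priority is strictly higher than the caching priority:

```python
def on_packet_outflow(queues, source_queue):
    """Claim 5 (one possible reading): the priority of the queue the
    packet flowed out of becomes the 'caching priority'; the attribute
    information of every queue with a strictly higher priority is the
    'target attribute information' and is refreshed here."""
    caching_priority = source_queue["priority"]
    updated = []
    for q in queues:
        if q["priority"] > caching_priority:
            q["count"] = len(q["packets"])   # re-sync the tracked count
            updated.append(q)
    return updated
```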
6. The method of claim 5, further comprising:
if the number of the cached data packets of at least one queue to be cached is larger than the number threshold of the corresponding queue to be cached in the queue set to be cached, discarding the data packets to be cached;
and/or,
if the memory space of the cached data packet of at least one queue to be cached is larger than the memory space threshold value of the corresponding queue to be cached in the queue set to be cached, discarding the data packet to be cached.
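Read literally, claim 6 discards the incoming packet if even a single queue in the candidate set is over one of its thresholds. A minimal sketch of that drop rule, with the same hypothetical field names as above:

```python
def should_drop(candidates):
    """Claim 6: drop the to-be-buffered packet if at least one candidate
    queue already exceeds its count threshold or its memory threshold."""
    return any(len(q["packets"]) > q["count_threshold"]
               or q["memory"] > q["mem_threshold"]
               for q in candidates)
```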
7. A data processing apparatus, characterized in that the apparatus comprises:
the first determining unit is used for determining a queue set to be cached according to the priority of a data packet to be cached sent by a terminal;
a second determining unit, configured to determine a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the packet to be cached;
and the buffer unit is used for buffering the data packet to be buffered into the target buffer queue.
8. The apparatus according to claim 7, wherein each of the queues to be buffered has a preset priority; correspondingly, the first determining unit comprises:
the comparison module is used for comparing the priority of each queue to be cached with the priority of the data packet to be cached;
the first determining module is configured to determine a queue to be buffered, of which the priority is greater than or equal to that of the data packet to be buffered, as the queue set to be buffered.
9. A data processing device, characterized in that it comprises at least: a processor and a storage medium configured to store executable instructions, wherein: the processor is configured to execute stored executable instructions;
the executable instructions are configured to perform the data processing method of any one of claims 1 to 6.
10. A storage medium having computer-executable instructions stored therein, the computer-executable instructions being configured to perform the data processing method of any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811585237.5A CN111355673A (en) | 2018-12-24 | 2018-12-24 | Data processing method, device, equipment and storage medium |
PCT/CN2019/112792 WO2020134425A1 (en) | 2018-12-24 | 2019-10-23 | Data processing method, apparatus, and device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811585237.5A CN111355673A (en) | 2018-12-24 | 2018-12-24 | Data processing method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111355673A true CN111355673A (en) | 2020-06-30 |
Family
ID=71126853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811585237.5A Pending CN111355673A (en) | 2018-12-24 | 2018-12-24 | Data processing method, device, equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111355673A (en) |
WO (1) | WO2020134425A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114567674B (en) * | 2022-02-25 | 2024-03-15 | 腾讯科技(深圳)有限公司 | Data processing method, device, computer equipment and readable storage medium |
CN115080468B (en) * | 2022-05-12 | 2024-06-14 | 珠海全志科技股份有限公司 | Non-blocking information transmission method and device |
CN115396384B (en) * | 2022-07-28 | 2023-11-28 | 广东技术师范大学 | Data packet scheduling method, system and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2314444A1 (en) * | 1999-08-02 | 2001-02-02 | At&T Corp. | Apparatus and method for providing a high-priority service for emergency messages on a network |
CN102594691A (en) * | 2012-02-23 | 2012-07-18 | 中兴通讯股份有限公司 | Method and device for processing message |
CN104199790A (en) * | 2014-08-21 | 2014-12-10 | 北京奇艺世纪科技有限公司 | Data processing method and device |
CN105763481A (en) * | 2014-12-19 | 2016-07-13 | 北大方正集团有限公司 | Information caching method and device |
WO2017000657A1 (en) * | 2015-06-30 | 2017-01-05 | 深圳市中兴微电子技术有限公司 | Cache management method and device, and computer storage medium |
CN108632169A (en) * | 2017-03-21 | 2018-10-09 | 中兴通讯股份有限公司 | A kind of method for ensuring service quality and field programmable gate array of fragment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521151A (en) * | 2011-11-28 | 2012-06-27 | 华为技术有限公司 | Data caching method and device |
CN104079502B (en) * | 2014-06-27 | 2017-05-10 | 国家计算机网络与信息安全管理中心 | Multi-user multi-queue scheduling method |
CN107450971B (en) * | 2017-06-29 | 2021-01-29 | 北京五八信息技术有限公司 | Task processing method and device |
- 2018-12-24: CN patent application CN201811585237.5A (CN111355673A), status: Pending
- 2019-10-23: WO application PCT/CN2019/112792 (WO2020134425A1), status: Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112202681A (en) * | 2020-09-18 | 2021-01-08 | 京信通信系统(中国)有限公司 | Data congestion processing method and device, computer equipment and storage medium |
CN112202681B (en) * | 2020-09-18 | 2022-07-29 | 京信网络系统股份有限公司 | Data congestion processing method and device, computer equipment and storage medium |
CN115209166A (en) * | 2021-04-12 | 2022-10-18 | 北京字节跳动网络技术有限公司 | Message sending method, device, equipment and storage medium |
CN113315720A (en) * | 2021-04-23 | 2021-08-27 | 深圳震有科技股份有限公司 | Data flow control method, system and equipment |
CN114979023A (en) * | 2022-07-26 | 2022-08-30 | 浙江大华技术股份有限公司 | Data transmission method, system, electronic equipment and storage medium |
CN118277289A (en) * | 2024-06-03 | 2024-07-02 | 浙江力积存储科技有限公司 | Data output method, device, equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
WO2020134425A1 (en) | 2020-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111355673A (en) | Data processing method, device, equipment and storage medium | |
KR100323258B1 (en) | Rate guarantees through buffer management | |
CA2575869C (en) | Hierarchal scheduler with multiple scheduling lanes | |
US20070070895A1 (en) | Scaleable channel scheduler system and method | |
US6795870B1 (en) | Method and system for network processor scheduler | |
US20030202517A1 (en) | Apparatus for controlling packet output | |
JP4447521B2 (en) | Packet scheduler and packet scheduling method | |
WO2002062013A2 (en) | Methods and systems providing fair queuing and priority scheduling to enhance quality of service in a network | |
EP4175232A1 (en) | Congestion control method and device | |
CN105162724A (en) | Data enqueue and dequeue method an queue management unit | |
EP1932284A2 (en) | Mechanism for managing access to resources in a heterogeneous data redirection device | |
US20120294315A1 (en) | Packet buffer comprising a data section and a data description section | |
EP3461085B1 (en) | Method and device for queue management | |
EP3823228A1 (en) | Message processing method and apparatus, communication device, and switching circuit | |
EP3907944A1 (en) | Congestion control measures in multi-host network adapter | |
JP2004242333A (en) | System, method, and logic for managing memory resources shared in high-speed exchange environment | |
CN110830388A (en) | Data scheduling method, device, network equipment and computer storage medium | |
US8879578B2 (en) | Reducing store and forward delay in distributed systems | |
JP4408376B2 (en) | System, method and logic for queuing packets to be written to memory for exchange | |
CN116889024A (en) | Data stream transmission method, device and network equipment | |
CN111092825B (en) | Method and device for transmitting message | |
EP1235393B1 (en) | Packet switch | |
JP4852138B2 (en) | System, method and logic for multicasting in fast exchange environment | |
CN111277513B (en) | PQ queue capacity expansion realization method, device, equipment and storage medium | |
CN110661724B (en) | Method and equipment for allocating cache |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2020-06-30