WO2020134425A1 - Data processing method, apparatus, and device, and storage medium - Google Patents

Data processing method, apparatus, and device, and storage medium

Info

Publication number
WO2020134425A1
Authority
WO
WIPO (PCT)
Prior art keywords
cached
queue
cache
priority
data packet
Prior art date
Application number
PCT/CN2019/112792
Other languages
French (fr)
Chinese (zh)
Inventor
肖洁 (Xiao Jie)
廖庆磊 (Liao Qinglei)
谢小龙 (Xie Xiaolong)
钱晓东 (Qian Xiaodong)
Original Assignee
深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Publication of WO2020134425A1 publication Critical patent/WO2020134425A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements

Definitions

  • the embodiments of the present application relate to the field of Internet technologies, and in particular, to a data processing method, apparatus, device, and storage medium.
  • a buffer space is provided for each service priority.
  • when a packet arrives, the service level to which the packet belongs is first resolved by parsing the characteristic field of the packet, the packet is mapped to the corresponding service priority, and the queue cache of the priority to which the packet belongs is then queried. If the cache occupancy of the corresponding priority is too large, the packet is discarded; otherwise, the packet is written into the corresponding priority queue buffer.
  • the method of allocating a cache independently for each queue is generally adopted, and cache allocation is performed according to the relationship between the total depth of the caches allocated to all queues and the actual total amount of cache.
  • Embodiments of the present application provide a data processing method, apparatus, device, and storage medium.
  • an embodiment of the present application provides a data processing method.
  • the method includes: the server determines a set of queues to be cached according to a priority of a data packet to be cached sent by a terminal; determines a target cache queue from the set of queues to be cached according to attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.
  • an embodiment of the present application provides a data processing apparatus, the apparatus including:
  • the first determining unit is configured to determine the set of queues to be cached according to the priority of the received data packets to be cached;
  • the second determining unit is configured to determine the target cache queue from the queue-to-be-cached set based on the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached;
  • the cache unit is configured to cache the data packet to be cached into the target cache queue.
  • an embodiment of the present application provides a data processing device, where the device at least includes a processor and a storage medium configured to store executable instructions, the processor is configured to execute the stored executable instructions, and the executable instructions are configured to perform the above data processing method.
  • an embodiment of the present application provides a storage medium in which computer-executable instructions are stored, and the computer-executable instructions are configured to execute the foregoing data processing method.
  • FIG. 1 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 3 of the present application;
  • FIG. 4 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 4 of the present application;
  • FIG. 5 is a schematic diagram of the relationship of the cache space that can be occupied by each priority according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a data processing device provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of an implementation process of a data processing method provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of an implementation process of a message reading process in the data processing method provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a composition of a data processing device provided by an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • the suffix “module”, “part”, or “unit” used to represent an element is only for the convenience of describing the present application and has no specific meaning in itself; therefore, “module”, “component”, and “unit” can be used interchangeably.
  • a buffer space is provided for each service priority.
  • when a message arrives, the service level to which the message belongs is first resolved by parsing the characteristic field of the message, the message is mapped to the corresponding service priority, and the cache of the queue to which the message belongs is then queried. If the buffer occupancy of the corresponding priority queue is too large, the message is discarded; otherwise, the message is written into the corresponding priority queue buffer.
  • when the output port is idle, the highest-priority queue that holds packets is selected according to the service level of each priority queue, and that queue is served first.
  • when the enqueue bandwidth of the high-priority queue is less than the port bandwidth, the high-priority queue is served most promptly, ensuring the low-latency requirement of the high-priority queue.
  • the cache for each queue is generally allocated independently, that is, the sum of the maximum caches that all priority queues can occupy does not exceed the system cache maximum. In this way, when a low-priority queue cannot obtain output bandwidth and its buffer is occupied, the enqueuing and dequeuing of packets in other priority queues are not affected.
  • a router sets up four priority queues and sets a maximum cache depth T for each queue, with T*4 ≤ the total cache AT. If only packets of two lower priorities enter the router in a certain period of time, half of the cache will sit idle and be wasted. Since the input traffic may change at any time, this problem cannot be solved by adjusting the preset maximum cache depth.
  • a router sets up four priority queues and sets, for each queue, a maximum cache depth T with T < the total cache AT and T*4 > the total cache AT. Then, when only packets of two lower priorities enter the router in a certain period of time, the wasted buffer will be less than half the depth. However, if 2T > AT and the input bandwidth is greater than the output bandwidth, the total buffer will be exhausted by these two lower-priority flows. If high-priority traffic resumes at this time, the high-priority packets cannot occupy any cache, causing high-priority packets to be discarded.
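The two related-art configurations can be sketched numerically. The figures below (total cache, per-queue depth, number of active flows) are hypothetical values chosen only to reproduce the described behavior; they do not come from the application.

```python
# Hypothetical figures illustrating the two related-art configurations.
AT = 400                          # total cache
NUM_QUEUES = 4                    # four priority queues

# Mutually exclusive configuration: per-queue depth T with T * 4 <= AT.
T_excl = AT // NUM_QUEUES         # 100 per queue
# Only two lower-priority flows are active: two queues' worth of cache
# sits idle, i.e. half of the total cache is wasted.
idle = (NUM_QUEUES - 2) * T_excl

# Preemptive configuration: T < AT but T * 4 > AT, e.g. T = 250, so 2T > AT.
T_pre = 250
# Two lower-priority flows can jointly claim the entire cache, leaving
# nothing for high-priority traffic when it resumes.
claimable = min(2 * T_pre, AT)

print(idle, claimable)   # → 200 400
```

With `idle` equal to half of `AT` and `claimable` equal to all of `AT`, the sketch reproduces both failure modes: idle cache under mutual exclusion, and an exhausted cache under preemption.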
  • the mutual exclusion configuration method in the related art has the problem of idle cache, and the preemption configuration method has the problem of high priority cache being occupied.
  • embodiments of the present application provide a data processing method that, through preemption by priority, reserves cache for high-priority queues while giving low-priority queues greater opportunity to occupy cache, thereby avoiding both the idle-cache problem of the mutually exclusive configuration and the occupied high-priority-cache problem of the preemptive configuration in traditional queue cache allocation.
  • the internal cache of the device can be used more effectively, and the effect of prioritized service can be achieved.
  • FIG. 1 is a schematic flowchart of an implementation of a data processing method provided in Embodiment 1 of the present application. As shown in FIG. 1, the method includes steps S101-S103.
  • Step S101 The server determines the set of queues to be cached according to the priority of the received data packets to be cached.
  • the server may be a network server in a switching network system or a routing system, and the server may receive a data packet to be cached sent by a terminal through a router.
  • the data packet to be buffered may be sent by the terminal in the form of a data packet, or may be sent by the terminal in the form of a message.
  • the server can also receive data packets to be cached sent by other servers.
  • After receiving the data packet to be cached, the server performs feature analysis on the data packet to determine the priority corresponding to its characteristics. For example, the priority of a video call is higher than the priority of web browsing. Assuming that the priority level corresponding to the highest priority is 0, the priority level of a video call can be set to 0. Therefore, when the server receives the data packet of a video call, the priority level obtained after feature analysis is 0.
  • priority level 0 is greater than (i.e. higher than) priority level 1
  • priority level 1 is greater than priority level 2
  • priority level 2 is greater than priority level 3.
  • other levels of arrangement may also be used.
  • the server determines the set of queues to be cached according to the priority of the data packet to be cached.
  • the set of queues to be cached includes at least one queue to be cached, and the queue to be cached is used to cache data packets, for example, the data packets to be cached.
  • the queue to be cached may already have data packets cached, and the queue to be cached may also be an empty queue that does not cache any data packets.
  • the server may divide the cache space of the data processing system into a preset number of queues to be cached in advance, and each queue to be cached is used to cache data packets that meet certain conditions.
  • the cache space can be divided into a preset number of queues to be cached that buffer packets of different priorities; that is, a video call with a priority level of 0 corresponds to one queue to be cached, and web browsing with a priority level of 4 corresponds to another queue to be cached.
  • At least one queue to be cached that satisfies a preset condition may be determined among all the queues to be cached in the cache space to form the set of queues to be cached.
  • the preset condition may be that the priority is greater than or equal to ( ⁇ ) the priority of the data packet to be cached.
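As a minimal sketch of this preset condition (under the convention above that a numerically smaller level means a higher priority, so "priority ≥ the packet's" means "level number ≤ the packet's"; function and variable names are illustrative, not from the application):

```python
def queue_set(packet_level, queue_levels):
    """Return the levels of the queues forming the set of queues to be
    cached: every queue whose priority is greater than or equal to the
    packet's priority, i.e. whose level number is <= the packet's level
    (level 0 is the highest priority)."""
    return sorted(q for q in queue_levels if q <= packet_level)

# A level-2 packet's set contains the level-0, level-1, and level-2 queues;
# the lowest-priority member of the set matches the packet's own priority.
print(queue_set(2, [0, 1, 2, 3]))   # → [0, 1, 2]
```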
  • Step S102 Determine a target cache queue from the queue-to-be-cached set according to the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached.
  • the attribute information of the queue to be cached includes at least one of the following: the number of cached data packets in the queue to be cached, and the memory space occupied by the cached data packets in the queue to be cached.
  • the number of cached data packets can be counted using the number of data packet fragments or the number of data packets as the unit of measurement; the memory space occupied by the cached data packets can be counted in units such as bits or bytes.
  • before determining the target cache queue, it is determined, according to the attribute information of each queue to be cached and the priority of the data packet to be cached, whether the data packet to be cached is allowed to be cached into a certain queue in the set of queues to be cached. If allowed, the data packet to be cached is cached into the target cache queue.
  • the target cache queue is a queue to be cached selected from the set of queues to be cached. In this way, the data packet to be cached is cached into the target cache queue only when the conditions are met, so that occupation of the high-priority cache can be avoided.
  • Step S103 Cache the data packet to be cached into the target cache queue.
  • the data packet to be cached is cached in the target cache queue to ensure that the data packet to be cached can be effectively transmitted.
  • In this way, the server determines the set of queues to be cached according to the priority of the received data packet to be cached; determines the target cache queue from the set according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.
  • FIG. 2 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 2 of the present application. As shown in FIG. 2, the method includes steps S201-S204.
  • step S201 the server compares the priority of each queue to be cached with the priority of the data packet to be cached.
  • each of the queues to be cached has a preset priority.
  • the cache space may be divided into multiple queues to be cached, and each queue to be cached is used to cache different data packets, for example, video call data packets, file transfer data packets, and web browsing data packets.
  • the queue to be cached for buffering video call data packets has a higher priority than the queue to be buffered for buffering network browsing data packets.
  • the priority of each queue to be cached is fixed.
  • the server compares the priority of the data packet to be cached with the priority of all queues to be cached in the system.
  • Step S202 Determine a queue to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.
  • a queue to be cached with a priority greater than or equal to the priority of the data packet to be cached is selected to form the set of queues to be cached; that is, the set contains at least one queue to be cached, and the priority of every queue to be cached in the set is greater than or equal to the priority of the data packet to be cached.
  • Step S203 Determine the target cache queue from the queue-to-be-cached set according to the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached.
  • Before determining the target cache queue, it is determined, according to the attribute information of each queue to be cached and the priority of the data packet to be cached, whether the data packet to be cached is allowed to be cached into a certain cache queue in the set of queues to be cached. If allowed, the data packet to be cached is cached into the target cache queue.
  • the target cache queue is a queue to be cached selected from the set of queues to be cached. In this way, the data packet to be cached is cached into the target cache queue only when the conditions are met, so that occupation of the high-priority cache can be avoided.
  • Step S204 Cache the data packet to be cached into the target cache queue.
  • the data packet to be cached is cached in the target cache queue to ensure that the data packet to be cached can be effectively transmitted.
  • In this way, the server compares the priority of each queue to be cached with the priority of the data packet to be cached; determines the queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached; determines a target cache queue from the set according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.
  • Since the target cache queue is determined according to the attribute information of each queue to be cached and the priority of the data packet to be cached, cache queues can be preempted by priority, thereby avoiding the idle-cache problem and the occupied high-priority-cache problem in queue cache allocation.
  • FIG. 3 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 3 of the present application. As shown in FIG. 3, the method includes steps S301-S304.
  • Step S301 The server determines the set of queues to be cached according to the priority of the data packets to be cached sent by the terminal.
  • the set of queues to be cached has at least one queue to be cached, and each of the queues to be cached has a preset priority.
  • the cache space may be divided into multiple queues to be cached, and each queue to be cached is used to cache different data packets.
  • the priority of the data packet to be cached can be compared with the priorities of all the queues to be cached in the system, and the queues to be cached with a priority greater than or equal to the priority of the data packet to be cached can then be selected to form the set of queues to be cached.
  • Step S302 For each queue to be cached in the set of queues to be cached, determine, according to the attribute information of the queue to be cached and a preset threshold corresponding to the attribute information, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • the attribute information of the queue to be cached includes at least one of the following: the number of cached data packets in the queue to be cached, and the memory space occupied by the cached data packets in the queue to be cached.
  • the preset threshold corresponding to the attribute information includes at least one of the following: a quantity threshold and a memory space threshold.
  • step S302 in this embodiment includes the following three implementation methods:
  • In the first manner, the attribute information of the queue to be cached is the number of cached data packets in the queue to be cached, and the preset threshold corresponding to the attribute information is the quantity threshold.
  • In this manner, the determination includes: for each queue to be cached in the set of queues to be cached, if the number of cached data packets in the queue is less than or equal to the quantity threshold of the corresponding queue, the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • In the second manner, the attribute information of the queue to be cached is the memory space occupied by the cached data packets in the queue to be cached, and the preset threshold corresponding to the attribute information is the memory space threshold.
  • In this manner, the determination includes: for each queue to be cached in the set of queues to be cached, if the memory space occupied by the cached data packets in the queue is less than or equal to the memory space threshold of the corresponding queue, the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • In the third manner, the attribute information of the queue to be cached is the number of cached data packets in the queue to be cached and the memory space occupied by the cached data packets, and the preset thresholds corresponding to the attribute information are the quantity threshold and the memory space threshold.
  • In this manner, the determination includes: for each queue to be cached in the set of queues to be cached, if the number of cached data packets in the queue is less than or equal to the quantity threshold of the corresponding queue, and the memory space occupied by the cached data packets is less than or equal to the memory space threshold of the corresponding queue, the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • the number of cached data packets in the attribute information of the queue to be cached may be counted using the number of data packet fragments or the number of data packets as the unit of measurement; the memory space occupied by the cached data packets may be counted in units such as bits or bytes.
  • the preset threshold corresponding to the attribute information is determined according to the priority of the queue to be cached; that is, each queue to be cached corresponds to its own preset threshold, and the thresholds of different queues are different.
  • if the priority of a queue to be cached is higher, its preset threshold is also greater than the preset thresholds of the lower-priority queues to be cached.
  • the preset threshold corresponding to the Nth queue to be cached may be determined according to the preset priority of the Nth queue to be cached in the set of queues to be cached; wherein, if the preset priority of the Nth queue to be cached is higher than the preset priority of the Mth queue to be cached, the preset threshold corresponding to the Nth queue to be cached is greater than the preset threshold corresponding to the Mth queue to be cached.
  • the attribute information of every queue to be cached in the set of queues to be cached is compared with its corresponding preset threshold. Only when the attribute information of every queue to be cached satisfies, with respect to its corresponding preset threshold, whichever of the above three manners is executed, can the data packet to be cached be allowed to be cached into the corresponding queue to be cached; as long as the attribute information of any queue to be cached does not satisfy the executed manner with respect to its corresponding preset threshold, caching the data packet to be cached into the corresponding queue to be cached is forbidden.
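The admission check can be sketched as follows. This sketch assumes, as the outflow-side update in Embodiment 4 suggests, that each queue's counters are cumulative, covering its own priority and every lower priority; the class and parameter names are illustrative, level 0 denotes the highest priority, and the thresholds follow the nested layout of FIG. 5 (higher priority gets the larger threshold).

```python
class PriorityCache:
    """Illustrative sketch of the per-priority admission check (level 0 = highest)."""

    def __init__(self, count_thresholds, mem_thresholds):
        # Thresholds are indexed by priority level; a higher priority
        # (lower level number) is given the larger threshold.
        self.count_thr = count_thresholds
        self.mem_thr = mem_thresholds
        self.counts = [0] * len(count_thresholds)   # cached packets per level
        self.mem = [0] * len(count_thresholds)      # occupied memory per level

    def try_enqueue(self, level, size):
        """Admit a packet of priority `level` only if every queue whose
        priority is >= the packet's (levels 0..level) is under both its
        quantity threshold and its memory-space threshold."""
        for q in range(level + 1):
            if (self.counts[q] >= self.count_thr[q]
                    or self.mem[q] + size > self.mem_thr[q]):
                return False                 # any violated threshold: discard
        for q in range(level + 1):           # cumulative counters: update the set
            self.counts[q] += 1
            self.mem[q] += size
        return True

# Four queues: low-priority traffic is capped early, high priority keeps headroom.
cache = PriorityCache([4, 3, 2, 1], [400, 300, 200, 100])
assert cache.try_enqueue(3, 50)        # first low-priority packet fits
assert not cache.try_enqueue(3, 60)    # level-3 quantity threshold reached
assert cache.try_enqueue(0, 60)        # highest priority still admitted
```

Because the level-0 thresholds bound the total occupancy, low-priority traffic can borrow idle cache without ever exhausting the space reserved for higher priorities.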
  • Step S303 For each queue to be cached in the set of queues to be cached, if the determination result is that caching is allowed, a queue to be cached with the same priority as the data packet to be cached is determined, from the set of queues to be cached, as the target cache queue.
  • the priority of the target cache queue is the same as the priority of the data packet to be cached.
  • the set of queues to be cached is formed from the queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached. In other words, the target cache queue with the same priority as the data packet to be cached is the queue to be cached with the lowest priority in the set of queues to be cached.
  • step S303 can also be achieved by the following steps:
  • Step S3031 For each queue to be cached in the queue-to-be-cached set, if the determination result is allowed, from the queue-to-be-cached set, the queue to be cached with the lowest priority is determined as the target cache queue.
  • the method further includes the following steps:
  • Step S310 If the number of cached data packets in at least one queue to be cached in the set of queues to be cached is greater than the quantity threshold of the corresponding queue, discard the data packet; and/or, if the memory space occupied by the cached data packets in at least one queue to be cached in the set is greater than the memory space threshold of the corresponding queue, discard the data packet.
  • the cases in which the data packet to be cached is determined to be discarded include the following:
  • the number of cached data packets in at least one queue to be cached is greater than the quantity threshold corresponding to the queue to be cached.
  • the memory space occupied by the cached data packets in at least one queue to be cached is greater than the memory space threshold corresponding to the queue to be cached.
  • Step S304 Cache the data packet to be cached into the target cache queue.
  • the data packet to be cached is cached in the target cache queue to ensure that the data packet to be cached can be effectively transmitted.
  • In the data processing method, for each queue to be cached in the set of queues to be cached, it is determined, according to the attribute information of the queue to be cached and a preset threshold corresponding to the attribute information, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • Because the attribute information of each queue to be cached is judged against the corresponding preset threshold to decide whether the data packet to be cached may enter the queue, and the target cache queue is determined and the data packet cached only when this is permitted, cache queues can be preempted by priority, thereby avoiding the idle-cache problem in queue cache allocation and the problem of the high-priority cache being occupied.
  • FIG. 4 is a schematic flowchart of an implementation of a data processing method provided in Embodiment 4 of the present application. As shown in FIG. 4, the method includes steps S401-S406.
  • Step S401 The server determines the set of queues to be cached according to the priority of the received data packets to be cached.
  • Step S402 Determine the target cache queue from the queue-to-be-cached set according to the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached.
  • Step S403 Cache the data packet to be cached into the target cache queue.
  • Steps S401 to S403 are the same as the above steps S101 to S103, and details are not described in this embodiment.
  • Step S404 When a cached data packet flows out of a cache queue, determine the priority of the cache queue that cached the data packet as the cache priority.
  • the cached data packet may flow out of any cache queue in the system; that is to say, the cache queue may or may not be a queue to be cached in the aforementioned set of queues to be cached. As long as a cached data packet flows out of any cache queue in the current data processing system, step S404 is performed: the priority of the cache queue that cached the outgoing data packet is determined, and that priority is determined as the cache priority.
  • Step S405 Determine the attribute information of the cache queue whose priority is greater than or equal to the cache priority as the target attribute information.
  • the priorities of all cache queues in the system are compared with the cache priority; then, the attribute information of the cache queues whose priority is greater than or equal to the cache priority is determined as the target attribute information.
  • Step S406 Update the target attribute information.
  • updating the attribute information of the cache queue whose priority is greater than or equal to the cache priority includes the following three cases:
  • Case 1 When the cached data packets flowing out of the cache queue are counted by number of packets, the number of cached data packets in the cache queue is updated.
  • Case 2 When the cached data packets flowing out of the cache queue are counted by the size of the memory space, the memory space occupied by the cached data packets in the cache queue is updated.
  • Case 3 When the cached data packets flowing out of the cache queue are counted by both number of packets and the size of the memory space, both the number of cached data packets in the cache queue and the memory space occupied are updated.
  • the attribute information of all cache queues whose priority is higher than or equal to that of the outgoing cache queue is updated.
  • In this way, the priority of the cache queue that cached the outgoing data packet is determined as the cache priority, the attribute information of the cache queues whose priority is greater than or equal to the cache priority is determined as the target attribute information, and the target attribute information is updated. This ensures that when a data packet flows out of a cache queue, the attribute information of the relevant cache queues is updated in time, so that data packets to be cached that subsequently enter the queues can be effectively cached.
  • the data processing method provided by the embodiments of the present application, through priority preemption, reserves cache for high-priority queues while providing greater cache occupation opportunities for low-priority queues, avoiding both the idle cache of the mutually exclusive configuration and the occupied high-priority cache of the preemptive configuration in traditional queue cache allocation. In this way, the internal cache of the device can be used more effectively, and the effect of prioritized service can be achieved.
  • an embodiment of the present application further provides a data processing method, including the following steps one and two.
  • Step 1 Receive the data packet to be cached, and determine the target cache queue corresponding to the priority according to the priority of the data packet to be cached;
  • The processing device implementing the method of this embodiment may be any device that needs to process cache queues; for example, it may be a server, or it may be a terminal.
  • the data packet to be buffered received by the device may be sent by the terminal, or may be sent by another server.
  • The processing device is preset with two or more cache queues; each cache queue corresponds to a priority level and is used to receive data packets to be cached of the corresponding priority.
  • Step 2: Determine whether to cache the data packet to be cached into the target cache queue according to the attribute information of all cache queues whose priority is not lower than that of the target cache queue, where the attribute information indicates the current cache status of the corresponding cache queue.
  • In an embodiment, the attribute information of a cache queue includes the number of cached data packets in the queue.
  • Correspondingly, the second step is implemented in the following manner:
  • For each cache queue whose priority is not lower than that of the target cache queue, if the difference between the number threshold of the queue and the number of cached data packets in it is greater than or equal to the first difference threshold of that queue (first condition), it is determined that the data packet to be cached is cached into the target cache queue; if, for any such cache queue, the difference between its number threshold and the number of cached data packets in it is less than or equal to the second difference threshold of that queue (second condition), it is determined that the data packet to be cached is not cached into the target cache queue. Here, a cache queue whose priority is not lower than the target cache queue refers to all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.
  • the attribute information of the cache queue includes the number of cached data packets in the queue.
  • A number threshold is set for each cache queue to indicate the maximum number of data packets that the cache queue can cache.
  • A first difference threshold and a second difference threshold can also be set for each cache queue. Both can be used to reflect the reserved space of the current cache queue; the difference is that the first difference threshold is used when judging whether to cache a data packet, while the second difference threshold is used when judging whether to discard a data packet.
  • the first difference threshold and the second difference threshold set for the same cache queue may be the same or different.
  • In the above judgment, the “equal to the difference threshold” case needs to be kept in only one of the first condition and the second condition; that is, when the judgment in the first condition is “greater than or equal to”, the judgment in the second condition may be “less than”, and when the judgment in the first condition is “greater than”, the judgment in the second condition may be “less than or equal to”.
  • In this way, on the one hand, occupation of high-priority caches can be avoided; on the other hand, priority caching of high-priority packets can be better ensured. For example, when the number of data packets cached in a high-priority cache queue is close to its number threshold, the caching of low-priority packets can be blocked to avoid the high-priority cache being occupied; but when the number of data packets cached in the high-priority cache queue is not close to the number threshold, the cache of the high-priority queue can still be occupied by low-priority packets, so that the high-priority cache does not sit idle.
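The count-based admission rule described above (first and second conditions) can be sketched as follows. This is a hypothetical illustration: the function and parameter names (`may_cache`, `num_threshold`, `diff1`, `diff2`) are ours, not from the patent; queues are numbered 0 (highest priority) to n-1 (lowest).

```python
# Hypothetical sketch of the count-based admission check described above.
# A packet of priority i is checked against every queue x with x <= i.

def may_cache(i, counts, num_threshold, diff1):
    """First condition: every queue with priority not lower than the target
    queue i keeps at least diff1[x] packets of headroom below its threshold."""
    return all(num_threshold[x] - counts[x] >= diff1[x] for x in range(i + 1))

def must_drop(i, counts, num_threshold, diff2):
    """Second condition: drop when any such queue's headroom has shrunk
    to diff2[x] or less."""
    return any(num_threshold[x] - counts[x] <= diff2[x] for x in range(i + 1))
```

Per the note above, only one of the two conditions needs to keep the “equal to” case; here it is kept in `must_drop`.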
  • In an embodiment, the attribute information of the cache queue may further include the memory space occupied by the cached data packets in the queue;
  • the second step is implemented in the following manner:
  • For each cache queue whose priority is not lower than that of the target cache queue, if the difference between the memory space threshold of the queue and the memory space occupied by its cached data packets is greater than or equal to the third difference threshold of that queue (third condition), it is determined that the data packet to be cached is cached into the target cache queue; if, for any such cache queue, the difference between its memory space threshold and the memory space occupied by its cached data packets is less than or equal to the fourth difference threshold of that queue (fourth condition), it is determined that the data packet to be cached is not cached into the target cache queue. Here, a cache queue whose priority is not lower than the target cache queue refers to all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.
  • the attribute information of the cache queue includes the memory space occupied by the cached data packets in the queue.
  • a memory space threshold is set for each cache queue to indicate the maximum memory space that the cache queue can cache.
  • A third difference threshold and a fourth difference threshold can also be set for each cache queue. Both can be used to reflect the reserved space of the current cache queue; the difference is that the third difference threshold is used when judging whether to cache a data packet, while the fourth difference threshold is used when judging whether to discard a data packet.
  • the third difference threshold and the fourth difference threshold set for the same cache queue may be the same or different.
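The memory-space variant of the check (third and fourth conditions) differs from the count-based one only in its units. A minimal sketch under assumed names (`used`, `space_threshold`, `diff3`, `diff4` are ours, with byte units chosen for illustration):

```python
def may_cache_bytes(i, used, space_threshold, diff3):
    # Third condition: every queue x with x <= i must keep at least
    # diff3[x] bytes of headroom below its memory-space threshold.
    return all(space_threshold[x] - used[x] >= diff3[x] for x in range(i + 1))

def must_drop_bytes(i, used, space_threshold, diff4):
    # Fourth condition: drop when any such queue's headroom is diff4[x] or less.
    return any(space_threshold[x] - used[x] <= diff4[x] for x in range(i + 1))
```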
  • In an embodiment, in step two, when it is determined that the data packet to be cached is cached into the target cache queue, the method further includes: updating the attribute information of all cache queues whose priority is not lower than that of the target cache queue, where such cache queues include the target cache queue itself.
  • the preset threshold (number threshold or memory space threshold) of each cache queue is less than the total cache, and the sum of the preset thresholds of all cache queues is greater than the total cache.
  • In an embodiment, all cache queues share the same cache area; therefore, after a data packet is cached, the attribute information of all cache queues whose priority is not lower than that of the target cache queue is updated, so that the current cache occupancy is reflected comprehensively and data packets to be cached that subsequently enter the queues can be cached effectively.
  • In an embodiment, the method further includes: when a cached data packet in any cache queue is dequeued, updating the attribute information of all cache queues whose priority is not lower than that of the dequeuing cache queue, where such cache queues include the dequeuing cache queue itself.
  • Here, the cache queue from which the cached data packet flows out is referred to as the outgoing cache queue.
  • Similarly, after a data packet is dequeued, the attribute information of all cache queues whose priority is not lower than that of the outgoing cache queue is updated, so that the current cache occupancy is reflected comprehensively and subsequent packets to be cached can be cached effectively.
  • In an embodiment, n queues with a priority relationship are numbered 0 to n-1, where the highest priority queue is numbered 0, the second highest priority queue is numbered 1, the third highest priority queue is numbered 2, and so on; the lowest priority queue is numbered n-1.
  • the counter can be in units of cache measurement units such as bits, bytes, number of fragments, or number of packets.
  • In other embodiments, the less-than sign in the above expression can also be changed to a less-than-or-equal sign; that is, cnt_x < T_x can also be cnt_x ≤ T_x.
  • The counters of all queues with x ≤ i are increased according to the counter unit and the message length. For example, if the counter is in units of bits, it increases by the number of bits in the message; if the counter is in units of bytes, it increases by the number of bytes in the message.
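The counter update on enqueue can be sketched as follows (a hypothetical illustration; the byte unit and the name `on_enqueue` are ours):

```python
def on_enqueue(i, length_bytes, counters):
    # A message of priority i was admitted: the depth counter of every
    # queue x with x <= i grows by the message length (byte units here).
    for x in range(i + 1):
        counters[x] += length_bytes
    return counters
```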
  • FIG. 6 is a schematic diagram of the composition structure of the data processing device provided in the embodiment of the present application.
  • The data processing device includes a queue depth counter module 601, an enqueue judgment module 602, an enqueue depth calculation module 603, and a dequeue depth calculation module 604, in which:
  • the queue depth counter module 601 is configured to maintain a corresponding cache occupation depth counter for each priority queue.
  • The less-than sign in the above expression can also be changed to a less-than-or-equal sign; correspondingly, the greater-than-or-equal sign can be changed to a greater-than sign.
  • FIG. 7 is a schematic flowchart of an implementation of the data processing method provided by an embodiment of the present application. As shown in FIG. 7, the method includes the following steps S701-S705.
  • In step S701, when the system is powered on, each queue depth counter is initialized to 0.
  • In step S702, it is detected that a message enters.
  • The less-than sign in the above expression can also be changed to a less-than-or-equal sign.
  • In step S705, if the judgment result in step S702 is NO, the message is discarded.
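The enqueue flow above can be condensed into a short sketch (packet-count units; the names `handle_arrival`, `counters`, and `thresholds` are ours, not from the patent):

```python
def handle_arrival(i, counters, thresholds):
    # One pass of the enqueue flow: if every queue x with x <= i is below
    # its drop threshold, admit the message into queue i and bump the
    # counters; otherwise drop it and leave the counters untouched.
    if all(counters[x] < thresholds[x] for x in range(i + 1)):
        for x in range(i + 1):
            counters[x] += 1
        return "enqueued"
    return "dropped"
```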
  • FIG. 8 is a schematic diagram of an implementation process of a message reading process in a data processing method provided by an embodiment of the present application. As shown in FIG. 8, the method includes the following steps S801-S802.
  • In step S801, it is detected that a message is read out.
  • In the following, n = 5 is used as an example for detailed description.
  • the priority of the queues is Q 0 , Q 1 , Q 2 , Q 3 , and Q 4 from high to low.
  • Five cache depth counters cnt_0, cnt_1, cnt_2, cnt_3, and cnt_4 are provided for the five priority levels.
  • The depth counters use the number of packets as the statistical unit, and the drop thresholds of the five priority queues are T_0, T_1, T_2, T_3, and T_4, respectively, satisfying T_0 > T_1 > T_2 > T_3 > T_4.
  • When a message of priority 4 arrives, if cnt_0 < T_0, cnt_1 < T_1, cnt_2 < T_2, cnt_3 < T_3, and cnt_4 < T_4 all hold, the message enters queue Q_4, and cnt_0, cnt_1, cnt_2, cnt_3, and cnt_4 are each increased by one; if any of these conditions is not met, the message is discarded and the counters remain unchanged.
  • When a message of priority 3 arrives, if cnt_0 < T_0, cnt_1 < T_1, cnt_2 < T_2, and cnt_3 < T_3 all hold, the message enters queue Q_3, and cnt_0, cnt_1, cnt_2, and cnt_3 are each increased by one; otherwise the message is discarded and the counters remain unchanged.
  • When a message of priority 2 arrives, if cnt_0 < T_0, cnt_1 < T_1, and cnt_2 < T_2 all hold, the message enters queue Q_2, and cnt_0, cnt_1, and cnt_2 are each increased by one; otherwise the message is discarded and the counters remain unchanged.
  • When a message of priority 1 arrives, if cnt_0 < T_0 and cnt_1 < T_1 both hold, the message enters queue Q_1, and cnt_0 and cnt_1 are each increased by one; otherwise the message is discarded and the counters remain unchanged.
  • When a message of priority 0 arrives, if cnt_0 < T_0 holds, the message enters queue Q_0, and cnt_0 is increased by one; otherwise the message is discarded and the counter remains unchanged.
  • When a message is read out of queue Q_4, cnt_0, cnt_1, cnt_2, cnt_3, and cnt_4 are each decremented by one.
  • When a message is read out of queue Q_3, cnt_0, cnt_1, cnt_2, and cnt_3 are each decremented by one.
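The n = 5 example above can be exercised end to end. The threshold values below are illustrative numbers satisfying T_0 > T_1 > T_2 > T_3 > T_4; they are not taken from the patent:

```python
n = 5
T = [50, 40, 30, 20, 10]   # drop thresholds, T[0] > T[1] > ... > T[4]
cnt = [0] * n              # depth counters cnt_0 .. cnt_4

def enqueue(i):
    # Admit a priority-i message only while every queue x <= i is below
    # its threshold; on admission, every such counter is incremented.
    if all(cnt[x] < T[x] for x in range(i + 1)):
        for x in range(i + 1):
            cnt[x] += 1
        return True
    return False

def dequeue(i):
    # Reading a message out of queue Q_i decrements cnt_0 .. cnt_i.
    for x in range(i + 1):
        cnt[x] -= 1

# Ten lowest-priority packets fill Q_4 (cnt_4 reaches T_4 = 10), so further
# priority-4 packets are dropped, yet Q_0 still has reserved headroom.
for _ in range(10):
    enqueue(4)
```

Note how, after the loop, a priority-4 arrival is refused while a priority-0 arrival is still admitted: the higher queues keep their reserved space even though the low-priority traffic occupied shared cache.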
  • The data processing method provided by the embodiment of the present application, by preempting by priority, reserves cache for high-priority queues while providing low-priority queues a greater chance of occupying cache, and avoids the two problems of traditional queue cache allocation: the idle cache of the mutually exclusive configuration and the occupied high-priority cache of the preemptive configuration.
  • Even when low-priority traffic occupies the cache, the high-priority queue still has space reserved for it to occupy, and the low-priority queues will not exhaust the cache and cause high-priority packets to be lost.
  • Embodiments of the present application provide a data processing apparatus, including the units it comprises and the modules comprised in each unit, which may be implemented by a processor in a data processing device.
  • the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • FIG. 9 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present application.
  • the data processing apparatus 900 includes a first determining unit 901, a second determining unit 902, and a cache unit 903, where:
  • the first determining unit 901 is configured to determine the set of queues to be cached according to the priority of the received data packets to be cached;
  • the second determining unit 902 is configured to determine the target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached;
  • the cache unit 903 is configured to cache the data packet to be cached into the target cache queue.
  • each of the queues to be cached has a preset priority; correspondingly, the first determination unit 901 includes a comparison module and a first determination module, where:
  • the comparison module is configured to compare the priority of each queue to be cached with the priority of the data packet to be cached;
  • the first determining module is configured to determine a queue to be cached with a priority greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.
  • the second determination unit 902 includes a second determination module and a third determination module, where:
  • the second determining module is configured to determine, for each queue to be cached in the set of queues to be cached, based on the attribute information of the queue and the preset threshold corresponding to the attribute information, whether caching the data packet to be cached into the corresponding queue to be cached is allowed;
  • the third determining module is configured to, when the determination result for each queue to be cached in the set is that caching is allowed, determine, from the set of queues to be cached, the queue to be cached whose priority is the same as that of the data packet to be cached as the target cache queue.
  • In an embodiment, the attribute information of a queue to be cached includes at least one of the following: the number of cached data packets in the queue and the memory space occupied by the cached data packets; correspondingly, the preset threshold corresponding to the attribute information includes at least one of the following: a number threshold and a memory space threshold;
  • the second determination module includes a first control module and/or a second control module, wherein:
  • the first control module is configured to, for each queue to be cached in the set of queues to be cached, if the number of cached data packets in the queue is less than or equal to the number threshold of the corresponding queue to be cached, allow the data packet to be cached to be cached into the corresponding queue to be cached;
  • the second control module is configured to, for each queue to be cached in the set of queues to be cached, if the memory space occupied by the cached data packets in the queue is less than or equal to the memory space threshold of the corresponding queue to be cached, allow the data packet to be cached to be cached into the corresponding queue to be cached.
  • the apparatus further includes a third determination unit, a fourth determination unit, and an update unit, where:
  • the third determining unit is configured to, when a cached data packet flows out of a cache queue, determine the priority of the cache queue that cached the data packet as the cache priority;
  • the fourth determining unit is configured to determine the attribute information of the cache queue whose priority is greater than or equal to the cache priority as the target attribute information;
  • the update unit is configured to update the target attribute information.
  • the apparatus further includes a first discarding unit and/or a second discarding unit, wherein:
  • the first discarding unit is configured to discard the data packet to be cached if the number of cached data packets in at least one queue to be cached in the set of queues to be cached is greater than the number threshold of the corresponding queue to be cached;
  • the second discarding unit is configured to discard the data packet to be cached if the memory space occupied by the cached data packets in at least one queue to be cached in the set of queues to be cached is greater than the memory space threshold of the corresponding queue to be cached.
  • In the embodiments of the present application, if the above data processing method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • The technical solutions of the embodiments of the present invention, in essence, or the part contributing to the related art, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions to cause a terminal to execute all or part of the methods described in the embodiments of the present invention.
  • The foregoing storage media include various media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk. As such, the embodiments of the present invention are not limited to any specific combination of hardware and software.
  • FIG. 10 is a schematic structural diagram of the composition of the data processing device provided by the embodiment of the present application.
  • The data processing device 1000 at least includes: a processor 1001, a communication interface 1002, and a storage medium 1003 configured to store executable instructions, wherein:
  • the processor 1001 is configured to control the overall operation of the data processing device 1000.
  • the communication interface 1002 is configured to enable the data processing device to communicate with other terminals or servers through the network.
  • the storage medium 1003 is configured to store instructions and applications executable by the processor 1001, and can also cache data to be processed or already processed by the processor 1001 and by the modules in the data processing device 1000; it may be implemented by a flash memory (FLASH) or a random access memory (Random Access Memory, RAM).
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a division of logical functions.
  • The mutual coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • Each functional unit in the embodiments of the present invention may be integrated into one processing unit, each unit may exist separately, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The foregoing program may be stored in a computer-readable storage medium. When the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage media include various media that can store program codes, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • If the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • The technical solutions of the embodiments of the present invention, in essence, or the part contributing to the related art, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions to cause a terminal to execute all or part of the methods described in the embodiments of the present invention.
  • The foregoing storage media include various media that can store program codes, such as removable storage devices, ROMs, magnetic disks, or optical disks.

Abstract

Embodiments of the present application provide a data processing method, apparatus, and device, and a storage medium. The method comprises: a server determining, according to the priority of a received data packet to be cached, a set of queues available for caching; determining a target cache queue from said set according to attribute information of each queue in said set and the priority of said data packet; and caching said data packet into the target cache queue.

Description

Data processing method, apparatus, device, and storage medium

Technical Field

The embodiments of the present application relate to the field of Internet technologies, and in particular to a data processing method, apparatus, device, and storage medium.

Background

In current network communication, there are often communication requirements at multiple service levels. To guarantee communication requirements with high timeliness, priority queues are generally used to handle the storage and forwarding of messages.

In these forwarding schemes, a buffer space is provided for each service priority. When a message arrives, the service level to which it belongs is first identified by parsing its characteristic fields, and the message is mapped to the corresponding service priority; the cache status of the queue of that priority is then queried. If the cache occupancy of the corresponding priority is too large, the message is discarded; otherwise, the message is written into the corresponding priority queue cache. In allocating cache to the priority queues, each queue is generally allocated cache independently, according to the relationship between the total depth of cache allocated to the queues and the actual total amount of cache.

However, the solutions in the related art suffer from the problems of idle cache and occupied high-priority cache.
Summary of the Invention

The following is an overview of the subject matter described in detail herein. This summary is not intended to limit the scope of protection of the claims.

Embodiments of the present application provide a data processing method, apparatus, device, and storage medium.

In a first aspect, an embodiment of the present application provides a data processing method. The method includes: a server determines a set of queues to be cached according to the priority of a data packet to be cached sent by a terminal; determines a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.

In a second aspect, an embodiment of the present application provides a data processing apparatus, the apparatus including:

a first determining unit, configured to determine a set of queues to be cached according to the priority of a received data packet to be cached;

a second determining unit, configured to determine a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and

a caching unit, configured to cache the data packet to be cached into the target cache queue.

In a third aspect, an embodiment of the present application provides a data processing device, the device at least including: a processor and a storage medium configured to store executable instructions, wherein the processor is configured to execute the stored executable instructions, and the executable instructions are configured to perform the above data processing method.

In a fourth aspect, an embodiment of the present application provides a storage medium storing computer-executable instructions configured to execute the above data processing method.

After reading and understanding the drawings and the detailed description, other aspects can be understood.
Brief Description of the Drawings

In the drawings (which are not necessarily drawn to scale), similar reference numbers may describe similar components in different views. Similar reference numbers with different letter suffixes may represent different examples of similar components.

FIG. 1 is a schematic flowchart of the implementation of the data processing method provided in Embodiment 1 of the present application;

FIG. 2 is a schematic flowchart of the implementation of the data processing method provided in Embodiment 2 of the present application;

FIG. 3 is a schematic flowchart of the implementation of the data processing method provided in Embodiment 3 of the present application;

FIG. 4 is a schematic flowchart of the implementation of the data processing method provided in Embodiment 4 of the present application;

FIG. 5 is a schematic diagram of the cache space that can be occupied by each priority in an embodiment of the present application;

FIG. 6 is a schematic structural diagram of the data processing device provided by an embodiment of the present application;

FIG. 7 is a schematic flowchart of the implementation of the data processing method provided by an embodiment of the present application;

FIG. 8 is a schematic flowchart of the implementation of the message readout process in the data processing method provided by an embodiment of the present application;

FIG. 9 is a schematic structural diagram of the data processing apparatus provided by an embodiment of the present application;

FIG. 10 is a schematic structural diagram of the data processing device provided by an embodiment of the present application.
Detailed Description

In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the description of the present application and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.

In current network communication, there are often communication requirements at multiple service levels, such as voice calls, video calls, web browsing, and file transfers. Some communication requirements, such as video calls, have high timeliness requirements for the forwarding of data packets and messages; others, such as web browsing, have lower timeliness requirements. From the perspective of routing and forwarding, however, all communication messages enter the router intermixed and wait to be processed. Therefore, to guarantee communication requirements with high timeliness, priority queues are generally used to handle the storage and forwarding of messages.

In these forwarding schemes, a buffer space is provided for each service priority. When a message arrives, the service level to which it belongs is first identified by parsing its characteristic fields, and the message is mapped to the corresponding service priority; the cache status of the queue of that priority is then queried. If the cache occupancy of the corresponding priority queue is too large, the message is discarded; otherwise, the message is written into the corresponding priority queue cache. When the output port is idle, the highest-priority queue holding messages is selected according to the service levels of the priority queues and dequeued first. When the enqueue bandwidth of a high-priority queue is less than the port bandwidth, the high-priority queue will receive the most timely service, guaranteeing its low-latency requirement.

In allocating cache to the priority queues, each queue is generally allocated cache independently; that is, the sum of the maximum caches that all priority queues can occupy does not exceed the maximum system cache. In this way, when a low-priority queue cannot obtain output bandwidth and its cache is occupied, the enqueuing and dequeuing of messages of the other priority queues are not affected.

However, when no high-priority messages enter the router, part of the cache capacity is wasted. For example, in the mutually exclusive configuration of cache allocation, a router sets four priority queues and sets a maximum cache depth T for each queue, with T*4 < total cache AT. If, during a certain period, only messages of two lower priorities enter the router, half of the cache depth will be left idle and wasted. Since the input traffic may change at any time, this problem cannot be solved by adjusting the preset maximum cache depth.

If the total depth of all queues is set to be greater than the total cache depth, the problem of excessive cache being left idle can be effectively avoided. For example, in the preemptive configuration of cache allocation, a router sets four priority queues and sets, for each queue, a maximum cache depth T < total cache AT, with T*4 > total cache AT. When, during a certain period, only messages of two lower priorities enter the router, the wasted cache will be less than half of the depth. However, if 2T > AT and the input bandwidth is greater than the output bandwidth, the total cache will be exhausted by these two lower-priority flows. If high-priority traffic then resumes, the high-priority messages will be unable to occupy the cache and will be discarded.
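The contrast between the two configurations can be made concrete with illustrative numbers. AT = 100 and the per-queue depths below are our assumptions; the description above only requires T*4 < AT in the mutually exclusive case, and T < AT with T*4 > AT in the preemptive case:

```python
AT = 100            # total cache
T_mutex = 20        # mutually exclusive: 4 * T_mutex = 80 < AT
T_preempt = 40      # preemptive: T_preempt < AT and 4 * T_preempt = 160 > AT

# Mutually exclusive: with only two lower-priority flows active, at most
# 2 * T_mutex of the cache is usable and the rest sits idle.
usable_mutex = 2 * T_mutex          # 40
idle_mutex = AT - usable_mutex      # 60

# Preemptive: the same two flows may grow up to 2 * T_preempt, wasting less
# cache; but whenever 2 * T_preempt > AT, the two flows could exhaust the
# whole cache and starve a returning high-priority flow.
usable_preempt = min(2 * T_preempt, AT)   # 80
idle_preempt = AT - usable_preempt        # 20
```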
As can be seen, in the related art the mutually exclusive configuration suffers from idle cache, while the preemptive configuration suffers from high-priority cache being occupied.
In view of the above problems in the related art, the embodiments of the present application provide a data processing method that preempts cache by priority. While reserving cache for high-priority queues, it offers low-priority queues a greater opportunity to occupy cache, thereby avoiding both the idle-cache problem of the mutually exclusive configuration and the occupied high-priority-cache problem of the preemptive configuration in traditional queue cache allocation. With this data processing method, the internal cache of the device can be used more effectively, and prioritized service can be achieved.
FIG. 1 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 1 of the present application. As shown in FIG. 1, the method includes steps S101 to S103.
Step S101: the server determines a set of queues to be cached according to the priority of a received data packet to be cached.
For example, the server may be a network server in a switching network system or a routing system, and may receive, through a router, the data packet to be cached sent by a terminal. The data packet to be cached may be sent by the terminal in the form of a data packet or in the form of a data message. In addition, the server may also receive data packets to be cached sent by other servers.
After receiving the data packet to be cached, the server performs feature analysis on it and determines the priority corresponding to its characteristics. For example, a video call has a higher priority than web browsing. Assuming the priority level corresponding to the highest priority is 0, the priority level of a video call may be set to 0. Thus, when the server receives a video-call data packet, feature analysis yields a priority level of 0 for it.
In this embodiment, priority level 0 is greater than (i.e., higher than) priority level 1, priority level 1 is greater than priority level 2, priority level 2 is greater than priority level 3, and so on. Other level arrangements may be used in other embodiments.
After determining the priority of the data packet to be cached, the server determines the set of queues to be cached according to that priority.
In this embodiment, the set of queues to be cached includes at least one queue to be cached, and a queue to be cached is used to cache data packets, for example the data packet to be cached. A queue to be cached may already hold cached data packets, or it may be an empty queue in which no data packet has been cached.
In an exemplary embodiment, the server may divide the cache space of the data processing system into a preset number of queues to be cached in advance, each of which is used to cache data packets satisfying a certain condition. For example, the cache space may be divided into a preset number of queues that cache data packets of different priorities; that is, video calls with priority level 0 correspond to one queue to be cached, and web browsing with priority level 4 corresponds to another queue to be cached.
In this embodiment, after the priority of the data packet to be cached is determined, at least one queue satisfying a preset condition may be selected from all the queues to be cached in the cache space to form the set of queues to be cached. The preset condition may be that a queue's priority is greater than or equal to (≥) the priority of the data packet to be cached.
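The candidate-set selection of step S101 can be sketched as follows. Because level 0 is the highest priority in this embodiment, the condition "queue priority ≥ packet priority" becomes the numeric test `queue_level <= packet_level` (a hypothetical encoding; the patent itself does not prescribe a representation):

```python
# Sketch of step S101: form the set of queues whose priority is greater than
# or equal to that of the incoming packet. Lower numeric level = higher
# priority (level 0 is the highest).

def candidate_queues(queue_levels: list[int], packet_level: int) -> list[int]:
    """Return the levels of all queues eligible for the to-be-cached set."""
    return [lvl for lvl in queue_levels if lvl <= packet_level]

# Four queues at levels 0..3; a packet at level 2 may target queues 0, 1 and 2.
print(candidate_queues([0, 1, 2, 3], packet_level=2))  # [0, 1, 2]
```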
Step S102: determine a target cache queue from the set of queues to be cached according to the attribute information of each queue in the set and the priority of the data packet to be cached.
In this embodiment, the attribute information of a queue to be cached includes at least one of the following: the number of cached data packets in the queue, and the memory space occupied by the cached data packets in the queue. The number of cached data packets may be counted using the number of packet fragments or the number of packets as the unit of measurement; the memory space occupied by the cached data packets may be counted in bits, bytes, or other units.
In this embodiment, before the target cache queue is determined, whether the data packet to be cached is allowed to be cached into a queue in the set is determined according to the attribute information of each queue to be cached and the priority of the data packet. If allowed, the data packet is then cached into the target cache queue, which is one queue selected from the set of queues to be cached. In this way, a data packet to be cached can be cached into the target cache queue only when the condition is satisfied, so occupation of high-priority cache can be avoided.
Step S103: cache the data packet to be cached into the target cache queue.
Here, when it is determined that the data packet to be cached is allowed to be cached into the target cache queue in the set of queues to be cached, the data packet is cached into the target cache queue, so as to ensure that it can be effectively transmitted.
In the data processing method provided by the embodiments of the present application, the server determines a set of queues to be cached according to the priority of a received data packet to be cached; determines a target cache queue from the set according to the attribute information of each queue in the set and the priority of the data packet; and caches the data packet into the target cache queue. Because the target cache queue is determined according to the attribute information of each queue to be cached and the priority of the data packet, cache-queue preemption by priority is achieved, thereby avoiding both the idle-cache problem and the occupied high-priority-cache problem in queue cache allocation.
FIG. 2 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 2 of the present application. As shown in FIG. 2, the method includes steps S201 to S204.
Step S201: the server compares the priority of each queue to be cached with the priority of the data packet to be cached.
Here, each queue to be cached has a preset priority. For example, the cache space may be divided into multiple queues to be cached, each of which is used to cache a different kind of data packet, such as video-call packets, file-transfer packets, or web-browsing packets. The queue used to cache video-call packets has a higher priority than the queue used to cache web-browsing packets. The priority of each queue to be cached is fixed.
In this embodiment, after receiving the data packet to be cached, the server compares its priority with the priorities of all queues to be cached in the system.
Step S202: determine the queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.
Here, after the priority of the data packet to be cached is compared with the priorities of all queues to be cached in the system, the queues whose priority is greater than or equal to the packet's priority are selected to form the set of queues to be cached. That is, the set contains at least one queue to be cached, and the priority of every queue in the set is greater than or equal to the priority of the data packet to be cached.
Step S203: determine a target cache queue from the set of queues to be cached according to the attribute information of each queue in the set and the priority of the data packet to be cached.
Here, before the target cache queue is determined, whether the data packet to be cached is allowed to be cached into a queue in the set is first determined according to the attribute information of each queue to be cached and the priority of the data packet. If allowed, the data packet is then cached into the target cache queue, which is one queue selected from the set of queues to be cached. In this way, a data packet to be cached can be cached into the target cache queue only when the condition is satisfied, so occupation of high-priority cache can be avoided.
Step S204: cache the data packet to be cached into the target cache queue.
Here, when it is determined that the data packet to be cached is allowed to be cached into the target cache queue in the set of queues to be cached, the data packet is cached into the target cache queue, so as to ensure that it can be effectively transmitted.
In the data processing method provided by the embodiments of the present application, the server compares the priority of each queue to be cached with the priority of the data packet to be cached; determines the queues whose priority is greater than or equal to the packet's priority as the set of queues to be cached; determines a target cache queue from the set according to the attribute information of each queue in the set and the priority of the data packet; and caches the data packet into the target cache queue. Because the target cache queue is determined according to the attribute information of each queue to be cached and the priority of the data packet, cache-queue preemption by priority is achieved, thereby avoiding both the idle-cache problem and the occupied high-priority-cache problem in queue cache allocation.
FIG. 3 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 3 of the present application. As shown in FIG. 3, the method includes steps S301 to S304.
Step S301: the server determines a set of queues to be cached according to the priority of a data packet to be cached sent by a terminal.
Here, the set of queues to be cached includes at least one queue to be cached, and each queue to be cached has a preset priority. For example, the cache space may be divided into multiple queues to be cached, each of which is used to cache a different kind of data packet.
In this embodiment, the priority of the data packet to be cached may be compared with the priorities of all queues to be cached in the system, and the queues whose priority is greater than or equal to the packet's priority are then selected to form the set of queues to be cached.
Step S302: for each queue in the set of queues to be cached, determine whether the data packet to be cached is allowed to be cached into the corresponding queue according to the attribute information of that queue and a preset threshold corresponding to the attribute information.
Here, the attribute information of a queue to be cached includes at least one of the following: the number of cached data packets in the queue, and the memory space occupied by the cached data packets in the queue. Correspondingly, the preset threshold corresponding to the attribute information includes at least one of the following: a count threshold and a memory-space threshold.
The process of step S302 in this embodiment accordingly includes the following three implementations:
Implementation 1: the attribute information of a queue to be cached is the number of cached data packets in the queue, and the preset threshold corresponding to the attribute information is a count threshold.
In this case, determining for each queue in the set whether the data packet to be cached is allowed to be cached into the corresponding queue includes: for each queue in the set of queues to be cached, if the number of cached data packets in that queue is less than or equal to the queue's count threshold, the data packet to be cached is allowed to be cached into the corresponding queue.
Implementation 2: the attribute information of a queue to be cached is the memory space occupied by the cached data packets in the queue, and the preset threshold corresponding to the attribute information is a memory-space threshold.
In this case, determining for each queue in the set whether the data packet to be cached is allowed to be cached into the corresponding queue includes: for each queue in the set of queues to be cached, if the memory space occupied by the cached data packets in that queue is less than or equal to the queue's memory-space threshold, the data packet to be cached is allowed to be cached into the corresponding queue.
Implementation 3: the attribute information of a queue to be cached is both the number of cached data packets in the queue and the memory space occupied by the cached data packets, and the preset thresholds corresponding to the attribute information are a count threshold and a memory-space threshold.
In this case, determining for each queue in the set whether the data packet to be cached is allowed to be cached into the corresponding queue includes: for each queue in the set of queues to be cached, if the number of cached data packets in that queue is less than or equal to the queue's count threshold, and the memory space occupied by the cached data packets is less than or equal to the queue's memory-space threshold, the data packet to be cached is allowed to be cached into the corresponding queue.
In this embodiment, the number of cached data packets in the attribute information of a queue to be cached may be counted using the number of packet fragments or the number of packets as the unit of measurement; the memory space occupied by the cached data packets may be counted in bits, bytes, or other units.
In this embodiment, the preset threshold corresponding to the attribute information is determined according to the priority of the corresponding queue to be cached; that is, each queue to be cached corresponds to one preset threshold, and the preset thresholds of different queues differ. When the priority of one queue to be cached is higher than that of the other queues, its preset threshold is also greater than the preset thresholds of those other queues.
For example, the preset threshold corresponding to the N-th queue to be cached in the set may be determined according to the preset priority of the N-th queue; if the preset priority of the N-th queue to be cached is higher than the preset priority of the M-th queue to be cached, the preset threshold corresponding to the N-th queue is greater than the preset threshold corresponding to the M-th queue.
In this embodiment, the attribute information of all queues in the set of queues to be cached is compared against the corresponding preset thresholds. Only when the attribute information of every queue in the set satisfies any one of the above three implementations against its corresponding threshold can it be determined that the data packet is allowed to be cached into the corresponding queue; if the attribute information of any queue in the set fails the implementation being applied (whichever one is being executed), caching the data packet to be cached into the corresponding queue is prohibited.
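The admission check of step S302 under Implementation 3 can be sketched as below. The class fields, thresholds, and numbers are illustrative assumptions (the patent specifies only the comparisons); lower numeric level again encodes higher priority, and thresholds grow with priority:

```python
# Sketch of the step S302 admission check, Implementation 3 (both count and
# memory-space thresholds): the packet is admitted only if EVERY queue in the
# to-be-cached set is within both of its thresholds.
from dataclasses import dataclass

@dataclass
class Queue:
    level: int       # priority level (0 = highest)
    pkt_count: int   # cached packets currently recorded for the queue
    mem_used: int    # bytes occupied by cached packets
    max_count: int   # count threshold (larger for higher-priority queues)
    max_mem: int     # memory-space threshold (larger for higher-priority queues)

def admit(queues: list[Queue], packet_level: int) -> bool:
    """True if the packet may be cached: every queue whose priority is
    greater than or equal to the packet's is within both thresholds."""
    candidates = [q for q in queues if q.level <= packet_level]
    return all(q.pkt_count <= q.max_count and q.mem_used <= q.max_mem
               for q in candidates)

queues = [
    Queue(level=0, pkt_count=10, mem_used=4096,  max_count=100, max_mem=65536),
    Queue(level=1, pkt_count=90, mem_used=60000, max_count=90,  max_mem=60000),
    Queue(level=2, pkt_count=5,  mem_used=1024,  max_count=60,  max_mem=32768),
]
print(admit(queues, packet_level=2))  # True: all three queues within limits
queues[1].pkt_count = 91              # level-1 queue exceeds its count threshold
print(admit(queues, packet_level=2))  # False: one queue in the set fails
```

Implementations 1 and 2 are the same check with only the count comparison or only the memory-space comparison retained.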
Step S303: for each queue in the set of queues to be cached, if the determination result is "allowed", determine, from the set of queues to be cached, the queue whose priority is the same as that of the data packet to be cached as the target cache queue.
Here, the priority of the target cache queue is the same as the priority of the data packet to be cached. Since the set of queues to be cached is formed from the queues whose priority is greater than or equal to the packet's priority, the target cache queue with the same priority as the data packet is the queue with the lowest priority in the set.
Therefore, step S303 may also be implemented by the following step:
Step S3031: for each queue in the set of queues to be cached, if the determination result is "allowed", determine the queue with the lowest priority in the set of queues to be cached as the target cache queue.
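Step S3031 can be sketched in the same hypothetical level encoding as above, where the lowest-priority queue in the set is the one with the largest numeric level — which equals the packet's own level whenever such a queue exists:

```python
# Sketch of step S3031: among the candidate queues (priority >= the packet's),
# the target cache queue is the one with the lowest priority in the set.

def target_queue(queue_levels: list[int], packet_level: int) -> int:
    """Return the level of the target cache queue for an admitted packet."""
    candidates = [lvl for lvl in queue_levels if lvl <= packet_level]
    return max(candidates)  # largest level = lowest priority in the set

print(target_queue([0, 1, 2, 3], packet_level=2))  # 2: same priority as packet
```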
In other embodiments, the method further includes the following step:
Step S310: if, in the set of queues to be cached, there is at least one queue whose number of cached data packets is greater than the queue's count threshold, the data packet is discarded; and/or, if, in the set of queues to be cached, there is at least one queue whose cached data packets occupy more memory space than the queue's memory-space threshold, the data packet is discarded.
Here, the cases in which the data packet to be cached is determined to be discarded include the following three:
First, it is determined that, in the set of queues to be cached, there is at least one queue whose number of cached data packets is greater than the count threshold corresponding to that queue.
Second, it is determined that, in the set of queues to be cached, there is at least one queue whose cached data packets occupy more memory space than the memory-space threshold corresponding to that queue.
Third, it is determined that, in the set of queues to be cached, there is at least one queue whose number of cached data packets is greater than the count threshold corresponding to that queue, and whose cached data packets occupy more memory space than the memory-space threshold corresponding to that queue.
Step S304: cache the data packet to be cached into the target cache queue.
Here, when it is determined that the data packet to be cached is allowed to be cached into the target cache queue in the set of queues to be cached, the data packet is cached into the target cache queue, so as to ensure that it can be effectively transmitted.
In the data processing method provided by the embodiments of the present application, for each queue in the set of queues to be cached, whether the data packet to be cached is allowed to be cached into the corresponding queue is determined according to the queue's attribute information and the preset threshold corresponding to the attribute information. Because whether a packet may enter a queue is decided by checking the attribute information of every queue to be cached against its preset threshold, and the target cache queue is determined and the packet cached only when this is allowed, cache-queue preemption by priority is achieved, thereby avoiding both the idle-cache problem and the occupied high-priority-cache problem in queue cache allocation.
FIG. 4 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 4 of the present application. As shown in FIG. 4, the method includes steps S401 to S406.
Step S401: the server determines a set of queues to be cached according to the priority of a received data packet to be cached.
Step S402: determine a target cache queue from the set of queues to be cached according to the attribute information of each queue in the set and the priority of the data packet to be cached.
Step S403: cache the data packet to be cached into the target cache queue.
Steps S401 to S403 are the same as steps S101 to S103 above and are not repeated in this embodiment.
Step S404: when a cached data packet flows out of a cache queue, determine the priority of the cache queue that held the cached data packet as the cache priority.
Here, the cached data packet may flow out of any cache queue in the system; that is, the cache queue may or may not be a queue in the aforementioned set of queues to be cached. Whenever a cached data packet flows out of any cache queue in the current data processing system, the action of step S404 is performed: the priority of the cache queue that held the cached data packet is determined as the cache priority. That is, the priority of the cache queue from which the cached data packet flowed out is determined and taken as the cache priority.
Step S405: determine the attribute information of the cache queues whose priority is greater than or equal to the cache priority as the target attribute information.
Here, first, the priorities of all cache queues in the system are compared with the cache priority; then, the attribute information of the cache queues whose priority is greater than or equal to the cache priority is determined as the target attribute information.
Step S406: update the target attribute information.
In this embodiment, updating the attribute information of the cache queues whose priority is greater than or equal to the cache priority covers the following three cases:
Case 1: when the cached data packets flowing out of the cache queue are accounted for by count, the number of cached data packets recorded for the cache queues is updated.
Case 2: when the cached data packets flowing out of the cache queue are accounted for by memory-space size, the memory space occupied by cached data packets recorded for the cache queues is updated.
Case 3: when the cached data packets flowing out of the cache queue are accounted for by both count and memory-space size, both the number of cached data packets and the occupied memory space recorded for the cache queues are updated.
In this embodiment, when a cached data packet flows out of a cache queue, the attribute information of all cache queues whose priority is higher than or equal to that of the queue from which it flowed is updated.
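One reading of steps S404 to S406 (Case 3: counters tracked by both count and memory space) is sketched below. It assumes, as an interpretation not spelled out in the text, that enqueuing charged the packet to the counters of every queue with priority greater than or equal to the target, so dequeuing must decrement the same set; field names and numbers are illustrative:

```python
# Sketch of the dequeue-time update: when a packet of `size` bytes flows out
# of the queue at `out_level`, the recorded occupancy of every queue whose
# priority is >= that queue's (numeric level <= out_level) is decremented.

def on_dequeue(queues: list[dict], out_level: int, size: int) -> None:
    """Update attribute information after a cached packet flows out."""
    for q in queues:
        if q["level"] <= out_level:  # priority >= the outflow queue's priority
            q["pkt_count"] -= 1      # Case 1 / Case 3: update the count
            q["mem_used"] -= size    # Case 2 / Case 3: update the memory space

queues = [
    {"level": 0, "pkt_count": 3, "mem_used": 3000},
    {"level": 1, "pkt_count": 2, "mem_used": 2000},
    {"level": 2, "pkt_count": 1, "mem_used": 1000},
]
on_dequeue(queues, out_level=1, size=500)  # a 500-byte packet leaves the level-1 queue
print(queues[0])  # {'level': 0, 'pkt_count': 2, 'mem_used': 2500}
print(queues[2])  # {'level': 2, 'pkt_count': 1, 'mem_used': 1000} (unchanged)
```

Kept consistent with the admission check of step S302, this update ensures that subsequent packets see up-to-date totals for every queue they must be tested against.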
本申请实施例提供的数据处理方法,当已缓存数据包流出缓存队列时,确定缓存所述已缓存数据包的缓存队列的优先级为缓存优先级,将优先级大于所述缓存优先级的缓存队列的属性信息确定为目标属性信息,更新所述目标属性信息。这样,可以保证在有数据包流出缓存队列时,及时对缓存队列的属性信息进行更新,以保证后续进入队列的待缓存数据包能够被有效缓存。In the data processing method provided by the embodiment of the present application, when the cached data packet flows out of the cache queue, the priority of the cache queue that caches the cached data packet is determined as the cache priority, and the priority is greater than the cache priority of the cache The attribute information of the queue is determined as the target attribute information, and the target attribute information is updated. In this way, it can be ensured that when a data packet flows out of the cache queue, the attribute information of the cache queue is updated in time to ensure that the data packets to be cached that subsequently enter the queue can be effectively cached.
基于上述实施例,本申请实施例提供的数据处理方法,通过分优先级抢占的方法,在为高优先级队列预留缓存的同时,为低优先级提供更大的缓存占用机会,同时避免传统队列缓存分配中互斥配置方式缓存闲置的问题和抢占配置方式高优先级缓存被占用的问题。通过这种方法,可以更加有效的利用设备内部的缓存,并达到分优先级服务的效果。Based on the above embodiments, the data processing method provided by the embodiments of the present application, through the method of priority preemption, while reserving cache for high-priority queues, provides greater cache occupation opportunities for low-priority queues while avoiding traditional In the queue cache allocation, the mutual exclusion configuration mode cache is idle and the preemption configuration mode high priority cache is occupied. In this way, the internal cache of the device can be used more effectively, and the effect of prioritized services can be achieved.
In an exemplary embodiment, an embodiment of the present application further provides a data processing method including the following step one and step two.

Step one: receive a data packet to be cached, and determine, according to the priority of the data packet to be cached, a target cache queue of the corresponding priority.

The processing device implementing the method of this embodiment may be any device that needs to handle cache queues; for example, it may be a server, and it is not excluded that it may be a terminal. The data packet to be cached received by the device may be sent by a terminal or by another server.

The processing device is preconfigured with two or more cache queues. Each cache queue corresponds to one priority and is used to receive cached data packets of the corresponding priority.

Step two: determine, according to the attribute information of all cache queues whose priority is not lower than that of the target cache queue, whether to cache the data packet to be cached into the target cache queue, where the attribute information indicates the current cache status of the corresponding cache queue.

Whether the data packet to be cached is admitted is decided by examining the current cache status of all cache queues whose priority is higher than or equal to that of the target cache queue. This gives priority to the operation of high-priority cache queues and prevents low-priority cached data packets from occupying too much of the total cache.
In an exemplary implementation, the attribute information of a cache queue includes the number of cached data packets in the queue.

Step two is then implemented as follows:

The number of cached data packets in each cache queue whose priority is not lower than that of the target cache queue is checked separately. If, for every such cache queue, the difference between the queue's number threshold and its number of cached data packets is greater than or equal to the queue's first difference threshold (the first condition), it is determined that the data packet to be cached is to be cached into the target cache queue; if, for any such cache queue, the difference between the queue's number threshold and its number of cached data packets is less than or equal to the queue's second difference threshold (the second condition), it is determined that the data packet to be cached is not to be cached into the target cache queue. Here, the cache queues whose priority is not lower than that of the target cache queue are all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.

In this implementation, the attribute information of a cache queue includes the number of cached data packets in the queue. Correspondingly, a number threshold is set for each cache queue to indicate the maximum number of data packets the queue can cache. A first difference threshold and a second difference threshold may also be set for each cache queue; both reflect the reserved space of the queue. The difference between them is that the first difference threshold is used when deciding whether to cache a data packet, while the second difference threshold is used when deciding whether to discard a data packet. The first and second difference thresholds set for the same cache queue may be the same or different. When they are the same, the "equal to the difference threshold" case need only be kept in one of the first and second conditions: when the first condition uses "greater than or equal to", the second condition can use "less than"; when the first condition uses "greater than", the second condition can use "less than or equal to".

By setting difference thresholds together with the above caching mechanism, idle high-priority cache can be avoided on the one hand, and priority caching of high-priority data packets can be better guaranteed on the other. For example, when the number of data packets cached in a high-priority cache queue approaches its number threshold, caching of low-priority data packets can be blocked so that the high-priority cache is not occupied; but when the number of cached data packets in the high-priority queue is not close to the threshold, its cache can still be occupied by low-priority data packets, so the high-priority cache does not sit idle.
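The count-based admission check above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name `admit_packet`, the queue dictionaries, and all numeric values are hypothetical, and a packet whose headroom falls between the two difference thresholds is assumed to be discarded (the text does not spell out that case).

```python
# Hypothetical sketch of the count-based admission check of "step two".
# queues[x] describes the queue of priority x (0 = highest priority).

def admit_packet(priority, queues):
    """Return True if a packet of the given priority may enter its queue.

    The packet is admitted only if every queue with priority not lower
    than the target (index <= priority) still has at least `first_diff`
    packets of headroom below its number threshold; it is dropped as soon
    as any such queue's headroom is `second_diff` or less (and, as an
    assumption here, also in the undefined in-between zone).
    """
    for q in queues[: priority + 1]:
        headroom = q["threshold"] - q["count"]
        if headroom <= q["second_diff"]:
            return False          # second condition: discard
        if headroom < q["first_diff"]:
            return False          # first condition not met (assumed drop)
    return True

# Illustrative configuration: queue 1 is nearly full, queue 0 is not.
queues = [
    {"count": 3, "threshold": 10, "first_diff": 2, "second_diff": 0},  # priority 0
    {"count": 7, "threshold": 8,  "first_diff": 2, "second_diff": 0},  # priority 1
]
```

With these illustrative values, a priority-0 packet is admitted (queue 0 has 7 packets of headroom), while a priority-1 packet is dropped because queue 1's headroom of 1 is below its first difference threshold of 2.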
In an exemplary embodiment, the attribute information of a cache queue may further include the memory space occupied by the cached data packets in the queue.

In this case, step two is implemented as follows:

The memory space occupied by the cached data packets of each cache queue whose priority is not lower than that of the target cache queue is checked separately. If, for every such cache queue, the difference between the queue's memory space threshold and the memory space occupied by its cached data packets is greater than or equal to the queue's third difference threshold (the third condition), it is determined that the data packet to be cached is to be cached into the target cache queue; if, for any such cache queue, the difference between the queue's memory space threshold and the memory space occupied by its cached data packets is less than or equal to the queue's fourth difference threshold (the fourth condition), it is determined that the data packet to be cached is not to be cached into the target cache queue. Here, the cache queues whose priority is not lower than that of the target cache queue are all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.

In this implementation, the attribute information of a cache queue includes the memory space occupied by the cached data packets in the queue. Correspondingly, a memory space threshold is set for each cache queue to indicate the maximum memory space the queue can use for caching. A third difference threshold and a fourth difference threshold may also be set for each cache queue; both reflect the reserved space of the queue. The difference between them is that the third difference threshold is used when deciding whether to cache a data packet, while the fourth difference threshold is used when deciding whether to discard a data packet. The third and fourth difference thresholds set for the same cache queue may be the same or different. When they are the same, the "equal to the difference threshold" case need only be kept in one of the third and fourth conditions.
In an exemplary embodiment, in step two, after it is determined that the data packet to be cached is to be cached into the target cache queue, the method further includes: updating the attribute information of all cache queues whose priority is not lower than that of the target cache queue, where these cache queues are all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.

In this embodiment, the preset threshold (number threshold or memory space threshold) of each cache queue is smaller than the total cache, and the sum of the preset thresholds of all cache queues is greater than the total cache. In other words, all cache queues share the same cache area. Therefore, after a data packet is cached, the attribute information of all cache queues whose priority is not lower than that of the target cache queue is updated so that the current cache occupancy is reflected as a whole, ensuring that data packets to be cached that subsequently enter a queue can be cached effectively.
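The threshold relationship just described can be checked with a small sketch. All names and numbers here are illustrative assumptions; the descending-order check mirrors the T_0 > T_1 > … requirement stated later in the detailed embodiment.

```python
# Illustrative check of the threshold relationship: each per-queue threshold
# is below the total cache, yet together the thresholds oversubscribe the
# total cache, because all queues share one cache area.

TOTAL_CACHE = 100                 # hypothetical total cache size
thresholds = [90, 70, 50, 30]     # hypothetical T_0 > T_1 > ... for queues 0..3

def valid_threshold_config(total, ts):
    per_queue_ok = all(t < total for t in ts)             # each T_x < total cache
    oversubscribed = sum(ts) > total                      # sum of T_x > total cache
    descending = all(a > b for a, b in zip(ts, ts[1:]))   # higher priority, larger T_x
    return per_queue_ok and oversubscribed and descending
```

For example, `[90, 70, 50, 30]` against a total cache of 100 is a valid configuration, whereas `[120, 10]` is not (the first threshold exceeds the total cache).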
In an exemplary embodiment, the method further includes: when a cached data packet is dequeued from any cache queue, updating the attribute information of all cache queues whose priority is not lower than that of the outgoing cache queue, where these cache queues are all cache queues, including the outgoing cache queue itself, whose priority is not lower than that of the outgoing cache queue.

The cache queue from which a cached data packet flows out is the outgoing cache queue. As described above, because all cache queues share the same cache area, after a data packet is dequeued, the attribute information of all cache queues whose priority is not lower than that of the outgoing cache queue is updated so that the current cache occupancy is reflected as a whole, ensuring that data packets to be cached that subsequently enter a queue can be cached effectively.

The method of the embodiments of the present application is explained in detail below.
In this embodiment, first, n queues with a priority relationship (corresponding to the queues to be cached above) are numbered 0 to n-1, where the highest-priority queue is numbered 0, the second-highest 1, the third-highest 2, and so on; the lowest-priority queue is numbered n-1.

A counter cnt_x (x = 0 to n-1) is set for each queue to record the cache depth already occupied by that queue (corresponding to the attribute information above). The counter may use any cache measurement unit, such as bits, bytes, number of fragments, or number of packets.

A drop threshold T_x (x = 0 to n-1) is set for each queue. The drop thresholds (corresponding to the preset thresholds above) satisfy T_0 > T_1 > … > T_{n-1}; that is, the drop threshold of a high-priority queue is greater than that of a low-priority queue.

When a message (corresponding to the data packet to be cached above) arrives, supposing the message has priority i, the counters and drop thresholds of all queues with x ≤ i (x = 0 to n-1) are checked. If cnt_x < T_x for every such x, the message may enter the cache; if even one cnt_x is greater than or equal to the corresponding T_x, the message is discarded.

In other embodiments, the less-than sign in the above expression may be changed to a less-than-or-equal sign; that is, cnt_x < T_x above may instead be cnt_x ≤ T_x.

If the check indicates that the message may enter the queue cache, the counters of all queues with x ≤ i (x = 0 to n-1) are increased according to the counter unit and the message length. For example, if the counter is in bits, it is increased by the number of bits in the message; if in bytes, by the number of bytes.

When a message (supposing it has priority j) is read out of the cache and the cache is released, the counters of all queues with x ≤ j (x = 0 to n-1) are correspondingly decreased according to the counter unit and the message length, following the same rule as for entering the cache.
The above scheme lets high-priority messages occupy the cache space of low-priority messages, while low-priority messages cannot occupy high-priority cache space. With this scheme, the cache space each priority can occupy is as shown in Figure 5.

As shown in Figure 5, messages of each priority can occupy at most T_x (x = 0 to n-1) of cache space; that is, each cache queue's T_x is smaller than the maximum cache space (the total cache), while the sum of the T_x of all cache queues may be greater than the maximum cache space. Any priority queue can occupy more cache space when traffic of some other priorities is absent. At the same time, when lower-priority queues are congested, a high-priority queue still has extra space available to it, so a low-priority queue exhausting the cache will not cause high-priority messages to be dropped.
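The cnt_x/T_x counter scheme above can be sketched as a small simulation. This is a minimal illustration under assumptions: the class and method names are hypothetical, and the counter unit is taken to be packet count (so each message changes every affected counter by one).

```python
class PriorityCacheSim:
    """Minimal sketch of the shared-cache counter scheme (packet-count unit).

    Queue 0 has the highest priority; drop thresholds are expected to
    satisfy T_0 > T_1 > ... > T_{n-1}.
    """

    def __init__(self, thresholds):
        self.T = list(thresholds)          # drop thresholds T_x, x = 0..n-1
        self.cnt = [0] * len(thresholds)   # counters cnt_x, zeroed at power-up

    def enqueue(self, i):
        """Try to admit a message of priority i; return True if admitted."""
        # Admit only if cnt_x < T_x for every queue x <= i.
        if all(self.cnt[x] < self.T[x] for x in range(i + 1)):
            for x in range(i + 1):
                self.cnt[x] += 1           # increase counters for all x <= i
            return True
        return False                       # otherwise the message is discarded

    def dequeue(self, j):
        """A message of priority j leaves queue Q_j; release its cache."""
        for x in range(j + 1):
            self.cnt[x] -= 1               # decrease counters for all x <= j
```

For example, with thresholds (4, 2), low-priority (priority-1) messages are admitted only while both cnt_0 < 4 and cnt_1 < 2 hold; after two of them, queue 1's allowance is used up, yet the high-priority queue can still admit two more messages, matching the behavior described for Figure 5.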
Figure 6 is a schematic diagram of the structure of the data processing apparatus provided by an embodiment of the present application. As shown in Figure 6, the data processing apparatus includes a queue depth counter module 601, an enqueue judgment module 602, an enqueue depth calculation module 603, and a dequeue depth calculation module 604, where:

the queue depth counter module 601 is configured to maintain a corresponding cache occupancy depth counter for each priority queue;

the enqueue judgment module 602 is configured to, when a message of priority i arrives, check the counters and drop thresholds of all queues with x ≤ i (x = 0 to n-1): if cnt_x < T_x for every such x, the message may enter the cache; if cnt_x ≥ T_x for any such x, the message is discarded. Depending on design requirements, the less-than sign in the above expression may be changed to a less-than-or-equal sign and, correspondingly, the greater-than-or-equal sign to a greater-than sign;

the enqueue depth calculation module 603 is configured to, when it is judged that the message may enter the cache, increase the counters of all queues with x ≤ i (x = 0 to n-1) according to the counter unit and the message length;

the dequeue depth calculation module 604 is configured to, when a message of priority j is to be read out of the cache, decrease the counters of all queues with x ≤ j (x = 0 to n-1) according to the counter unit and the message length.
An embodiment of the present application also provides a data processing method that can be run by a queue polling apparatus implemented with a sliding window. Figure 7 is a schematic flowchart of the data processing method provided by an embodiment of the present application. As shown in Figure 7, the method includes the following steps S701-S705.

Step S701: when the system is powered on, initialize each queue depth counter to 0.

Step S702: detect that a message has arrived.

Step S703: when the message enters the system, judge, according to the message's priority i, whether the counters and drop thresholds of all queues with x ≤ i (x = 0 to n-1) satisfy the expression cnt_x < T_x.

Depending on design requirements, the less-than sign in the above expression may be changed to a less-than-or-equal sign.

Step S704: if the judgment result of step S703 is yes, admit the message to the queue; when the message is enqueued, increase the counters of all queues with x ≤ i (x = 0 to n-1) according to the counter unit and the message length.

Step S705: if the judgment result of step S703 is no, discard the message.
Figure 8 is a schematic flowchart of the message read-out process in the data processing method provided by an embodiment of the present application. As shown in Figure 8, the process includes the following steps S801-S802.

Step S801: detect that a message is being read out.

Step S802: when a message is dequeued, decrease the counters of all queues with x ≤ j (x = 0 to n-1), where j is the message's priority, according to the counter unit and the message length.
The implementation of the technical solution of the embodiments of the present application is described in further detail below.

Suppose there are n priority queues; n = 5 is used here as a detailed example. From high to low priority, the queues are Q_0, Q_1, Q_2, Q_3, and Q_4. Five cache depth counters cnt_0, cnt_1, cnt_2, cnt_3, and cnt_4 are provided for the five priorities. In this example, the depth counters use packet count as the statistical unit, and the drop thresholds of the five priority queues are T_0, T_1, T_2, T_3, and T_4, satisfying T_0 > T_1 > T_2 > T_3 > T_4.
When a message of priority 4 enters the system, the following expressions are evaluated:

cnt_0 < T_0; cnt_1 < T_1; cnt_2 < T_2; cnt_3 < T_3; cnt_4 < T_4.

When all of the above conditions are satisfied, the message enters queue Q_4 and cnt_0, cnt_1, cnt_2, cnt_3, and cnt_4 are each increased by one; if any condition is not satisfied, the message is discarded and the counters are unchanged.

When a message of priority 3 enters the system, the following expressions are evaluated:

cnt_0 < T_0; cnt_1 < T_1; cnt_2 < T_2; cnt_3 < T_3.

When all of the above conditions are satisfied, the message enters queue Q_3 and cnt_0, cnt_1, cnt_2, and cnt_3 are each increased by one; if any condition is not satisfied, the message is discarded and the counters are unchanged.

When a message of priority 2 enters the system, the following expressions are evaluated:

cnt_0 < T_0; cnt_1 < T_1; cnt_2 < T_2.

When all of the above conditions are satisfied, the message enters queue Q_2 and cnt_0, cnt_1, and cnt_2 are each increased by one; if any condition is not satisfied, the message is discarded and the counters are unchanged.

When a message of priority 1 enters the system, the following expressions are evaluated:

cnt_0 < T_0; cnt_1 < T_1.

When both of the above conditions are satisfied, the message enters queue Q_1 and cnt_0 and cnt_1 are each increased by one; if either condition is not satisfied, the message is discarded and the counters are unchanged.

When a message of priority 0 enters the system, the expression cnt_0 < T_0 is evaluated.

When this condition is satisfied, the message enters queue Q_0 and cnt_0 is increased by one; if the condition is not satisfied, the message is discarded and the counter is unchanged.
When a message in the system is to be dequeued, for example:

When a message of priority 4 is dequeued from queue Q_4, cnt_0, cnt_1, cnt_2, cnt_3, and cnt_4 are each decreased by one.

When a message of priority 3 is dequeued from queue Q_3, cnt_0, cnt_1, cnt_2, and cnt_3 are each decreased by one.

When a message of priority 2 is dequeued from queue Q_2, cnt_0, cnt_1, and cnt_2 are each decreased by one.

When a message of priority 1 is dequeued from queue Q_1, cnt_0 and cnt_1 are each decreased by one.

When a message of priority 0 is dequeued from queue Q_0, cnt_0 is decreased by one.
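The n = 5 case enumerated above can be traced with a short self-contained sketch. The threshold values here are illustrative, not taken from the patent, and the function names are hypothetical.

```python
# Self-contained trace of the n = 5 example; threshold values are illustrative.
T = [5, 4, 3, 2, 1]        # T_0 > T_1 > T_2 > T_3 > T_4
cnt = [0, 0, 0, 0, 0]      # cnt_0..cnt_4, zeroed at power-up

def enqueue(i):
    """Admit a priority-i message into Q_i only if cnt_x < T_x for all x <= i."""
    if all(cnt[x] < T[x] for x in range(i + 1)):
        for x in range(i + 1):
            cnt[x] += 1    # counters cnt_0..cnt_i each increase by one
        return True
    return False           # message discarded; counters unchanged

def dequeue(j):
    """A priority-j message leaves Q_j; cnt_0..cnt_j each decrease by one."""
    for x in range(j + 1):
        cnt[x] -= 1
```

With these values, a single priority-4 message fills Q_4's allowance (cnt_4 reaches T_4 = 1), so a second priority-4 message is discarded with all counters unchanged, while a priority-0 message is still admitted because only cnt_0 < T_0 is checked for it.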
The data processing method provided by the embodiments of the present application, through priority-based preemption, reserves cache for high-priority queues while giving low-priority queues a greater opportunity to occupy cache, avoiding both the idle-cache problem of the mutually exclusive configuration and the problem of high-priority cache being occupied in the preemptive configuration of traditional queue cache allocation. With the method of the embodiments of the present application, messages of each priority can occupy at most T_x (x = 0 to n-1) of cache space, and any priority queue can occupy more cache space when traffic of some other priorities is absent. At the same time, when lower-priority queues are congested, a high-priority queue still has extra space available to it, so a low-priority queue exhausting the cache will not cause high-priority messages to be dropped.

Based on the foregoing embodiments, an embodiment of the present application provides a data processing apparatus. Each unit included in the apparatus, and each module included in each unit, may be implemented by a processor in a data processing device, or of course by a logic circuit. In implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
Figure 9 is a schematic diagram of the structure of the data processing apparatus provided by an embodiment of the present application. As shown in Figure 9, the data processing apparatus 900 includes a first determining unit 901, a second determining unit 902, and a caching unit 903, where:

the first determining unit 901 is configured to determine a set of queues to be cached according to the priority of a received data packet to be cached;

the second determining unit 902 is configured to determine a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached;

the caching unit 903 is configured to cache the data packet to be cached into the target cache queue.
In an exemplary embodiment, each queue to be cached has a preset priority. Correspondingly, the first determining unit 901 includes a comparison module and a first determining module, where:

the comparison module is configured to compare the priority of each queue to be cached with the priority of the data packet to be cached;

the first determining module is configured to determine the queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.

In an exemplary embodiment, the second determining unit 902 includes a second determining module and a third determining module, where:

the second determining module is configured to determine, for each queue to be cached in the set, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached, according to the queue's attribute information and the preset threshold corresponding to that attribute information;

the third determining module is configured to, if the determination result for each queue to be cached in the set is "allowed", determine, from the set, the queue to be cached whose priority is the same as that of the data packet to be cached as the target cache queue.
In an exemplary embodiment, the attribute information of a queue to be cached includes at least one of the following for that queue: the number of cached data packets and the memory space occupied by the cached data packets. Correspondingly, the preset threshold corresponding to the attribute information includes at least one of the following: a number threshold and a memory space threshold.

Correspondingly, the second determining module includes a first control module and/or a second control module, where:

the first control module is configured to, for each queue to be cached in the set, allow the data packet to be cached into the corresponding queue to be cached if the number of cached data packets in that queue is less than or equal to the queue's number threshold;

the second control module is configured to, for each queue to be cached in the set, allow the data packet to be cached into the corresponding queue to be cached if the memory space occupied by the cached data packets in that queue is less than or equal to the queue's memory space threshold.
In an exemplary embodiment, the apparatus further includes a third determining unit, a fourth determining unit, and an updating unit, where:

the third determining unit is configured to, when a cached data packet flows out of a cache queue, determine the priority of the cache queue holding the cached data packet as the cache priority;

the fourth determining unit is configured to determine the attribute information of the cache queues whose priority is greater than or equal to the cache priority as target attribute information;

the updating unit is configured to update the target attribute information.

In an exemplary embodiment, the apparatus further includes a first discarding unit and/or a second discarding unit, where:

the first discarding unit is configured to discard the data packet to be cached if, in the set of queues to be cached, there is at least one queue to be cached whose number of cached data packets is greater than the queue's number threshold;

the second discarding unit is configured to discard the data packet to be cached if, in the set of queues to be cached, there is at least one queue to be cached whose cached data packets occupy more memory space than the queue's memory space threshold.
It should be noted that, in the embodiments of the present application, if the above data processing method is implemented in the form of software function modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a terminal to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides a data processing device. FIG. 10 is a schematic structural diagram of the data processing device provided by an embodiment of the present application. As shown in FIG. 10, the data processing device 1000 includes at least a processor 1001, a communication interface 1002, and a storage medium 1003 configured to store executable instructions, wherein:
the processor 1001 is configured to control the overall operation of the data processing device 1000;
the communication interface 1002 is configured to enable the data processing device to communicate with other terminals or servers through a network; and
the storage medium 1003 is configured to store instructions and applications executable by the processor 1001, and may also cache data to be processed or already processed by the processor 1001 and each module in the data processing device 1000; it may be implemented by flash memory (FLASH) or random access memory (RAM).
It should be understood that reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present invention. Therefore, the appearances of "in one embodiment" or "in an embodiment" throughout the specification do not necessarily refer to the same embodiment. In addition, these particular features, structures, or characteristics may be combined in one or more embodiments in any suitable manner. It should be understood that, in the various embodiments of the present invention, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The sequence numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, herein, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art may understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc. Alternatively, if the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence, or the part contributing to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a terminal to perform all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The above descriptions are merely embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be easily conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (15)

  1. A data processing method, comprising:
    determining, by a server, a set of queues to be cached according to a priority of a received data packet to be cached;
    determining a target cache queue from the set of queues to be cached according to attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached; and
    caching the data packet to be cached into the target cache queue.
  2. The method according to claim 1, wherein each queue to be cached has a preset priority; and
    determining, by the server, the set of queues to be cached according to the priority of the data packet to be cached sent by a terminal comprises:
    comparing, by the server, the priority of each queue to be cached with the priority of the data packet to be cached; and
    determining queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.
  3. The method according to claim 1, wherein determining the target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached comprises:
    for each queue to be cached in the set of queues to be cached, determining, according to the attribute information of the queue and a preset threshold corresponding to the attribute information, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached; and
    if the determination result for each queue to be cached in the set of queues to be cached is "allowed", determining, from the set of queues to be cached, the queue to be cached whose priority is the same as the priority of the data packet to be cached as the target cache queue.
  4. The method according to claim 3, wherein
    the attribute information of a queue to be cached comprises the number of cached data packets, and the preset threshold corresponding to the attribute information comprises a count threshold;
    determining, for each queue to be cached in the set of queues to be cached, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached according to the attribute information of the queue and the preset threshold corresponding to the attribute information comprises:
    for each queue to be cached in the set of queues to be cached, allowing the data packet to be cached into the corresponding queue to be cached if the number of cached data packets in the queue is less than or equal to the count threshold of the corresponding queue;
    and/or,
    the attribute information of a queue to be cached comprises the memory space occupied by cached data packets, and the preset threshold corresponding to the attribute information comprises a memory space threshold;
    determining, for each queue to be cached in the set of queues to be cached, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached according to the attribute information of the queue and the preset threshold corresponding to the attribute information comprises:
    for each queue to be cached in the set of queues to be cached, allowing the data packet to be cached into the corresponding queue to be cached if the memory space occupied by the cached data packets in the queue is less than or equal to the memory space threshold of the corresponding queue.
  5. The method according to claim 4, further comprising:
    when a cached data packet flows out of a cache queue, determining the priority of the cache queue that caches the cached data packet as a cache priority;
    determining attribute information of cache queues whose priority is greater than or equal to the cache priority as target attribute information; and
    updating the target attribute information.
  6. The method according to claim 4, further comprising:
    if, in the set of queues to be cached, the number of cached data packets in at least one queue to be cached is greater than the count threshold of the corresponding queue to be cached, discarding the data packet to be cached;
    and/or,
    if, in the set of queues to be cached, the memory space occupied by the cached data packets in at least one queue to be cached is greater than the memory space threshold of the corresponding queue to be cached, discarding the data packet to be cached.
  7. A data processing method, comprising:
    receiving a data packet to be cached, and determining a target cache queue of corresponding priority according to the priority of the data packet to be cached; and
    determining, according to attribute information of all cache queues whose priority is not lower than that of the target cache queue, whether to cache the data packet to be cached into the target cache queue, wherein the attribute information indicates the current caching status of the corresponding cache queue.
  8. The method according to claim 7, wherein the attribute information of a cache queue comprises the number of cached data packets in the queue; and
    determining whether to cache the data packet to be cached into the target cache queue according to the attribute information of all cache queues whose priority is not lower than that of the target cache queue comprises:
    separately checking the number of cached data packets in each cache queue whose priority is not lower than that of the target cache queue; if, for every cache queue, the difference between the number of its cached data packets and the count threshold of the corresponding cache queue is greater than or equal to a first difference threshold of the corresponding cache queue, determining to cache the data packet to be cached into the target cache queue; if, for any cache queue, the difference between the number of its cached data packets and the count threshold of the corresponding cache queue is less than or equal to a second difference threshold, determining not to cache the data packet to be cached into the target cache queue; wherein the cache queues whose priority is not lower than that of the target cache queue refer to all cache queues, including the target cache queue, whose priority is not lower than that of the target cache queue.
  9. The method according to claim 7, wherein the attribute information of a cache queue comprises the memory space occupied by the cached data packets in the queue; and
    determining whether to cache the data packet to be cached into the target cache queue according to the attribute information of all cache queues whose priority is not lower than that of the target cache queue comprises:
    separately checking each cache queue whose priority is not lower than that of the target cache queue; if, for every cache queue, the difference between the memory space occupied by its cached data packets and the memory space threshold of the corresponding cache queue is greater than or equal to a third difference threshold of the corresponding cache queue, determining to cache the data packet to be cached into the target cache queue; if, for any cache queue, the difference between the memory space occupied by its cached data packets and the memory space threshold of the corresponding cache queue is less than or equal to a fourth difference threshold of the corresponding cache queue, determining not to cache the data packet to be cached into the target cache queue; wherein the cache queues whose priority is not lower than that of the target cache queue refer to all cache queues, including the target cache queue, whose priority is not lower than that of the target cache queue.
  10. The method according to claim 7, wherein, after it is determined to cache the data packet to be cached into the target cache queue, the method further comprises: updating the attribute information of all cache queues whose priority is not lower than that of the target cache queue; wherein the cache queues whose priority is not lower than that of the target cache queue refer to all cache queues, including the target cache queue, whose priority is not lower than that of the target cache queue.
  11. The method according to claim 7, further comprising:
    when a cached data packet is dequeued from any cache queue, updating the attribute information of all cache queues whose priority is not lower than that of the dequeuing cache queue; wherein the cache queues whose priority is not lower than that of the dequeuing cache queue refer to all cache queues, including the dequeuing cache queue, whose priority is not lower than that of the dequeuing cache queue.
  12. A data processing apparatus, comprising:
    a first determining unit, configured to determine a set of queues to be cached according to the priority of a received data packet to be cached;
    a second determining unit, configured to determine a target cache queue from the set of queues to be cached according to attribute information of each queue to be cached in the set of queues to be cached and the priority of the data packet to be cached; and
    a caching unit, configured to cache the data packet to be cached into the target cache queue.
  13. The apparatus according to claim 12, wherein each queue to be cached has a preset priority; and
    the first determining unit comprises:
    a comparison module, configured to compare the priority of each queue to be cached with the priority of the data packet to be cached; and
    a first determining module, configured to determine queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.
  14. A data processing device, comprising at least: a processor and a storage medium configured to store executable instructions, wherein the processor is configured to execute the stored executable instructions; and
    the executable instructions are configured to perform the data processing method provided in any one of claims 1 to 6 or 7 to 11.
  15. A storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to perform the data processing method provided in any one of claims 1 to 6 or 7 to 11.
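For claims 7 and 8, one plausible reading of the difference-threshold test is a headroom check with hysteresis: admit the packet only while every cache queue at or above the target priority retains at least a first margin below its count threshold, and refuse once any such queue's margin has shrunk to a second, smaller margin. The sketch below encodes that reading; the function and variable names, the dict layout, and the handling of the in-between case are assumptions made here, not part of the claims.

```python
def admit(target_priority, queues, first_diff, second_diff):
    """Decide whether a packet may enter the queue at `target_priority`.

    queues: dict mapping priority -> (cached_count, count_threshold).
    Only queues whose priority is not lower than the target are examined,
    which includes the target queue itself, as recited in claim 8.
    """
    relevant = [v for p, v in queues.items() if p >= target_priority]
    # Headroom = count threshold minus current cached-packet count.
    headrooms = [threshold - count for count, threshold in relevant]
    if any(h <= second_diff for h in headrooms):
        return False          # some queue is too close to its threshold
    if all(h >= first_diff for h in headrooms):
        return True           # every relevant queue has enough headroom
    return False              # in-between case: refuse conservatively
```

With `first_diff > second_diff`, the gap between the two thresholds keeps the admit/refuse decision from oscillating as a queue fills toward its limit.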
PCT/CN2019/112792 2018-12-24 2019-10-23 Data processing method, apparatus, and device, and storage medium WO2020134425A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811585237.5 2018-12-24
CN201811585237.5A CN111355673A (en) 2018-12-24 2018-12-24 Data processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2020134425A1 true WO2020134425A1 (en) 2020-07-02

Family

ID=71126853

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/112792 WO2020134425A1 (en) 2018-12-24 2019-10-23 Data processing method, apparatus, and device, and storage medium

Country Status (2)

Country Link
CN (1) CN111355673A (en)
WO (1) WO2020134425A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112202681B (en) * 2020-09-18 2022-07-29 京信网络系统股份有限公司 Data congestion processing method and device, computer equipment and storage medium
CN115209166A (en) * 2021-04-12 2022-10-18 北京字节跳动网络技术有限公司 Message sending method, device, equipment and storage medium
CN113315720B (en) * 2021-04-23 2023-02-28 深圳震有科技股份有限公司 Data flow control method, system and equipment
CN114979023A (en) * 2022-07-26 2022-08-30 浙江大华技术股份有限公司 Data transmission method, system, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US20130282984A1 (en) * 2011-11-28 2013-10-24 Huawei Technologies Co., Ltd. Data caching method and apparatus
CN104079502A (en) * 2014-06-27 2014-10-01 国家计算机网络与信息安全管理中心 Multi-user multi-queue scheduling method
CN104199790A (en) * 2014-08-21 2014-12-10 北京奇艺世纪科技有限公司 Data processing method and device
CN107450971A (en) * 2017-06-29 2017-12-08 北京五八信息技术有限公司 Task processing method and device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CA2314444A1 (en) * 1999-08-02 2001-02-02 At&T Corp. Apparatus and method for providing a high-priority service for emergency messages on a network
CN102594691B (en) * 2012-02-23 2019-02-15 中兴通讯股份有限公司 A kind of method and device handling message
CN105763481A (en) * 2014-12-19 2016-07-13 北大方正集团有限公司 Information caching method and device
CN106330760A (en) * 2015-06-30 2017-01-11 深圳市中兴微电子技术有限公司 Method and device of buffer management
CN108632169A (en) * 2017-03-21 2018-10-09 中兴通讯股份有限公司 A kind of method for ensuring service quality and field programmable gate array of fragment


Cited By (5)

Publication number Priority date Publication date Assignee Title
CN114567674A (en) * 2022-02-25 2022-05-31 腾讯科技(深圳)有限公司 Data processing method and device, computer equipment and readable storage medium
CN114567674B (en) * 2022-02-25 2024-03-15 腾讯科技(深圳)有限公司 Data processing method, device, computer equipment and readable storage medium
CN115080468A (en) * 2022-05-12 2022-09-20 珠海全志科技股份有限公司 Non-blocking information transmission method and device
CN115396384A (en) * 2022-07-28 2022-11-25 广东技术师范大学 Data packet scheduling method, system and storage medium
CN115396384B (en) * 2022-07-28 2023-11-28 广东技术师范大学 Data packet scheduling method, system and storage medium

Also Published As

Publication number Publication date
CN111355673A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
WO2020134425A1 (en) Data processing method, apparatus, and device, and storage medium
EP2466824B1 (en) Service scheduling method and device
US6496516B1 (en) Ring interface and ring network bus flow control system
US8411574B2 (en) Starvation free flow control in a shared memory switching device
US11637786B1 (en) Multi-destination traffic handling optimizations in a network device
EP4175232A1 (en) Congestion control method and device
US20150215226A1 (en) Device and Method for Packet Processing with Memories Having Different Latencies
WO2020029819A1 (en) Message processing method and apparatus, communication device, and switching circuit
US10050896B2 (en) Management of an over-subscribed shared buffer
US20040032830A1 (en) System and method for shaping traffic from a plurality of data streams using hierarchical queuing
EP3907944A1 (en) Congestion control measures in multi-host network adapter
US8879578B2 (en) Reducing store and forward delay in distributed systems
WO2021143913A1 (en) Congestion control method, apparatus and system, and storage medium
CN114531488A (en) High-efficiency cache management system facing Ethernet exchanger
CN109391559B (en) Network device
US20230013331A1 (en) Adaptive Buffering in a Distributed System with Latency/Adaptive Tail Drop
WO2021209016A1 (en) Method for processing message in network device, and related device
CN113765796B (en) Flow forwarding control method and device
WO2022174444A1 (en) Data stream transmission method and apparatus, and network device
WO2020200307A1 (en) Data package marking method and device, data transmission system
CN110708255B (en) Message control method and node equipment
US11658924B2 (en) Buffer allocation method, and device
WO2023193689A1 (en) Packet transmission method and apparatus, device, and computer-readable storage medium
WO2020114133A1 (en) Pq expansion implementation method, device, equipment and storage medium
JP2024518019A (en) Method and system for predictive analytics based buffer management - Patents.com

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19902020; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19902020; Country of ref document: EP; Kind code of ref document: A1)