WO2020134425A1 - Data processing method, apparatus, device and storage medium - Google Patents

Data processing method, apparatus, device and storage medium

Info

Publication number
WO2020134425A1
WO2020134425A1 (application PCT/CN2019/112792, CN2019112792W)
Authority
WO
WIPO (PCT)
Prior art keywords
cached
queue
cache
priority
data packet
Prior art date
Application number
PCT/CN2019/112792
Other languages
English (en)
Chinese (zh)
Inventor
肖洁
廖庆磊
谢小龙
钱晓东
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2020134425A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40: Support for services or applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/61: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements

Definitions

  • The embodiments of the present application relate to the field of Internet technologies, and in particular, to a data processing method, apparatus, device, and storage medium.
  • In the related art, a buffer space is provided for each service priority.
  • When a packet arrives, the characteristic field of the packet is first parsed to resolve the service level to which the packet belongs, the packet is mapped to the corresponding service priority, and the queue cache of the priority to which the packet belongs is queried. If the cache occupancy of the corresponding priority is too large, the packet is discarded; otherwise, the packet is written to the corresponding priority queue cache.
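As a rough illustration, the conventional per-priority admission flow described above might be sketched as follows; the class, field names, and packet layout are hypothetical, not from the source:

```python
# Hypothetical sketch of conventional per-priority caching: each priority
# level owns an independent queue with a fixed cache limit; a packet is
# discarded when its priority's queue is already full.
from collections import deque

class ConventionalBuffer:
    def __init__(self, num_priorities, max_depth):
        # One independent queue per service priority (level 0 = highest).
        self.queues = [deque() for _ in range(num_priorities)]
        self.max_depth = max_depth  # fixed per-queue cache limit

    def classify(self, packet):
        # Stand-in for parsing the packet's characteristic field; here we
        # assume the packet dict carries its priority level directly.
        return packet["priority"]

    def enqueue(self, packet):
        prio = self.classify(packet)
        q = self.queues[prio]
        if len(q) >= self.max_depth:
            return False  # occupancy too large: discard the packet
        q.append(packet)
        return True

buf = ConventionalBuffer(num_priorities=4, max_depth=2)
assert buf.enqueue({"priority": 0, "data": b"a"})
assert buf.enqueue({"priority": 0, "data": b"b"})
assert not buf.enqueue({"priority": 0, "data": b"c"})  # queue 0 is full
assert buf.enqueue({"priority": 3, "data": b"d"})      # queue 3 still free
```

Because each queue's limit is independent, a full queue drops packets even while other queues sit empty, which is exactly the idle-cache problem discussed below.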
  • In the related art, caches are generally allocated independently for each queue, and cache allocation is performed according to the relationship between the total depth of the caches allocated to all queues and the actual total amount of cache.
  • Embodiments of the present application provide a data processing method, apparatus, device, and storage medium.
  • an embodiment of the present application provides a data processing method.
  • The method includes: the server determines a set of queues to be cached according to the priority of a data packet to be cached sent by a terminal; determines a target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.
  • an embodiment of the present application provides a data processing apparatus, the apparatus including:
  • the first determining unit is configured to determine the set of queues to be cached according to the priority of the received data packets to be cached;
  • the second determining unit is configured to determine the target cache queue from the queue-to-be-cached set based on the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached;
  • the cache unit is configured to cache the data packet to be cached into the target cache queue.
  • An embodiment of the present application provides a data processing device, the device at least including: a processor and a storage medium configured to store executable instructions, wherein the processor is configured to execute the stored executable instructions, and the executable instructions are configured to perform the above data processing method.
  • an embodiment of the present application provides a storage medium in which computer-executable instructions are stored, and the computer-executable instructions are configured to execute the foregoing data processing method.
  • FIG. 1 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 3 of the present application;
  • FIG. 4 is a schematic diagram of an implementation process of the data processing method provided in Embodiment 4 of the present application;
  • FIG. 5 is a schematic diagram of a relationship of cache space that can be occupied by each priority according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a data processing device provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of an implementation process of a data processing method provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of an implementation process of a message reading process in the data processing method provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a composition of a data processing device provided by an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • The suffix “module”, “part” or “unit” used to represent an element is only for convenience of description of the present application and has no specific meaning in itself. Therefore, “module”, “component” or “unit” can be used interchangeably.
  • In the related art, a buffer space is provided for each service priority.
  • When a message arrives, the characteristic field of the message is first parsed to resolve the service level to which the message belongs, the message is mapped to the corresponding service priority, and the cache of the queue to which the message belongs is queried. If the cache occupancy of the corresponding priority queue is too large, the message is discarded; otherwise, the message is written to the corresponding priority queue cache.
  • When the output port is idle, the highest-priority queue that contains packets is selected according to the service levels of the priority queues, and that queue is served first.
  • As long as the enqueue bandwidth of the high-priority queue is less than the port bandwidth, the high-priority queue will get the most timely service, ensuring the low-latency demand of the high-priority queue.
  • In the related art, the cache for each queue is generally allocated independently; that is, the sum of the maximum caches that all priority queues can occupy does not exceed the system cache maximum. In this way, when a low-priority queue cannot obtain output bandwidth and its cache is occupied, it does not affect the enqueuing and dequeuing of packets of other priority queues.
  • For example, a router sets up four priority queues and sets a maximum cache depth T for each queue, with T*4 ≤ total cache AT. If, during a certain period, only packets of two lower priorities enter the router, half of the cache depth will sit idle and be wasted. Since the input traffic may change at any time, this problem cannot be solved by adjusting the preset maximum cache depth.
  • Alternatively, a router sets up four priority queues and sets, for each queue, a maximum cache depth T with T < total cache AT and T*4 > total cache AT. Then, when only packets of two lower priorities enter the router during a certain period, the wasted cache will be less than half the depth. However, if 2T > AT and the input bandwidth is greater than the output bandwidth, the total cache will be exhausted by these two lower-priority flows. If high-priority packets resume traffic at this time, the high-priority packets cannot occupy any cache and are discarded.
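The contrast between the two configurations can be made concrete with a small numeric sketch; AT = 100 and the T values are illustrative, not taken from the source:

```python
# Numeric illustration of the two allocation schemes from the example above
# (4 priority queues, total cache AT; all concrete values are hypothetical).
AT = 100  # total cache

# Mutually exclusive configuration: per-queue depth T with 4*T <= AT.
T_mutex = AT // 4  # 25
# If only two priorities carry traffic, the other two depths sit idle:
idle = 2 * T_mutex
assert idle == 50  # half of the cache is wasted

# Preemptive configuration: T < AT but 4*T > AT, e.g. T = 60.
T_preempt = 60
assert T_preempt < AT and 4 * T_preempt > AT
# Two congested low-priority queues may then occupy 2*T > AT together,
# i.e. they can exhaust the whole cache, leaving nothing for returning
# high-priority traffic:
assert 2 * T_preempt > AT
```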
  • In summary, the mutually exclusive configuration method in the related art has the problem of idle cache, and the preemptive configuration method has the problem of the high-priority cache being occupied.
  • In view of this, embodiments of the present application provide a data processing method in which cache is preempted by priority: cache is reserved for high-priority queues while low-priority queues are given greater opportunities to occupy cache. This avoids both the idle-cache problem of the mutually exclusive configuration and the occupied high-priority-cache problem of the preemptive configuration in traditional queue cache allocation.
  • In this way, the internal cache of the device can be used more effectively, and the effect of prioritized service can be achieved.
  • FIG. 1 is a schematic flowchart of an implementation of a data processing method provided in Embodiment 1 of the present application. As shown in FIG. 1, the method includes steps S101-S103.
  • Step S101 The server determines the set of queues to be cached according to the priority of the received data packets to be cached.
  • the server may be a network server in a switching network system or a routing system, and the server may receive a data packet to be cached sent by a terminal through a router.
  • The data packet to be cached may be sent by the terminal in the form of a data packet, or may be sent by the terminal in the form of a message.
  • the server can also receive data packets to be cached sent by other servers.
  • After receiving the data packet to be cached, the server performs feature analysis on it to determine the priority corresponding to the characteristics of the data packet to be cached. For example, the priority of a video call is higher than the priority of web browsing. Assuming that the priority level corresponding to the highest priority is 0, the priority level of a video call can be set to 0. Therefore, when the server receives a video-call data packet, the feature analysis yields a priority level of 0 for the video call.
  • priority level 0 is greater than (i.e. higher than) priority level 1
  • priority level 1 is greater than priority level 2
  • priority level 2 is greater than priority level 3.
  • other levels of arrangement may also be used.
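As an illustration of the level arrangement above, the mapping from service type to priority level could be sketched as follows; the service names and all levels other than the video-call example are hypothetical:

```python
# Hypothetical mapping from a service type (derived from the packet's
# characteristic field) to a priority level, with level 0 as the highest.
SERVICE_PRIORITY = {
    "video_call": 0,     # from the example in the description
    "voice": 1,          # illustrative
    "file_transfer": 2,  # illustrative
    "web_browsing": 3,   # illustrative
}

def priority_of(service):
    # Look up the priority level assigned to a service type.
    return SERVICE_PRIORITY[service]

assert priority_of("video_call") == 0
# A numerically smaller level means a higher priority:
assert priority_of("video_call") < priority_of("web_browsing")
```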
  • the server determines the set of queues to be cached according to the priority of the data packet to be cached.
  • the set of queues to be cached includes at least one queue to be cached, and the queue to be cached is used to cache data packets, for example, the data packets to be cached.
  • the queue to be cached may already have data packets cached, and the queue to be cached may also be an empty queue that does not cache any data packets.
  • the server may divide the cache space of the data processing system into a preset number of queues to be cached in advance, and each queue to be cached is used to cache data packets that meet certain conditions.
  • The cache space can be divided into a preset number of queues to be cached that cache packets of different priorities; that is, a video call with a priority level of 0 corresponds to one queue to be cached, and web browsing with a priority level of 4 corresponds to another queue to be cached.
  • At least one queue to be cached that satisfies a preset condition may be determined among all the queues to be cached in the cache space to form the set of queues to be cached.
  • The preset condition may be that the priority is greater than or equal to (≥) the priority of the data packet to be cached.
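Under the assumption that level 0 is the highest priority, the preset condition can be sketched as a simple filter; the function name and list-based representation are illustrative:

```python
# Sketch of step S101: select all queues whose priority is greater than or
# equal to that of the packet to be cached. Since level 0 is the highest
# priority, "priority >= packet priority" means a numerically
# smaller-or-equal level.
def queues_to_be_cached(queue_levels, packet_level):
    # queue_levels: preset priority level of each queue, e.g. [0, 1, 2, 3]
    return [lvl for lvl in queue_levels if lvl <= packet_level]

# A packet of priority level 2 maps to the set of queues at levels 0, 1, 2:
assert queues_to_be_cached([0, 1, 2, 3], 2) == [0, 1, 2]
```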
  • Step S102 Determine a target cache queue from the queue-to-be-cached set according to the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached.
  • the attribute information of the queue to be cached includes at least one of the following: the number of cached data packets in the queue to be cached, and the memory space occupied by the cached data packets in the queue to be cached.
  • The number of cached data packets can be counted using the number of data packet fragments or the number of data packets as the unit of measurement; the memory space occupied by the cached data packets can be counted using bits, bytes, etc. as the unit of measurement.
  • Before determining the target cache queue, it is determined, according to the attribute information of each queue to be cached and the priority of the data packet to be cached, whether the data packet to be cached is allowed to be cached into a certain queue in the set of queues to be cached. If allowed, the data packet to be cached is then cached into the target cache queue.
  • The target cache queue is a queue to be cached selected from the set of queues to be cached. In this way, the data packet to be cached is cached into the target cache queue only when the conditions are met, so that occupation of the high-priority cache can be avoided.
  • Step S103 Cache the data packet to be cached into the target cache queue.
  • The data packet to be cached is cached into the target cache queue, so as to ensure that the data packet to be cached can be transmitted effectively.
  • In this embodiment, the server determines the set of queues to be cached according to the priority of the received data packet to be cached; determines the target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.
  • FIG. 2 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 2 of the present application. As shown in FIG. 2, the method includes steps S201-S204.
  • Step S201 The server compares the priority of each queue to be cached with the priority of the data packet to be cached.
  • each of the queues to be cached has a preset priority.
  • the cache space may be divided into multiple queues to be cached, and each queue to be cached is used to cache different data packets, for example, video call data packets, file transfer data packets, and web browsing data packets.
  • For example, the queue used to cache video-call data packets has a higher priority than the queue used to cache web-browsing data packets.
  • the priority of each queue to be cached is fixed.
  • the server compares the priority of the data packet to be cached with the priority of all queues to be cached in the system.
  • Step S202 Determine a queue to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.
  • A queue to be cached with a priority greater than or equal to the priority of the data packet to be cached is selected to form the set of queues to be cached. That is to say, the set contains at least one queue to be cached, and the priority of every queue to be cached in the set is greater than or equal to the priority of the data packet to be cached.
  • Step S203 Determine the target cache queue from the queue-to-be-cached set according to the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached.
  • Before determining the target cache queue, it is determined, according to the attribute information of each queue to be cached and the priority of the data packet to be cached, whether the data packet to be cached is allowed to be cached into a certain queue in the set of queues to be cached. If allowed, the data packet to be cached is then cached into the target cache queue.
  • The target cache queue is a queue to be cached selected from the set of queues to be cached. In this way, the data packet to be cached is cached into the target cache queue only when the conditions are met, so that occupation of the high-priority cache can be avoided.
  • Step S204 Cache the data packet to be cached into the target cache queue.
  • The data packet to be cached is cached into the target cache queue, so as to ensure that the data packet to be cached can be transmitted effectively.
  • In this embodiment, the server compares the priority of each queue to be cached with the priority of the data packet to be cached; determines the queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached as the set of queues to be cached; determines a target cache queue from the set according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached; and caches the data packet to be cached into the target cache queue.
  • Since the target cache queue is determined according to the attribute information of each queue to be cached and the priority of the data packet to be cached, cache-queue preemption by priority is achieved, thereby avoiding both the idle-cache problem and the occupied high-priority-cache problem in queue cache allocation.
  • FIG. 3 is a schematic flowchart of an implementation of the data processing method provided in Embodiment 3 of the present application. As shown in FIG. 3, the method includes steps S301-S304.
  • Step S301 The server determines the set of queues to be cached according to the priority of the data packets to be cached sent by the terminal.
  • the set of queues to be cached has at least one queue to be cached, and each of the queues to be cached has a preset priority.
  • the cache space may be divided into multiple queues to be cached, and each queue to be cached is used to cache different data packets.
  • During implementation, the priority of the data packet to be cached can be compared with the priorities of all the queues to be cached in the system, and the queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached can then be selected to form the set of queues to be cached.
  • Step S302 For each queue to be cached in the set of queues to be cached, determine, according to the attribute information of the queue to be cached and a preset threshold corresponding to the attribute information, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • the attribute information of the queue to be cached includes at least one of the following: the number of cached data packets in the queue to be cached, and the memory space occupied by the cached data packets in the queue to be cached.
  • the preset threshold corresponding to the attribute information includes at least one of the following: a quantity threshold and a memory space threshold.
  • step S302 in this embodiment includes the following three implementation methods:
  • Method 1: the attribute information of the queue to be cached is the number of cached data packets in the queue to be cached, and the preset threshold corresponding to the attribute information is the quantity threshold.
  • In this case, determining whether to allow the data packet to be cached into the corresponding queue to be cached includes: for each queue to be cached in the set, if the number of cached data packets in the queue is less than or equal to the quantity threshold of the corresponding queue to be cached, the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • Method 2: the attribute information of the queue to be cached is the memory space occupied by the cached data packets in the queue to be cached, and the preset threshold corresponding to the attribute information is the memory space threshold.
  • In this case, determining whether to allow the data packet to be cached into the corresponding queue to be cached includes: for each queue to be cached in the set, if the memory space occupied by the cached data packets in the queue is less than or equal to the memory space threshold of the corresponding queue to be cached, the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • Method 3: the attribute information of the queue to be cached is the number of cached data packets in the queue to be cached and the memory space occupied by the cached data packets, and the preset thresholds corresponding to the attribute information are the quantity threshold and the memory space threshold.
  • In this case, determining whether to allow the data packet to be cached into the corresponding queue to be cached includes: for each queue to be cached in the set, if the number of cached data packets in the queue is less than or equal to the quantity threshold of the corresponding queue to be cached, and the memory space occupied by the cached data packets is less than or equal to the memory space threshold of the corresponding queue to be cached, the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • The number of cached data packets in the attribute information of a queue to be cached may be counted using the number of data packet fragments or the number of data packets as the unit of measurement; the memory space occupied by the cached data packets may be counted using bits, bytes, etc. as the unit of measurement.
  • The preset threshold corresponding to the attribute information is determined according to the priority of the queue to be cached; that is, each queue to be cached corresponds to its own preset threshold, and the thresholds of different queues differ.
  • If the priority of a queue to be cached is higher, its preset threshold is also greater than the preset thresholds of the other queues to be cached.
  • During implementation, the preset threshold corresponding to the Nth queue to be cached may be determined according to the preset priority of the Nth queue to be cached in the set of queues to be cached, wherein, if the preset priority of the Nth queue to be cached is higher than the preset priority of the Mth queue to be cached, the preset threshold corresponding to the Nth queue to be cached is greater than the preset threshold corresponding to the Mth queue to be cached.
  • During implementation, the attribute information of all queues to be cached in the set is compared with the corresponding preset thresholds. Only when the attribute information of every queue to be cached satisfies its corresponding preset threshold under whichever of the three methods above is being executed is the data packet to be cached allowed to be cached into the corresponding queue to be cached; as soon as the attribute information of any queue to be cached fails to satisfy the executed method, caching the data packet to be cached into the corresponding queue to be cached is forbidden.
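The all-queues check of the third method might be sketched as follows; the threshold values are illustrative, chosen so that higher-priority queues get larger thresholds as the description requires:

```python
# Sketch of step S302, Method 3: the packet may be cached only if, for every
# queue in the set (priority >= packet priority, i.e. level <= packet level,
# with level 0 highest), both the cached-packet count and the occupied
# memory are within that queue's preset thresholds.
def admit(packet_level, counts, bytes_used, count_thr, byte_thr):
    for lvl in range(packet_level + 1):  # queues with priority >= packet's
        if counts[lvl] > count_thr[lvl] or bytes_used[lvl] > byte_thr[lvl]:
            return False  # any violated threshold forbids caching
    return True

# Higher priority (smaller level) => larger preset threshold, per the text:
count_thr = [8, 6, 4, 2]
byte_thr = [800, 600, 400, 200]

counts = [1, 2, 3, 0]
bytes_used = [100, 200, 300, 0]
assert admit(2, counts, bytes_used, count_thr, byte_thr)
counts[2] = 5  # level-2 count now exceeds its threshold of 4
assert not admit(2, counts, bytes_used, count_thr, byte_thr)
```

Methods 1 and 2 correspond to dropping the byte comparison or the count comparison, respectively.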
  • Step S303 For each queue to be cached in the set of queues to be cached, if the determination result is that caching is allowed, the queue to be cached whose priority is the same as that of the data packet to be cached is determined, from the set of queues to be cached, as the target cache queue.
  • the priority of the target cache queue is the same as the priority of the data packet to be cached.
  • The set of queues to be cached is determined from the queues to be cached whose priority is greater than or equal to the priority of the data packet to be cached. That is to say, the target cache queue, which has the same priority as the data packet to be cached, is the queue to be cached with the lowest priority in the set of queues to be cached.
  • step S303 can also be achieved by the following steps:
  • Step S3031 For each queue to be cached in the queue-to-be-cached set, if the determination result is allowed, from the queue-to-be-cached set, the queue to be cached with the lowest priority is determined as the target cache queue.
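A minimal sketch of this selection, assuming the set was formed from queues whose level is numerically less than or equal to the packet's level (level 0 highest), so the lowest-priority queue in the set coincides with the packet's own level:

```python
# Sketch of steps S303/S3031: once admission is allowed, the target cache
# queue is the lowest-priority queue in the set, i.e. the one whose level
# equals the packet's level (numerically largest level = lowest priority).
def target_queue(set_levels, packet_level):
    lowest = max(set_levels)       # lowest priority in the set
    assert lowest == packet_level  # it coincides with the packet's level
    return lowest

assert target_queue([0, 1, 2], 2) == 2
```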
  • the method further includes the following steps:
  • Step S310 If, in the set of queues to be cached, the number of cached data packets of at least one queue to be cached is greater than the quantity threshold of the corresponding queue to be cached, the data packet to be cached is discarded; and/or, if, in the set of queues to be cached, the memory space occupied by the cached data packets of at least one queue to be cached is greater than the memory space threshold of the corresponding queue to be cached, the data packet to be cached is discarded.
  • In an implementation process, the cases in which it is determined that the data packet to be cached should be discarded include the following:
  • The number of cached data packets of at least one queue to be cached is greater than the quantity threshold corresponding to that queue to be cached.
  • The memory space occupied by the cached data packets of at least one queue to be cached is greater than the memory space threshold corresponding to that queue to be cached.
  • Step S304 Cache the data packet to be cached into the target cache queue.
  • The data packet to be cached is cached into the target cache queue, so as to ensure that the data packet to be cached can be transmitted effectively.
  • In the data processing method provided in this embodiment, for each queue to be cached in the set of queues to be cached, it is determined, according to the attribute information of the queue to be cached and the preset threshold corresponding to the attribute information, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached.
  • Since the attribute information of each queue to be cached is judged against the corresponding preset threshold to decide whether the data packet to be cached is allowed to enter, and the target cache queue is determined and the data packet cached only when this is permitted, cache-queue preemption by priority is achieved, thereby avoiding both the idle-cache problem and the occupied high-priority-cache problem in queue cache allocation.
  • FIG. 4 is a schematic flowchart of an implementation of a data processing method provided in Embodiment 4 of the present application. As shown in FIG. 4, the method includes steps S401-S406.
  • Step S401 The server determines the set of queues to be cached according to the priority of the received data packets to be cached.
  • Step S402 Determine the target cache queue from the queue-to-be-cached set according to the attribute information of each queue-to-be-cached in the queue-to-be-cached set and the priority of the data packet to be cached.
  • Step S403 Cache the data packet to be cached into the target cache queue.
  • Steps S401 to S403 are the same as the above steps S101 to S103, and details are not described in this embodiment.
  • Step S404 When a cached data packet flows out of a cache queue, determine the priority of the cache queue that cached the cached data packet as the cache priority.
  • The cached data packet flowing out of the cache queue may flow out of any cache queue in the system; that is, the cache queue may or may not be a queue in the aforementioned set of queues to be cached. As long as a cached data packet flows out of any cache queue in the current data processing system, step S404 is performed: the priority of the cache queue that cached the outgoing cached data packet is determined, and that priority is determined as the cache priority.
  • Step S405 Determine the attribute information of the cache queue whose priority is greater than or equal to the cache priority as the target attribute information.
  • During implementation, the priorities of all cache queues in the system are compared with the cache priority; then, the attribute information of the cache queues whose priority is greater than or equal to the cache priority is determined as the target attribute information.
  • Step S406 Update the target attribute information.
  • updating the attribute information of the cache queue whose priority is greater than or equal to the cache priority includes the following three cases:
  • Case 1 When the cached data packets flowing out of the cache queue are counted by number, the number of cached data packets in the cache queue is updated.
  • Case 2 When the cached data packets flowing out of the cache queue are counted by the size of the memory space, the memory space occupied by the cached data packets in the cache queue is updated.
  • Case 3 When the cached data packets flowing out of the cache queue are counted both by number and by the size of the memory space, the number of cached data packets in the cache queue and the memory space they occupy are both updated.
  • That is, the attribute information of all cache queues whose priority is higher than or equal to that of the outgoing cache queue is updated.
  • In this embodiment, the priority of the cache queue that cached the outgoing cached data packet is determined as the cache priority, the attribute information of the cache queues whose priority is greater than or equal to the cache priority is determined as the target attribute information, and the target attribute information is updated. In this way, when a data packet flows out of a cache queue, the attribute information of the relevant cache queues is updated in time, ensuring that data packets to be cached that subsequently enter the queues can be cached effectively.
  • Through priority-based preemption, the data processing method provided by the embodiments of the present application reserves cache for high-priority queues while giving low-priority queues greater opportunities to occupy cache, thereby avoiding both the idle mutually-exclusive-configuration cache and the occupied preemptive-configuration high-priority cache of traditional queue cache allocation. In this way, the internal cache of the device can be used more effectively, and the effect of prioritized service can be achieved.
  • an embodiment of the present application further provides a data processing method, including the following steps one and two.
  • Step 1 Receive the data packet to be cached, and determine the target cache queue corresponding to the priority according to the priority of the data packet to be cached;
  • The processing device implementing the method of this embodiment may be any device that needs to process cache queues; for example, it may be a server, or it may be a terminal.
  • The data packet to be cached received by the device may be sent by a terminal, or may be sent by another server.
  • The processing device is preset with two or more cache queues, each of which corresponds to one priority level and is used to receive data packets to be cached of the corresponding priority.
  • Step 2: Determine whether to cache the data packet to be cached into the target cache queue according to the attribute information of all cache queues whose priority is not lower than that of the target cache queue, where the attribute information indicates the current cache occupancy of the corresponding cache queue.
  • In the case that the attribute information of the cache queue includes the number of cached data packets in the queue, the second step may be implemented in the following manner:
  • Here, the cache queues whose priority is not lower than that of the target cache queue refer to: all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.
  • the attribute information of the cache queue includes the number of cached data packets in the queue.
  • In addition, a number threshold is set for each cache queue to indicate the maximum number of data packets the cache queue can cache.
  • A first difference threshold and a second difference threshold may also be set for each cache queue. Both can reflect the reserved space of the current cache queue; the difference is that the first difference threshold is used when judging whether to cache a data packet, while the second difference threshold is used when judging whether to discard it.
  • the first difference threshold and the second difference threshold set for the same cache queue may be the same or different.
  • It should be noted that the “equal to the difference threshold” case needs to be kept in only one of the first condition and the second condition; that is, when the judgment in the first condition is “greater than or equal to”, the judgment in the second condition may be “less than”, and when the judgment in the first condition is “greater than”, the judgment in the second condition may be “less than or equal to”.
  • In this way, on the one hand, occupation of the high-priority cache can be avoided; on the other hand, priority caching of high-priority packets is better ensured. For example, when the number of data packets cached in a high-priority cache queue is close to its number threshold, caching of low-priority packets can be blocked so that the high-priority cache is not occupied; but when the number of data packets cached in the high-priority cache queue is not close to the threshold, the cache of the high-priority queue can still be occupied by low-priority packets, so that the high-priority cache does not sit idle.
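The count-based admission judgment above can be sketched as follows. The function and parameter names are illustrative assumptions, and queue index 0 is taken as the highest priority:

```python
def admit_by_count(counters, number_thresholds, first_diff, target):
    """Decide whether a packet destined for queue `target` may be cached.

    Admission requires that EVERY queue x with priority not lower than the
    target (x <= target, lower index = higher priority) still has headroom
    of at least its first difference threshold:
        number_thresholds[x] - counters[x] >= first_diff[x]   (first condition)
    Otherwise the packet is not cached into the target queue.
    """
    return all(number_thresholds[x] - counters[x] >= first_diff[x]
               for x in range(target + 1))
```

So a low-priority packet is refused as soon as any higher-or-equal-priority queue's remaining headroom drops below that queue's first difference threshold, which is exactly how near-full high-priority queues block low-priority traffic.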
  • the attribute information of the cache queue may further include memory space occupied by cached data packets in the queue;
  • the second step is implemented in the following manner:
  • For each cache queue whose priority is not lower than that of the target cache queue, if the difference between the memory space threshold of the queue and the memory space occupied by its cached data packets is greater than or equal to the third difference threshold of that queue (the third condition), it is determined that the data packet to be cached is cached into the target cache queue; if, for any such cache queue, the difference between its memory space threshold and the memory space occupied by its cached data packets is less than or equal to the fourth difference threshold of that queue (the fourth condition), it is determined that the data packet to be cached is not cached into the target cache queue. Here, the cache queues whose priority is not lower than that of the target cache queue refer to: all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.
  • the attribute information of the cache queue includes the memory space occupied by the cached data packets in the queue.
  • In addition, a memory space threshold is set for each cache queue to indicate the maximum memory space the cache queue can use for caching.
  • A third difference threshold and a fourth difference threshold may also be set for each cache queue. Both can reflect the reserved space of the current cache queue; the difference is that the third difference threshold is used when judging whether to cache a data packet, while the fourth difference threshold is used when judging whether to discard it.
  • the third difference threshold and the fourth difference threshold set for the same cache queue may be the same or different.
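A corresponding sketch of the memory-space variant, again with assumed names and index 0 as the highest priority:

```python
def admit_by_memory(mem_used, mem_thresholds, third_diff, target):
    """Memory-space analogue of the count-based check: admit the packet only
    if every queue x <= target (priority not lower than the target) keeps at
    least third_diff[x] units of memory headroom (the third condition)."""
    return all(mem_thresholds[x] - mem_used[x] >= third_diff[x]
               for x in range(target + 1))
```

Both checks can also be combined, requiring the packet-count condition and the memory-space condition to hold simultaneously, matching the case where the attribute information includes both statistics.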
  • In some embodiments, after it is determined in step two that the data packet to be cached is cached into the target cache queue, the method further includes: updating the attribute information of all cache queues whose priority is not lower than that of the target cache queue; here, these cache queues refer to: all cache queues, including the target cache queue itself, whose priority is not lower than that of the target cache queue.
  • In some embodiments, the preset threshold (number threshold or memory space threshold) of each cache queue is less than the total cache, while the sum of the preset thresholds of all cache queues is greater than the total cache.
  • That is, all cache queues share the same cache area, so after a data packet is cached, the attribute information of all cache queues whose priority is not lower than that of the target cache queue is updated to reflect the current cache occupancy completely and in time, ensuring that data packets to be cached that subsequently enter the queues can be cached effectively.
  • In some embodiments, the method further includes: when a cached data packet in any cache queue is dequeued, updating the attribute information of all cache queues whose priority is not lower than that of the dequeuing cache queue; here, these cache queues refer to: all cache queues, including the dequeuing cache queue itself, whose priority is not lower than that of the dequeuing cache queue.
  • The dequeuing cache queue is the cache queue from which the cached data packet flows out.
  • Similarly to enqueuing, the attribute information of all cache queues whose priority is not lower than that of the dequeuing cache queue is updated to reflect the current cache occupancy completely and in time, ensuring that subsequent data packets to be cached can be cached effectively.
  • In a specific example, n queues with a priority relationship are numbered 0 to n-1, where the highest-priority queue is numbered 0, the second-highest-priority queue is numbered 1, the third-highest-priority queue is numbered 2, and so on; the lowest-priority queue is numbered n-1.
  • The counters may use any cache measurement unit, such as bits, bytes, number of fragments, or number of packets.
  • The less-than sign in the above expression may also be changed to a less-than-or-equal sign; that is, in other embodiments, cnt_x < T_x may instead be cnt_x ≤ T_x.
  • The counters of all queues with x ≤ i are increased according to the counter unit and the message length. For example, if the counter is in units of bits, it increases by the number of bits in the message; if in units of bytes, by the number of bytes in the message.
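A minimal sketch of this counter update; the unit table and function names are assumptions, not from the patent:

```python
# Map a counter unit to the increment it contributes per message.
UNIT_DELTA = {
    "packet": lambda pkt: 1,             # count one message
    "byte":   lambda pkt: len(pkt),      # count message length in bytes
    "bit":    lambda pkt: len(pkt) * 8,  # count message length in bits
}

def on_enqueue(counters, i, pkt, unit="packet"):
    """After a message enters queue i, increase the counters of all queues
    x <= i (priority not lower than queue i) by the message size expressed
    in the chosen counter unit."""
    delta = UNIT_DELTA[unit](pkt)
    for x in range(i + 1):
        counters[x] += delta
```

The same message therefore contributes different deltas depending on the configured unit, but always to the same set of counters (its own queue and every higher-priority queue).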
  • FIG. 6 is a schematic diagram of the composition structure of the data processing device provided in the embodiment of the present application.
  • As shown in FIG. 6, the data processing device includes a queue depth counter module 601, an enqueue judgment module 602, an enqueue depth calculation module 603, and a dequeue depth calculation module 604, where:
  • the queue depth counter module 601 is configured to maintain a corresponding cache occupation depth counter for each priority queue;
  • The less-than sign in the above expression may also be changed to a less-than-or-equal sign; correspondingly, the greater-than-or-equal sign may be changed to a greater-than sign;
  • FIG. 7 is a schematic flowchart of an implementation of the data processing method provided by an embodiment of the present application. As shown in FIG. 7, the method includes the following steps S701 to S705.
  • In step S701, when the system is powered on, each queue depth counter is initialized to 0.
  • In step S702, it is detected that a message enters.
  • The less-than sign in the above expression may also be changed to a less-than-or-equal sign.
  • In step S705, if the judgment result in step S702 is NO, the message is discarded.
  • FIG. 8 is a schematic flowchart of the message reading process in the data processing method provided by an embodiment of the present application. As shown in FIG. 8, the process includes the following steps S801 and S802.
  • In step S801, it is detected that a message is read.
  • The following takes n = 5 as an example for a detailed description.
  • From high to low priority, the queues are Q0, Q1, Q2, Q3, and Q4.
  • Five cache depth counters cnt0, cnt1, cnt2, cnt3, and cnt4 are provided for the five priority levels.
  • The depth counters use the number of packets as the statistical unit, and the drop thresholds of the five priority queues are T0, T1, T2, T3, and T4, respectively, satisfying T0 > T1 > T2 > T3 > T4.
  • The message enters queue Q4, and cnt0, cnt1, cnt2, cnt3, and cnt4 are each increased by one; if any of the above conditions is not met, the message is discarded and the counters remain unchanged.
  • The message enters queue Q3, and cnt0, cnt1, cnt2, and cnt3 are each increased by one; if any of the above conditions is not met, the message is discarded and the counters remain unchanged.
  • The message enters queue Q2, and cnt0, cnt1, and cnt2 are each increased by one; if any of the above conditions is not met, the message is discarded and the counters remain unchanged.
  • The message enters queue Q1, and cnt0 and cnt1 are each increased by one; if any of the above conditions is not met, the message is discarded and the counters remain unchanged.
  • The message enters queue Q0, and cnt0 is increased by one; if the above conditions are not met, the message is discarded and the counters remain unchanged.
  • cnt0, cnt1, cnt2, cnt3, and cnt4 are each decremented by one.
  • cnt0, cnt1, cnt2, and cnt3 are each decremented by one.
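The five-queue example can be condensed into a toy model. This is a sketch under assumed names, not the patent's hardware implementation; lower index means higher priority:

```python
class PriorityBuffer:
    """Toy model of the five-queue example: a packet for queue i is admitted
    only if cnt_x < T_x for every x <= i; on admission cnt_0..cnt_i are each
    incremented, and when a message is read from queue i, cnt_0..cnt_i are
    each decremented."""

    def __init__(self, thresholds):
        self.T = list(thresholds)        # drop thresholds, T0 > T1 > ... > T4
        self.cnt = [0] * len(thresholds)

    def enqueue(self, i):
        if all(self.cnt[x] < self.T[x] for x in range(i + 1)):
            for x in range(i + 1):
                self.cnt[x] += 1
            return True                  # message cached
        return False                     # message dropped, counters unchanged

    def dequeue(self, i):
        for x in range(i + 1):
            self.cnt[x] -= 1
```

With T = (5, 4, 3, 2, 1), one Q4 packet fills Q4's entire allowance (cnt4 reaches T4), so a second Q4 packet is dropped, while Q0 still has headroom and keeps accepting traffic, illustrating the reserved high-priority space.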
  • The data processing method provided by the embodiments of the present application, through priority preemption, reserves cache for high-priority queues while giving low-priority queues a greater chance of occupying cache, avoiding both the cache idling of the mutually exclusive configuration mode and the high-priority cache occupation of the preemptive configuration mode in traditional queue cache allocation.
  • The high-priority queues still have ample space available to them, and the low-priority queues will not exhaust the cache and cause high-priority packets to be lost.
  • Embodiments of the present application further provide a data processing apparatus, including the units it comprises and the modules comprised in each unit, which may be implemented by a processor in a data processing device;
  • the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • FIG. 9 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present application.
  • the data processing apparatus 900 includes a first determining unit 901, a second determining unit 902, and a cache unit 903, where:
  • the first determining unit 901 is configured to determine the set of queues to be cached according to the priority of the received data packets to be cached;
  • the second determining unit 902 is configured to determine the target cache queue from the set of queues to be cached according to the attribute information of each queue to be cached in the set and the priority of the data packet to be cached;
  • the cache unit 903 is configured to cache the data packet to be cached into the target cache queue.
  • each of the queues to be cached has a preset priority; correspondingly, the first determination unit 901 includes a comparison module and a first determination module, where:
  • the comparison module is configured to compare the priority of each queue to be cached with the priority of the data packet to be cached;
  • the first determining module is configured to determine a queue to be cached with a priority greater than or equal to the priority of the data packet to be cached as the set of queues to be cached.
  • the second determination unit 902 includes a second determination module and a third determination module, where:
  • the second determining module is configured to determine, for each queue to be cached in the set of queues to be cached, based on the attribute information of each queue to be cached and the preset threshold corresponding to the attribute information, whether the data packet to be cached is allowed to be cached into the corresponding queue to be cached;
  • the third determining module is configured to, when the determination result for each queue to be cached in the set is "allowed", determine, from the set of queues to be cached, the queue to be cached whose priority is the same as that of the data packet to be cached as the target cache queue.
  • In some embodiments, the attribute information of a queue to be cached includes at least one of the following for that queue: the number of cached data packets and the memory space occupied by the cached data packets; correspondingly, the preset threshold corresponding to the attribute information includes at least one of the following: a number threshold and a memory space threshold;
  • the second determination module includes a first control module and/or a second control module, wherein:
  • the first control module is configured to, for each queue to be cached in the set, allow the data packet to be cached to be cached into the corresponding queue if the number of cached data packets in each queue to be cached is less than or equal to the number threshold of the corresponding queue;
  • the second control module is configured to, for each queue to be cached in the set, allow the data packet to be cached to be cached into the corresponding queue if the memory space occupied by the cached data packets in each queue to be cached is less than or equal to the memory space threshold of the corresponding queue.
  • the apparatus further includes a third determination unit, a fourth determination unit, and an update unit, where:
  • the third determining unit is configured to, when a cached data packet flows out of a cache queue, determine the priority of the cache queue that cached the data packet as the cache priority;
  • the fourth determining unit is configured to determine the attribute information of the cache queue whose priority is greater than or equal to the cache priority as the target attribute information;
  • the update unit is configured to update the target attribute information.
  • the apparatus further includes a first discarding unit and/or a second discarding unit, wherein:
  • the first discarding unit is configured to discard the data packet to be cached if the number of cached data packets in at least one queue to be cached in the set is greater than the number threshold of the corresponding queue;
  • the second discarding unit is configured to discard the data packet to be cached if the memory space occupied by the cached data packets in at least one queue to be cached in the set is greater than the memory space threshold of the corresponding queue.
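How units 901 through 903 and the discarding units could fit together might be sketched as follows. The data shapes, names, and numeric-priority convention (larger number means higher priority) are all assumptions for illustration:

```python
def determine_target(queues, pkt_priority):
    """Return the target queue for a packet of pkt_priority, or None if the
    packet should be discarded. Each queue is a dict with 'priority'
    (larger = higher), 'count', and 'count_thr' (number threshold)."""
    # first determining unit 901: queues with priority >= packet priority
    candidates = [q for q in queues if q["priority"] >= pkt_priority]
    # second determining unit 902 / first control module: every candidate
    # must be within its number threshold, else the discarding unit acts
    if not all(q["count"] <= q["count_thr"] for q in candidates):
        return None
    # target cache queue: the candidate with the same priority as the packet
    return next((q for q in candidates if q["priority"] == pkt_priority), None)
```

The memory-space check of the second control module would follow the same shape, comparing an occupied-memory field against a memory space threshold.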
  • It should be noted that, in the embodiments of the present application, if the above data processing method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the related art, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a terminal to execute all or part of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. In this way, the embodiments of the present invention are not limited to any specific combination of hardware and software.
  • FIG. 10 is a schematic structural diagram of the composition of the data processing device provided by the embodiment of the present application.
  • the data processing device 1000 includes at least: a processor 1001, a communication interface 1002, and a storage medium 1003 configured to store executable instructions, where:
  • the processor 1001 is configured to control the overall operation of the data processing device 1000.
  • the communication interface 1002 is configured to enable the data processing device to communicate with other terminals or servers through the network.
  • the storage medium 1003 is configured to store instructions and applications executable by the processor 1001, and may also cache data to be processed or already processed by the processor 1001 and by the modules in the data processing device 1000; it may be implemented by flash memory (FLASH) or random access memory (RAM).
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a division of logical functions.
  • The mutual coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a unit separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
  • the foregoing program may be stored in a computer-readable storage medium.
  • When the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.
  • Alternatively, if the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the related art, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a terminal to execute all or part of the methods described in the embodiments of the present invention.
  • the foregoing storage media include various media that can store program codes, such as mobile storage devices, ROM, magnetic disks, or optical disks.


Abstract

Embodiments of the present invention relate to a data processing method, apparatus, and device, and a storage medium. The method comprises the following steps: a server determines, according to the priorities of received data packets to be cached, a set of queues available for caching; it determines a target cache queue from said set of queues according to attribute information of each queue available for caching in said set and the priorities of said data packets; and it caches said data packets into the target cache queue.
PCT/CN2019/112792 2018-12-24 2019-10-23 Data processing method, apparatus and device, and storage medium WO2020134425A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811585237.5A CN111355673A (zh) 2018-12-24 2018-12-24 一种数据处理方法、装置、设备及存储介质
CN201811585237.5 2018-12-24

Publications (1)

Publication Number Publication Date
WO2020134425A1 true WO2020134425A1 (fr) 2020-07-02

Family

ID=71126853

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/112792 WO2020134425A1 (fr) 2018-12-24 2019-10-23 Data processing method, apparatus and device, and storage medium

Country Status (2)

Country Link
CN (1) CN111355673A (fr)
WO (1) WO2020134425A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114567674A (zh) * 2022-02-25 2022-05-31 腾讯科技(深圳)有限公司 一种数据处理方法、装置、计算机设备以及可读存储介质
CN115080468A (zh) * 2022-05-12 2022-09-20 珠海全志科技股份有限公司 一种非阻塞的信息传输方法和装置
CN115396384A (zh) * 2022-07-28 2022-11-25 广东技术师范大学 一种数据包调度方法、系统及存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112202681B (zh) * 2020-09-18 2022-07-29 京信网络系统股份有限公司 数据拥塞处理方法、装置、计算机设备和存储介质
CN115209166A (zh) * 2021-04-12 2022-10-18 北京字节跳动网络技术有限公司 一种消息发送方法、装置、设备和存储介质
CN113315720B (zh) * 2021-04-23 2023-02-28 深圳震有科技股份有限公司 一种数据流控制方法、系统及设备
CN114979023A (zh) * 2022-07-26 2022-08-30 浙江大华技术股份有限公司 一种数据传输方法、系统、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130282984A1 (en) * 2011-11-28 2013-10-24 Huawei Technologies Co., Ltd. Data caching method and apparatus
CN104079502A (zh) * 2014-06-27 2014-10-01 国家计算机网络与信息安全管理中心 一种多用户多队列调度方法
CN104199790A (zh) * 2014-08-21 2014-12-10 北京奇艺世纪科技有限公司 数据处理方法及装置
CN107450971A (zh) * 2017-06-29 2017-12-08 北京五八信息技术有限公司 任务处理方法及装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2314444A1 (fr) * 1999-08-02 2001-02-02 At&T Corp. Appareil et methode permettant d'assurer un service a haute priorite pour les messages d'urgence d'un reseau
CN102594691B (zh) * 2012-02-23 2019-02-15 中兴通讯股份有限公司 一种处理报文的方法及装置
CN105763481A (zh) * 2014-12-19 2016-07-13 北大方正集团有限公司 一种信息缓存方法及装置
CN106330760A (zh) * 2015-06-30 2017-01-11 深圳市中兴微电子技术有限公司 一种缓存管理的方法和装置
CN108632169A (zh) * 2017-03-21 2018-10-09 中兴通讯股份有限公司 一种分片的服务质量保证方法及现场可编程逻辑门阵列


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114567674A (zh) * 2022-02-25 2022-05-31 腾讯科技(深圳)有限公司 一种数据处理方法、装置、计算机设备以及可读存储介质
CN114567674B (zh) * 2022-02-25 2024-03-15 腾讯科技(深圳)有限公司 一种数据处理方法、装置、计算机设备以及可读存储介质
CN115080468A (zh) * 2022-05-12 2022-09-20 珠海全志科技股份有限公司 一种非阻塞的信息传输方法和装置
CN115396384A (zh) * 2022-07-28 2022-11-25 广东技术师范大学 一种数据包调度方法、系统及存储介质
CN115396384B (zh) * 2022-07-28 2023-11-28 广东技术师范大学 一种数据包调度方法、系统及存储介质

Also Published As

Publication number Publication date
CN111355673A (zh) 2020-06-30

Similar Documents

Publication Publication Date Title
WO2020134425A1 (fr) Data processing method, apparatus and device, and storage medium
EP2466824B1 (fr) Dispositif et procédé de programmation de services
US8411574B2 (en) Starvation free flow control in a shared memory switching device
US11637786B1 (en) Multi-destination traffic handling optimizations in a network device
EP4175232A1 (fr) Procédé et dispositif de gestion de congestion
US20150215226A1 (en) Device and Method for Packet Processing with Memories Having Different Latencies
US7272150B2 (en) System and method for shaping traffic from a plurality of data streams using hierarchical queuing
WO2020029819A1 (fr) Procédé et appareil de traitement de message, dispositif de communication et circuit de commutation
US10050896B2 (en) Management of an over-subscribed shared buffer
EP3907944A1 (fr) Mesures de contrôle de la congestion dans un adaptateur de réseau multi-hôte
US8879578B2 (en) Reducing store and forward delay in distributed systems
WO2021143913A1 (fr) Procédé, appareil et système de gestion de congestion, et support de stockage
CN114531488A (zh) 一种面向以太网交换器的高效缓存管理系统
CN109391559B (zh) 网络设备
US20230013331A1 (en) Adaptive Buffering in a Distributed System with Latency/Adaptive Tail Drop
WO2021209016A1 (fr) Procédé de traitement de message dans un dispositif de réseau, et dispositif associé
CN113765796B (zh) 流量转发控制方法及装置
WO2022174444A1 (fr) Procédé et appareil de transmission de flux de données, et dispositif de réseau
WO2020200307A1 (fr) Procédé et dispositif de marquage de paquet de données, système de transmission de données
CN110708255B (zh) 一种报文控制方法及节点设备
US11658924B2 (en) Buffer allocation method, and device
WO2023193689A1 (fr) Procédé et appareil de transmission de paquets, dispositif et support de stockage lisible par ordinateur
WO2020114133A1 (fr) Procédé de mise en œuvre d'expansion de pq, dispositif, équipement et support d'informations
JP2024518019A (ja) 予測分析に基づくバッファ管理のための方法およびシステム
CN117749726A (zh) Tsn交换机输出端口优先级队列混合调度方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19902020

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19902020

Country of ref document: EP

Kind code of ref document: A1