CN113835611A - Storage scheduling method, device and storage medium - Google Patents

Storage scheduling method, device and storage medium

Info

Publication number
CN113835611A
CN113835611A (application CN202010582287.9A)
Authority
CN
China
Prior art keywords
target
queue
chip
storage
message
Prior art date
Legal status
Pending
Application number
CN202010582287.9A
Other languages
Chinese (zh)
Inventor
付行双
陈昌胜
仲建锋
胡达
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd filed Critical Sanechips Technology Co Ltd
Priority to CN202010582287.9A priority Critical patent/CN113835611A/en
Priority to PCT/CN2021/101809 priority patent/WO2021259321A1/en
Publication of CN113835611A publication Critical patent/CN113835611A/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 - Improving I/O performance
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 - Configuration or reconfiguration of storage systems
    • G06F3/0631 - Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]


Abstract

The embodiment of the invention discloses a storage scheduling method, a storage scheduling device and a storage medium, belonging to the technical field of communication. The method comprises the following steps: determining the sum of the length of a received target message and the current on-chip queue depth of the target queue corresponding to the target message as the update depth of the target queue; when the update depth of the target queue is greater than the moving threshold of the target queue, determining that the target message needs to be moved to an off-chip storage device for storage; if the target message is discarded while being moved to the off-chip storage device and the current on-chip storage occupancy value is smaller than a preset storage occupancy threshold, triggering message dequeuing from the target queue and discarding the dequeued messages; and subtracting the length of the dequeued messages from the current on-chip queue depth of the target queue to obtain the new current on-chip queue depth of the target queue. The storage scheduling method improves the utilization rate of the on-chip storage space and thereby improves the overall performance of the chip.

Description

Storage scheduling method, device and storage medium
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a storage scheduling method, storage scheduling equipment and a storage medium.
Background
In a network processing engine, performance indexes must be continuously improved on the one hand, while on the other hand the design is constrained by chip area, resources and process; the performance requirements cannot be met simply by raising the clock frequency or stacking resources without limit. Therefore, during system design the performance index is often achieved by increasing the number of queues and correspondingly increasing the on-chip storage space. The messages in each queue may be stored in the on-chip storage space or in an off-chip storage device. When a new message is received, its storage location needs to be scheduled.
At present, after the number of queues is increased, when a queue is congested, that is, in a scenario where messages enqueue quickly and dequeue slowly, the depth of the congested queue increases quickly and decreases slowly. When the queue depth reaches the moving threshold, the storage management module moves newly received messages to the off-chip storage device for storage. Because the queue depth increases faster than it decreases, the queue depth remains greater than the moving threshold for a period of time, so the storage management module keeps moving newly received messages to off-chip storage. During this time, messages are also dequeued from the congested queue, and because queue management is first-in first-out, the dequeued messages are ones previously stored in the on-chip storage space. After these messages are dequeued, on-chip storage space becomes free, yet the newly received messages of the congested queue are still stored off chip.
Therefore, with the current storage scheduling method, when a queue is congested, newly received messages of the congested queue cannot be stored in the on-chip storage space even when that space is free. The utilization rate of the on-chip storage space is therefore low, which affects the overall performance of the chip.
Disclosure of Invention
The embodiment of the invention mainly aims to provide a storage scheduling method, storage scheduling equipment and a storage medium, and aims to improve the utilization rate of an on-chip storage space through storage scheduling so as to improve the overall performance of a chip.
In order to achieve the above object, an embodiment of the present invention provides a storage scheduling method, where the method includes the following steps:
determining the sum of the length of a received target message and the depth of a current on-chip queue of a target queue corresponding to the target message as the update depth of the target queue;
when the updating depth of the target queue is larger than the moving threshold value of the target queue, determining that the target message needs to be moved to an off-chip storage device for storage;
if the target message is discarded while being moved to the off-chip storage device for storage, and the current on-chip storage occupancy value is smaller than a preset storage occupancy threshold, triggering message dequeuing from the target queue and discarding the dequeued messages;
and subtracting the length of the dequeued messages from the current on-chip queue depth of the target queue to obtain the new current on-chip queue depth of the target queue.
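The four steps above can be sketched as a minimal Python model. This is an illustrative sketch only: the class and parameter names (StorageScheduler, move_threshold, occupancy_threshold) are assumptions rather than terms from the patent, a single queue is modeled, and the amount of data fast-aged per round is simplified.

```python
class StorageScheduler:
    """Single-queue sketch of the claimed scheduling steps (names assumed)."""

    def __init__(self, move_threshold, occupancy_threshold):
        self.move_threshold = move_threshold            # per-queue moving threshold
        self.occupancy_threshold = occupancy_threshold  # preset storage occupancy threshold
        self.on_chip_depth = 0                          # current on-chip queue depth
        self.on_chip_occupancy = 0                      # would be a sum over all queues

    def on_receive(self, msg_len, off_chip_store_ok):
        """Return where the target message ends up: 'on-chip', 'off-chip' or 'dropped'."""
        update_depth = msg_len + self.on_chip_depth     # step 1: update depth
        if update_depth <= self.move_threshold:         # not congested: store on chip
            self.on_chip_depth = update_depth
            self.on_chip_occupancy += msg_len
            return "on-chip"
        if off_chip_store_ok:                           # step 2: move off chip
            return "off-chip"
        # step 3: the move failed, so the message is discarded; if the on-chip
        # occupancy is below the preset threshold, fast-age head-of-line messages
        if self.on_chip_occupancy < self.occupancy_threshold:
            dequeued_len = min(msg_len, self.on_chip_depth)  # simplified amount
            # step 4: new depth = old depth minus length of dequeued messages
            self.on_chip_depth -= dequeued_len
            self.on_chip_occupancy -= dequeued_len
        return "dropped"
```

Exercising the congested path with a failed off-chip move shows the fast-aging effect: the on-chip depth drops, so a later message of the same size could again fit under the moving threshold.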
To achieve the above object, an embodiment of the present invention further provides a storage scheduling apparatus, which includes a memory, a processor, a program stored in the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, where the program implements the steps of the foregoing method when executed by the processor.
To achieve the above object, an embodiment of the present invention provides a storage medium for a computer-readable storage, the storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the foregoing method.
The storage scheduling method, device and storage medium provided by this embodiment determine the sum of the length of a received target message and the current on-chip queue depth of its target queue as the update depth of the target queue; when the update depth is greater than the moving threshold of the target queue, the target message needs to be moved to an off-chip storage device for storage; if the target message is discarded during the move and the current on-chip storage occupancy value is smaller than a preset storage occupancy threshold, message dequeuing from the target queue is triggered and the dequeued messages are discarded; and the length of the dequeued messages is subtracted from the current on-chip queue depth of the target queue to obtain its new current on-chip queue depth. When the update depth of the target queue is greater than its moving threshold, that is, when the queue is congested, the method works as follows. On the one hand, if the off-chip storage device cannot successfully store the target message and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold, message dequeuing from the target queue is triggered and the dequeued messages are discarded. Because of the first-in first-out nature of the queue, the dequeued messages are necessarily messages stored in the on-chip storage space; subtracting their length from the current on-chip queue depth effectively reduces that depth, so that when the next message arrives, the sum of its length and the new current on-chip queue depth may be smaller than the moving threshold of the target queue. The newly received message can then be stored in the on-chip storage space, which improves the utilization rate of the on-chip storage space and thereby the overall performance of the chip. On the other hand, dequeuing and discarding are triggered only when the off-chip storage device cannot successfully store the target message and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold, which keeps the probability of discarding messages as low as possible and prevents the storage scheduling method from disturbing normal service operation.
Drawings
FIG. 1 is a flow diagram of a method for memory scheduling according to an embodiment;
FIG. 2 is a flowchart of a method for memory scheduling according to another embodiment;
fig. 3 is a schematic structural diagram of a storage scheduling apparatus according to an embodiment;
fig. 4 is a schematic structural diagram of a storage scheduling apparatus according to another embodiment;
fig. 5 is a schematic structural diagram of a storage scheduling apparatus according to yet another embodiment;
fig. 6 is a schematic structural diagram of a storage scheduling apparatus according to an embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the invention and are not limiting of the embodiments of the invention.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the description of the embodiments of the invention and have no special meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.
With the existing storage scheduling method, when a queue is congested, newly received messages of the congested queue cannot be stored in the on-chip storage space even when that space is idle, so the utilization rate of the on-chip storage space is low and the overall performance of the chip is affected.
The embodiment of the invention provides a storage scheduling method, which comprises the following steps: determining the sum of the length of a received target message and the current on-chip queue depth of the target queue corresponding to the target message as the update depth of the target queue; when the update depth of the target queue is greater than the moving threshold of the target queue, determining that the target message needs to be moved to an off-chip storage device for storage; if the target message is discarded while being moved to the off-chip storage device and the current on-chip storage occupancy value is smaller than a preset storage occupancy threshold, triggering message dequeuing from the target queue and discarding the dequeued messages; and subtracting the length of the dequeued messages from the current on-chip queue depth of the target queue to obtain the new current on-chip queue depth of the target queue. When the update depth of the target queue is greater than its moving threshold, that is, when the queue is congested, the method works as follows. On the one hand, if the off-chip storage device cannot successfully store the target message and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold, message dequeuing from the target queue is triggered and the dequeued messages are discarded. Because of the first-in first-out nature of the queue, the dequeued messages are necessarily messages stored in the on-chip storage space; subtracting their length from the current on-chip queue depth effectively reduces that depth, so that when the next message arrives, the sum of its length and the new current on-chip queue depth may be smaller than the moving threshold of the target queue. The newly received message can then be stored in the on-chip storage space, which improves the utilization rate of the on-chip storage space and thereby the overall performance of the chip. On the other hand, dequeuing and discarding are triggered only when the off-chip storage device cannot successfully store the target message and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold, which keeps the probability of discarding messages as low as possible and prevents the storage scheduling method from disturbing normal service operation.
Fig. 1 is a flowchart of a storage scheduling method according to an embodiment. The embodiment is suitable for a scene of scheduling the storage position of the message. The present embodiment may be performed by a storage scheduler, which may be implemented by means of software and/or hardware, which may be integrated in the communication device. As shown in fig. 1, the storage scheduling method provided in this embodiment includes the following steps:
step 101: and determining the sum of the length of the received target message and the current on-chip queue depth of the target queue corresponding to the target message as the update depth of the target queue.
The queues in this embodiment may be divided by user, that is, messages of different users belong to different queues; by service type, that is, messages of different services belong to different queues; or by other criteria, which this embodiment does not limit.
After receiving the target message, the storage scheduling apparatus needs to determine whether the target message is stored in the on-chip storage space or the off-chip storage device. The on-chip memory space in this embodiment refers to a memory space on a chip in the memory scheduling device. The on-chip Memory space in this embodiment may be a cache space, a Random Access Memory (RAM), or the like. The off-chip memory device in this embodiment may be a memory device disposed off-chip. The chip in this embodiment may be a chip in a network processing engine.
In one embodiment, the target message may carry a queue number. And after receiving the target message, determining a queue corresponding to the target message according to the queue number of the target message. For convenience of description, the queue corresponding to the target packet is referred to as a target queue. The current on-chip queue depth of a target queue refers to the length of a packet stored in the on-chip storage space in the target queue. The current on-chip queue depth of the target queue may be obtained from a module for managing the on-chip queue depth in the storage scheduling device after receiving the target packet, or may be directly calculated according to the length of the packet currently stored in the on-chip storage space of the target queue after receiving the target packet.
Optionally, the length of the packet in this embodiment may refer to the length under different metrics, such as the number of bytes of the packet, the number of slices of the packet, and the like.
The sum of the length of the received target message and the current on-chip queue depth of the target queue is determined as the update depth of the target queue. For example, assuming that the length of the target message is c0 and the current on-chip queue depth of the target queue is b0, c0 + b0 is determined as the update depth of the target queue.
Step 102: and when the update depth of the target queue is greater than the moving threshold value of the target queue, determining that the target message needs to be moved to an off-chip storage device for storage.
In an embodiment, each queue corresponds to a moving threshold, which may be preset. The moving threshold of the target queue may be set according to user requirements, or may be sent to the storage scheduling apparatus by another device.
In one implementation, when the update depth of the target queue is greater than the moving threshold of the target queue, it is indicated that the target queue is congested, and it is determined that the target packet needs to be moved to an off-chip storage device for storage.
In another implementation, when the update depth of the target queue is smaller than or equal to the moving threshold of the target queue, the target message is stored in the on-chip storage space, and the sum of the current on-chip queue depth of the target queue and the length of the target message is determined as the new current on-chip queue depth of the target queue. In this implementation, the current on-chip queue depth is updated after the target message of an uncongested queue is stored in the on-chip storage space, which improves the accuracy of the current on-chip queue depth and therefore the accuracy of subsequent storage scheduling, further improving chip performance.
It should be noted that queue messages may be stored in the on-chip storage space in a shared mode (multiple queues share the on-chip storage space), an independent mode (each queue has its own corresponding on-chip storage space), or a hybrid mode (multiple queues share part of the on-chip storage space while each queue also has its own dedicated portion). This embodiment does not limit the storage mode of queue messages in the on-chip storage space.
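The hybrid mode mentioned above (a dedicated region per queue plus a shared pool) could be modeled as follows. All names, sizes and the dedicated-first allocation order are assumptions for illustration; the patent does not specify them.

```python
class HybridOnChipStore:
    """Sketch of hybrid on-chip storage: dedicated region per queue + shared pool."""

    def __init__(self, dedicated_per_queue, shared_total):
        self.dedicated_per_queue = dedicated_per_queue
        self.shared_free = shared_total
        self.dedicated_used = {}   # queue id -> bytes used in its dedicated region

    def try_store(self, queue_id, msg_len):
        """Try the queue's dedicated region first, then the shared pool."""
        used = self.dedicated_used.get(queue_id, 0)
        if used + msg_len <= self.dedicated_per_queue:
            self.dedicated_used[queue_id] = used + msg_len
            return True
        if msg_len <= self.shared_free:
            self.shared_free -= msg_len
            return True
        return False   # no on-chip room; the caller may move the message off chip
```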
Step 103: and if the message is moved and discarded when the target message is moved to the off-chip storage device for storage, and the current on-chip storage occupation value is smaller than a preset storage occupation threshold value, triggering the dequeue of the message in the target queue and discarding the dequeue message.
In an embodiment, when the bandwidth of the off-chip storage device cannot meet the bandwidth requirement of the target packet or the storage space of the off-chip storage device is insufficient, that is, the off-chip storage device is congested, after determining that the target packet needs to be moved to the off-chip storage device for storage, the storage scheduling device cannot successfully move the target packet in the process of moving the target packet, and discards the target packet.
Meanwhile, if the current on-chip storage occupation value is smaller than the preset storage occupation threshold value, dequeuing of the messages of the target queue is triggered, and the dequeued messages are discarded. When the current on-chip storage occupation value is smaller than the preset storage occupation threshold value, the redundant storage space exists in the on-chip storage space at the moment.
It can be understood that, since the management of the queue is a first-in first-out principle, the dequeued packets in the target queue are packets located in the on-chip storage space.
Optionally, the specific process of triggering dequeuing of the packet in the target queue and discarding the dequeued packet may be: adding a discard tag to the target queue; in the queue dequeue scheduling, if the target queue is determined to have the discard label, the dequeue of the message of the target queue is triggered, and the dequeue message is discarded. This process may also be referred to as a fast aging process.
The purpose of fast aging is to quickly reduce the current on-chip queue depth of the target queue. In the normal dequeuing process, the messages in a queue must wait to be scheduled before it is determined whether they are qualified to dequeue; that is, the queues dequeue in order. When the target queue is congested, this approach may cause its current on-chip queue depth to decrease slowly or not at all.
After a discard tag is added to the target queue, each dequeue-scheduling round scans for queues carrying a discard tag; if such a queue exists, its messages are dequeued preferentially and the dequeued messages are discarded. A queue carrying a discard tag has the right to be dequeued and scheduled preferentially, without waiting to be scheduled, so the current on-chip queue depth of the target queue can be reduced rapidly.
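The discard-tag mechanism described above could be sketched as follows. The data structures and the rule of serving one tagged message per scheduling round are illustrative assumptions, not details from the patent.

```python
from collections import deque

class FastAgingScheduler:
    """Sketch: discard-tagged queues are served (and their messages dropped) first."""

    def __init__(self):
        self.queues = {}          # queue id -> deque of message lengths (FIFO)
        self.discard_tagged = set()

    def add_discard_tag(self, queue_id):
        self.discard_tagged.add(queue_id)

    def dequeue_round(self):
        """One dequeue-scheduling round; tagged queues bypass normal scheduling."""
        for qid in list(self.discard_tagged):
            q = self.queues.get(qid)
            if q:
                dropped_len = q.popleft()      # FIFO: this message was on chip
                if not q:
                    self.discard_tagged.discard(qid)
                return ("discarded", qid, dropped_len)
        # normal dequeue scheduling would go here; pick any non-empty queue
        for qid, q in self.queues.items():
            if q:
                return ("dequeued", qid, q.popleft())
        return None
```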
The current on-chip storage occupancy value refers to the sum of the current on-chip queue depths of all queues.
In one embodiment, the current on-chip memory footprint value may be sent to the memory scheduler by other modules.
In another embodiment, before step 103, the storage scheduling device needs to determine the current on-chip storage occupancy value: and determining the sum of the depths of the current on-chip queues of all the queues as a current on-chip storage occupation value. In the implementation mode, the sum of the depths of the current on-chip queues of all the queues is determined as the current on-chip storage occupation value, so that the accuracy of the current on-chip storage occupation value can be improved as much as possible.
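As stated above, the current on-chip storage occupancy value is simply the sum of the current on-chip queue depths of all queues; a trivial sketch (the dictionary layout is an assumption):

```python
def current_on_chip_occupancy(on_chip_depths):
    """on_chip_depths: queue id -> current on-chip queue depth of that queue."""
    return sum(on_chip_depths.values())
```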
Optionally, the off-chip storage device may send status information to the storage scheduling apparatus at a preset frequency. The status information is used to indicate whether the off-chip storage device currently has the capability to store the target message. Illustratively, the status information may include the current bandwidth and/or the current remaining storage space of the off-chip storage device.
Before step 103, the storage scheduling means may receive the status information sent by the off-chip storage device; if the fact that the off-chip storage device can store the target message is determined according to the state information, the target message is moved to the off-chip storage device to be stored; and if the fact that the off-chip storage equipment cannot store the target message is determined according to the state information, discarding the target message.
More specifically, if it is determined that the current bandwidth of the off-chip storage device cannot meet the required bandwidth of the target packet according to the state information, and/or it is determined that the remaining storage space of the off-chip storage device is smaller than the required storage space of the target packet according to the state information, it is determined that the off-chip storage device cannot store the target packet. And when determining that the off-chip storage equipment cannot store the target message, the storage scheduling device discards the target message.
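The capability check derived from the status information could look like the following sketch. The field names bandwidth and free_space are assumptions; the text only says the status information may include the current bandwidth and/or remaining storage space.

```python
def can_store_off_chip(status, required_bandwidth, required_space):
    """status: dict reported by the off-chip storage device at a preset frequency."""
    if status["bandwidth"] < required_bandwidth:
        return False   # current bandwidth cannot meet the message's requirement
    if status["free_space"] < required_space:
        return False   # remaining storage space is smaller than what is needed
    return True        # the off-chip device can store the target message
```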
This way of determining whether a move-and-discard has occurred is simple to implement and efficient.
Step 104: and subtracting the difference of the length of the dequeue message from the current on-chip queue depth of the target queue to determine the new current on-chip queue depth of the target queue.
In one embodiment, after step 103, the current on-chip queue depth of the target queue needs to be updated. The specific updating mode is as follows: and subtracting the difference of the length of the dequeue message from the current on-chip queue depth of the target queue to determine the new current on-chip queue depth of the target queue.
And then, when a new message of the target queue is received, the step 101 is repeatedly executed, and the new current on-chip queue depth of the target queue is used in the process, at the moment, because the current on-chip queue depth of the target queue is reduced compared with the last time, the update depth of the target queue is probably less than or equal to the moving threshold value of the target queue, the new message is probably stored in the on-chip storage space, so that the utilization rate of the on-chip storage space is improved, and further, the performance of the chip is improved.
Before executing the storage scheduling method provided by this embodiment, information configuration needs to be performed in advance. The information that needs to be configured may include: the number of queues, the moving threshold corresponding to each queue, the preset storage occupation threshold and the like.
In summary, the embodiment of the invention provides a storage scheduling method comprising the following steps: determining the sum of the length of a received target message and the current on-chip queue depth of its target queue as the update depth of the target queue; when the update depth is greater than the moving threshold of the target queue, determining that the target message needs to be moved to an off-chip storage device for storage; if the target message is discarded during the move and the current on-chip storage occupancy value is smaller than a preset storage occupancy threshold, triggering message dequeuing from the target queue and discarding the dequeued messages; and subtracting the length of the dequeued messages from the current on-chip queue depth of the target queue to obtain its new current on-chip queue depth. When the target queue is congested, that is, when its update depth exceeds its moving threshold, the method behaves as follows. On the one hand, if the off-chip storage device cannot successfully store the target message and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold, message dequeuing from the target queue is triggered and the dequeued messages are discarded; because of the first-in first-out nature of the queue, these are necessarily messages stored in the on-chip storage space, so subtracting their length effectively reduces the current on-chip queue depth. When the next message arrives, the sum of its length and the new current on-chip queue depth may fall below the moving threshold, so the newly received message is stored in the on-chip storage space, improving the utilization rate of the on-chip storage space and thereby the overall performance of the chip. On the other hand, dequeuing and discarding are triggered only under the two conditions above, which keeps the probability of discarding messages as low as possible and prevents the storage scheduling method from disturbing normal service operation.
Fig. 2 is a flowchart of a storage scheduling method according to another embodiment. The embodiment provides a detailed description of other steps included in the storage scheduling method based on the embodiment shown in fig. 1 and various alternatives. As shown in fig. 2, the storage scheduling method provided in this embodiment includes the following steps:
Step 201: judging the attribute of the target message.
Step 202: when the attribute of the target message is determined to be the mixed type, determining to execute step 205.
That is, when the attribute of the target message is determined to be the mixed type, the step of determining the sum of the length of the received target message and the current on-chip queue depth of the target queue corresponding to the target message as the update depth of the target queue is executed.
Step 203: when the attribute of the target message is determined to be the on-chip message, storing the target message in the on-chip storage space.
Step 204: when the attribute of the target message is determined to be the off-chip message, storing the target message in the off-chip storage device.
In an embodiment, the target message carries its own attribute information. The attributes of the message in this embodiment may include: mixed-type messages, on-chip messages, and off-chip messages. A mixed-type message is a message that can be stored either in the off-chip storage device or in the on-chip storage space; an on-chip message is a message that can only be stored in the on-chip storage space; and an off-chip message is a message that can only be stored in the off-chip storage device.
In this embodiment, after the target message is received, the attribute of the target message is determined first, and the subsequent operation is selected according to that attribute. In the first case, when the attribute of the target message is determined to be the mixed type, it needs to be determined whether the target message should be stored on-chip or off-chip, that is, storage scheduling needs to be performed, so step 205 is executed. In the second case, when the target message is determined to be an on-chip message, the target message is stored in the on-chip storage space without storage scheduling. In the third case, when the target message is determined to be an off-chip message, the target message is stored in the off-chip storage device without storage scheduling.
By first determining the attribute of the target message and then selecting the subsequent operation, storage scheduling is performed only when it is actually needed. This avoids reducing service processing efficiency by blindly scheduling on-chip messages to the off-chip storage device, and avoids occupying on-chip storage space by storing off-chip messages in the on-chip storage space.
It should be noted that, in this embodiment, the messages in the same queue may have the same attribute or different attributes; this embodiment is not limited in this respect.
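The attribute-based selection of steps 201 to 205 can be sketched as a simple dispatch (the attribute strings and return labels are illustrative assumptions):

```python
def dispatch(attribute):
    """Select the subsequent operation from the message attribute."""
    if attribute == "mixed":
        # mixed-type message: run the storage scheduling decision (step 205)
        return "schedule"
    if attribute == "on_chip":
        return "store_on_chip"    # store directly; no storage scheduling
    if attribute == "off_chip":
        return "store_off_chip"   # store directly; no storage scheduling
    raise ValueError("unknown message attribute: %s" % attribute)
```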
Step 205: determining the sum of the length of the received target message and the current on-chip queue depth of the target queue corresponding to the target message as the update depth of the target queue.
Step 205 is similar to the implementation process and technical principle of step 101, and is not described herein again.
Step 206: judging whether the update depth of the target queue is greater than the moving threshold of the target queue.
Step 207: when the update depth of the target queue is greater than the moving threshold of the target queue, determining that the target message needs to be moved to the off-chip storage device for storage.
Step 207 is similar to the implementation process and technical principle of step 102, and is not described here again.
Step 208: judging whether message moving discarding occurs.
In step 208, when the off-chip storage device is congested, the storage scheduling apparatus, after determining that the target message needs to be moved to the off-chip storage device for storage, may fail to move the target message successfully during the moving process and therefore discards the target message; this situation is a message moving discard.
Step 209: if message moving discarding occurs when the target message is moved to the off-chip storage device for storage, and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold, triggering dequeuing of the messages in the target queue and discarding the dequeued messages.
Step 209 is similar to the implementation process and technical principle of step 103, and is not described herein again.
In an embodiment, the preset storage occupancy threshold is a storage occupancy threshold corresponding to the target queue. Correspondingly, before the step 209, the following steps may be further included: and if message moving discarding occurs when the target message is moved to an off-chip storage device for storage, determining a storage occupation threshold corresponding to the priority of the target queue according to the priority of the target queue.
The processing mode can set the preset storage occupation threshold value which is adaptive to the priority of the queue, so as to realize different storage scheduling management aiming at different queues and improve the flexibility of storage scheduling.
Step 210: determining the difference of the current on-chip queue depth of the target queue minus the length of the dequeued message as the new current on-chip queue depth of the target queue.
Step 210 is similar to the implementation process and technical principle of step 104, and is not described herein again.
Step 211: when the update depth of the target queue is less than or equal to the moving threshold of the target queue, storing the target message in the on-chip storage space.
In step 211, the update depth of the target queue is less than or equal to the moving threshold of the target queue, which indicates that the target packet can be stored in the on-chip storage space, and therefore, the target packet is stored in the on-chip storage space.
Step 212: determining the sum of the current on-chip queue depth of the target queue and the length of the target message as the new current on-chip queue depth of the target queue.
In step 212, after the target packet is stored in the on-chip storage space, the current on-chip queue depth of the target queue needs to be updated, so as to improve the accuracy of subsequent storage scheduling. The updating mode is that the sum of the current on-chip queue depth of the target queue and the length of the target message is determined as the new current on-chip queue depth of the target queue.
Step 213: if the target message is successfully moved to the off-chip storage device for storage, determining the current on-chip queue depth of the target queue as the new current on-chip queue depth of the target queue.
In step 213, the storage scheduler also needs to update the current on-chip queue depth of the target queue after successfully moving the target packet to the off-chip storage device for storage. Since the current on-chip queue depth of the target queue is not increased in such a scenario, the current on-chip queue depth of the target queue is determined as the new current on-chip queue depth of the target queue.
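The three depth-update rules of steps 210, 212, and 213 can be collected in one helper (a sketch; the outcome labels are assumptions introduced here for illustration):

```python
def new_depth(current_depth, outcome, msg_len=0, dequeued_len=0):
    """Return the new current on-chip queue depth for each outcome."""
    if outcome == "stored_on_chip":     # step 212: depth grows by msg_len
        return current_depth + msg_len
    if outcome == "moved_off_chip":     # step 213: depth unchanged
        return current_depth
    if outcome == "fast_aged":          # step 210: depth shrinks
        return current_depth - dequeued_len
    raise ValueError("unknown outcome: %s" % outcome)
```

Keeping all three updates in one place is what lets subsequent scheduling decisions work from an accurate current on-chip queue depth.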
The following describes the scheme of this embodiment in detail with reference to a specific implementation manner of the storage scheduling apparatus. Fig. 3 is a schematic structural diagram of a storage scheduling apparatus according to an embodiment. As shown in fig. 3, the storage scheduling apparatus provided in this embodiment includes the following modules: a threshold counting module 31, a moving state counting module 32, and a moving arbitration module 33. The threshold counting module 31 and the moving state counting module 32 are both connected to the moving arbitration module 33.
The threshold counting module 31 may query the moving threshold of a queue and the storage occupancy threshold corresponding to the queue. Assuming that the number of queues is 2 and the queues are numbered 0 and 1, the threshold counting module 31 may query the moving threshold corresponding to a queue number, store it in a shared temporary register for use by the moving arbitration module 33, and query the corresponding storage occupancy threshold by queue number. Assume that the moving threshold corresponding to queue 0 is a0 and its storage occupancy threshold is s0, and that the moving threshold corresponding to queue 1 is a1 and its storage occupancy threshold is s1.
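The per-queue lookup performed by the threshold counting module can be sketched as two tables indexed by queue number (the numeric values standing in for a0/s0 and a1/s1 are illustrative assumptions):

```python
# Assumed threshold values; in the apparatus these come from configuration.
move_thresholds = {0: 100, 1: 200}  # a0, a1
occ_thresholds = {0: 150, 1: 300}   # s0, s1

def query_thresholds(queue_id):
    """Look up both thresholds by queue number, as module 31 does; the
    moving threshold would be latched in a shared temporary register
    for the moving arbitration module."""
    return move_thresholds[queue_id], occ_thresholds[queue_id]
```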
The moving state counting module 32 may determine the current on-chip queue depth of a queue, determine the update depth of the queue, and determine the new current on-chip queue depth of the queue. Further, the moving state counting module 32 may also determine the message attribute of the target message. Fig. 3 includes N+1 moving state counting modules 32, each of which may manage the current on-chip queue depths of queues in a different dimension. For example, moving state counting module 0 manages the current on-chip queue depths of Q+1 queues when the queues are divided from dimension a, and moving state counting module N manages the current on-chip queue depths of Q+1 queues when the queues are divided from dimension M. Assume that the current on-chip queue depth of queue 0 is b0 and that of queue 1 is b1; queue 0 and queue 1 here may be queues in any dimension.
The moving arbitration module 33 includes an on-chip resource determination sub-module 331 and schedulers 332. The number of schedulers 332 may be plural. Optionally, the configuration value of scheduler 0 is 2'b00, indicating that scheduler 0 performs system-level scheduling: scheduler 0 is used to calculate the consumption state of the on-chip storage space in real time, i.e., to calculate the current on-chip storage occupancy value. The other schedulers are used to schedule queues, and each queue may correspond to one scheduler. Illustratively, queue 0 corresponds to scheduler 1 and queue 1 corresponds to scheduler 2. The configuration values of scheduler 1 and scheduler 2 are 2'b01, indicating that the scheduling policy is flow-level scheduling.
Assume that the length of the message received by queue 0 is c0, the on-chip storage occupancy value at this time is c0′, and the current on-chip queue depth of queue 0 is b0. Then b0 + c0 is determined as the update depth of queue 0. If scheduler 1 determines that b0 + c0 > a0, the currently received message corresponding to queue 0 is moved to the off-chip storage device; if scheduler 1 determines that b0 + c0 is not greater than a0, the currently received message corresponding to queue 0 is stored in the on-chip storage space.
Similarly, assume that the length of the message received by queue 1 is c1, the on-chip storage occupancy value at this time is c0′ + c1′, and the current on-chip queue depth of queue 1 is b1. Then b1 + c1 is determined as the update depth of queue 1. If scheduler 2 determines that b1 + c1 > a1, the currently received message corresponding to queue 1 is moved to the off-chip storage device; if scheduler 2 determines that b1 + c1 is not greater than a1, the currently received message corresponding to queue 1 is stored in the on-chip storage space.
If the off-chip storage device is congested (for example, the bandwidth of the off-chip storage device is insufficient) while the currently received message corresponding to queue 1 is being moved to the off-chip storage device, message moving discarding may occur. At this time, if the on-chip resource determination sub-module 331 determines that there are resources on the chip, i.e., c0′ + c1′ ≤ s1, fast aging of queue 1 is triggered.
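With illustrative numbers (the values of a1, s1, b1, c1, c0′, and c1′ are assumptions chosen only to make the arithmetic concrete), the queue-1 walk-through looks like this:

```python
a1, s1 = 100, 300        # moving threshold and storage occupancy threshold
b1 = 80                  # current on-chip queue depth of queue 1
c1 = 40                  # length of the message just received by queue 1
c0_occ, c1_occ = 120, 80 # c0' and c1': on-chip occupancy contributions

update_depth = b1 + c1              # 120 > a1, so a move is required
needs_move = update_depth > a1
occupancy = c0_occ + c1_occ         # current on-chip storage occupancy value

move_discard = True                 # assume off-chip bandwidth is insufficient
# fast aging is triggered because a move-discard occurred and there are
# still resources on the chip (occupancy <= s1)
fast_age = needs_move and move_discard and occupancy <= s1
```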
As shown in fig. 3, because a move is required whenever any one of the schedulers determines it is necessary, the plurality of schedulers are connected by a logical OR. If any scheduler determines that a message needs to be moved to the off-chip storage device AND (the upper AND in fig. 3) the bandwidth of the off-chip storage device is insufficient, a moving discard occurs. If a moving discard occurs AND (the lower AND in fig. 3) there are resources on the chip, fast aging of the corresponding queue is triggered.
The moving state counting module 32 may update the current on-chip queue depth of a queue according to the storage scheduling result. Therefore, the moving state counting module is connected to the storage scheduling result, i.e., the on-chip/off-chip moving result.
After queue 1 is fast-aged, the current on-chip queue depth of queue 1 decreases. When the next message is received, the sum of the length of the newly received message and the current on-chip queue depth of queue 1 may be smaller than the moving threshold a1 corresponding to queue 1, so that the message of queue 1 is stored in the on-chip storage space; the idle on-chip storage space is thus reused and the performance of the chip is improved.
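The effect of fast aging on the next message can be checked with a short calculation (all values are illustrative assumptions):

```python
a1 = 100              # moving threshold of queue 1
depth_before = 90     # current on-chip queue depth before fast aging
aged_len = 60         # length of the dequeued, discarded message
depth_after = depth_before - aged_len  # new current on-chip queue depth

next_len = 50         # length of the next received message
fits_before = depth_before + next_len <= a1  # would have gone off chip
fits_after = depth_after + next_len <= a1    # now stays on chip
```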
Based on the storage scheduling method provided by this embodiment, the input multi-queue can be designed with a very high degree of freedom: the number of queues and the scheduling strategies can be parameterized and scaled, so the design can adapt to various system structures and data flow scheduling requirements. This greatly reduces the design difficulty of the system, improves the reliability of the structure, and also improves the portability and inheritability of the structure.
On one hand, the storage scheduling method provided by this embodiment determines the attribute of the target message before performing storage scheduling and then determines the subsequent operation, so that storage scheduling is performed only when it is needed; this avoids reducing service processing efficiency by blindly scheduling on-chip messages to the off-chip storage device and avoids occupying on-chip storage space by storing off-chip messages in the on-chip storage space. On the other hand, this embodiment updates the current on-chip queue depth of the target queue through different calculation processes according to different storage scheduling results, thereby providing an accurate basis for subsequent storage scheduling, improving the accuracy of storage scheduling, and further improving the performance of the chip.
Fig. 4 is a schematic structural diagram of a storage scheduling apparatus according to another embodiment. As shown in fig. 4, the storage scheduling apparatus provided in this embodiment includes the following modules: a first determination module 41, a second determination module 42, a triggering module 43, and a third determination module 44.
The first determining module 41 is configured to determine the sum of the length of the received target packet and the current on-chip queue depth of the target queue corresponding to the target packet as the update depth of the target queue.
And the second determining module 42 is configured to determine that the target packet needs to be moved to the off-chip storage device for storage when the update depth of the target queue is greater than the moving threshold of the target queue.
And the triggering module 43 is configured to trigger dequeuing of the messages in the target queue and discard dequeued messages if the message transfer discarding occurs when the target messages are transferred to the off-chip storage device for storage, and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold value.
Optionally, the apparatus further comprises: the device comprises a receiving module, a first storage module and a discarding module.
A receiving module configured to receive the status information sent by the off-chip storage device.
And the first storage module is configured to move the target message to the off-chip storage device for storage if the off-chip storage device is determined to be capable of storing the target message according to the state information.
And the discarding module is configured to discard the target message if the off-chip storage device cannot store the target message according to the state information.
A third determining module 44 configured to determine the current on-chip queue depth of the target queue minus the difference between the lengths of the dequeue packets as a new current on-chip queue depth of the target queue.
In one embodiment, the apparatus further comprises: and the fourth determining module is configured to determine the sum of the current on-chip queue depths of all the queues as the current on-chip storage occupancy value.
In an embodiment, in terms of triggering dequeuing of the messages in the target queue and discarding the dequeued messages, the triggering module 43 is specifically configured to: add a discard tag to the target queue; and, in queue dequeue scheduling, if it is determined that the target queue has the discard tag, trigger dequeuing of the messages of the target queue and discard the dequeued messages.
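The discard-tag behavior of the triggering module 43 can be sketched as follows (class and method names are illustrative assumptions):

```python
from collections import deque

class TaggedQueue:
    """Queue whose head is discarded during dequeue scheduling when a
    discard tag has been added (the fast-aging trigger)."""
    def __init__(self):
        self.msgs = deque()
        self.discard_tag = False

    def mark_for_aging(self):
        self.discard_tag = True      # add a discard tag to the queue

    def dequeue_schedule(self):
        msg = self.msgs.popleft()    # normal queue dequeue scheduling
        if self.discard_tag:
            self.discard_tag = False
            return None              # dequeued message is discarded
        return msg                   # otherwise deliver the message
```

Reusing the ordinary dequeue path means the discarded message is always the head of the queue, i.e., a message already held in the on-chip storage space.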
The storage scheduling apparatus provided in this embodiment is used to execute the storage scheduling method in any of the above embodiments, and the implementation principle and the technical effect of the storage scheduling apparatus provided in this embodiment are similar, and are not described here again.
Fig. 5 is a schematic structural diagram of a storage scheduling apparatus according to yet another embodiment. The embodiment of the present invention provides a detailed description of other modules included in the storage scheduling apparatus based on the embodiment shown in fig. 4 and various alternatives. As shown in fig. 5, the storage scheduling apparatus provided in this embodiment further includes the following modules: a second storage module 51, a judgment module 52, a fifth determination module 53, a third storage module 54 and a fourth storage module 55.
And the second storage module 51 is configured to store the target packet in the on-chip storage space when the update depth of the target queue is less than or equal to the moving threshold of the target queue.
The third determining module 44 is further configured to determine the sum of the current on-chip queue depth of the target queue and the length of the target packet as a new current on-chip queue depth of the target queue.
The third determining module 44 is further configured to determine the current on-chip queue depth of the target queue as a new current on-chip queue depth of the target queue if the target packet is successfully moved to the off-chip storage device for storage.
Optionally, the preset storage occupation threshold is a storage occupation threshold corresponding to the target queue. Correspondingly, the device also comprises: and the fourth determining module is configured to determine a storage occupation threshold corresponding to the priority of the target queue according to the priority of the target queue if message moving discarding occurs when the target message is moved to the off-chip storage device for storage.
A determining module 52 configured to determine the attribute of the target packet.
A fifth determining module 53, configured to, when the attribute of the target message is determined to be the mixed type, determine to execute the step of determining the sum of the length of the received target message and the current on-chip queue depth of the target queue corresponding to the target message as the update depth of the target queue.
In one embodiment, the third storage module 54 is configured to store the target packet in the on-chip storage when the attribute of the target packet is determined to be the on-chip packet.
And the fourth storage module 55 is configured to store the target message in the off-chip storage device when the attribute of the target message is determined to be the off-chip message.
The storage scheduling apparatus provided in this embodiment is used to execute the storage scheduling method in any of the above embodiments, and the implementation principle and the technical effect of the storage scheduling apparatus provided in this embodiment are similar, and are not described here again.
Fig. 6 is a schematic structural diagram of a storage scheduling device according to an embodiment. As shown in fig. 6, the storage scheduling device includes a processor 61 and a memory 62. The number of processors 61 in the storage scheduling device may be one or more; one processor 61 is taken as an example in fig. 6. The processor 61 and the memory 62 in the storage scheduling device may be connected via a bus or other means; connection via a bus is taken as an example in fig. 6.
The memory 62, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the storage scheduling method in the embodiments of the present application (for example, the first determining module 41, the second determining module 42, the triggering module 43, and the third determining module 44 in the storage scheduling apparatus). The processor 61 executes the software programs, instructions, and modules stored in the memory 62, thereby performing the various functional applications and data processing of the storage scheduling device, i.e., implementing the above-described storage scheduling method.
The memory 62 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the storage scheduling apparatus, and the like. Further, the memory 62 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a storage scheduling method, the method comprising:
determining the sum of the length of a received target message and the depth of a current on-chip queue of a target queue corresponding to the target message as the update depth of the target queue;
when the updating depth of the target queue is larger than the moving threshold value of the target queue, determining that the target message needs to be moved to an off-chip storage device for storage;
if the target message is moved to an off-chip storage device for storage, message moving discarding occurs, and the current on-chip storage occupation value is smaller than a preset storage occupation threshold value, the dequeue of the message of the target queue is triggered and the dequeue message is discarded;
and determining the difference of the current on-chip queue depth of the target queue minus the length of the dequeued message as the new current on-chip queue depth of the target queue.
Of course, the storage medium provided by the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the storage scheduling method provided by any embodiment of the present application.
The above description is only exemplary embodiments of the present application, and is not intended to limit the scope of the present application.
In general, the various embodiments of the application may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the application is not limited thereto.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media, as is well known to those skilled in the art.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, and are not intended to limit the scope of the embodiments of the invention. Any modifications, equivalents and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present invention are intended to be within the scope of the claims of the embodiments of the present invention.

Claims (11)

1. A method for memory scheduling, the method comprising:
determining the sum of the length of a received target message and the depth of a current on-chip queue of a target queue corresponding to the target message as the update depth of the target queue;
when the updating depth of the target queue is larger than the moving threshold value of the target queue, determining that the target message needs to be moved to an off-chip storage device for storage;
if the target message is moved to an off-chip storage device for storage, message moving discarding occurs, and the current on-chip storage occupation value is smaller than a preset storage occupation threshold value, the dequeue of the message of the target queue is triggered and the dequeue message is discarded;
and determining the difference of the current on-chip queue depth of the target queue minus the length of the dequeued message as the new current on-chip queue depth of the target queue.
2. The method according to claim 1, wherein if the message is moved and discarded when the target message is moved to the off-chip storage device for storage, and the current on-chip storage occupancy value is smaller than the preset storage occupancy threshold value, before the dequeuing of the message in the target queue is triggered and the dequeued message is discarded, the method further comprises:
and determining the sum of the depths of the current on-chip queues of all the queues as the current on-chip storage occupation value.
3. The method of claim 1, further comprising:
when the updating depth of the target queue is smaller than or equal to the moving threshold of the target queue, storing the target message in an on-chip storage space;
and determining the sum of the current on-chip queue depth of the target queue and the length of the target message as the new current on-chip queue depth of the target queue.
4. The method of claim 1, further comprising:
and if the target message is successfully moved to the off-chip storage device for storage, determining the current on-chip queue depth of the target queue as the new current on-chip queue depth of the target queue.
5. The method according to any one of claims 1 to 4, wherein the preset storage occupancy threshold is a storage occupancy threshold corresponding to the target queue;
before the triggering dequeuing of the message of the target queue and discarding the dequeued message, the method further includes:
and if message moving discarding occurs when the target message is moved to an off-chip storage device for storage, determining a storage occupation threshold corresponding to the priority of the target queue according to the priority of the target queue.
6. The method according to any one of claims 1 to 4, wherein before determining a sum of a length of a received target packet and a current on-chip queue depth of a target queue corresponding to the target packet as the updated depth of the target queue, the method further comprises:
judging the attribute of the target message;
and when the attribute of the target message is determined to be the mixed type, determining to execute the step of determining the sum of the length of the received target message and the current on-chip queue depth of the target queue corresponding to the target message as the update depth of the target queue.
7. The method of claim 6, wherein after determining the attributes of the target packet, the method further comprises:
when the attribute of the target message is judged to be an on-chip message, storing the target message in an on-chip memory;
and when the attribute of the target message is judged to be the off-chip message, storing the target message in the off-chip storage equipment.
8. The method according to any one of claims 1 to 4, wherein the triggering dequeuing of the packets in the target queue and discarding of the dequeued packets comprises:
adding a discard tag to the target queue;
in queue dequeue scheduling, if it is determined that the discard tag exists in the target queue, dequeuing of the messages of the target queue is triggered, and the dequeued messages are discarded.
9. The method according to any one of claims 1 to 4, wherein before triggering dequeuing of the packets in the target queue and discarding dequeued packets, if packet migration discarding occurs when the target packets are migrated to an off-chip storage device for storage and a current on-chip storage occupancy value is smaller than a preset storage occupancy threshold value, the method further comprises:
receiving state information sent by the off-chip storage device;
if it is determined from the state information that the off-chip storage device can store the target packet, migrating the target packet to the off-chip storage device for storage; and
if it is determined from the state information that the off-chip storage device cannot store the target packet, discarding the target packet.
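The migrate-or-discard decision in claim 9 reduces to checking the state information reported by the off-chip device before committing the move. A hedged sketch, assuming the state is a free-capacity figure in bytes (the patent does not specify the form of the state information):

```python
def migrate_or_discard(pkt_len, off_chip_free_bytes):
    """Decide the packet's fate from off-chip state information (illustrative)."""
    if off_chip_free_bytes >= pkt_len:
        return "migrated"   # claim 9: device can store the packet, so move it off-chip
    return "discarded"      # packet-migration discard: device cannot store it
```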
10. A storage scheduling device, characterized in that the device comprises a memory, a processor, a program stored on the memory and executable on the processor, and a data bus enabling connection and communication between the processor and the memory, wherein the program, when executed by the processor, implements the steps of the storage scheduling method according to any one of claims 1 to 9.
11. A computer-readable storage medium, wherein the storage medium stores one or more programs executable by one or more processors to implement the steps of the storage scheduling method according to any one of claims 1 to 9.
CN202010582287.9A 2020-06-23 2020-06-23 Storage scheduling method, device and storage medium Pending CN113835611A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010582287.9A CN113835611A (en) 2020-06-23 2020-06-23 Storage scheduling method, device and storage medium
PCT/CN2021/101809 WO2021259321A1 (en) 2020-06-23 2021-06-23 Storage scheduling method, device, and storage medium


Publications (1)

Publication Number Publication Date
CN113835611A true CN113835611A (en) 2021-12-24

Family

ID=78964168


Country Status (2)

Country Link
CN (1) CN113835611A (en)
WO (1) WO2021259321A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8031607B2 (en) * 2009-01-29 2011-10-04 Alcatel Lucent Implementation of internet protocol header compression with traffic management quality of service
CN103888377A (en) * 2014-03-28 2014-06-25 华为技术有限公司 Message cache method and device
CN108063653B (en) * 2016-11-08 2020-02-14 华为技术有限公司 Time delay control method, device and system
CN109729014B (en) * 2017-10-31 2023-09-12 深圳市中兴微电子技术有限公司 Message storage method and device
CN109688070A (en) * 2018-12-13 2019-04-26 迈普通信技术股份有限公司 A kind of data dispatching method, the network equipment and retransmission unit

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115277591A (en) * 2022-08-04 2022-11-01 深圳云豹智能有限公司 Message processing circuit, method, chip and computer equipment
CN115277591B (en) * 2022-08-04 2023-11-07 深圳云豹智能有限公司 Message processing circuit, method, chip and computer equipment
WO2024066257A1 (en) * 2022-09-29 2024-04-04 深圳市中兴微电子技术有限公司 Storage scheduling method and apparatus, device, and computer readable storage medium

Also Published As

Publication number Publication date
WO2021259321A1 (en) 2021-12-30

Similar Documents

Publication Publication Date Title
US9813529B2 (en) Effective circuits in packet-switched networks
CN111512602B (en) Method, equipment and system for sending message
CN109088829B (en) Data scheduling method, device, storage medium and equipment
CN107404443B (en) Queue cache resource control method and device, server and storage medium
EP4175232A1 (en) Congestion control method and device
US8457142B1 (en) Applying backpressure to a subset of nodes in a deficit weighted round robin scheduler
US10432429B1 (en) Efficient traffic management
WO2021259321A1 (en) Storage scheduling method, device, and storage medium
RU2641250C2 (en) Device and method of queue management
US20230283578A1 (en) Method for forwarding data packet, electronic device, and storage medium for the same
US7684422B1 (en) Systems and methods for congestion control using random early drop at head of buffer
US7209489B1 (en) Arrangement in a channel adapter for servicing work notifications based on link layer virtual lane processing
US8018958B1 (en) System and method for fair shared de-queue and drop arbitration in a buffer
CN116414534A (en) Task scheduling method, device, integrated circuit, network equipment and storage medium
WO2019109902A1 (en) Queue scheduling method and apparatus, communication device, and storage medium
CN116868553A (en) Dynamic network receiver driven data scheduling on a data center network for managing endpoint resources and congestion relief
CN111756586B (en) Fair bandwidth allocation method based on priority queue in data center network, switch and readable storage medium
EP1576772B1 (en) Method and apparatus for starvation-free scheduling of communications
CN112671832A (en) Forwarding task scheduling method and system for guaranteeing hierarchical time delay in virtual switch
CN112968845A (en) Bandwidth management method, device, equipment and machine-readable storage medium
CN111638986A (en) QoS queue scheduling method, device, system and readable storage medium
EP3826245A1 (en) Method and device for determining rate of packet dequeuing
CN116664377A (en) Data transmission method and related device
CN113010464A (en) Data processing apparatus and device
US11516145B2 (en) Packet control method, flow table update method, and node device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination