CN117834556A - Multi-queue organization and scheduling method, system, storage medium and electronic equipment

Info

Publication number: CN117834556A
Application number: CN202311732559.9A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: queue, data packet, scheduling, length, entry
Inventor: 阮召崧
Applicant/Assignee: Nanjing Jinzhen Microelectronics Technology Co., Ltd.
Filing date: 2023-12-15
Publication date: 2024-04-05
Legal status: Pending


Classifications

    • H04L 47/6295 - Traffic control in data switching networks; queue scheduling characterised by scheduling criteria, using multiple queues, one for each individual QoS, connection, flow or priority
    • H04L 47/628 - Traffic control in data switching networks; queue scheduling characterised by scheduling criteria for service slots or service orders, based on packet size, e.g. shortest packet first
    • H04L 49/901 - Packet switching elements; buffering arrangements using storage descriptor, e.g. read or write pointers

(All classes fall under H: Electricity; H04: Electric communication technique; H04L: Transmission of digital information, e.g. telegraphic communication.)


Abstract

The application provides a multi-queue organization and scheduling method, a system, a storage medium and an electronic device. The multi-queue organization and scheduling method comprises the following steps: writing a data packet into a queue in the URAM memory space based on the state of the queue; and scheduling the queue according to the time for which the data packet has been stored in the queue. In the present application, different data packets are written into the queues in sequence according to enqueue requests, and queues are selected based on output control requests to satisfy multi-queue scheduling, thereby achieving efficient management of complex network queues.

Description

Multi-queue organization and scheduling method, system, storage medium and electronic equipment
Technical Field
The application belongs to the technical field of computers, and relates to a multi-queue organization and scheduling method, a system, a storage medium and electronic equipment.
Background
Multi-queue scheduling can be classified into priority scheduling and non-priority scheduling. In priority scheduling, the scheduling order is determined by the priority of each queue. In non-priority scheduling, there is no priority distinction among queues, and each queue usually must be scheduled fairly to guarantee fairness of scheduling.
As networks develop, network queues become increasingly complex. Multi-queue scheduling over a complex network can ensure fairness of scheduling, but it reduces scheduling efficiency and occupies more resources.
Disclosure of Invention
The purpose of the application is to provide a multi-queue organization and scheduling method, a system, a storage medium and an electronic device, which are used for solving the technical problems in the prior art that multi-queue organization and scheduling under a complex network is inefficient and complex to manage.
In a first aspect, the present application provides a multi-queue organization and scheduling method, the method comprising: writing the data packet into the queue in the URAM memory space based on the state of the queue; and scheduling the queue according to the time for which the data packet has been stored in the queue.
In one implementation manner of the first aspect, the writing of the data packet into the queue in the URAM memory space based on the state of the queue includes: acquiring the head pointer and tail pointer of the queue to check the state of the queue; when the state of the queue is empty, writing the current data packet into the queue and updating the head pointer and tail pointer of the queue; and when the state of the queue is not empty and the length of the enqueuing data packet is greater than the available space of the current entry, adding the next entry to the queue and writing the enqueuing data packet into the queue in sequence.
In an implementation manner of the first aspect, before writing the data packet into the queue, the method further includes: acquiring the length of the data packet to determine whether the queue has enough space to store it; discarding the data packet when its length is greater than the available space of the queue; and otherwise, writing the data packet into the queue.
In one implementation manner of the first aspect, the available space of the queue is determined as follows: when the queue is a guaranteed queue, the available space is the available space of the entry where the queue is located plus the remaining entries of the guaranteed queue's free list FIFO; when the queue is a non-guaranteed queue, the available space is the available space of the entry where the queue is located plus the remaining entries of the shared free list FIFO.
In an implementation manner of the first aspect, the scheduling of the queue according to the time for which the data packet has been stored in the queue includes: reading the head pointer and tail pointer of the queue based on the current queue number of the queue; and outputting the data packets in the queue in first-in-first-out order so as to schedule the queue.
In an implementation manner of the first aspect, the outputting of the data packets in the queue in first-in-first-out order includes: judging the state of the queue according to the queue number; when the queue has a conflict and is empty after dequeuing, if the dequeue length of the data packet is smaller than the available length of the queue, updating the head pointer of the queue with the next cell count within the current entry; otherwise, updating the head pointer of the queue with the entry number obtained from the free list FIFO and clearing the cell count of the current entry; and when the queue is empty after a conflict-free dequeue, or the dequeue length of the data packet is smaller than the available length of the queue, updating the head pointer of the queue with the next cell count within the current entry.
In one implementation manner of the first aspect, the length of an entry is 4KB; when, upon dequeue, the length of the data packet is greater than or equal to the available length within the 4KB entry, or when the queue is empty after the data packet dequeues and the queue has no conflict, the current entry of the queue is released and written into the shared free list FIFO or the free list FIFO of the guaranteed queue.
In a second aspect, the present application provides a multi-queue organization and scheduling system, the system comprising: a queue management module, configured to write the data packet into the queue in the URAM memory space based on the state of the queue; and a traffic management output module, configured to schedule the queue according to the time for which the data packet has been stored in the queue.
In a third aspect, the present application provides a computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by an electronic device, implements the multi-queue organization and scheduling method according to any one of the first aspects of the present application.
In a fourth aspect, the present application provides an electronic device, including a processor and a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory, so that the electronic device performs the multi-queue organizing and scheduling method according to any one of the first aspect of the present application.
As described above, the multi-queue organization and scheduling method, system, storage medium and electronic device of the present application have the following beneficial effects:
in the method, different data packets are stored into queues based on the states of the queues; the queue manager divides the queues into guaranteed queues and non-guaranteed queues; data packets are carried in a shared memory, which improves the utilization of the memory space; and queues are selected based on output control requests to satisfy multi-queue scheduling, thereby achieving efficient management of complex network queues.
Drawings
Fig. 1 is a schematic flow chart of a multi-queue organizing and scheduling method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a packet write queue according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of multi-queue scheduling according to an embodiment of the present application.
Fig. 4 is a schematic diagram of the time slots of a data packet during enqueuing/dequeuing according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of multi-queue scheduling according to another embodiment of the present application.
FIG. 6 is a schematic diagram of a multi-queue organization and scheduling system according to an embodiment of the present application.
FIG. 7 shows a diagram of a multi-queue organization and scheduling architecture as described in embodiments of the present application.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Description of element reference numerals
100. Multi-queue organization and scheduling system
10. Queue management module
20. Traffic management output module
30. URAM memory space
81. Processing unit
82. Memory
821. Random access memory
822. Cache memory
823. Storage system
824. Program/utility
8241. Program module
83. Bus
84. Input/output interface
85. Network adapter
S1-S2 steps
S11 to S12 steps
S21 to S22 steps
S221 to S222 steps
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the present disclosure, when the following description of the embodiments is taken in conjunction with the accompanying drawings. The present application may be embodied or carried out in other specific embodiments, and the details of the present application may be modified or changed from various points of view and applications without departing from the spirit of the present application. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict.
It should be noted that, the illustrations provided in the following embodiments merely illustrate the basic concepts of the application by way of illustration, and only the components related to the application are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complex.
The embodiment of the application provides a multi-queue organization and scheduling method, a system, a storage medium and electronic equipment, wherein different data packets are stored into queues based on the state of the queues, the queues are divided into guaranteed queues and non-guaranteed queues based on a queue manager, the queues are selected based on output control requests, and efficient management of complex network queues is achieved.
Next, a detailed description will be given of a technical solution in the embodiment of the present application with reference to the drawings in the embodiment of the present application.
As shown in fig. 1, in one embodiment, the multi-queue organization and scheduling method described herein includes the following steps:
and step S1, writing the data packet into the queue in the URAM memory space based on the state of the queue.
Specifically, a data packet is first stored in the packet buffer, and a packet descriptor corresponding to the data packet is generated by the queue management module based on the queue number of the queue and the length of the data packet. The queue manager in the queue management module supports 256 queues, i.e., the queue numbers range from 0 to 255.
Specifically, in this embodiment, the internal storage space of the URAM memory space is 2MB, and a maximum of 64 guaranteed queues can be configured, each with a guaranteed buffer of 16KB. That is, the guaranteed queue numbers range from 0 to 63.
In the URAM memory space, the remaining memory, beyond that occupied by the guaranteed queues, is shared by the other queues.
Specifically, the memory of the URAM memory space consists of 512 entries of 4KB each, divided into 64B-granularity units for writing/reading the data packets of each queue. That is, the memory of any one entry is divided into 64 cells of size 64B.
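To make the geometry concrete, the following C sketch models the buffer layout described above; the constants come from this embodiment, while all identifiers are illustrative assumptions rather than names used by the design itself.

```c
#include <stdint.h>

/* Buffer geometry of this embodiment (identifiers are illustrative). */
#define URAM_BYTES        (2u * 1024u * 1024u)         /* 2MB URAM space  */
#define ENTRY_BYTES       4096u                        /* one entry = 4KB */
#define CELL_BYTES        64u                          /* 64B granularity */
#define NUM_ENTRIES       (URAM_BYTES / ENTRY_BYTES)   /* = 512 entries   */
#define CELLS_PER_ENTRY   (ENTRY_BYTES / CELL_BYTES)   /* = 64 cells      */
#define NUM_QUEUES        256u                         /* numbers 0..255  */
#define MAX_GUARANTEED_Q  64u                          /* numbers 0..63   */
#define GUARANTEED_BYTES  (16u * 1024u)  /* 16KB buffer per guaranteed queue */

/* Number of 64B cells needed to hold a packet of len bytes. */
static inline uint32_t cells_for_len(uint32_t len) {
    return (len + CELL_BYTES - 1u) / CELL_BYTES;
}
```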
As shown in fig. 2, in one embodiment, the writing of the data packet into the queue in the URAM memory space based on the state of the queue includes the following steps:
step S11, a head pointer and a tail pointer of the queue are acquired to check the state of the queue.
Specifically, each queue is associated with a head pointer and a tail pointer, whose format is shown in Table 1 below.
Table 1: Head/tail pointer format of a queue
bit 15    used   : all of the 4KB entry used
bit 14:6  ptr    : 4KB entry pointer
bit 5:0   wd_cnt : 64B cell count within the 4KB entry
The {ptr, wd_cnt} of the head pointer points to the first cell in the URAM entry that has not yet been dequeued, and the {ptr, wd_cnt} of the tail pointer points to the cell at which the next enqueue will take place. For the head pointer, a set used bit indicates that no cells in the current entry remain to be read; for the tail pointer, a set used bit indicates that no cells in the current entry remain to be written.
Specifically, when the {used, ptr, wd_cnt} of the head pointer is the same as the {used, ptr, wd_cnt} of the tail pointer, the current queue is empty.
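As a software model of this format, the following sketch packs the three fields of Table 1 into a 16-bit word; the helper names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* 16-bit head/tail pointer: used[15] | ptr[14:6] | wd_cnt[5:0] (Table 1). */
typedef uint16_t qptr_t;

static inline qptr_t qptr_make(bool used, uint16_t ptr, uint8_t wd_cnt) {
    return (qptr_t)(((uint16_t)used << 15) | ((ptr & 0x1FFu) << 6) |
                    (wd_cnt & 0x3Fu));
}
static inline bool     qptr_used(qptr_t p)  { return (p >> 15) & 1u;    }
static inline uint16_t qptr_ptr(qptr_t p)   { return (p >> 6) & 0x1FFu; }
static inline uint8_t  qptr_wdcnt(qptr_t p) { return (uint8_t)(p & 0x3Fu); }

/* A queue is empty when head and tail agree in all three fields. */
static inline bool queue_empty(qptr_t head, qptr_t tail) {
    return head == tail;
}
```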
Step S12: when the state of the queue is empty, writing the current data packet into the queue and updating the head pointer and tail pointer of the queue; when the state of the queue is not empty and the length of the enqueuing data packet is greater than the available space of the current entry, adding the next entry to the queue and writing the enqueuing data packet into the queue in sequence.
Specifically, when the {used, ptr, wd_cnt} of the head pointer of the queue is the same as the {used, ptr, wd_cnt} of the tail pointer, the queue is empty; the ptr of the tail pointer then gives the entry number in effect when the packet enters the queue, and within the current entry the available space is ~wd_cnt + 1 (i.e., 64 - wd_cnt cells).
The head pointer of the queue is updated with {1'd0, new entry number, 6'd0}, and the tail pointer is updated with the new cell count and entry number.
Specifically, when the state of the queue is empty, the free list FIFO is read to acquire a new entry, which is stored into the queue; if the packet length is greater than the available space of the newly acquired entry, the free list FIFO is read again to acquire further entries.
Specifically, when the state of the queue is not empty and the length of the enqueuing data packet is greater than the available space of the current entry, the linked list is updated and the next entry is stored into the queue through the linked list.
Further, when the length of the enqueuing data packet is greater than the available space of the current entry plus 4KB, the free list FIFO must be read to acquire a new entry, which is stored into the queue, and the linked list is updated.
It should be noted that when a data packet is written into a queue, if the packet length is greater than one 4KB entry but not greater than two 4KB entries, the queue generates two write requests; otherwise only one write request is generated.
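The enqueue path can be summarized by the following sketch, which tracks only the pointer updates; free-list pops, linked-list updates, and the actual payload writes are delegated to caller-supplied helpers, and the whole model is an illustrative assumption rather than the hardware design itself.

```c
#include <stdbool.h>
#include <stdint.h>

#define CELLS_PER_ENTRY 64u

/* Unpacked model of a queue pointer (see Table 1). */
typedef struct { bool used; uint16_t entry; uint8_t wd_cnt; } qptr;
typedef struct { qptr head, tail; } queue_t;

static bool q_empty(const queue_t *q) {
    return q->head.used == q->tail.used && q->head.entry == q->tail.entry &&
           q->head.wd_cnt == q->tail.wd_cnt;
}

/* Writable cells left in the entry the tail currently occupies. */
static uint32_t tail_room(const queue_t *q) {
    return q->tail.used ? 0u : CELLS_PER_ENTRY - q->tail.wd_cnt;
}

/* Enqueue a packet occupying pkt_cells 64B cells. next_entry() pops the
 * shared or guaranteed free list FIFO; link() appends a 4KB entry to the
 * queue's linked list. Payload writes are elided. */
static void enqueue(queue_t *q, uint32_t pkt_cells,
                    uint16_t (*next_entry)(void),
                    void (*link)(uint16_t from, uint16_t to)) {
    if (q_empty(q)) {                     /* empty queue: take a fresh entry */
        uint16_t e = next_entry();
        q->head = (qptr){ false, e, 0 };  /* {1'd0, new entry number, 6'd0}  */
        q->tail = q->head;
    }
    while (pkt_cells > 0u) {
        if (tail_room(q) == 0u) {         /* current entry full: chain next  */
            uint16_t e = next_entry();
            link(q->tail.entry, e);
            q->tail = (qptr){ false, e, 0 };
        }
        uint32_t room = tail_room(q);
        uint32_t n = pkt_cells < room ? pkt_cells : room;
        q->tail.wd_cnt += (uint8_t)n;
        if (q->tail.wd_cnt == CELLS_PER_ENTRY) {
            q->tail = (qptr){ true, q->tail.entry, 0 };  /* entry exhausted */
        }
        pkt_cells -= n;
    }
}
```

For example, enqueuing a 4.5KB packet into an empty queue pops one entry from the free list, fills its 64 cells, then chains a second entry for the remaining 8 cells, matching the two write requests noted above.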
In one embodiment, before writing the data packet into the queue, the method further includes: acquiring the length of the data packet to determine whether the queue has enough space to store it.
Specifically, the length of the data packet is obtained from the packet descriptor generated by the queue management module. When the length of the data packet is greater than the available space of the queue, the data packet is discarded; otherwise, the data packet is written into the queue.
Specifically, the available space of the queue is determined as follows: when the queue is a guaranteed queue, the available space is the available space of the entry where the queue is located plus the remaining entries of the guaranteed queue's free list FIFO; when the queue is a non-guaranteed queue, the available space is the available space of the entry where the queue is located plus the remaining entries of the shared free list FIFO. The space of any entry is 4KB.
The free list FIFO holds the numbers of the available 4KB entries of the shared memory for guaranteed and non-guaranteed queues; the shared memory in the URAM memory space is allocated through the free list FIFO.
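A minimal sketch of this admission check, assuming the caller supplies the free space of the current entry and the depth of the relevant free list FIFO (the guaranteed queue's own list for guaranteed queues, the shared list otherwise); the names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define ENTRY_BYTES 4096u

/* Decide whether a packet of pkt_len bytes fits in the queue.
 * entry_free_bytes: unused space in the entry the tail currently occupies;
 * free_entries: remaining 4KB entries in the relevant free list FIFO.
 * Packets that do not fit are discarded. */
static bool admit_packet(uint32_t pkt_len,
                         uint32_t entry_free_bytes,
                         uint32_t free_entries) {
    uint64_t avail = (uint64_t)entry_free_bytes +
                     (uint64_t)free_entries * ENTRY_BYTES;
    return pkt_len <= avail;   /* false => drop the packet */
}
```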
Specifically, the free list of a guaranteed queue uses read/write pointers to track the available 4KB entries of that queue, and up to four 4KB entries may be used by each guaranteed queue. The format of the read/write pointers of a guaranteed queue's free list is shown in Table 2.
Table 2: Read/write pointer format of the free list of a guaranteed queue
bit 2:0   read/write ptr : 4KB entry number
The number of 4KB entries available to any guaranteed queue equals 4 - (read_ptr - write_ptr).
Specifically, multiple entries belonging to the same guaranteed queue are chained together through a linked list.
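Following Table 2, the free list of one guaranteed queue can be modeled as a tiny ring of entry numbers indexed by 3-bit pointers; the ring depth of 8 and the modulo arithmetic are assumptions consistent with a 3-bit pointer, not details stated by the text.

```c
#include <stdint.h>

/* Free list of one guaranteed queue: a ring of 4KB entry numbers tracked
 * by 3-bit read/write pointers (Table 2); at most four entries back the
 * 16KB guaranteed buffer. */
typedef struct {
    uint16_t slot[8];    /* ring storage indexed by the 3-bit pointers */
    uint8_t  read_ptr;   /* 3-bit read pointer                         */
    uint8_t  write_ptr;  /* 3-bit write pointer                        */
} gq_free_list;

/* Entries available to the guaranteed queue, per the formula in the text:
 * 4 - (read_ptr - write_ptr), evaluated modulo 8 on the 3-bit pointers. */
static uint8_t gq_entries_avail(const gq_free_list *fl) {
    return (uint8_t)(4u - ((fl->read_ptr - fl->write_ptr) & 0x7u));
}
```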
Step S2: scheduling the queue according to the time for which the data packet has been stored in the queue.
Specifically, according to an output control request, a queue number corresponding to the output control request is obtained, and the queue is scheduled.
As shown in fig. 3, in one embodiment, the scheduling of the queue according to the time for which the data packet has been stored in the queue includes the following steps:
and step S21, reading a head pointer and a tail pointer of the queue based on the current queue number of the queue.
Specifically, the ptr of the head pointer of the queue gives the number of the entry currently being dequeued, within which the available read length is 64 minus wd_cnt cells. If, after the data packet dequeues, the updated {used, ptr, wd_cnt} of the head pointer is the same as the {used, ptr, wd_cnt} of the tail pointer, the queue is empty.
Step S22: outputting the data packets in the queue in first-in-first-out order so as to schedule the queue.
Specifically, dequeuing of the data packets is performed in the order in which the data packets are written into the queue, so as to schedule the queue.
It should be noted that enqueue is the operation of adding an element to the tail of a queue, while dequeue is the operation of removing an element from the head of the queue and returning its value. Because the structure is first-in-first-out, the element that entered the queue first is processed first; when the queue is not empty, the dequeue operation returns the earliest inserted element.
In one embodiment, as shown in FIG. 4, a time slot diagram of a packet during enqueuing/dequeuing is shown.
At a frequency of 300 MHz, one packet enqueue/dequeue completes every 6 cycles, i.e., up to 50 million enqueue/dequeue operations per second: when enqueued/dequeued, a packet starts processing at global slot 0 and continues through the end of global slot 5.
Specifically, as shown in fig. 4, when packet a starts enqueuing from global slot 0, packet B starts dequeuing from global slot 0.
It should be noted that, for the data packet a, if it needs to read a free list FIFO again, it needs to continue enqueuing the data packet a from the global slot 0, otherwise, enqueuing a new data packet from the global slot 0; similarly, for packet B, if it needs to read a free list FIFO again, it needs to continue dequeuing the packet B from global slot 0, otherwise, dequeuing a new packet begins.
As shown in fig. 5, in an embodiment, the outputting of the data packets in the queue in first-in-first-out order includes the following steps:
step S221, judging the state of the queue according to the serial number of the queue.
Specifically, if the number of the dequeuing queue is the same as the number of the queue currently being written, the queue is considered to conflict with the queue being written; otherwise there is no conflict.
Step S222: when the queue has a conflict and is empty after dequeuing, if the length of the data packet is smaller than the available length of the queue, the head pointer of the queue is updated with the next cell count within the current entry; otherwise, the head pointer of the queue is updated with the entry number obtained from the free list FIFO, and the cell count of the current entry is cleared; and when the queue is empty after a conflict-free dequeue, or the dequeue length of the data packet is smaller than the available length of the queue, the head pointer of the queue is updated with the next cell count within the current entry.
Specifically, the length of an entry is 4KB; when, upon dequeue, the length of the data packet is greater than or equal to the available length within the 4KB entry, or when the queue is empty after the data packet dequeues and the queue has no conflict, the current entry of the queue is released and written into the shared free list FIFO or the free list FIFO of the guaranteed queue.
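The dequeue-side head pointer update can be sketched as follows. The sketch folds the conflict and conflict-free branches together: the replacement entry number is taken from a caller-supplied source, which per the text is either the queue's linked list or the free list FIFO, and the release() callback returns an exhausted entry to the shared or guaranteed free list. All names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { bool used; uint16_t entry; uint8_t wd_cnt; } qptr;

/* Update the head pointer after dequeuing deq_cells 64B cells.
 * avail_cells: readable cells left in the current entry (64 - wd_cnt);
 * next_entry(): next entry number (linked list or free list FIFO);
 * release(): write an exhausted entry back to its free list FIFO. */
static void dequeue_update_head(qptr *head, uint32_t deq_cells,
                                uint32_t avail_cells,
                                uint16_t (*next_entry)(void),
                                void (*release)(uint16_t entry)) {
    if (deq_cells < avail_cells) {
        /* Packet ends inside the current 4KB entry: advance the count. */
        head->wd_cnt += (uint8_t)deq_cells;
    } else {
        /* Entry exhausted: release it, move to the next entry, clear count. */
        release(head->entry);
        head->entry  = next_entry();
        head->wd_cnt = 0u;
        head->used   = false;
    }
}
```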
As shown in fig. 4 and 7, before a data packet enters a queue, it is stored in the packet buffer and a packet descriptor for it is generated; the data packet is then written into the queue in the URAM memory space in sequence based on a write control request. A read control request is issued based on an output control instruction, the queue requiring the read operation is obtained from the read control request, and the data packet is output according to the time for which it has been stored in the queue.
Here, {Q#, addr, len} denotes the queue number, write address, and data packet length of a write request, and {Q#, len} denotes the queue number and data packet length of a read request.
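For illustration only, these two request formats could be modeled as the following structures; the field widths are assumptions chosen to match the 256 queues and 4KB entries above.

```c
#include <stdint.h>

/* Write-side request {Q#, addr, len} (field widths are assumptions). */
typedef struct {
    uint8_t  q_num; /* queue number, 0..255             */
    uint32_t addr;  /* write address in the URAM space  */
    uint16_t len;   /* data packet length in bytes      */
} write_req;

/* Read-side request {Q#, len}. */
typedef struct {
    uint8_t  q_num; /* queue number, 0..255        */
    uint16_t len;   /* data packet length in bytes */
} read_req;
```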
The protection scope of the multi-queue organizing and scheduling method according to the embodiments of the present application is not limited to the execution sequence of the steps listed in the embodiments, and all the schemes implemented by adding or removing steps and replacing steps according to the principles of the present application in the prior art are included in the protection scope of the present application.
The present application also provides a multi-queue organization and scheduling system. As shown in fig. 6, the multi-queue organization and scheduling system 100 described in the present application includes:
a queue management module 10, configured to write a data packet into a queue in the URAM memory space 30 based on the state of the queue; and a traffic management output module 20, configured to schedule the queue according to the time for which the data packet has been stored in the queue.
Specifically, in one embodiment, as shown in fig. 7, the data packets are stored in the packet buffer before queuing, the queue management module 10 generates packet descriptors of the data packets based on the number of the queues and the length of the data packets, and writes the data packets into the queues in the URAM memory space 30 based on the state of the queues, and the traffic management output module 20 schedules the queues in the order of first in first out based on the output control instructions.
It should be noted that, for the specific working principle of the multi-queue organization and scheduling system of the present application, reference may be made to the description of the working principle of the multi-queue organization and scheduling method above, and details are not repeated herein.
It should be noted that, it should be understood that the above division of each module is merely a division of a logic function, and may be fully or partially integrated into one physical entity or may be physically separated. And these modules may all be implemented in software in the form of calls by the processing element; or can be realized in hardware; the method can also be realized in a form of calling software by a processing element, and the method can be realized in a form of hardware by a part of modules. For example, the x module may be a processing element that is set up separately, may be implemented in a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and the function of the x module may be called and executed by a processing element of the apparatus. The implementation of the other modules is similar. In addition, all or part of the modules can be integrated together or can be independently implemented. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in a software form.
For example, the modules above may be one or more integrated circuits configured to implement the methods above, such as: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more microprocessors (Digital Signal Processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), or the like. For another example, when a module above is implemented in the form of a processing element scheduler code, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or other processor that may invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Embodiments of the present application also provide a computer-readable storage medium. Those of ordinary skill in the art will appreciate that all or part of the steps in the method implementing the above embodiments may be implemented by a program to instruct a processor, where the program may be stored in a computer readable storage medium, where the storage medium is a non-transitory (non-transitory) medium, such as a random access memory, a read only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape (magnetic tape), a floppy disk (floppy disk), an optical disk (optical disk), and any combination thereof. The storage media may be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (digital video disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The embodiment of the application also provides electronic equipment, which comprises: a processor and a memory.
In particular, the memory is for storing a computer program; the memory includes: various media capable of storing program codes, such as ROM, RAM, magnetic disk, U-disk, memory card, or optical disk.
The processor is used for executing the computer program stored in the memory, so that the electronic device performs the multi-queue organization and scheduling method described above.
Preferably, the processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field programmable gate arrays (Field Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
As shown in fig. 8, the electronic device of the present application is embodied in the form of a general-purpose computing device. Components of the electronic device may include, but are not limited to: one or more processors or processing units 81, a memory 82, and a bus 83 connecting the various system components (including the memory 82 and the processing unit 81).
Bus 83 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic devices typically include a variety of computer system readable media. Such media can be any available media that can be accessed by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 82 may include computer system readable media in the form of volatile memory such as Random Access Memory (RAM) 821 and/or cache memory 822. The electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 823 may be used to read from or write to non-removable, non-volatile magnetic media (not shown in FIG. 8, commonly referred to as a "hard disk drive"). Although not shown in fig. 8, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be coupled to bus 83 via one or more data medium interfaces. The memory 82 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
A program/utility 824 having a set (at least one) of program modules 8241 may be stored, for example, in the memory 82. Such program modules 8241 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 8241 generally carry out the functions and/or methods of the embodiments described herein.
The electronic device may also communicate with one or more external devices (e.g., keyboard, pointing device, display, etc.), with one or more devices that enable a user to interact with the electronic device, and/or with any device (e.g., network card, modem, etc.) that enables the electronic device to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 84. And the electronic device may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet, through network adapter 85. As shown in fig. 8, the network adapter 85 communicates with other modules of the electronic device over the bus 83. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with an electronic device, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The descriptions of the processes or structures corresponding to the drawings have emphasis, and the descriptions of other processes or structures may be referred to for the parts of a certain process or structure that are not described in detail.
In summary, in the present application, different data packets are stored into queues based on the states of the queues; the queue manager divides the queues into guaranteed queues and non-guaranteed queues; data packets are carried in a shared memory, which improves the utilization of the memory space; and queues are selected based on output control requests to satisfy multi-queue scheduling, thereby achieving efficient management of complex network queues. Therefore, the present application effectively overcomes various disadvantages in the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles of the present application and their effects, and are not intended to limit the application. Those skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications and variations accomplished by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of this application.

Claims (10)

1. A method of multi-queue organization and scheduling, the method comprising:
writing the data packet into the queue in the URAM memory space based on the state of the queue;
and scheduling the queue according to the time for which the data packet has been stored in the queue.
2. The multi-queue organization and scheduling method of claim 1, wherein the writing of the data packet into the queue in the URAM memory space based on the state of the queue comprises:
acquiring a head pointer and a tail pointer of the queue to check the state of the queue;
when the state of the queue is empty, writing the current data packet into the queue and updating a head pointer and a tail pointer of the queue;
when the state of the queue is not empty and the length of the enqueuing data packet is greater than the available space of the current entry, adding the next entry to the queue, and writing the enqueuing data packet into the queue in sequence.
3. The multi-queue organization and scheduling method of claim 2, further comprising, prior to writing the data packet to the queue: acquiring the length of the data packet to judge whether the queue has enough space to store the data packet;
discarding the data packet when the length of the data packet is greater than the available space of the queue;
and otherwise, writing the data packet into the queue.
4. A multi-queue organization and scheduling method according to claim 3 wherein the available space of the queue comprises:
when the queue is a guaranteed queue, the available space is the available space of the entry where the queue is located plus the remaining entries of the guaranteed queue's free list FIFO;
when the queue is a non-guaranteed queue, the available space is the available space of the entry where the queue is located plus the remaining entries of the shared free list FIFO.
5. The multi-queue organization and scheduling method of claim 1, wherein the scheduling of the queue according to the time for which the data packet has been stored in the queue comprises:
reading a head pointer and a tail pointer of the queue based on the current queue number of the queue;
and outputting the data packets in the queue according to the first-in-first-out order so as to schedule the queue.
6. The multi-queue organization and scheduling method according to claim 5, wherein the outputting of the data packets in the queue in first-in-first-out order comprises:
judging the state of the queue according to the queue number;
when the queue has a conflict and is empty after dequeuing, if the dequeue length of the data packet is smaller than the available length of the queue, updating the head pointer of the queue with the next cell count within the current entry; otherwise, updating the head pointer of the queue with the entry number obtained from the free list FIFO, and clearing the cell count of the current entry;
and when the queue is empty after a conflict-free dequeue, or the dequeue length of the data packet is smaller than the available length of the queue, updating the head pointer of the queue with the next cell count within the current entry.
7. The method of claim 6, wherein the length of the entry is 4KB, and when the length of the data packet on dequeue is greater than or equal to the available length within the 4KB entry, or when the queue is empty after the data packet dequeues and the queue has no conflict, the current entry of the queue is released and written into the shared free list FIFO or the free list FIFO of a guaranteed queue.
8. A multi-queue organization and scheduling system, the system comprising:
the queue management module is used for writing the data packet into the queue in the URAM memory space based on the state of the queue;
and the traffic management output module is used for scheduling the queue according to the time for which the data packet has been stored in the queue.
9. A computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by an electronic device implements the multi-queue organization and scheduling method of any one of claims 1-7.
10. An electronic device, comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, to cause the electronic device to perform the multi-queue organization and scheduling method of any one of claims 1-7.
CN202311732559.9A, priority/filing date 2023-12-15: Multi-queue organization and scheduling method, system, storage medium and electronic equipment (Pending, published as CN117834556A)

Priority Applications (1)

Application number: CN202311732559.9A; priority date: 2023-12-15; filing date: 2023-12-15; title: Multi-queue organization and scheduling method, system, storage medium and electronic equipment

Applications Claiming Priority (1)

Application number: CN202311732559.9A; priority date: 2023-12-15; filing date: 2023-12-15; title: Multi-queue organization and scheduling method, system, storage medium and electronic equipment

Publications (1)

Publication number: CN117834556A; publication date: 2024-04-05

Family

Family ID: 90518258

Family Applications (1)

Application number: CN202311732559.9A; title: Multi-queue organization and scheduling method, system, storage medium and electronic equipment; priority date: 2023-12-15; filing date: 2023-12-15

Country Status (1)

Country: CN; link: CN117834556A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
CN118113445A *: priority date 2024-04-30, published 2024-05-31, assignee 浪潮电子信息产业股份有限公司, title "Data transmission method, apparatus and device, storage medium and computer program product"


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination