CN116346739B - Multi-queue scheduling method, system, circuit and chip - Google Patents

Multi-queue scheduling method, system, circuit and chip Download PDF

Info

Publication number
CN116346739B
CN116346739B (application CN202310337600.6A)
Authority
CN
China
Prior art keywords
queue
port
calendar
scheduling
polling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310337600.6A
Other languages
Chinese (zh)
Other versions
CN116346739A (en)
Inventor
程杰杰
史佳晨
阮召崧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Jinzhen Microelectronics Technology Co ltd
Original Assignee
Nanjing Jinzhen Microelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Jinzhen Microelectronics Technology Co ltd filed Critical Nanjing Jinzhen Microelectronics Technology Co ltd
Priority to CN202310337600.6A
Publication of CN116346739A
Application granted
Publication of CN116346739B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6295 Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/52 Queue scheduling by attributing bandwidth to queues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/622 Queue service order
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a multi-queue scheduling method, system, circuit and chip. The method comprises the following steps: setting a minimum scheduling byte count and configuring a calendar table; sequentially querying the queue ports in the calendar table and their enable bits; for each query result, judging, based on the enable bit, whether the currently selected queue port is invalid, or judging whether the currently selected queue port is identical to the last-polled queue port; if either condition holds, starting a round-robin scheduler; and taking the queue polled by the round-robin scheduler as the final scheduling result, or otherwise taking the queue selected by the current calendar table as the final scheduling result. By combining a calendar table with a round-robin scheduling algorithm, the application realizes multi-queue scheduling and effectively solves the technical problem that, when bandwidth allocation is realized by multi-queue scheduling in Ethernet chip design, more queues either occupy more resources or, with fewer resources, lose flexibility; the configuration of the calendar table and the scheduler is also more flexible.

Description

Multi-queue scheduling method, system, circuit and chip
Technical Field
The application belongs to the field of Ethernet technology, and in particular relates to a multi-queue scheduling method, system, circuit and chip.
Background
With the development of mobile communication technology, the number of users and the volume of traffic keep increasing, and users place higher quality-of-service requirements on data distribution services. Bandwidth is an important network resource and a main parameter affecting the performance of data distribution services. In a packet-switched network, users must share network resources such as buffers, ports and links, which inevitably introduces contention, so a queue scheduling mechanism is required for arbitration.
Ethernet technology generally adopts a multi-queue scheduling algorithm to achieve bandwidth allocation, but most scheduling mechanisms in the prior art require configured weight values: the weights of the multiple queues are used as scheduling conditions, and the bit width of a weight is often more than 10 bits. As the number of queues grows, the occupied resources grow proportionally, so for projects with strict chip-area requirements, such resource-hungry scheduling methods become a technical pain point in the field.
Disclosure of Invention
The application aims to provide a multi-queue scheduling method, system, circuit and chip to solve the technical problem that, when bandwidth allocation is realized by multi-queue scheduling in Ethernet chip design, more queues either occupy more resources or, with fewer resources, lose flexibility.
In a first aspect, the present application provides a multi-queue scheduling method, comprising the following steps:
setting a minimum scheduling byte count and configuring a calendar table;
sequentially querying the queue ports in the calendar table and the enable bits of the queue ports;
for each sequential query result, judging, based on the enable bit, whether the queue port selected by the current calendar table is invalid; or
judging whether the queue port selected by the current calendar table is identical to the last-polled queue port;
starting the round-robin scheduler when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port;
and taking the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, taking the queue selected by the current calendar table as the final scheduling result.
In one implementation of the first aspect, the calendar table is configured based on the bandwidth requirements of the queues; configuring the calendar table based on the bandwidth requirements of the queues comprises the following steps:
calculating the configuration ratio of the queue ports based on the bandwidth requirements;
setting the number and order of each queue port based on the configuration ratio.
In one implementation of the first aspect, the queue ports in the calendar table and the enable bits of the queue ports are in one-to-one correspondence.
In one implementation of the first aspect, configuring the calendar table includes configuring a head pointer and a tail pointer in the calendar table.
In one implementation of the first aspect, a query pointer sequentially queries, between the head pointer and the tail pointer, the queue ports in the calendar table and the enable bits of the queue ports.
In a second aspect, the present application provides a multi-queue scheduling system, the system comprising:
an initialization module, configured to set a minimum scheduling byte count and configure a calendar table;
a polling module, configured to sequentially query the queue ports in the calendar table and the enable bits of the queue ports;
a judgment module, configured to judge, for each sequential query result, whether the queue port selected by the current calendar table is invalid; or
to judge whether the queue port selected by the current calendar table is identical to the last-polled queue port;
a round-robin scheduling module, configured to start the round-robin scheduler when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port;
and a multiplexing processing module, configured to take the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, to take the queue selected by the current calendar table as the final scheduling result.
In a third aspect, the present application provides a circuit comprising a comparison unit, a round-robin unit and a multiplexing unit;
the comparison unit is configured to judge, based on the queue-port enable bits in the calendar table configured by the central processing unit, whether the queue port selected by the current calendar table is invalid, or to judge whether the queue port selected by the current calendar table is identical to the last-polled queue port;
the round-robin unit is connected to the comparison unit and comprises a round-robin scheduler, the round-robin scheduler being started when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port;
the multiplexing unit is connected to the round-robin unit and the comparison unit, and is configured to take the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, to take the queue selected by the current calendar table as the final scheduling result.
In one implementation of the third aspect, the comparison unit comprises a comparator and an OR gate;
a first input of the comparator is configured to receive the queue port selected by the calendar table, and a second input is configured to receive the last-polled queue port;
a first input of the OR gate is connected to the output of the comparator and receives a high level when the queue port selected by the calendar table is identical to the last-polled queue port and a low level when it differs; a second input of the OR gate is configured to receive the enable bit of the queue port selected by the calendar table; and the output of the OR gate outputs a high level when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port. The high level is used to start the round-robin scheduler in the multiplexing unit.
In a fourth aspect, the application provides a chip comprising the circuit described above.
In a fifth aspect, the present application provides a multi-queue scheduling system comprising the above chip and a central processing unit;
the central processing unit is configured to set a minimum scheduling byte count, configure the calendar table, and sequentially query the queue ports in the calendar table and the enable bits of the queue ports.
As described above, the multi-queue scheduling method, system, circuit and chip of the present application have the following beneficial effects:
(1) Multi-queue scheduling is realized by combining a calendar table with a round-robin scheduling algorithm, which effectively solves the technical problem that, when bandwidth allocation is realized by multi-queue scheduling in Ethernet chip design, more queues either occupy more resources or, with fewer resources, lose flexibility;
(2) The queue port is used as the parameter for bandwidth allocation, which greatly reduces the bit width of the configuration information;
(3) Bandwidth resources are allocated according to the configuration ratio, so that different queues obtain fair scheduling opportunities; the queue ports and enable bits can be configured by the user according to the bandwidth requirements, making the configuration of the calendar table more flexible;
(4) The head pointer and tail pointer can be user-defined, making the configuration of the scheduler flexible;
(5) The round-robin scheduler ensures that no scheduling cycle is wasted, so no scheduling bandwidth is wasted and the bandwidth utilization is improved.
Drawings
Fig. 1 is a flowchart of a multi-queue scheduling method according to an embodiment of the application.
Fig. 2 is a schematic diagram of a multi-queue scheduling system according to an embodiment of the application.
Fig. 3 shows a circuit diagram according to an embodiment of the application.
FIG. 4 is a schematic diagram of a multi-queue scheduling system according to an embodiment of the present application.
Description of element reference numerals
11. Initialization module
12. Polling module
13. Judgment module
14. Round-robin scheduling module
15. Multiplexing processing module
21. Chip
22. Central processing unit
31. Comparison unit
32. Round-robin unit
33. Multiplexing unit
4. Chip
5. Central processing unit
S1 to S5 steps
Detailed Description
Other advantages and effects of the present application will become readily apparent to those skilled in the art from the disclosure herein, which describes the embodiments of the application with reference to specific examples. The application may also be practiced or applied through other different embodiments, and the details in this description may be modified or varied in various ways without departing from the spirit of the application. It should be noted that the following embodiments, and the features in the embodiments, may be combined with each other provided there is no conflict.
It should be noted that the drawings provided in the following embodiments only illustrate the basic concept of the application in a schematic way: the drawings show only the components related to the application rather than being drawn according to the number, shape and size of the components in an actual implementation; the form, number and proportion of the components in an actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
Furthermore, descriptions such as "first" and "second" in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features; thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that they can be realized by those skilled in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and not to fall within the scope of protection claimed by the present application.
The following embodiments of the present application provide a multi-queue scheduling method, system, circuit and chip, which realize multi-queue scheduling by combining a calendar table with a round-robin scheduling algorithm and effectively solve the technical problem that, when bandwidth allocation is realized by multi-queue scheduling in Ethernet chip design, more queues either occupy more resources or, with fewer resources, lose flexibility.
As shown in fig. 1, the present embodiment provides a multi-queue scheduling method, which includes the following steps:
s1, setting the minimum dispatching byte number and configuring a calendar table.
The minimum scheduling byte count is the maximum transmission unit (Maximum Transmission Unit, MTU). Each time a scheduling result is generated, MTU bytes are scheduled out of the selected queue.
In one embodiment, the MTU value is set according to the actual application scenario. The conventional range of the MTU value is 100 to 200 bytes, but a larger MTU value may be set for larger data packets. In theory, if the MTU is set too large, the bandwidth-allocation error between the queue ports becomes larger; if the MTU is set too small, larger data packets have to be fragmented, which increases the packet-processing load, and the fragmentation and reassembly process also consumes extra resources, thereby reducing bandwidth transmission efficiency.
In this implementation, setting an appropriate MTU value for each scenario improves network transmission performance.
In one embodiment, the calendar table is configured based on the bandwidth requirements of the queues.
The calendar algorithm is a time-based scheduling method that ensures that all members waiting to be scheduled are scheduled evenly according to a preset ratio. The calendar table is a two-dimensional data structure in which the value at each address corresponds to a queue port and the enable bit of that queue port; the queue ports in the calendar table and their enable bits are in one-to-one correspondence, and any queue port may appear in the calendar table multiple times.
The calendar table shown in Table 1 is an N×2 two-dimensional data structure containing N elements corresponding to N addresses; each element comprises a queue port and the enable bit of the queue port, where X denotes the queue port in the Nth element.
The resources occupied by the queue-port field depend on the value of N; for example, when the number of elements N=14, at least 4 bits are needed to represent the 14 queue ports 0 to 13. The enable bit occupies 1 bit and its value en is 0 or 1: when the enable bit en of a queue port is 0, the queue is invalid; when en is 1, the queue is valid. The purpose of the enable bit is to allow a queue port to be disabled when the user does not want that queue to participate in subsequent periodic scheduling. In addition, the state of the enable bit can be adjusted at any time during scheduling.
Table 1. N×2 calendar table (table body not reproduced in this text)
It should be noted that the calendar table stores queue ports and their enable bits, not the real queues and data packets; the real queues and data packets are located in the buffer. The application first selects a queue port and then schedules MTU bytes of data from the queue corresponding to that queue port.
In this implementation, the queue port is used as the parameter for bandwidth allocation, which greatly reduces the bit width of the configuration information.
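Purely as an illustration of the data structure described above (not part of the patent text), a minimal Python sketch of such an N×2 calendar might look as follows; the names CalendarEntry and calendar are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CalendarEntry:
    port: int   # queue port number; e.g. 4 bits can encode ports 0-13 when N = 14
    en: int     # enable bit: 1 = queue valid, 0 = queue invalid

# A 14-entry calendar (addresses 0..13); a port may appear several times to give it more bandwidth.
calendar = [CalendarEntry(port=p, en=1) for p in range(14)]

# Disabling a port removes its queue from subsequent scheduling periods; this can be changed at any time.
calendar[7].en = 0
```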
In one embodiment, configuring the calendar table based on the bandwidth requirements of the queues comprises the following steps: calculating the configuration ratio of the queue ports based on the bandwidth requirements, and setting the number and order of each queue port based on the configuration ratio. The queues can then be scheduled evenly according to the configured calendar table.
As shown in Table 2, the full-bandwidth calendar table contains 15 entries. If address 0 is set as the start address and address 14 as the end address, then in one scheduling period queue ports 9 and 10 are each called three times while queue ports 0 to 8 are each called once, so that the bandwidth allocation ratio between queue ports 0 to 8 and queue ports 9 to 10 is 1:3. Statistically, the larger a port's configuration ratio, the more bandwidth it is allocated; conversely, the smaller the configuration ratio, the less bandwidth it is allocated.
Table 2. Full-bandwidth calendar table (table body not reproduced in this text)
It should be noted that the configuration ratio in this embodiment refers to the ratio between the numbers of times the queue ports appear, once or repeatedly, in the calendar table within one scheduling period. Full bandwidth means that the enable bit of every queue port is valid, i.e., all queues participate simultaneously in one period of queue scheduling. In other embodiments, a preset configuration ratio can also be obtained, when not at full bandwidth, by adjusting the valid/invalid state of the enable bits and the number and order of each queue port. In this embodiment, the number, address and number of occurrences of each queue port are flexible and variable rather than fixed, so the addresses and proportions at which the queue ports appear can be configured according to the actual bandwidth requirements, and the queue ports can be custom-set under different ratios.
In this implementation, bandwidth resources are allocated according to the configuration ratio, so that different queues obtain fair scheduling opportunities; the queue ports and enable bits are configured by the user according to the bandwidth requirements, making the configuration of the calendar table more flexible.
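As an illustrative sketch only (the patent does not prescribe code), the configuration ratio could be turned into calendar entries as follows; build_calendar and the example weights are assumptions, and the interleaving order is one choice among many.

```python
def build_calendar(repeat_counts):
    """Build calendar entries from per-port repetition counts (the configuration ratio).

    repeat_counts: dict mapping queue port -> number of slots it gets per scheduling period.
    Returns a list of (port, enable_bit) entries; the slots of repeated ports are
    interleaved so that their service is spread over the period.
    """
    entries = []
    remaining = dict(repeat_counts)
    while any(remaining.values()):
        for port in repeat_counts:
            if remaining[port] > 0:
                entries.append((port, 1))   # enable bit set to valid
                remaining[port] -= 1
    return entries

# Example matching the 1:3 ratio described above: ports 0-8 once each, ports 9-10 three times each.
counts = {p: 1 for p in range(9)}
counts.update({9: 3, 10: 3})
cal = build_calendar(counts)   # 15 entries at addresses 0..14
```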
S2, sequentially querying the queue ports in the calendar table and the enable bits of the queue ports.
In one embodiment, a query pointer sequentially queries, between the head pointer and the tail pointer, the queue ports in the calendar table and their enable bits.
Configuring the calendar table includes configuring a head pointer and a tail pointer in the calendar table; the calendar table is also associated with a query pointer. The head pointer, the tail pointer and the query pointer each correspond to the value of one address in the two-dimensional data structure, and the head pointer and the tail pointer jointly determine a scheduling interval: the head pointer, denoted start_ptr, defines the starting boundary of the scheduling interval; the tail pointer, denoted end_ptr, defines the ending boundary of the scheduling interval; and the query pointer, denoted cal_ptr, cycles through the scheduling interval to realize the queue scheduling of one scheduling period. As shown in Table 2, the start address pointed to by start_ptr is 0, the end address pointed to by end_ptr is 14, and cal_ptr loops within the interval [0, 14]. When the query pointer has moved from the start address to the end address, one scheduling period is complete, and a new scheduling period then starts again from the start address.
It should be noted that, in this embodiment, the start address pointed to by the head pointer and the end address pointed to by the tail pointer are not fixed, so the scheduling interval can be adjusted flexibly. For example, any intermediate address of the calendar table may be selected as the start address or the end address, provided that start_ptr < end_ptr.
In this implementation, the user-defined head pointer and tail pointer make the configuration of the scheduler flexible.
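For illustration only, one possible way to model the query pointer's traversal of the scheduling interval is shown below; the names start_ptr, end_ptr and cal_ptr follow the description above, and next_slot is an assumed helper.

```python
def next_slot(cal_ptr, start_ptr, end_ptr):
    """Advance the query pointer by one address, wrapping from end_ptr back to start_ptr.

    One scheduling period is one full pass from start_ptr to end_ptr (inclusive).
    """
    if cal_ptr >= end_ptr:
        return start_ptr          # period finished: start a new period from the start address
    return cal_ptr + 1

# Example: with start_ptr = 0 and end_ptr = 14 the pointer cycles through addresses 0..14.
start_ptr, end_ptr = 0, 14
cal_ptr = start_ptr
for _ in range(16):
    cal_ptr = next_slot(cal_ptr, start_ptr, end_ptr)
```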
S3, for each sequential query result, judging, based on the enable bit, whether the currently selected queue port is invalid, or judging whether the currently selected queue port is identical to the last-polled queue port.
Specifically, each sequential query yields one of the following four results:
(1) the currently selected queue port is invalid and identical to the last-polled queue port;
(2) the currently selected queue port is invalid and differs from the last-polled queue port;
(3) the currently selected queue port is valid and identical to the last-polled queue port;
(4) the currently selected queue port is valid and differs from the last-polled queue port.
Results (1) to (3) are the cases in which the currently selected queue port is invalid or identical to the last-polled queue port.
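As a sketch of this judgment only (function and parameter names are assumptions), the condition that distinguishes results (1)-(3) from result (4) can be written as:

```python
def should_start_round_robin(selected_port, enable_bit, last_polled_port):
    """Return True for results (1)-(3): the calendar's choice is invalid or repeats the
    last-polled port, so the round-robin scheduler takes over this scheduling slot."""
    invalid = (enable_bit == 0)
    repeated = (selected_port == last_polled_port)
    return invalid or repeated
```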
S4, starting the round-robin scheduler when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port.
Specifically, the basic principle of the Round-Robin (RR) scheduling algorithm is to poll multiple queues in turn. It is judged whether the polled queue is empty: if not, a packet is taken from the queue; otherwise the queue is skipped directly and the scheduler does not wait. The queues participating in round-robin scheduling have no priority, so the bandwidth is used equally among the queues and different queues obtain fair scheduling opportunities.
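A minimal sketch of such a round-robin pass, assuming queues is a list of FIFO queues and that empty queues are simply skipped without waiting (the helper names are illustrative):

```python
from collections import deque

def round_robin_pick(queues, rr_ptr):
    """Scan the queues once, starting after rr_ptr, and return (queue_index, packet).

    Empty queues are skipped directly; the scheduler does not wait. If every queue is
    empty, (None, None) is returned and the slot simply produces no packet.
    """
    n = len(queues)
    for step in range(1, n + 1):
        idx = (rr_ptr + step) % n
        if queues[idx]:                     # non-empty queue found
            return idx, queues[idx].popleft()
    return None, None

# Example with four queues, one of them empty: queue 1 is skipped and queue 2 is served.
queues = [deque([b"a"]), deque(), deque([b"c"]), deque([b"d"])]
idx, pkt = round_robin_pick(queues, rr_ptr=0)   # idx == 2
```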
S5, taking the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, taking the queue selected by the current calendar table as the final scheduling result.
Specifically, the queue obtained by the round-robin scheduler is a queue whose port is valid, or a queue whose port differs from the last-polled port; a data packet of the minimum scheduling byte count is taken from that queue and used as the current scheduling result. If the currently selected queue port is valid and differs from the last-polled queue port, a data packet of the minimum scheduling byte count is taken from the currently selected queue and used as the current scheduling result.
The scheduling principle of this embodiment is as follows: if the query pointer points to a queue port whose enable bit turns out to be off, then, in order not to waste this scheduling slot, the round-robin scheduling algorithm is started directly; round-robin, being a mature scheduling algorithm, can directly poll a queue with a valid port instead of the queue of the invalid port. Similarly, if the queue port is found to be identical to the result of the previous scheduling round, the round-robin scheduling algorithm is likewise used to poll for a queue again.
In chip design, the multiplexer (mux) is widely used in data paths, registers and memories as an important data selector to improve the efficiency and performance of the system. A multiplexer outputs one of several input signals as a single output signal, and a control signal selects which input is output. In the embodiment of the present application, the multiple input signals of the multiplexer correspond to the two candidate scheduling queues, namely the queue obtained by re-polling and the currently selected queue, and the single output signal corresponds to one of these two candidates; the multiplexing process also includes taking a data packet of the minimum scheduling byte count from the output queue and using it as the scheduling result.
In this implementation, the round-robin scheduler ensures that no scheduling cycle is wasted, so no scheduling bandwidth is wasted and the bandwidth utilization is improved.
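Tying the steps together, the per-slot decision could be modelled as below. This is only an illustrative sketch of the described flow under stated assumptions: schedule_slot, the MTU value and queue_lengths are hypothetical, not the patent's implementation.

```python
MTU = 150  # minimum scheduling byte count in bytes, chosen per application scenario (100-200 typical here)

def schedule_slot(calendar, cal_ptr, last_port, queue_lengths):
    """One scheduling slot: the calendar selection is used when it is valid and not a repeat
    of the last-polled port; otherwise round-robin picks a non-empty queue so the slot is
    not wasted. Returns the chosen queue port, or None if every queue is empty.

    calendar: list of (port, enable_bit) entries; queue_lengths[i]: bytes buffered for port i.
    """
    port, en = calendar[cal_ptr]
    if en == 1 and port != last_port:
        return port                                   # case (4): use the calendar's choice
    # Cases (1)-(3): round-robin over the ports, skipping empty queues without waiting.
    n = len(queue_lengths)
    for step in range(1, n + 1):
        candidate = (last_port + step) % n
        if queue_lengths[candidate] > 0:
            return candidate
    return None                                       # every queue is empty

# Example: calendar entry (7, 1) is valid and differs from the last-polled port 3, so port 7 wins.
chosen = schedule_slot([(7, 1)], 0, last_port=3, queue_lengths=[1] * 11)   # -> 7
# The final action for the slot is then to dequeue up to MTU bytes from the chosen queue.
```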
The protection scope of the multi-queue scheduling method according to the embodiments of the present application is not limited to the order of execution of the steps listed here; all schemes realized by adding, removing or replacing steps according to the prior art based on the principles of the present application fall within the protection scope of the present application.
As shown in fig. 2, the present embodiment provides a multi-queue scheduling system, which includes:
an initialization module 11 for setting a minimum number of scheduling bytes and configuring a calendar.
A polling module 12, configured to sequentially query a queue port in the calendar table and an enable bit of the queue port.
A judging module 13, configured to judge, based on the enable bit, whether a queue port selected by a current calendar table is invalid for each sequential query result; or (b)
And judging whether the queue port selected by the current calendar table is consistent with the last polling queue port.
The annular polling module 14 is configured to start the annular polling scheduler when the queue port selected by the current calendar table is invalid or the queue port selected by the current calendar table is consistent with the queue port of the last polling.
A multiplexing processing module 15, configured to take a queue polled by the annular polling scheduler as a final scheduling result; and when the queue port selected by the current calendar is effective and the queue port selected by the current calendar is inconsistent with the queue port polled last time, taking the queue selected by the current calendar as a final scheduling result.
It should be noted that the structure and principle of the initialization module 11, the polling module 12, the judgment module 13, the round-robin scheduling module 14 and the multiplexing processing module 15 in this embodiment correspond one-to-one to the steps and embodiments of the multi-queue scheduling method described above, so the description is not repeated here.
The multi-queue scheduling system provided in the embodiment of the present application can implement the multi-queue scheduling method of the present application, but the devices implementing the multi-queue scheduling method of the present application include, but are not limited to, the structure of the multi-queue scheduling system listed in this embodiment; all structural modifications and substitutions of the prior art made according to the principles of the present application fall within the protection scope of the present application.
As shown in fig. 3, an embodiment of the present application provides a circuit comprising a comparison unit 31, a round-robin unit 32 and a multiplexing unit 33.
The comparison unit 31 is configured to judge, based on the enable bits in the calendar table configured by the central processing unit, whether the queue port selected by the current calendar table is invalid, or whether the queue port selected by the current calendar table is identical to the last-polled queue port.
In one embodiment, the comparison unit 31 comprises a comparator and an OR gate.
A first input of the comparator is configured to receive the queue port selected by the calendar table, and a second input is configured to receive the last-polled queue port.
A first input of the OR gate is connected to the output of the comparator and receives a high level when the queue port selected by the calendar table is identical to the last-polled queue port and a low level when it differs; a second input of the OR gate is configured to receive the enable bit of the queue port selected by the calendar table; and the output of the OR gate outputs a high level when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port. The high level is used to start the round-robin scheduler in the multiplexing unit.
The round-robin unit 32 is connected to the comparison unit 31; the round-robin unit 32 comprises a round-robin scheduler and is configured to start the round-robin scheduler when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port.
The multiplexing unit 33 is connected to the round-robin unit 32 and the comparison unit 31, and is configured to take the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, to take the queue selected by the current calendar table as the final scheduling result.
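As an illustrative software model only (the patent describes hardware, and the signal and function names here are assumptions), the comparator/OR-gate decision and the multiplexer selection can be expressed as:

```python
def start_rr_signal(selected_port, last_polled_port, enable_bit):
    """Model of the comparison unit: the comparator output OR'd with the port-invalid condition.
    A high level (True) starts the round-robin scheduler and steers the multiplexer."""
    comparator_out = (selected_port == last_polled_port)   # high when the two ports match
    port_invalid = (enable_bit == 0)                        # high when the selected port is invalid
    return comparator_out or port_invalid

def mux_output(start_rr, rr_queue, calendar_queue):
    """Model of the multiplexing unit: select the round-robin result when start_rr is high,
    otherwise pass through the queue selected by the calendar table."""
    return rr_queue if start_rr else calendar_queue
```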
An embodiment of the present application further provides a chip 4 comprising the circuit described above.
As shown in fig. 4, the present embodiment provides a multi-queue scheduling system comprising the chip 4 and the central processing unit 5.
The central processing unit 5 is configured to set a minimum scheduling byte count, configure the calendar table, and sequentially query the queue ports in the calendar table and the enable bits of the queue ports.
In the several embodiments provided by the present application, it should be understood that the disclosed system, apparatus or method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules/units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple modules or units may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, modules or units, and may be electrical, mechanical or in other forms.
The modules/units described as separate components may or may not be physically separate, and the components shown as modules/units may or may not be physical modules, i.e., they may be located in one place or distributed over multiple network elements. Some or all of the modules/units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application. For example, the functional modules/units in the embodiments of the present application may be integrated into one processing module, or each module/unit may exist physically alone, or two or more modules/units may be integrated into one module/unit.
Those of ordinary skill in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
The description of each process or structure corresponding to the drawings has its own emphasis; for a part of a process or structure that is not described in detail, reference may be made to the descriptions of the other processes or structures.
In summary, the present application provides a multi-queue scheduling method, system, circuit and chip that realize multi-queue scheduling by combining a calendar table with a round-robin scheduling algorithm, and effectively solve the technical problem that, when bandwidth allocation is realized by multi-queue scheduling in Ethernet chip design, more queues either occupy more resources or, with fewer resources, lose flexibility. The queue port is used as the parameter for bandwidth allocation, which greatly reduces the bit width of the configuration information; bandwidth resources are allocated according to the configuration ratio, so that different queues obtain fair scheduling opportunities; the queue ports and enable bits are configured by the user according to the bandwidth requirements, making the configuration of the calendar table more flexible; the user-defined head pointer and tail pointer make the configuration of the scheduler flexible; and the round-robin scheduler ensures that no scheduling cycle is wasted, so no scheduling bandwidth is wasted and the bandwidth utilization is improved.
The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit the application. Those skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present application shall still be covered by the claims of the present application.

Claims (10)

1. A multi-queue scheduling method, comprising the following steps:
setting a minimum scheduling byte count and configuring a calendar table;
sequentially querying the queue ports in the calendar table and the enable bits of the queue ports;
for each sequential query result, judging, based on the enable bit, whether the queue port selected by the current calendar table is invalid; or
judging whether the queue port selected by the current calendar table is identical to the last-polled queue port;
starting the round-robin scheduler when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port;
taking the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, taking the queue selected by the current calendar table as the final scheduling result;
wherein the round-robin scheduling algorithm comprises judging whether the polled queue is empty: if not, a packet is taken from the queue; otherwise the queue is skipped directly, and the scheduler does not wait.
2. The multi-queue scheduling method of claim 1, wherein the calendar table is configured based on bandwidth requirements among the queues;
the configuring the calendar table based on the bandwidth requirements among the queues includes:
calculating the configuration proportion of the queue ports based on the bandwidth requirements;
the number and order of each queue port is set based on the configuration ratio.
3. The multi-queue scheduling method of claim 1, wherein the queue ports in the calendar table and the enable bits of the queue ports are in one-to-one correspondence.
4. The multi-queue scheduling method of claim 1, wherein configuring the calendar table comprises configuring a head pointer and a tail pointer in the calendar table.
5. The multi-queue scheduling method of claim 1, wherein a query pointer sequentially queries, between the head pointer and the tail pointer, the queue ports in the calendar table and the enable bits of the queue ports.
6. A multi-queue scheduling system, the system comprising:
an initialization module, configured to set a minimum scheduling byte count and configure a calendar table;
a query module, configured to sequentially query the queue ports in the calendar table and the enable bits of the queue ports;
a judgment module, configured to judge, for each sequential query result, whether the queue port selected by the current calendar table is invalid; or
to judge whether the queue port selected by the current calendar table is identical to the last-polled queue port;
a round-robin scheduling module, configured to start the round-robin scheduler when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port;
and a multiplexing processing module, configured to take the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, to take the queue selected by the current calendar table as the final scheduling result;
wherein the round-robin scheduling algorithm comprises judging whether the polled queue is empty: if not, a packet is taken from the queue; otherwise the queue is skipped directly, and the scheduler does not wait.
7. A circuit comprising a comparison unit, a round-robin unit and a multiplexing unit;
the comparison unit is configured to judge, based on the queue-port enable bits in the calendar table configured by the central processing unit, whether the queue port selected by the current calendar table is invalid, or to judge whether the queue port selected by the current calendar table is identical to the last-polled queue port;
the round-robin unit is connected to the comparison unit and comprises a round-robin scheduler, the round-robin scheduler being started when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port;
the multiplexing unit is connected to the round-robin unit and the comparison unit, and is configured to take the queue polled by the round-robin scheduler as the final scheduling result, or, when the queue port selected by the current calendar table is valid and differs from the last-polled queue port, to take the queue selected by the current calendar table as the final scheduling result;
wherein the round-robin scheduling algorithm comprises judging whether the polled queue is empty: if not, a packet is taken from the queue; otherwise the queue is skipped directly, and the scheduler does not wait.
8. The circuit of claim 7, wherein the comparison unit comprises a comparator and an OR gate;
a first input of the comparator is configured to receive the queue port selected by the calendar table, and a second input is configured to receive the last-polled queue port;
a first input of the OR gate is connected to the output of the comparator and receives a high level when the queue port selected by the calendar table is identical to the last-polled queue port and a low level when it differs; a second input of the OR gate is configured to receive the enable bit of the queue port selected by the calendar table; and the output of the OR gate outputs a high level when the queue port selected by the current calendar table is invalid or is identical to the last-polled queue port; the high level is used to start the round-robin scheduler in the multiplexing unit.
9. A chip comprising a circuit as claimed in any one of claims 7 to 8.
10. A multi-queue scheduling system comprising the chip of claim 9 and a central processing unit;
the central processing unit is configured to set a minimum scheduling byte count, configure a calendar table, and sequentially query the queue ports in the calendar table and the enable bits of the queue ports.
CN202310337600.6A 2023-03-31 2023-03-31 Multi-queue scheduling method, system, circuit and chip Active CN116346739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310337600.6A CN116346739B (en) 2023-03-31 2023-03-31 Multi-queue scheduling method, system, circuit and chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310337600.6A CN116346739B (en) 2023-03-31 2023-03-31 Multi-queue scheduling method, system, circuit and chip

Publications (2)

Publication Number Publication Date
CN116346739A CN116346739A (en) 2023-06-27
CN116346739B (en) 2023-12-05

Family

ID=86883884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310337600.6A Active CN116346739B (en) 2023-03-31 2023-03-31 Multi-queue scheduling method, system, circuit and chip

Country Status (1)

Country Link
CN (1) CN116346739B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1391756A (en) * 1999-11-23 2003-01-15 国际商业机器公司 Method and system for controlling transmission of packets in computer networks
CN1419767A (en) * 2000-04-13 2003-05-21 国际商业机器公司 Method and system for scheduling information using disconnection/reconnection of network server
CN1862575A (en) * 2005-08-19 2006-11-15 华为技术有限公司 Method for planing dispatching timing task
CN101753246A (en) * 2008-11-28 2010-06-23 华为技术有限公司 Polling method and device thereof
CN104731657A (en) * 2013-12-24 2015-06-24 中国移动通信集团山西有限公司 Resource scheduling method and system
CN108390832A (en) * 2018-02-12 2018-08-10 盛科网络(苏州)有限公司 A kind of configuration method of mixing rate pattern lower network chip calendar
CN113938439A (en) * 2021-10-25 2022-01-14 深圳市风云实业有限公司 Port queue scheduling method based on timer and electronic equipment

Also Published As

Publication number Publication date
CN116346739A (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US6687796B1 (en) Multi-channel DMA with request scheduling
US7539199B2 (en) Switch fabric scheduling with fairness and priority consideration
EP2464058B1 (en) Queue scheduling method and apparatus
US9448847B2 (en) Concurrent program execution optimization
US7487505B2 (en) Multithreaded microprocessor with register allocation based on number of active threads
US8296764B2 (en) Internal synchronization control for adaptive integrated circuitry
US7995472B2 (en) Flexible network processor scheduler and data flow
US7525978B1 (en) Method and apparatus for scheduling in a packet buffering network
US20010043564A1 (en) Packet communication buffering with dynamic flow control
AU3988999A (en) Method and apparatus for forwarding packets from a plurality of contending queues to an output
US20060221823A1 (en) Assigning resources to items such as processing contexts for processing packets
US7483377B2 (en) Method and apparatus to prioritize network traffic
CN112084027A (en) Network-on-chip data transmission method, device, network-on-chip, equipment and medium
Wolf et al. Locality-aware predictive scheduling of network processors.
US20040004972A1 (en) Method and apparatus for improving data transfer scheduling of a network processor
Zhang et al. Minimizing coflow completion time in optical circuit switched networks
US20070067531A1 (en) Multi-master interconnect arbitration with time division priority circulation and programmable bandwidth/latency allocation
CN116346739B (en) Multi-queue scheduling method, system, circuit and chip
CN114531488A (en) High-efficiency cache management system facing Ethernet exchanger
US9281053B2 (en) Memory system and an apparatus
EP1335540B1 (en) Communications system and method utilizing a device that performs per-service queuing
CN116627891A (en) Software-controllable network-on-chip dynamic credit management device, system and method
CN112073336A (en) High-performance data exchange system and method based on AXI4Stream interface protocol
US7729302B2 (en) Adaptive control of multiplexed input buffer channels
CN106487713A (en) A kind of service quality multiplexing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant