CN112787956B - Method, system, storage medium and application for preemption processing in queue management - Google Patents

Method, system, storage medium and application for preemption processing in queue management

Info

Publication number
CN112787956B
CN112787956B (application CN202110131937.2A)
Authority
CN
China
Prior art keywords
priority
data frame
destination port
occupation
port
Prior art date
Legal status
Active
Application number
CN202110131937.2A
Other languages
Chinese (zh)
Other versions
CN112787956A (en)
Inventor
邱智亮
耿政琦
潘伟涛
黄永东
刘心雨
李家俊
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110131937.2A priority Critical patent/CN112787956B/en
Publication of CN112787956A publication Critical patent/CN112787956A/en
Application granted granted Critical
Publication of CN112787956B publication Critical patent/CN112787956B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/10: Packet switching elements characterised by the switching fabric construction
    • H04L 49/103: Packet switching elements characterised by the switching fabric construction using a shared central buffer; using a shared memory
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the technical field of wireless communication and discloses a method, a system, a storage medium and an application for preemption processing in queue management. When a packet is enqueued, the remaining available buffer space of the destination port of the current data frame is judged, and the priorities of the data frames already stored in that port's buffer space are judged. After a high-priority data frame is judged, from those stored priorities, to require preemption and is successfully enqueued, its destination port, frame length and priority are extracted and written into the corresponding entry of a preemption information table, and the table is updated. When preemption is valid, the preemption information table is read and dequeue scheduling is performed for each priority. The invention ensures that, in the limit state, the buffer of each port can still store one more longest data frame for high-priority preemption; when flow control is active or the input traffic exceeds the capacity of the switch, enqueuing of arriving high-priority data frames is ensured as far as possible.

Description

Method, system, storage medium and application for preemption processing in queue management
Technical Field
The invention belongs to the technical field of wireless communication, and particularly relates to a method, a system, a storage medium and an application for preemption processing in queue management.
Background
At present, placing switching equipment on a satellite can greatly improve the data transmission efficiency of a satellite communication system and makes it convenient to flexibly meet various user requirements. Information transmission between the ground network and the satellite network can be realized more conveniently and rapidly. Satellite switching equipment can take over the switching management work of a ground station and can also form a network independently. With the development of satellites and space stations, the number of services to be processed grows and services carry different priorities, so guaranteeing high-priority services is of great significance.
The packet forwarding process comprises steps such as packet reception and buffering, output port lookup, queue management, dequeue scheduling, and packet transmission. Queue management is an important function of packet switching equipment, and a preemption processing method can further ensure stable transmission of high-priority services.
The patent document "queue management method and apparatus for storing variable-length packets based on fixed-length cells" (publication No. CN102377682B), filed by Xidian University, discloses a queue management method and apparatus. On the basis of storing variable-length packets in fixed-length cells, the queue storage space is divided into basic buffer units of equal size, a buffer descriptor is set for each unit, and the descriptors are stored in a buffer descriptor storage table to form a linked list. The disadvantages of this method for a packet switching system are: first, the complexity of current service types has increased, and as the number of logical ports under a physical port grows, the buffer required for queue management grows greatly; second, different priorities share the same queue buffer, and if low-priority packets occupy the buffer, high-priority packets cannot be enqueued. By contrast, under queue management with preemption, no minimum threshold needs to be set for each priority, and the queue buffer does not need to grow with the number of priorities.
Through the above analysis, the problems and defects of the prior art are as follows: in the queue management of existing packet switching systems, low-priority service data can occupy the buffer so that high-priority service data cannot be enqueued.
The difficulty in solving the above problems and defects is: in a queue management method and apparatus that stores variable-length packets in fixed-length units, the packet length is flexible and variable, so preemption among variable-length packets is highly random, management of the fixed-length units becomes very complicated, and random updates of multiple table entries are involved. Meanwhile, the fixed-length units are managed as a linked list, and randomly overwriting fixed-length units would corrupt the linked list.
The significance of solving these problems and defects is as follows: a preemption buffer area of one longest frame is established, preempted packets are discarded on the basis of the original dequeue path, and the stability of the linked list is maintained. In existing communication systems, high-priority services often carry handshake information, and once they are blocked the link is disconnected; therefore enabling high-priority services to preempt more fully further guarantees the reliability of the link.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method, a system, a storage medium and an application for preemption processing in queue management.
The invention is realized in this way: a method for preemption processing in queue management comprises the following steps.
Setting a buffer area for each port, where all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining available buffer space of the destination port of the current data frame; judging the priorities of the data frames stored in the destination port buffer space of the current data frame. This ensures, with restrictions, that high priorities can enter the buffer.
After a high-priority data frame is judged, from the priorities of the data frames stored in the destination port buffer space, to require preemption and is successfully enqueued, the destination port, frame length and priority information are extracted from the data frame and written into the corresponding entry of a preemption information table, and the preemption information table is updated. This provides the basis for deciding at dequeue time whether low priorities are preempted.
Reading the preemption information table when preemption is valid, and performing dequeue scheduling for each priority: the preemption information table is accessed; when the preemption flag is valid, the port on which preemption is required is determined from the destination port field, and whether the available space of that port can hold one longest frame is judged; the queues under the port are dequeued one by one, from the lowest priority up to the priority recorded in the preemption information table, until the available buffer space can hold one longest frame. This guarantees buffer space for the next preemption.
Further, in the method for preemption processing in queue management, judging the remaining available buffer space of the destination port of the current data frame includes:
if adding the length of the data frame to be enqueued makes the current destination port buffer usage exceed the actual buffer space, the enqueue fails;
if, after adding the length of the data frame to be enqueued, the current destination port buffer space is not enough to also store one longest data frame, further judgment is needed;
if the available buffer space of the current destination port is still enough to store one longest data frame, the frame is enqueued successfully.
Further, in the method for preemption processing in queue management, judging the priorities of the data frames stored in the destination port buffer space of the current data frame includes: on the basis that the judgment of the remaining available buffer space of the destination port has passed but the current destination port buffer space is not enough to also store one longest data frame, reading the priorities of the data frames stored in the destination port buffer space for judgment;
if the sum of the lengths of the queues in the current destination port buffer whose priority is lower than that of the data frame is less than the length of the data frame, the enqueue fails;
if the sum of the lengths of the queues in the current destination port buffer whose priority is lower than that of the data frame is greater than or equal to the length of the data frame, the enqueue succeeds and preemption is required.
Further, the preemption information table is used for storing information about the data frames for which preemption occurs; it consists of one table, and each entry has 3 items of information: destination port, frame length, and priority.
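The three-field entry described above can be sketched as a small structure. The patent fixes only the three items of information per entry; the field names, the per-port keying, and the valid flag below are illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical sketch of one preemption-information-table entry.
# Only the three fields (destination port, frame length, priority)
# come from the text; everything else here is an assumption.
@dataclass
class PreemptEntry:
    dest_port: int      # port whose buffer is being preempted
    frame_len: int      # length of the preempting high-priority frame
    priority: int       # priority of the preempting frame
    valid: bool = True  # the "preemption flag" consulted at dequeue time

# One entry per destination port, indexed by port number.
preempt_table = {2: PreemptEntry(dest_port=2, frame_len=1518, priority=6)}
```

At dequeue time, a scheduler would look up the entry for its port and, while `valid` holds, discard lower-priority frames as described in the steps below.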
Further, after the preemption information table is read when preemption is valid and dequeue scheduling is performed for each priority, it is necessary that: when a queue has a packet to dequeue, if the preemption flag is valid the frame is discarded; otherwise the queue dequeues normally.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
setting a buffer area for each port, where all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining available buffer space of the destination port of the current data frame; judging the priorities of the data frames stored in the destination port buffer space of the current data frame;
after a high-priority data frame is judged, from the priorities of the data frames stored in the destination port buffer space, to require preemption and is successfully enqueued, extracting the destination port, frame length and priority information from the data frame and writing them into the corresponding entry of the preemption information table; updating the preemption information table;
reading the preemption information table when preemption is valid, and performing dequeue scheduling for each priority: accessing the preemption information table; when the preemption flag is valid, determining from the destination port field the port on which preemption is required, and judging whether the available space of that port can hold one longest frame; dequeuing the queues under the port one by one, from the lowest priority up to the priority recorded in the preemption information table, until the available buffer space can hold one longest frame;
when a queue has a packet to dequeue, if the preemption flag is valid, discarding the frame; otherwise dequeuing normally.
It is another object of the present invention to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
setting a buffer area for each port, where all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining available buffer space of the destination port of the current data frame; judging the priorities of the data frames stored in the destination port buffer space of the current data frame;
after a high-priority data frame is judged, from the priorities of the data frames stored in the destination port buffer space, to require preemption and is successfully enqueued, extracting the destination port, frame length and priority information from the data frame and writing them into the corresponding entry of the preemption information table; updating the preemption information table;
reading the preemption information table when preemption is valid, and performing dequeue scheduling for each priority: accessing the preemption information table; when the preemption flag is valid, determining from the destination port field the port on which preemption is required, and judging whether the available space of that port can hold one longest frame; dequeuing the queues under the port one by one, from the lowest priority up to the priority recorded in the preemption information table, until the available buffer space can hold one longest frame;
when a queue has a packet to dequeue, if the preemption flag is valid, discarding the frame; otherwise dequeuing normally.
Another object of the present invention is to provide a preemption processing system for queue management that implements the above method, the system comprising:
a priority queue buffer sharing module, used for setting a buffer area for each port, where all priority queues under a port share that buffer;
a preemption information table updating module, used for updating the preemption information table after a preempting high-priority data frame is judged to have been enqueued successfully;
a dequeue scheduling module, used for reading the preemption information table when preemption is valid and performing dequeue scheduling for each priority;
and a packet dequeue processing module, used for discarding the frame if the preemption flag is valid when a queue has a packet to dequeue, and otherwise dequeuing normally.
Another objective of the present invention is to provide a wireless communication control system, which is used to implement the method for handling preemption in queue management.
Another objective of the present invention is to provide a satellite communication system, which is used to implement the method for handling preemption in queue management.
By combining all the technical schemes, the invention has the following advantages and positive effects. In the prior art, when low priorities occupy a queue-managed buffer area, high priorities cannot be served; the preemption processing method can guarantee high-priority services. Also in the prior art, as the number of priorities on the same destination port increases, more buffer space is needed as a guarantee; preemption avoids this unbounded growth of the buffer area.
Assume the required traffic burst tolerance is at least M, the storage space of one longest frame is N, and M = 16N. For the buffer area required by the prior art, a per-priority minimum threshold of 2N is needed to ensure that enqueue and dequeue of one longest frame can proceed simultaneously.
(The original publication presents the buffer-size comparison in tables rendered as images, omitted here.)
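A rough, hedged reconstruction of the comparison those tables illustrate: only M = 16N and the prior-art per-priority floor of 2N are stated in the text; the count of 8 priorities is taken from the test setup later in the document, and treating the totals as simple sums is this sketch's own assumption:

```python
# Hedged back-of-envelope comparison; the formulas are assumptions,
# not the patent's exact tables (which survive only as image references).
N = 1            # buffer for one longest frame, used as the unit
M = 16 * N       # required traffic burst tolerance per port (from the text)
PRIORITIES = 8   # assumed from the eight-priority test later in the text

prior_art_buffer = M + PRIORITIES * 2 * N  # burst space plus per-priority 2N floors
proposed_buffer = M + N                    # shared space plus one longest-frame reserve
```

Under these assumptions the shared buffer with a single longest-frame reserve stays much smaller than per-priority thresholds, and its size does not grow with the number of priorities.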
The invention ensures that, in the limit state, the buffer of each port is still enough to store one more longest data frame for high-priority preemption; when flow control is active or the input traffic exceeds the capacity of the switch, data frames in the queues of the port whose priority is lower than that of the frame currently being enqueued are discarded, and enqueuing of arriving high-priority data frames is ensured as far as possible.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained from the drawings without creative efforts.
Fig. 1 is a flowchart of a method for handling preemption in queue management according to an embodiment of the present invention.
FIG. 2 is a block diagram of a system for preemption processing in queue management according to an embodiment of the present invention.
in fig. 2: 1. a priority queue cache sharing module; 2. the crowding information table updating module; 3. a dequeue scheduling module; 4. and a packet dequeue processing module.
Fig. 3 is a flowchart of an implementation of enqueue information processing provided by an embodiment of the present invention.
Fig. 4 is a flowchart of an implementation of updating the preemption information table according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a method, a system, a storage medium and an application for processing preemption in queue management, and the following describes the present invention in detail with reference to the accompanying drawings.
As shown in fig. 1, the method for preemption processing in queue management provided by the present invention includes the following steps:
S101: setting a buffer area for each port, where all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining available buffer space of the destination port of the current data frame; judging the priorities of the data frames stored in the destination port buffer space of the current data frame;
S102: after a high-priority data frame is judged, from the priorities of the data frames stored in the destination port buffer space, to require preemption and is successfully enqueued, extracting the destination port, frame length and priority information from the data frame and writing them into the corresponding entry of the preemption information table; updating the preemption information table;
S103: reading the preemption information table when preemption is valid, and performing dequeue scheduling for each priority: accessing the preemption information table; when the preemption flag is valid, determining from the destination port field the port on which preemption is required, and judging whether the available space of that port can hold one longest frame; dequeuing the queues under the port one by one, from the lowest priority up to the priority recorded in the preemption information table, until the available buffer space can hold one longest frame;
S104: when a queue has a packet to dequeue, if the preemption flag is valid, discarding the frame; otherwise dequeuing normally.
Those skilled in the art can also implement the method for preemption processing in queue management provided by the present invention with other steps; the method shown in fig. 1 is only one specific embodiment.
As shown in fig. 2, the system for preemption processing in queue management provided by the present invention comprises:
a priority queue buffer sharing module 1, used for setting a buffer area for each port, where all priority queues under a port share that buffer;
a preemption information table updating module 2, used for updating the preemption information table after a preempting high-priority data frame is judged to have been enqueued successfully;
a dequeue scheduling module 3, used for reading the preemption information table when preemption is valid and performing dequeue scheduling for each priority;
and a packet dequeue processing module 4, used for discarding the frame if the preemption flag is valid when a queue has a packet to dequeue, and otherwise dequeuing normally.
The technical solution of the present invention is further described below with reference to the accompanying drawings.
The queue management of the invention involves two concepts. Priority scheduling: at dequeue time, the queues of a port with different priorities are dequeued in order from high priority to low, so as to differentiate quality of service. Preemption: queue management is based on different priorities sharing one buffer; when low priorities occupy the buffer area, a high priority can take over the buffer space of the low priorities, further guaranteeing the high priority.
The invention provides a method for preemption processing in queue management, which specifically comprises the following steps.
Step one: setting a buffer area for each port, where all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining available buffer space of the destination port of the current data frame; judging the priorities of the data frames stored in the destination port buffer space of the current data frame. The process of packet enqueuing is shown in fig. 3:
(1) judging the remaining available buffer space of the destination port of the current data frame;
if adding the length of the data frame to be enqueued makes the current destination port buffer usage exceed the actual buffer space, the enqueue fails;
if, after adding the length of the data frame to be enqueued, the current destination port buffer space is not enough to also store one longest data frame, further judgment is needed;
if the available buffer space of the current destination port is still enough to store one longest data frame, the frame is enqueued successfully.
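The three cases of judgment (1) can be sketched as one check. This is a minimal sketch: the 1518-byte longest frame and all identifiers are illustrative assumptions, not values fixed by the patent:

```python
MAX_FRAME = 1518  # assumed longest data frame length in bytes (illustrative)

def enqueue_space_check(used, frame_len, capacity, max_frame=MAX_FRAME):
    """Classify an enqueue attempt against the port's shared buffer:
    'fail' if the frame itself does not fit, 'check_priority' if it fits
    but leaves no headroom for one longest frame, 'ok' otherwise."""
    if used + frame_len > capacity:
        return "fail"              # exceeds the actual buffer space
    if capacity - (used + frame_len) < max_frame:
        return "check_priority"    # proceed to the priority judgment (2)
    return "ok"                    # enqueue succeeds with full headroom
```

For example, on a 10000-byte port buffer with 8500 bytes used, a 200-byte frame fits but leaves less than 1518 bytes free, so the priority judgment of step (2) is triggered.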
(2) judging the priorities of the data frames stored in the destination port buffer space of the current data frame;
on the basis that the judgment in step (1) has passed but the current destination port buffer space is not enough to also store one longest data frame, reading the priorities of the data frames stored in the destination port buffer space for judgment;
if the sum of the lengths of the queues in the current destination port buffer whose priority is lower than that of the data frame is less than the length of the data frame, the enqueue fails;
if the sum of the lengths of the queues in the current destination port buffer whose priority is lower than that of the data frame is greater than or equal to the length of the data frame, the enqueue succeeds and preemption is required.
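Judgment (2) reduces to comparing the incoming frame length with the bytes held by all strictly lower priorities. A minimal sketch under assumed names:

```python
def preemption_check(queue_bytes_by_prio, frame_prio, frame_len):
    """Return True if the frame may enqueue by preempting: the queues of
    strictly lower priority together hold at least frame_len bytes."""
    lower_bytes = sum(nbytes for prio, nbytes in queue_bytes_by_prio.items()
                      if prio < frame_prio)
    return lower_bytes >= frame_len
```

With 400 bytes at priority 0 and 300 bytes at priority 1, a 600-byte priority-2 frame enqueues with preemption (700 is at least 600); if only 100 bytes sit below it, the enqueue fails.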
Step two: after a high-priority data frame is judged, from the priorities of the data frames stored in the destination port buffer space of the current data frame, to require preemption and is successfully enqueued, extracting the destination port, frame length and priority information from the data frame and writing them into the corresponding entry of the preemption information table; updating the preemption information table, as shown in fig. 4. The preemption information table is used for storing information about the data frames for which preemption occurs; it consists of one table, and each entry has 3 items of information: destination port, frame length, and priority.
thirdly, reading the occupation information table when the occupation is effective, and dequeuing and scheduling each priority; dequeuing the data frames with the priority lower than the priority in the occupation information table from low to high until the available cache area meets a longest frame, and updating the occupation flag bit to be effective; the dequeue scheduling process comprises the following steps: accessing an occupation information table, determining a port needing occupation according to a target port when an occupation flag bit is effective, and judging whether the space of the target port can meet a longest frame; according to the sequence of the priority of each queue in the port from low to the priority in the crowding occupation information table, dequeuing one by one until the available buffer space meets a longest frame;
and fourthly, when a certain queue has a packet to dequeue, if the occupation flag bit is effective, the frame is discarded, otherwise, the packet is dequeued normally.
Tests were carried out with a network tester using eight priorities from lowest to highest, each priority carrying 12M of traffic for a total of 96M, with flow control at 44.4M; the preemption processing method ensured that no high-priority frames were lost.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD- or DVD-ROM, programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of hardware circuits and software, for example firmware.
The above description is only a specific embodiment of the present invention and is not intended to limit its scope of protection; all modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be covered by the appended claims.

Claims (8)

1. A method for preemption processing in queue management, characterized in that the method comprises the following steps:
setting a buffer area for each port, wherein all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining amount of available buffer space at the destination port of the current data frame, and judging the priorities of the data frames stored in the destination port buffer space of the current data frame;
after judging that a high-priority data frame requiring preemption has been successfully enqueued, and that preemption is needed based on the priorities of the data frames stored in the destination port buffer space of the current data frame, extracting the destination port, frame length and priority information from the data frame and writing them into the corresponding entry of a preemption information table, thereby updating the preemption information table;
reading the preemption information table when preemption is in effect, and performing dequeue scheduling for each priority: accessing the preemption information table, determining the port on which preemption is needed according to the destination port when the preemption flag bit is valid, and judging whether the available space of that port can accommodate one longest frame; dequeuing the queues of the port one by one, in order of priority from the lowest up to the priority recorded in the preemption information table, until the available buffer space can accommodate one longest frame;
wherein judging the priorities of the data frames stored in the destination port buffer space of the current data frame comprises: on the basis that enqueueing is judged successful from the remaining available buffer space of the destination port, but the current destination port buffer space is not enough to store one longest data frame, reading the priorities of the data frames stored in the destination port buffer space of the current data frame for judgment;
if the sum of the lengths of the queues in the current destination port buffer space whose priority is lower than that of the data frame is less than the length of the data frame, the enqueue fails;
if the sum of the lengths of the queues in the current destination port buffer space whose priority is lower than that of the data frame is greater than or equal to the length of the data frame, the enqueue succeeds and preemption of the data frame is required;
wherein, after reading the preemption information table when preemption is in effect and performing dequeue scheduling for each priority, the following step is further required: when a queue has a packet to dequeue, if the preemption flag bit is valid, the frame is discarded; otherwise the queue dequeues normally.
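The enqueue-side admission decision of claims 1 and 2 can be sketched as follows. This is a minimal illustration, not the patented implementation: all names (`PortBuffer`, `try_enqueue`, `MAX_FRAME_LEN`), the choice of bytes as the length unit, and the dictionary form of the table entry are assumptions made here for clarity.

```python
MAX_FRAME_LEN = 1518  # assumed "longest frame" (standard Ethernet maximum)

class PortBuffer:
    """Per-port buffer shared by all priority queues of that port."""
    def __init__(self, capacity, num_priorities=8):
        self.capacity = capacity                  # total shared buffer, in bytes
        self.used = 0                             # bytes currently stored
        # per-priority queue lengths in bytes; index 0 = lowest priority
        self.queue_len = [0] * num_priorities

def try_enqueue(port, frame_len, priority, preempt_table):
    """Return 'drop', 'enqueue', or 'enqueue+preempt' for an arriving frame."""
    if port.used + frame_len > port.capacity:
        return 'drop'                             # frame does not fit at all
    free_after = port.capacity - port.used - frame_len
    if free_after >= MAX_FRAME_LEN:
        port.used += frame_len                    # enough headroom for a longest
        port.queue_len[priority] += frame_len     # frame remains: no preemption
        return 'enqueue'
    # Headroom below one longest frame: check the lower-priority queues.
    lower_sum = sum(port.queue_len[:priority])
    if lower_sum < frame_len:
        return 'drop'                             # too little low-priority data to evict
    # Enqueue succeeds; record the preemption for the dequeue scheduler.
    preempt_table.append({'dest_port': id(port),
                          'frame_len': frame_len,
                          'priority': priority})
    port.used += frame_len
    port.queue_len[priority] += frame_len
    return 'enqueue+preempt'
```

Note how the three outcomes map onto the claim: outright drop when the frame cannot fit, plain enqueue while a longest frame of headroom remains, and enqueue with a preemption-table entry when only lower-priority traffic can make room.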
2. The method for preemption processing in queue management according to claim 1, wherein judging the remaining amount of available buffer space at the destination port of the current data frame comprises:
if adding the length of the data frame to be enqueued to the current destination port buffer occupancy would exceed the actual buffer space, the enqueue fails;
if, after adding the length of the data frame to be enqueued, the current destination port buffer space is not enough to store one longest data frame, further judgment is needed;
if the available buffer space of the current destination port is enough to store one longest data frame, the frame is enqueued successfully.
3. The method for preemption processing in queue management according to claim 1, wherein the preemption information table is used to store the information of data frames for which preemption occurs, and each table entry has 3 items of information: destination port, frame length and priority.
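The three-field table entry of claim 3 could be modeled as a simple record; the field names and the use of a Python dataclass and list are illustrative assumptions, not the patent's storage format.

```python
from dataclasses import dataclass

@dataclass
class PreemptEntry:
    """One entry of the preemption information table (claim 3)."""
    dest_port: int   # port on which preemption must occur
    frame_len: int   # length of the high-priority frame that triggered it
    priority: int    # priority of that frame

# the table itself can be as simple as a list of entries in arrival order
preempt_table = []
preempt_table.append(PreemptEntry(dest_port=2, frame_len=1200, priority=6))
```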
4. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the following steps:
setting a buffer area for each port, wherein all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining amount of available buffer space at the destination port of the current data frame, and judging the priorities of the data frames stored in the destination port buffer space of the current data frame;
after judging that a high-priority data frame requiring preemption has been successfully enqueued, and that preemption is needed based on the priorities of the data frames stored in the destination port buffer space of the current data frame, extracting the destination port, frame length and priority information from the data frame and writing them into the corresponding entry of a preemption information table, thereby updating the preemption information table;
reading the preemption information table when preemption is in effect, and performing dequeue scheduling for each priority: accessing the preemption information table, determining the port on which preemption is needed according to the destination port when the preemption flag bit is valid, and judging whether the available space of that port can accommodate one longest frame; dequeuing the queues of the port one by one, in order of priority from the lowest up to the priority recorded in the preemption information table, until the available buffer space can accommodate one longest frame;
when a queue has a packet to dequeue, if the preemption flag bit is valid, the frame is discarded; otherwise the queue dequeues normally;
wherein judging the priorities of the data frames stored in the destination port buffer space of the current data frame comprises: on the basis that enqueueing is judged successful from the remaining available buffer space of the destination port, but the current destination port buffer space is not enough to store one longest data frame, reading the priorities of the data frames stored in the destination port buffer space of the current data frame for judgment;
if the sum of the lengths of the queues in the current destination port buffer space whose priority is lower than that of the data frame is less than the length of the data frame, the enqueue fails;
if the sum of the lengths of the queues in the current destination port buffer space whose priority is lower than that of the data frame is greater than or equal to the length of the data frame, the enqueue succeeds and preemption of the data frame is required.
5. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
setting a buffer area for each port, wherein all priority queues under a port share that buffer; when a packet is enqueued, judging the remaining amount of available buffer space at the destination port of the current data frame, and judging the priorities of the data frames stored in the destination port buffer space of the current data frame;
after judging that a high-priority data frame requiring preemption has been successfully enqueued, and that preemption is needed based on the priorities of the data frames stored in the destination port buffer space of the current data frame, extracting the destination port, frame length and priority information from the data frame and writing them into the corresponding entry of a preemption information table, thereby updating the preemption information table;
reading the preemption information table when preemption is in effect, and performing dequeue scheduling for each priority: accessing the preemption information table, determining the port on which preemption is needed according to the destination port when the preemption flag bit is valid, and judging whether the available space of that port can accommodate one longest frame; dequeuing the queues of the port one by one, in order of priority from the lowest up to the priority recorded in the preemption information table, until the available buffer space can accommodate one longest frame;
when a queue has a packet to dequeue, if the preemption flag bit is valid, the frame is discarded; otherwise the queue dequeues normally;
wherein judging the priorities of the data frames stored in the destination port buffer space of the current data frame comprises: on the basis that enqueueing is judged successful from the remaining available buffer space of the destination port, but the current destination port buffer space is not enough to store one longest data frame, reading the priorities of the data frames stored in the destination port buffer space of the current data frame for judgment;
if the sum of the lengths of the queues in the current destination port buffer space whose priority is lower than that of the data frame is less than the length of the data frame, the enqueue fails;
if the sum of the lengths of the queues in the current destination port buffer space whose priority is lower than that of the data frame is greater than or equal to the length of the data frame, the enqueue succeeds and preemption of the data frame is required.
6. A preemption processing system in queue management for implementing the method for preemption processing in queue management according to any one of claims 1 to 3, wherein the preemption processing system comprises:
a priority queue buffer sharing module, configured to set a buffer area for each port, all priority queues under a port sharing that buffer;
a preemption information table updating module, configured to update the preemption information table after judging that a high-priority data frame requiring preemption has been successfully enqueued;
a dequeue scheduling module, configured to read the preemption information table when preemption is in effect and perform dequeue scheduling for each priority;
and a packet dequeue processing module, configured to, when a queue has a packet to dequeue, discard the frame if the preemption flag bit is valid, and otherwise dequeue normally.
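The dequeue-side behavior shared by the scheduling and dequeue-processing modules can be sketched as below: while the port cannot hold one longest frame, frames leaving the lowest non-empty priority queue are discarded instead of transmitted, freeing buffer space. The function name, the list-of-lists queue representation, and the frame size are all assumptions for illustration, not the patented design.

```python
MAX_FRAME_LEN = 1518  # assumed "longest frame" (standard Ethernet maximum)

def run_preemption(port_queues, capacity, used):
    """Discard frames from the lowest priority upward until the port's free
    space can hold one longest frame.

    port_queues: list of per-priority lists of frame lengths, index 0 = lowest.
    Returns (new_used, discarded), where discarded is a list of
    (priority, frame_len) pairs for the frames dropped while the flag was valid.
    """
    discarded = []
    pri = 0
    while capacity - used < MAX_FRAME_LEN and pri < len(port_queues):
        if port_queues[pri]:
            frame_len = port_queues[pri].pop(0)   # flag valid: drop, don't send
            used -= frame_len                     # its buffer space is reclaimed
            discarded.append((pri, frame_len))
        else:
            pri += 1                              # queue empty, try next-lowest
    return used, discarded
```

Once the loop exits, the preemption flag would be cleared and subsequent dequeues transmit normally, matching the "otherwise dequeue normally" branch of the claim.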
7. A wireless communication control system, wherein the wireless communication control system is configured to implement the method for handling preemption in queue management according to any one of claims 1 to 3.
8. A satellite communication system, wherein the satellite communication system is configured to implement the method for handling preemption in queue management according to any one of claims 1 to 3.
CN202110131937.2A 2021-01-30 2021-01-30 Method, system, storage medium and application for crowding occupation processing in queue management Active CN112787956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110131937.2A CN112787956B (en) 2021-01-30 2021-01-30 Method, system, storage medium and application for crowding occupation processing in queue management

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110131937.2A CN112787956B (en) 2021-01-30 2021-01-30 Method, system, storage medium and application for crowding occupation processing in queue management

Publications (2)

Publication Number Publication Date
CN112787956A CN112787956A (en) 2021-05-11
CN112787956B true CN112787956B (en) 2022-07-08

Family

ID=75760087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110131937.2A Active CN112787956B (en) 2021-01-30 2021-01-30 Method, system, storage medium and application for crowding occupation processing in queue management

Country Status (1)

Country Link
CN (1) CN112787956B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114401235B (en) * 2021-12-15 2024-03-08 西安电子科技大学 Method, system, medium, equipment and application for processing heavy load in queue management
CN115396384B (en) * 2022-07-28 2023-11-28 广东技术师范大学 Data packet scheduling method, system and storage medium
CN116233200B (en) * 2023-05-10 2023-08-15 浙江正泰仪器仪表有限责任公司 Electric energy meter communication method and system based on subsequent frame dynamic registration

Citations (1)

Publication number Priority date Publication date Assignee Title
CN106789729A (en) * 2016-12-13 2017-05-31 华为技术有限公司 Buffer memory management method and device in a kind of network equipment

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US8718067B2 (en) * 2004-11-24 2014-05-06 Lantiq Deutschland Gmbh Pre-emption mechanism for packet transport
CN101848149B (en) * 2010-04-22 2013-10-23 北京航空航天大学 Method and device for scheduling graded queues in packet network
CN102571188A (en) * 2012-02-21 2012-07-11 西安电子科技大学 Signaling processing method and system for satellite communication network with satellite switch capability
CN106130930B (en) * 2016-06-24 2019-04-19 西安电子科技大学 A kind of data frame is joined the team the device and method of processing in advance
CN108462650B (en) * 2016-12-12 2021-09-14 中国航空工业集团公司西安航空计算技术研究所 Output unit based on TTE switch
US11171891B2 (en) * 2019-07-19 2021-11-09 Ciena Corporation Congestion drop decisions in packet queues
CN110493145B (en) * 2019-08-01 2022-06-24 新华三大数据技术有限公司 Caching method and device
CN111400206B (en) * 2020-03-13 2023-03-24 西安电子科技大学 Cache management method based on dynamic virtual threshold

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN106789729A (en) * 2016-12-13 2017-05-31 华为技术有限公司 Buffer memory management method and device in a kind of network equipment

Non-Patent Citations (1)

Title
FPGA design of a queue manager in an on-board switch with preemption; Wang Menglei et al.; Military Communications Technology (《军事通信技术》); 2012-09-25 (No. 03); full text *

Also Published As

Publication number Publication date
CN112787956A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN112787956B (en) Method, system, storage medium and application for crowding occupation processing in queue management
US11082366B2 (en) Method and apparatus for using multiple linked memory lists
US7260104B2 (en) Deferred queuing in a buffered switch
US8149708B2 (en) Dynamically switching streams of packets among dedicated and shared queues
EP2464058B1 (en) Queue scheduling method and apparatus
US20040064664A1 (en) Buffer management architecture and method for an infiniband subnetwork
US7385993B2 (en) Queue scheduling mechanism in a data packet transmission system
US7995472B2 (en) Flexible network processor scheduler and data flow
US20080196033A1 (en) Method and device for processing network data
EP0569173A2 (en) High-speed packet switch
US20050207426A1 (en) Per CoS memory partitioning
US10623329B2 (en) Queuing system to predict packet lifetime in a computing device
US8223788B1 (en) Method and system for queuing descriptors
US20230283578A1 (en) Method for forwarding data packet, electronic device, and storage medium for the same
US8671220B1 (en) Network-on-chip system, method, and computer program product for transmitting messages utilizing a centralized on-chip shared memory switch
US6445706B1 (en) Method and device in telecommunications system
US20050111461A1 (en) Processor with scheduler architecture supporting multiple distinct scheduling algorithms
US20090031306A1 (en) Method and apparatus for data processing using queuing
US20060126512A1 (en) Techniques to manage flow control
US20030099250A1 (en) Queue scheduling mechanism in a data packet transmission system
US7822051B1 (en) Method and system for transmitting packets
US7203198B2 (en) System and method for switching asynchronous transfer mode cells
WO2021101640A1 (en) Method and apparatus of packet wash for in-time packet delivery
US20030118020A1 (en) Method and apparatus for classification of packet data prior to storage in processor buffer memory
CN114401235B (en) Method, system, medium, equipment and application for processing heavy load in queue management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant