WO2017016505A1 - Data enqueue and dequeue method and queue management unit - Google Patents
Data enqueue and dequeue method and queue management unit
- Publication number
- WO2017016505A1 (PCT/CN2016/092055)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- queue
- slice
- tail
- information
- node
- Prior art date
Classifications
- H04L49/9036—Common buffer combined with individual queues
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
- H04L47/52—Queue scheduling by attributing bandwidth to queues
- H04L45/566—Routing instructions carried by the data packet, e.g. active networks
- H04L45/60—Router architectures
- H04L49/109—Integrated on microchip, e.g. switch-on-chip
- H04L49/901—Buffering arrangements using storage descriptor, e.g. read or write pointers
- H04L49/9015—Buffering arrangements for supporting a linked list
- H04L49/9021—Plurality of buffers per packet
- H04L49/9042—Separate storage for different parts of the packet, e.g. header and payload
- H04L2012/5681—Buffer or queue management
Definitions
- the present invention relates to the field of communications technologies, and in particular, to a data enqueue and dequeue method and a queue management unit.
- The Queue Manager is a common functional block in communication processing chips. It manages data packets mainly through their buffer addresses. Because a communication processing chip handles a large number of services, queue management generally establishes queues in a shared cache according to service classification. When a data packet arrives, a memory address is allocated to it, and a packet descriptor (Packet Cell Descriptor, PCD for short) containing that cache address is then managed by classification into the corresponding queue. Since packets belonging to different classifications arrive interleaved, the cache addresses in memory of the packets of one classification cannot be guaranteed to be contiguous; likewise, the cache addresses of their packet descriptors in the cache cannot be guaranteed to be contiguous. Each queue therefore needs a queue sub-linked list that links together the packet descriptors of the packets belonging to that queue.
- The specific method is as follows: in addition to the queues and queue sub-linked lists, a queue header table and a queue tail table are also established. The pointer of the queue header table points to the cache address of the first packet descriptor in the queue, the pointer of the queue tail table points to the cache address of the last packet descriptor in the queue, and the cache addresses of the packet descriptors in the queue are linked in the queue sub-linked list.
- When a data packet is dequeued, the queue header table pointer is read, the packet descriptor is read from the queue according to the cache address pointed to by that pointer, the packet is then read from memory according to the cache address in the packet descriptor and sent to the corresponding Media Access Control (MAC) port for transmission, and the cache address of the next packet descriptor is read from the queue sub-linked list to update the pointer of the queue header table.
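- The conventional descriptor-based dequeue described above can be summarized with a short C sketch. This is only an illustration: the type pcd_t and the helpers read_pcd(), transmit_packet() and sublist_next() are assumed names for the chip's descriptor read, packet fetch-and-transmit and sub-linked-list lookup primitives, not interfaces taken from the patent.

```c
#include <stdint.h>

/* Assumed platform primitives: descriptor read, packet fetch-and-transmit,
 * and queue sub-linked-list lookup. */
typedef struct {
    uint32_t pkt_addr;               /* cache address of the packet in memory */
} pcd_t;                             /* packet descriptor (PCD)               */

extern pcd_t    read_pcd(uint32_t pcd_addr);
extern void     transmit_packet(uint32_t pkt_addr);   /* read + send to MAC  */
extern uint32_t sublist_next(uint32_t pcd_addr);      /* queue sub-list walk */

/* head_ptr is the queue header table pointer: the cache address of the first
 * PCD of the queue. The return value becomes the new head pointer.          */
uint32_t dequeue_one_packet(uint32_t head_ptr)
{
    pcd_t pcd = read_pcd(head_ptr);     /* descriptor at the head pointer     */
    transmit_packet(pcd.pkt_addr);      /* fetch the packet, send to MAC port */
    return sublist_next(head_ptr);      /* next PCD address -> new head       */
}
```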
- Some existing queue management schemes first divide the data packet into several slices and allocate a cache address in memory to each slice. When enqueuing, slice information takes the place of the packet descriptor and is managed instead. When dequeuing, the queue header table pointer is read, the slice information is read from the queue according to the cache address pointed to by that pointer, the slice is then read from memory according to the cache address in the slice information and sent to the corresponding MAC port for transmission, and the cache address of the next slice information is read from the queue sub-linked list to update the pointer of the queue header table.
- The communication processing chip is required to send slices to the MAC ports according to the priority between queues, and each MAC port transmits slices according to that priority. The existing priority scheduling strategy evaluates and switches priority each time a complete packet is dequeued. With slice-based dequeuing, however, applying that strategy would mean judging and switching priority after every slice, which conflicts with whole-packet dequeuing; conversely, if dequeuing must be done on a whole-packet basis, the existing priority scheduling strategy cannot be applied. Therefore, when slices are dequeued, it is difficult to perform priority scheduling within the same MAC port based on the existing priority scheduling policy.
- The embodiments of the invention provide a data enqueuing and dequeuing method and a queue management unit, which enable the same communication port to dequeue and transmit slices according to priority while still dequeuing on a whole-packet basis.
- A first aspect of the present invention provides a data enqueue method, applied to a queue management unit of a queue management system of a communication processing chip. The queue management system has several queues, queue header tables, queue tail tables, and a queue total linked list; the queue total linked list includes several queue sub-linked lists, and each queue corresponds to one queue header table, one queue tail table, and one queue sub-linked list. The queue header table includes at least a head pointer, and the queue tail table includes at least a tail pointer. The communication processing chip is further provided with several communication ports, each communication port corresponding to at least two queues. The method may include:
- receiving a data packet that needs to be enqueued, dividing the data packet into several slices, obtaining the slice information of each slice, and marking the tail slice of the data packet with a tail slice identifier, where the slice information includes at least a port number, a priority, and the cache address of the slice in memory;
- enqueuing the corresponding slice information according to the order of the slices in the data packet; if a slice is marked with a tail slice identifier, determining that the slice is the tail slice of the data packet and generating a first type node, the first type node pointing to the cache address of the slice information of the tail slice and to the tail slice identifier of the tail slice;
- determining whether the target queue is empty; if the target queue is empty, writing the slice information of the tail slice into the target queue and updating the head pointer of the queue header table according to the first type node; if the target queue is non-empty, writing the slice information of the tail slice into the target queue and adding the first type node at the tail of the queue sub-linked list corresponding to the target queue.
- The method further includes: in the process of enqueuing the corresponding slice information, if a slice is not marked with a tail slice identifier, determining that the slice is a non-tail slice of the data packet and generating a second type node, the second type node pointing to the cache address of the slice information of the non-tail slice, where a non-tail slice is any slice of the data packet other than the last slice; determining whether the target queue is empty; if the target queue is empty, writing the slice information of the non-tail slice into the target queue and updating the head pointer of the queue header table according to the second type node; if the target queue is non-empty, writing the slice information of the non-tail slice into the target queue and adding the second type node at the tail of the queue sub-linked list corresponding to the target queue.
- The queue header table further includes queue indication information, and determining whether the target queue is empty includes: reading the queue indication information in the queue header table corresponding to the target queue, and determining, according to the queue indication information, whether the target queue is empty.
- The method further includes: after the slice information of the tail slice is written into the target queue when the target queue is empty, or after the slice information of the non-tail slice is written into the target queue when the target queue is empty, updating the queue indication information.
- The method further includes: after the slice information of the tail slice is written into the target queue, updating the tail pointer of the queue tail table according to the first type node.
- The method further includes: after the slice information of the non-tail slice is written into the target queue, updating the tail pointer of the queue tail table according to the second type node.
- A second aspect of the present invention provides a data dequeue method, applied to a queue management unit of a queue management system of a communication processing chip. The queue management system has several queues, queue header tables, queue tail tables, and a queue total linked list; the queue total linked list includes several queue sub-linked lists, and each queue corresponds to one queue header table, one queue tail table, and one queue sub-linked list. The queue header table includes at least a head pointer, and the queue tail table includes at least a tail pointer. The communication processing chip is further provided with several communication ports, each communication port corresponding to at least two queues. The method may include:
- reading the head pointer of the queue header table of the target queue from the several queues of the current communication port, where slice information of slices is written in the queue, the slices are obtained by dividing a data packet, the tail slice of the data packet is marked with a tail slice identifier, the head pointer points to a first type node or a second type node, the first type node points to the cache address of the slice information of a tail slice and to the tail slice identifier of that tail slice, the second type node points to the cache address of the slice information of a non-tail slice of the data packet, the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet, and a non-tail slice is any slice of the data packet other than the last slice;
- if it is determined that the head pointer points to the first type node, reading the slice information of the tail slice from the target queue according to the cache address of the slice information of the tail slice pointed to by the first type node, and, after completing the reading of the slice information of the tail slice, triggering queue priority switching of the current communication port.
- The method further includes: if it is determined that the head pointer points to the second type node, reading the slice information of the non-tail slice from the target queue according to the cache address of the slice information of the non-tail slice pointed to by the second type node, and, after completing the reading of the slice information of the non-tail slice, triggering queue priority switching of the current communication port.
- The method further includes: after completing the reading of the slice information of the tail slice, or after completing the reading of the slice information of the non-tail slice, reading the next node from the queue sub-linked list corresponding to the target queue and updating the head pointer of the queue header table according to that next node, the next node being a first type node or a second type node.
- Reading the head pointer of the queue header table of the target queue from the several queues of the current communication port includes: receiving a dequeue notification message sent by the scheduler, where the dequeue notification message includes a queue number, and the queue indicated by the queue number is the target queue.
- Triggering the queue priority switching of the current communication port includes: when the communication port scheduling time arrives, triggering the scheduler to schedule the corresponding communication port according to the configured time slot and to read the last dequeue information of that communication port, the last dequeue information including at least the queue number of the last dequeue on that communication port, the last dequeued slice information, and the node used for dequeuing that slice information; the scheduler determines, according to the last dequeue information, whether to trigger a queue priority switch of the current communication port; if not, the scheduler sends a dequeue notification request to the queue management unit, where the queue number included in the dequeue notification request is the queue number of the last dequeue; if yes, the queue priority switching of the current communication port is triggered.
- Triggering the queue priority switching of the current communication port includes: triggering the scheduler to determine the highest-priority queue among the non-empty queues of the current communication port, switching the target queue to that highest-priority queue, and sending the dequeue notification message.
- A third aspect of the present invention provides a queue management unit. The queue management unit is disposed in a queue management system of a communication processing chip, and the queue management system has several queues, queue header tables, queue tail tables, and a queue total linked list; the queue total linked list includes several queue sub-linked lists, and each queue corresponds to one queue header table, one queue tail table, and one queue sub-linked list. The queue header table includes at least a head pointer, and the queue tail table includes at least a tail pointer. The communication processing chip is further provided with several communication ports, each communication port corresponding to at least two queues. The queue management unit includes:
- a receiving module configured to receive a data packet that needs to be enqueued, divide the data packet into several slices, obtain the slice information of each slice, and mark the tail slice of the data packet with a tail slice identifier, where the slice information includes at least a port number, a priority, and the cache address of the slice in memory, and the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet;
- an enqueue processing module configured to enqueue the corresponding slice information according to the order of the slices in the data packet; if a slice is marked with a tail slice identifier, determine that the slice is the tail slice of the data packet and generate a first type node, the first type node pointing to the cache address of the slice information of the tail slice and to the tail slice identifier of the tail slice; determine whether the target queue is empty; if the target queue is empty, write the slice information of the tail slice into the target queue and update the head pointer of the queue header table according to the first type node; if the target queue is non-empty, write the slice information of the tail slice into the target queue, add the first type node at the tail of the queue sub-linked list corresponding to the target queue, and update the tail pointer of the queue tail table according to the first type node.
- The enqueue processing module is further configured to: in the process of enqueuing the corresponding slice information, if a slice is not marked with a tail slice identifier, determine that the slice is a non-tail slice of the data packet and generate a second type node, the second type node pointing to the cache address of the slice information of the non-tail slice, where a non-tail slice is any slice of the data packet other than the last slice; determine whether the target queue is empty; if the target queue is empty, write the slice information of the non-tail slice into the target queue and update the head pointer of the queue header table according to the second type node; if the target queue is non-empty, write the slice information of the non-tail slice into the target queue, add the second type node at the tail of the queue sub-linked list corresponding to the target queue, and update the tail pointer of the queue tail table according to the second type node.
- The queue header table further includes queue indication information, and the enqueue processing module is specifically configured to read the queue indication information in the queue header table corresponding to the target queue and determine, according to the queue indication information, whether the target queue is empty.
- The enqueue processing module is further configured to: after the slice information of the tail slice is written into the target queue when the target queue is empty, or after the slice information of the non-tail slice is written into the target queue when the target queue is empty, update the queue indication information and send a dequeue request message to the scheduler of the queue management system, the dequeue request message including a queue number.
- The enqueue processing module is further configured to: after the slice information of the tail slice is written into the target queue, update the tail pointer of the queue tail table according to the first type node; and after the slice information of the non-tail slice is written into the target queue, update the tail pointer of the queue tail table according to the second type node.
- A fourth aspect of the present invention provides a queue management unit. The queue management unit is disposed in a queue management system of a communication processing chip, and the queue management system has several queues, queue header tables, queue tail tables, and a queue total linked list; the queue total linked list includes several queue sub-linked lists, and each queue corresponds to one queue header table, one queue tail table, and one queue sub-linked list. The queue header table includes at least a head pointer, and the queue tail table includes at least a tail pointer. The communication processing chip is further provided with several communication ports, each communication port corresponding to at least two queues. The queue management unit includes:
- a reading module configured to read the head pointer of the queue header table of the target queue from the several queues of the current communication port, where slice information of slices is written in the queue, the slices are obtained by dividing a data packet, the tail slice of the data packet is marked with a tail slice identifier, the slice information includes at least a port number, a priority, and the cache address of the slice in memory, the head pointer points to a first type node or a second type node, the first type node points to the cache address of the slice information of a tail slice and to the tail slice identifier of that tail slice, the second type node points to the cache address of the slice information of a non-tail slice of the data packet, the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet, and a non-tail slice is any slice of the data packet other than the last slice;
- a dequeue processing module configured to: if it is determined that the head pointer points to the first type node, read the slice information of the tail slice from the target queue according to the cache address of the slice information of the tail slice pointed to by the first type node, and, after completing the reading of the slice information of the tail slice, trigger queue priority switching of the current communication port.
- The dequeue processing module is further configured to: if it is determined that the head pointer points to the second type node, read the slice information of the non-tail slice from the target queue according to the cache address of the slice information of the non-tail slice pointed to by the second type node, and, after completing the reading of the slice information of the non-tail slice, trigger queue priority switching of the current communication port.
- The dequeue processing module is further configured to: after completing the reading of the slice information of the tail slice, or after completing the reading of the slice information of the non-tail slice, read the next node from the queue sub-linked list corresponding to the target queue and update the head pointer of the queue header table according to that next node, the next node being a first type node or a second type node.
- The dequeue processing module is further configured to: before reading the head pointer of the queue header table of the target queue from the several queues of the current communication port, receive a dequeue notification message sent by the scheduler, where the dequeue notification message includes a queue number and the queue indicated by the queue number is the target queue.
- The dequeue processing module is specifically configured to: when the communication port scheduling time arrives, trigger the scheduler to schedule the corresponding communication port according to the configured time slot and to read the last dequeue information of that communication port, the last dequeue information including at least the queue number of the last dequeue on that communication port, the last dequeued slice information, and the node used for dequeuing that slice information; the scheduler determines, according to the last dequeue information, whether to trigger queue priority switching of the current communication port; if not, the scheduler sends a dequeue notification request to the queue management unit, where the queue number included in the dequeue notification request is the queue number of the last dequeue; if yes, the queue priority switching of the current communication port is triggered.
- The dequeue processing module is specifically configured to trigger the scheduler to determine the highest-priority queue among the non-empty queues of the current communication port, switch the target queue to that highest-priority queue, and send the dequeue notification message.
- It can be seen from the above that, when the slice information of a slice is enqueued, if the slice is the tail slice of the data packet, a first type node is generated; the first type node points to the cache address of the slice information of the tail slice and to the tail slice identifier of the tail slice, where the tail slice identifier indicates that the tail slice is the last slice of the data packet. After the first type node is generated, if the target queue is empty, the slice information of the tail slice is written into the target queue and the head pointer of the queue header table is updated according to the first type node, so that the head pointer points to the cache address of the slice information of the tail slice and to its tail slice identifier; if the target queue is not empty, the slice information of the tail slice is written into the target queue and the first type node is added to the queue sub-linked list. Then, when slice information is dequeued according to the head pointer, if the head pointer points to a first type node, it is known in advance, from the tail slice identifier pointed to by the first type node, that the slice about to be dequeued is the tail slice of a data packet, and after the slice information of the tail slice has been dequeued, queue priority switching within the same port is performed. In this way, the same communication port can dequeue and transmit slices according to priority while still dequeuing on a whole-packet basis.
- FIG. 1 is a schematic diagram of application of queue management according to some embodiments of the present invention.
- FIG. 2 is a schematic flowchart of a data enqueue method according to some embodiments of the present invention.
- FIG. 3 is a schematic flowchart of a data enqueue method according to another embodiment of the present invention.
- FIG. 4 is a schematic flowchart of a data dequeuing method according to some embodiments of the present invention.
- FIG. 5 is a schematic structural diagram of a queue management unit according to some embodiments of the present invention.
- FIG. 6 is a schematic structural diagram of a queue management unit according to another embodiment of the present invention.
- FIG. 7 is a schematic structural diagram of a queue management apparatus according to some embodiments of the present invention.
- The embodiments of the invention provide a data enqueue and dequeue method, which enables the same communication port to dequeue and transmit slices according to priority while still dequeuing on a whole-packet basis.
- the embodiment of the invention further provides a queue management unit corresponding to the data enqueue and a queue management unit corresponding to the data dequeue.
- The communication processing chip is responsible for pre-processing and forwarding the data packets that need to be sent. It is usually provided with several MAC ports, each MAC port being responsible for forwarding data packets of a corresponding service type, and the MAC ports are scheduled by a certain scheduling policy. One such scheduling policy configures a scheduling time slot for each MAC port and schedules the corresponding MAC port in its scheduling time slot, with only one MAC port scheduled per time slot.
- Each MAC port can have a different bandwidth, but the total bandwidth of all MAC ports is limited by the total bandwidth of the system. For example, if there are 28 MAC ports of 10 Gbps each, their total bandwidth is 280 Gbps, which is less than a total system bandwidth of 320 Gbps.
- Because the bandwidth of a MAC port is limited, if the communication processing chip sends the port more data than its bandwidth can carry for a period of time and then sends it nothing for the next period, data packets reach the MAC port in irregular bursts; when the packets are large, the MAC port transmits slowly, which easily causes congestion and data bursts. Therefore, in the embodiments of the present invention, a data packet that needs to be enqueued is first divided into several slices, where the slices may be sized according to the minimum bandwidth of the MAC port, and the slices are then sent to the MAC port, reducing data bursts.
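- As a concrete illustration of this slicing step, the following C sketch cuts a packet buffer into fixed-size slices and flags the last one as the tail slice. The slice size SLICE_BYTES and the emit_slice() callback are assumptions made for the example; the text only says the slice size may follow the port's minimum bandwidth.

```c
#include <stddef.h>
#include <stdbool.h>

#define SLICE_BYTES 256   /* assumed slice size, not a value from the patent */

/* Cut a packet buffer into slices and flag the last one as the tail slice. */
void slice_packet(const unsigned char *pkt, size_t len,
                  void (*emit_slice)(const unsigned char *slice, size_t n,
                                     bool is_tail))
{
    for (size_t off = 0; off < len; off += SLICE_BYTES) {
        size_t n = (len - off > SLICE_BYTES) ? SLICE_BYTES : (len - off);
        emit_slice(pkt + off, n, off + n == len);   /* last slice -> tail */
    }
}
```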
- On this basis, whole-packet dequeuing of slices is implemented, that is, within the same MAC port one data packet is sent completely before another data packet is sent. At the same time, one MAC port may be responsible for forwarding packets of several different service types, and packets of different service types have different forwarding priorities, so the MAC port must respect priority. The priority scheduling policy provided in the prior art operates on whole data packets: only after a packet has been dequeued is the priority scheduling policy invoked to perform queue priority scheduling. Once a packet is divided into slices, however, invoking the priority scheduling policy after every slice is dequeued violates the whole-packet dequeue strategy, and the prior art has no priority scheduling strategy for the slice-based dequeue case.
- Therefore, the embodiments of the present invention provide a data enqueue and dequeue method that can perform priority switching within the same port on the premise that data is enqueued as slices and dequeued as whole packets.
- the data enqueue and dequeue method provided by the embodiment of the present invention is applied to a queue management system of a communication processing chip, and the queue management system includes a queue management unit and a scheduler.
- the communication processing chip receives a data packet that needs to be enqueued, and the data packet comes from a network or a terminal connected to the communication processing chip.
- the data packet is sent to the queue management unit.
- The queue management unit divides the data packet into several slices and marks the tail slice of the data packet with a tail slice identifier to indicate that it is the last slice of the data packet; the slices other than the last slice are the non-tail slices of the packet. Thereafter, the slice information is what is managed, in order to implement the enqueue and dequeue of the slices.
- In the communication processing chip, queues are established according to the different service types, and a queue header table, a queue tail table and a queue total linked list are also established. One queue corresponds to one queue header table and one queue tail table, and the queue total linked list includes several queue sub-linked lists, one queue sub-linked list corresponding to each queue. The queue header table can be implemented with a register group and supports two reads and two writes per cycle; the queue tail table can be implemented with a dual-port RAM; the queue sub-linked lists can also be implemented with dual-port RAM.
- The queue is used to store the slice information of the slices.
- The queue header table includes queue state information and a head pointer. The head pointer is updated according to a first type node or a second type node: a first type node points to the cache address of the slice information of a tail slice and to the tail slice identifier of that tail slice, and a second type node points to the cache address of the slice information of a non-tail slice, where the tail slice identifier indicates that the tail slice is the last slice of the data packet. The queue state information indicates whether the queue is empty or non-empty.
- The queue tail table includes a tail pointer, which is likewise updated according to a first type node or a second type node.
- The queue sub-linked list stores nodes and links, through those nodes, the cache addresses of all the slice information in the queue; the nodes include first type nodes and/or second type nodes. Apart from the first slice information in the corresponding queue, each remaining slice information corresponds one-to-one to a node in the queue sub-linked list.
- For example, if the corresponding queue contains slice information A, slice information B and slice information C, with slice information A corresponding to node 1, slice information B to node 2 and slice information C to node 3, then the head pointer of the queue header table is updated according to node 1 and the tail pointer of the queue tail table is updated according to node 3; node 2 and node 3 are in the queue sub-linked list, with node 2 linked to node 3.
- When only one slice information is written in the queue, there is no node in the queue sub-linked list; the head pointer of the queue header table and the tail pointer of the queue tail table both point to the node corresponding to that slice information.
- When two slice informations are written in the queue, the head pointer of the queue header table points to the node corresponding to the first slice information and the tail pointer of the queue tail table points to the node corresponding to the second slice information.
- In general, when N slice informations are written in the queue there are N-1 nodes in the queue sub-linked list, where slice information 2 corresponds to node 2, slice information 3 corresponds to node 3, and so on up to slice information N corresponding to node N; the head pointer in the queue header table points to node 1, corresponding to slice information 1, and the tail pointer points to node N.
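- The following C sketch summarizes these structures under assumed names and field widths (slice_info_t, node_t, queue_head_t, queue_tail_t); it is an illustration of the description above, not a definition taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t port_no;     /* communication (MAC) port of the queue            */
    uint8_t  priority;    /* priority of the queue within that port           */
    uint32_t cache_addr;  /* cache address of the slice in memory             */
} slice_info_t;

/* A node links one slice-information entry into the queue; a first type node
 * additionally carries the tail slice identifier.                            */
typedef struct {
    uint32_t info_addr;   /* cache address of the slice information           */
    bool     is_tail;     /* true: first type node (tail slice)               */
                          /* false: second type node (non-tail slice)         */
} node_t;

typedef struct {          /* queue header table entry (register group)        */
    node_t head;          /* head pointer: node of the first slice information */
    bool   non_empty;     /* queue state / indication information             */
} queue_head_t;

typedef struct {          /* queue tail table entry (dual-port RAM)           */
    node_t tail;          /* tail pointer: node of the last slice information */
} queue_tail_t;
```

- With these types, the node of the first slice information lives in the head pointer itself, while the nodes of slice informations 2 to N live in the queue sub-linked list, matching the N-slice example above.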
- The queue management unit enqueues the slice information of the slices in the order of the slices in the data packet. When slice information is enqueued, if the slice is marked with a tail slice identifier, the slice is the tail slice of the data packet and a first type node is generated; the first type node points to the cache address of the slice information of the tail slice and to the tail slice identifier of the tail slice. If the queue state information indicates that the queue is empty, the slice information of the tail slice is written into the queue, and the head pointer of the queue header table and the tail pointer of the queue tail table are updated according to the first type node, so that both point to the cache address of the slice information of the tail slice and to its tail slice identifier. If the queue state information indicates that the queue is not empty, in addition to writing the slice information of the tail slice into the queue, the first type node is added at the end of the corresponding queue sub-linked list and the tail pointer of the queue tail table is updated according to the first type node.
- If the slice is a non-tail slice of the data packet, a second type node is generated, which points to the cache address of the slice information of the non-tail slice. If the queue state information in the queue header table indicates that the queue is empty, the slice information of the non-tail slice is written into the queue and the head pointer of the queue header table and the tail pointer of the queue tail table are updated according to the second type node; if the queue state information indicates that the queue is non-empty, the slice information of the non-tail slice is written into the queue, the generated second type node is added to the queue sub-linked list, and the tail pointer of the queue tail table is updated according to the generated second type node.
- When slice information is dequeued, the queue management unit reads the head pointer of the queue header table. If the head pointer indicates a tail slice identifier, the slice about to be dequeued is the tail slice of a data packet, and once the tail slice has been dequeued the whole packet has been dequeued, so priority switching within the MAC port may be performed; therefore, after the slice information has been read from the queue according to the cache address pointed to by the head pointer, priority switching within the MAC port is triggered. If the head pointer indicates no tail slice identifier, the slice information is simply read from the queue according to the cache address pointed to by the head pointer; since the slice about to be dequeued is a non-tail slice of the data packet, no priority switching within the MAC port is required after it is dequeued.
- Whether or not the head pointer indicates a tail slice identifier, the next node is read from the corresponding queue sub-linked list after the dequeue; if the next node is a first type node, the head pointer is updated according to the first type node, and if it is a second type node, the head pointer is updated according to the second type node.
- Based on the above, an embodiment of the present invention provides a data enqueue method, applied to a queue management unit of a queue management system of a communication processing chip. Queues, queue header tables, queue tail tables and a queue total linked list are established in the queue management system; the queue total linked list includes several queue sub-linked lists, and each queue corresponds to one queue header table, one queue tail table and one queue sub-linked list. The queue header table includes at least a head pointer, the queue tail table includes at least a tail pointer, and the communication processing chip is further provided with several communication ports, each communication port corresponding to at least two queues. The method includes: receiving a data packet that needs to be enqueued, dividing the data packet into several slices, obtaining the slice information of each slice, and marking the tail slice of the data packet with a tail slice identifier, the slice information including at least a port number, a priority and the cache address of the slice in memory, the tail slice identifier indicating that the tail slice is the last slice of the data packet; and enqueuing the corresponding slice information in the order of the slices in the data packet.
- If, in the process of enqueuing the corresponding slice information, a slice is marked with a tail slice identifier, it is determined that the slice is the tail slice of the data packet and a first type node is generated, the first type node pointing to the cache address of the slice information of the tail slice and to the tail slice identifier of the tail slice; it is then determined whether the target queue is empty; if the target queue is empty, the slice information of the tail slice is written into the target queue and the head pointer of the queue header table is updated according to the first type node; if the target queue is non-empty, the slice information of the tail slice is written into the target queue and the first type node is added at the tail of the queue sub-linked list corresponding to the target queue.
- An embodiment of the present invention further provides a data dequeue method, applied to a queue management unit of a queue management system of a communication processing chip. The queue management system has several queues, queue header tables, queue tail tables and a queue total linked list; the queue total linked list includes several queue sub-linked lists, and each queue corresponds to one queue header table, one queue tail table and one queue sub-linked list. The queue header table includes at least a head pointer, the queue tail table includes at least a tail pointer, and the communication processing chip is further provided with several communication ports, each communication port corresponding to at least two queues. The method may include: reading the head pointer of the queue header table of the target queue from the several queues of the current communication port, where slice information of slices is written in the queue, the slices are obtained by dividing a data packet, the tail slice of the data packet is marked with a tail slice identifier, the slice information includes at least a port number, a priority and the cache address of the slice in memory, the head pointer points to a first type node or a second type node, the first type node points to the cache address of the slice information of a tail slice and to the tail slice identifier of that tail slice, the second type node points to the cache address of the slice information of a non-tail slice, the tail slice identifier indicates that the tail slice is the last slice of the data packet, and a non-tail slice is any slice of the data packet other than the last slice.
- Because a first type node is generated when the slice information of a tail slice is enqueued, pointing to the cache address of the slice information of the tail slice and to its tail slice identifier, and because the slice information of the tail slice is written into the target queue with the first type node added to the queue sub-linked list, the queue management unit can, when dequeuing slice information according to the head pointer, know in advance from the tail slice identifier pointed to by a first type node that the slice about to be dequeued is the tail slice of a data packet; after the slice information of the tail slice has been dequeued, queue priority switching within the same port is performed, so that the same communication port can dequeue and transmit slices according to priority while still dequeuing on a whole-packet basis.
- FIG. 2 is a schematic flowchart of a data enqueue method according to some embodiments of the present invention. As shown in FIG. 2, the data enqueue method may include:
- 201: Receive a data packet that needs to be enqueued, divide the data packet into several slices, obtain the slice information of each slice, and mark the tail slice of the data packet with a tail slice identifier, where the slice information includes at least a port number, a priority and the cache address of the slice in memory, and the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet.
- the data packet is from a network or a terminal connected to the communication processing chip.
- The port number is used to indicate the communication port corresponding to the queue in which the data packet is placed; in the embodiments of the present invention, the communication port may be the MAC port described above. The priority refers to the priority, within the communication port, of the queue in which the packet is placed.
- the queue header table corresponding to the queue further includes queue indication information, where the queue indication information is used to indicate that the queue is empty or non-empty, wherein the queue is empty, that is, no slice information is written in the queue. Therefore, the step 203 specifically includes: reading the queue indication information in the queue header table corresponding to the target queue, and determining, according to the queue indication information, whether the target queue is empty.
- the first type of node is generated when the slice is a tail slice of the packet. After generating the first type of node, if the target queue is empty, go to step 204; if the target queue is not empty, go to step 205.
- In addition, the tail pointer of the queue tail table is updated according to the first type node, so that the tail pointer indicates the last slice information in the queue; when a non-tail slice is enqueued, the tail pointer of the queue tail table is likewise updated according to the second type node.
- In the process of enqueuing the corresponding slice information, if a slice is not marked with a tail slice identifier, it is determined that the slice is a non-tail slice of the data packet and a second type node is generated, the second type node pointing to the cache address of the slice information of the non-tail slice, a non-tail slice being any slice of the data packet other than the last slice. It is then determined whether the target queue is empty; if the target queue is empty, the slice information of the non-tail slice is written into the target queue and the head pointer of the queue header table is updated according to the second type node; if the target queue is non-empty, the slice information of the non-tail slice is written into the target queue, the second type node is added at the tail of the queue sub-linked list corresponding to the target queue, and the tail pointer of the queue tail table is updated according to the second type node.
- When the target queue was empty, the queue indication information also needs to be updated so that it indicates the queue is no longer empty: after the slice information of the tail slice is written into the empty target queue, or after the slice information of the non-tail slice is written into the empty target queue, the queue indication information is updated.
- FIG. 3 is a schematic flowchart of a data enqueue method according to some embodiments of the present invention. As shown in FIG. 3, the data enqueue method may include:
- reading the queue state information of the target queue and determining whether the queue state information indicates that the target queue is empty; if the queue state information indicates that the target queue is empty, the process proceeds to step 303, and if it indicates that the target queue is not empty, the process proceeds to step 304.
- Case 1: the slice is the tail slice of the packet and a first type node is generated, which is then processed in one of the following two ways:
- if the target queue is empty, the slice information of the tail slice is written into the target queue, the queue state information of the queue header table is updated, and the head pointer of the queue header table and the tail pointer of the queue tail table are updated according to the first type node;
- if the target queue is not empty, the slice information of the tail slice is written into the target queue, the first type node is added to the queue sub-linked list, and the tail pointer of the queue tail table is updated according to the first type node.
- Case 2: the slice is a non-tail slice of the packet and a second type node is generated, which is then processed in one of the following two ways:
- if the target queue is empty, the slice information of the non-tail slice is written into the target queue, the queue state information of the queue header table is updated, and the head pointer of the queue header table and the tail pointer of the queue tail table are updated according to the second type node;
- if the target queue is not empty, the slice information of the non-tail slice is written into the target queue, the second type node is added to the queue sub-linked list, and the tail pointer of the queue tail table is updated according to the second type node (a code sketch covering both cases follows this list).
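- The sketch below is a minimal C rendering of this enqueue handling, reusing the illustrative types from the structure sketch above; write_queue(), sublist_append() and notify_scheduler() are assumed platform helpers, and the only difference between Case 1 and Case 2 is whether the generated node carries the tail slice identifier.

```c
#include <stdint.h>
#include <stdbool.h>

/* Reuses slice_info_t, node_t, queue_head_t, queue_tail_t from the structure
 * sketch; the helpers below are assumed primitives, not patent interfaces. */
extern void write_queue(uint32_t queue_no, const slice_info_t *info);
extern void sublist_append(uint32_t queue_no, node_t n);
extern void notify_scheduler(uint32_t queue_no);    /* dequeue request message */

void enqueue_slice(queue_head_t *head, queue_tail_t *tail, uint32_t queue_no,
                   const slice_info_t *info, uint32_t info_addr,
                   bool is_tail_slice)
{
    /* First type node for a tail slice, second type node otherwise. */
    node_t node = { .info_addr = info_addr, .is_tail = is_tail_slice };

    write_queue(queue_no, info);            /* write the slice information    */

    if (!head->non_empty) {                 /* target queue empty             */
        head->head = node;                  /* head pointer <- node           */
        tail->tail = node;                  /* tail pointer <- node           */
        head->non_empty = true;             /* update queue indication info   */
        notify_scheduler(queue_no);         /* dequeue request to scheduler   */
    } else {                                /* target queue non-empty         */
        sublist_append(queue_no, node);     /* add node at sub-list tail      */
        tail->tail = node;                  /* tail pointer <- node           */
    }
}
```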
- In this way, when dequeuing, the queue management unit can determine in advance, from the head pointer of the queue header table, whether the slice to be dequeued is the tail slice of a data packet, and thus determine in advance whether priority switching within the communication port is needed.
- FIG. 4 is a schematic flowchart of a data dequeuing method according to some embodiments of the present invention. As shown in FIG. 4, a data dequeuing method may include:
- Read the head pointer of the queue header table of the target queue from the several queues of the current communication port. Slice information of slices is written in the queue; the slices are obtained by dividing a data packet, the tail slice of the data packet is marked with a tail slice identifier, and the slice information includes at least a port number, a priority and the cache address of the slice in memory. The head pointer points to a first type node or a second type node: the first type node points to the cache address of the slice information of a tail slice and to the tail slice identifier of that tail slice, and the second type node points to the cache address of the slice information of a non-tail slice of the data packet, where the tail slice identifier indicates that the tail slice is the last slice of the data packet and a non-tail slice is any slice of the data packet other than the last slice.
- In the embodiments of the present invention, slices are prioritized and dequeued on a per-communication-port basis, and the communication port may be the MAC port introduced above. The head pointer points to a first type node or a second type node, and the tail slice is dequeued according to the cache address of the slice information of the tail slice pointed to by the first type node.
- The priorities of different queues within the same MAC port can be scheduled and switched using a strict priority (Strict Priority, SP) scheduling algorithm or a round robin (Round Robin, RR) scheduling algorithm.
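- As an illustration of this priority decision, the sketch below shows strict-priority selection among the non-empty queues of one port; QUEUES_PER_PORT and the nonempty[] array are assumptions made for the example.

```c
#include <stdbool.h>

#define QUEUES_PER_PORT 8   /* illustrative value, not taken from the patent */

/* Strict-priority (SP) selection among the non-empty queues of one port:
 * empty queues do not take part in the decision. Index 0 is treated as the
 * highest priority in this sketch; returns -1 if every queue is empty.      */
int sp_select(const bool nonempty[QUEUES_PER_PORT])
{
    for (int q = 0; q < QUEUES_PER_PORT; q++) {
        if (nonempty[q])
            return q;
    }
    return -1;
}
```

- A round robin (RR) variant would instead remember the index of the last selected queue and continue the scan from the following index.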
- When dequeuing, the head pointer of the queue header table can be used to determine in advance whether priority switching within the communication port is needed: specifically, when the head pointer points to a first type node, the first type node points to a tail slice identifier, so it is known that the tail slice of a packet is about to be dequeued, and after the tail slice has been dequeued, priority switching within the communication port is performed.
- If it is determined that the head pointer points to a second type node, the slice information of the non-tail slice is read from the target queue according to the cache address of the slice information of the non-tail slice pointed to by the second type node.
- After the slice information has been read, the next node is read from the queue sub-linked list corresponding to the target queue and the head pointer of the queue header table is updated according to that next node, the next node being a first type node or a second type node. Because a first type node points to the cache address of the slice information of a tail slice and to its tail slice identifier, reading the next node from the sub-linked list at every dequeue and updating the head pointer from it ensures that, at each dequeue, it can always be determined from the head pointer whether priority switching within the communication port is needed, so that priority-based dequeuing within the same communication port is guaranteed on the basis of whole-packet dequeuing. A sketch of this per-slice dequeue handling follows.
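- The sketch below again reuses the illustrative types from the structure sketch. The helpers read_slice_info(), send_slice(), sublist_pop_front() and trigger_priority_switch() are assumed primitives, and clearing the queue indication when the sub-list runs out is likewise an assumption of the sketch, not a step spelled out in the text.

```c
#include <stdint.h>
#include <stdbool.h>

extern slice_info_t read_slice_info(uint32_t info_addr);
extern void         send_slice(uint16_t port_no, uint32_t slice_cache_addr);
extern bool         sublist_pop_front(uint32_t queue_no, node_t *next);
extern void         trigger_priority_switch(uint16_t port_no);

void dequeue_slice(queue_head_t *head, uint32_t queue_no)
{
    node_t cur = head->head;                        /* read the head pointer  */
    slice_info_t info = read_slice_info(cur.info_addr);
    send_slice(info.port_no, info.cache_addr);      /* slice -> MAC port      */

    /* Advance the head pointer from the next node in the queue sub-list so
     * the next dequeue can again be classified as tail / non-tail.           */
    node_t next;
    if (sublist_pop_front(queue_no, &next))
        head->head = next;
    else
        head->non_empty = false;                    /* queue drained (assumed) */

    /* Only a first type node (tail slice) ends a whole packet, so only then
     * is queue priority switching of this communication port triggered.      */
    if (cur.is_tail)
        trigger_priority_switch(info.port_no);
}
```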
- Optionally, triggering the queue priority switching of the current communication port in step 404 includes: triggering the scheduler to determine the highest-priority queue among the non-empty queues of the current communication port, switching the target queue to that highest-priority queue, and sending a dequeue notification message, the dequeue notification message including a queue number.
- The same communication port may have N queues scheduled to it, where N is a natural number greater than or equal to 1, and each of the N queues has a different priority. Therefore, after the dequeuing of a data packet is completed, priority switching among the N queues of the same communication port is required. This priority switching may be completed by the scheduler, or it may be performed by the queue management unit; the following describes in detail how the scheduler completes the priority switching within the same communication port.
- Some of the N queues may be empty, that is, no slice information is written in them. In the embodiments of the present invention, empty queues do not participate in the priority decision: the scheduler determines the highest-priority queue from among the non-empty queues of the communication port and then notifies the queue management unit to switch to that highest-priority queue for slice dequeuing. After the scheduler determines the highest-priority queue, it sends a dequeue notification message to the queue management unit; the queue number included in the dequeue notification message is the queue number of the highest-priority queue, and the queue management unit receives the queue number.
- Optionally, before the head pointer of the queue header table of the target queue is read from the several queues of the current communication port, the method includes: receiving a dequeue notification message sent by the scheduler, where the dequeue notification message includes a queue number and the queue indicated by the queue number is the target queue.
- Optionally, step 404 includes: when the communication port scheduling time arrives, triggering the scheduler to schedule the corresponding communication port according to the configured time slot and to read the last dequeue information of that communication port, the last dequeue information including at least the queue number of the last dequeue on that communication port, the last dequeued slice information, and the node used for dequeuing that slice information; the scheduler determines, according to the last dequeue information, whether to trigger queue priority switching of the current communication port; if not, the scheduler sends a dequeue notification request to the queue management unit, where the queue number included in the dequeue notification request is the queue number of the last dequeue; if yes, the queue priority switching of the current communication port is triggered.
- There is a shortest interval at which the same MAC port is scheduled again. Taking three consecutive cycles A, B and C as an example, MAC port 1 is scheduled in cycle A, MAC port 2 is scheduled in cycle B, and MAC port 1 is scheduled again in cycle C. In cycle B, the queue management unit reads slice information from the queue according to the queue header table to complete the slice dequeue for MAC port 2; at the same time, it completes the transmission of the slice that was dequeued for MAC port 1 in cycle A, and the slice information of the last slice dequeued in cycle A is registered together with the information in its queue header table. In cycle C, the queue management unit can therefore read this information from the register; if the information includes the tail slice identifier, the scheduler performs queue switching for MAC port 1 accordingly, selects the highest-priority queue from the non-empty queues of MAC port 1 as the target queue, and then notifies the queue management unit. A sketch of this scheduler-side decision follows.
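- The sketch below illustrates this scheduler-side decision. The structure last_dequeue_t and the helpers read_last_dequeue_info(), select_highest_priority_nonempty() and send_dequeue_notification() are illustrative assumptions, not interfaces named by the patent.

```c
#include <stdbool.h>

typedef struct {
    int  queue_no;         /* queue of this port's previous dequeue          */
    bool was_tail_slice;   /* tail slice identifier seen at that dequeue     */
} last_dequeue_t;

extern last_dequeue_t read_last_dequeue_info(int port_no);
extern int            select_highest_priority_nonempty(int port_no);
extern void           send_dequeue_notification(int port_no, int queue_no);

/* Called when the configured time slot of a communication port arrives.     */
void on_port_timeslot(int port_no)
{
    last_dequeue_t last = read_last_dequeue_info(port_no);

    if (last.was_tail_slice) {
        /* The previous packet finished with its tail slice: switch to the
         * highest-priority non-empty queue of this port.                     */
        int q = select_highest_priority_nonempty(port_no);
        if (q >= 0)
            send_dequeue_notification(port_no, q);
    } else {
        /* Packet not finished yet: keep dequeuing from the same queue.       */
        send_dequeue_notification(port_no, last.queue_no);
    }
}
```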
- FIG. 5 is a schematic structural diagram of a queue management unit according to an embodiment of the present invention. As shown in FIG. 5, a queue management unit may include:
- a receiving module 510 configured to receive a data packet that needs to be enqueued, divide the data packet into several slices, obtain the slice information of each slice, and mark the tail slice of the data packet with a tail slice identifier, where the slice information includes at least a port number, a priority and the cache address of the slice in memory, and the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet;
- the enqueue processing module 520 is configured to enqueue the corresponding slice information according to the order of the slices in the data packet; during the enqueuing of the corresponding slice information, if a slice is marked with a tail slice identifier, determine that the slice is the tail slice of the data packet and generate a first type node, where the first type node points to the cache address of the slice information of the tail slice and to the tail slice identifier of the tail slice; determine whether the target queue is empty; if the target queue is empty, write the slice information of the tail slice into the target queue and update the head pointer of the queue header table according to the first type node; if the target queue is non-empty, write the slice information of the tail slice into the target queue, add the first type node at the tail of the queue sub-linked list corresponding to the target queue, and update the tail pointer of the queue tail table according to the first type node (a C sketch of this enqueue path is given after this module description).
- the enqueue processing module 520 is further configured to: during the enqueuing of the corresponding slice information, if a slice is not marked with a tail slice identifier, determine that the slice is a non-tail slice of the data packet and generate a second type node, where the second type node points to the cache address of the slice information of the non-tail slice, and the non-tail slice is any slice of the data packet other than the last slice; determine whether the target queue is empty; if the target queue is empty, write the slice information of the non-tail slice into the target queue and update the head pointer of the queue header table according to the second type node; if the target queue is non-empty, write the slice information of the non-tail slice into the target queue, add the second type node at the tail of the queue sub-linked list corresponding to the target queue, and update the tail pointer of the queue tail table according to the second type node.
- the queue header table further includes queue indication information
- the enqueue processing module 520 is specifically configured to: read the queue indication information in the queue header table corresponding to the target queue, and determine, according to the queue indication information, whether the target queue is empty.
- the enqueue processing module 520 is further configured to: when the target queue is empty, after the slice information of the tail slice is written into the target queue, or when the target queue is empty, after the slice information of the non-tail slice is written into the target queue, update the queue indication information and send a dequeue request message to the scheduler of the queue management system, where the dequeue request message includes a queue number.
- the enqueue processing unit 520 is further configured to: after the slice information of the tail slice is written into the target queue, update the tail pointer of the queue tail table according to the first type node; and after the slice information of the non-tail slice is written into the target queue, update the tail pointer of the queue tail table according to the second type node.
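The enqueue sketch referenced above is given here in C. It is a minimal model of the empty and non-empty branches of module 520, assuming the structure names (`slice_info`, `qnode`, `queue_desc`) and the `notify_scheduler` stub, none of which are defined by the patent; it only shows how a first type or second type node updates the head or tail pointer depending on whether the target queue is empty.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative layout of the slice information a cache address refers to. */
typedef struct {
    uint16_t port;       /* communication (MAC) port number      */
    uint8_t  priority;   /* dequeue priority within that port    */
    uint32_t buf_addr;   /* cache address of the slice in memory */
} slice_info;

/* A first type node carries the tail slice identifier; a second type
 * node only points at the cache address of a non-tail slice's info.  */
typedef struct qnode {
    uint32_t      info_addr; /* cache address of the slice information */
    bool          is_tail;   /* true means first type node             */
    struct qnode *next;      /* link inside the queue sub-linked list  */
} qnode;

typedef struct {
    qnode *head;      /* head pointer of the queue header table */
    qnode *tail;      /* tail pointer of the queue tail table   */
    bool   nonempty;  /* queue indication information           */
} queue_desc;

/* Assumed stub: the dequeue request message sent to the scheduler
 * when a queue goes from empty to non-empty.                      */
static void notify_scheduler(int queue_no)
{
    printf("dequeue request for queue %d\n", queue_no);
}

/* Enqueue one slice's information into the target queue q. */
static void enqueue_slice(queue_desc *q, int queue_no,
                          uint32_t info_addr, bool tail_flag)
{
    qnode *n = malloc(sizeof(*n));   /* first or second type node */
    if (n == NULL)
        return;
    n->info_addr = info_addr;
    n->is_tail   = tail_flag;
    n->next      = NULL;

    if (!q->nonempty) {              /* empty queue: the new node   */
        q->head = n;                 /* updates the head pointer    */
        q->tail = n;
        q->nonempty = true;          /* queue indication updated    */
        notify_scheduler(queue_no);
    } else {                         /* non-empty: append at the    */
        q->tail->next = n;           /* tail of the sub-linked list */
        q->tail = n;                 /* and update the tail pointer */
    }
}

int main(void)
{
    queue_desc q = { NULL, NULL, false };
    enqueue_slice(&q, 7, 0x100, false); /* non-tail slice: second type node */
    enqueue_slice(&q, 7, 0x140, true);  /* tail slice: first type node      */
    return 0;
}
```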
- FIG. 6 is a schematic structural diagram of a queue management unit according to another embodiment of the present invention. As shown in FIG. 6, a queue management unit may include:
- the reading module 610 is configured to read the head pointer of the queue header table of the target queue from the several queues of the current communication port, where slice information of slices is written in the queue, the slices are obtained by dividing a data packet, the tail slice of the data packet is marked with a tail slice identifier, and the slice information includes at least a port number, a priority, and a cache address of the slice in memory;
- the head pointer points to a first type node or a second type node
- the first type node points to a cache address of the slice information of the tail slice and a tail slice identifier of the tail slice
- the second type node points to a cache address of the slice information of a non-tail slice of the data packet
- the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet
- the non-tail slice is any slice of the data packet other than the last slice;
- the dequeue processing module 620 is configured to: if it is determined that the head pointer points to the first type node, read the slice information of the tail slice from the target queue according to the cache address of the slice information of the tail slice pointed to by the first type node, and after the reading of the slice information of the tail slice is completed, trigger queue priority switching of the current communication port (a C sketch of this dequeue path is given after this module description).
- the dequeue processing module 620 is further configured to: if it is determined that the head pointer points to the second type node, read the slice information of the non-tail slice from the target queue according to the cache address of the slice information of the non-tail slice pointed to by the second type node, and after the reading of the slice information of the non-tail slice is completed, trigger queue priority switching of the current communication port.
- the dequeue processing module 620 is further configured to: after the reading of the slice information of the tail slice is completed, or after the reading of the slice information of the non-tail slice is completed, read the next node from the queue sub-linked list corresponding to the target queue and update the head pointer of the queue header table according to that node, where the next node is a first type node or a second type node.
- the dequeue management module 620 is specifically configured to: before the head pointer of the queue header table of the target queue is read from the several queues of the current communication port, receive a dequeue notification message sent by the scheduler, where the dequeue notification message includes a queue number, and the queue indicated by the queue number is the target queue.
- the dequeue processing module 620 is specifically configured to: when the communication port scheduling time is reached, trigger the scheduler to schedule the corresponding communication port according to the configured time slot and to read the last dequeue information of that communication port from the previous time it was scheduled, where the last dequeue information includes at least the queue number of the last dequeue on that communication port, the slice information of the last dequeued slice, and the node used to dequeue that slice information; the scheduler determines, according to the last dequeue information, whether to trigger queue priority switching of the current communication port; if not, the scheduler sends a dequeue notification request to the queue management unit, where the queue number included in the dequeue notification request is the queue number of the last dequeue; if yes, queue priority switching of the current communication port is triggered.
- the dequeue processing module 620 is specifically configured to: trigger the scheduler to determine the highest priority queue from the non-empty queues of the current communication port, switch the target queue to the highest priority queue, and send the dequeue notification message.
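For symmetry, here is the dequeue sketch referenced above, under the same assumed structures as the enqueue sketch: the head pointer is inspected first, the slice information is read through the cache address it points to, and only a first type node (tail slice) triggers port-level priority switching after the read completes. The `read_slice` and `switch_priority` stubs are placeholders of this sketch, not interfaces defined by the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct qnode {
    uint32_t      info_addr; /* cache address of the slice information */
    bool          is_tail;   /* first type node if true                */
    struct qnode *next;      /* next node in the queue sub-linked list */
} qnode;

typedef struct {
    qnode *head;             /* head pointer of the queue header table */
    qnode *tail;             /* tail pointer of the queue tail table   */
    bool   nonempty;
} queue_desc;

/* Assumed stubs for reading slice information from the queue and for
 * the port-level priority switch handled by the scheduler.           */
static void read_slice(uint32_t info_addr)
{
    printf("read slice information at 0x%x\n", (unsigned)info_addr);
}
static void switch_priority(int port)
{
    printf("switch queue priority on port %d\n", port);
}

/* Dequeue one slice's information from the target queue of `port`. */
static void dequeue_slice(queue_desc *q, int port)
{
    qnode *n = q->head;
    if (n == NULL)
        return;                    /* empty queue: nothing to dequeue */

    read_slice(n->info_addr);      /* read through the cache address  */

    if (n->is_tail)                /* first type node: whole packet   */
        switch_priority(port);     /* is out, so switch priority now  */

    q->head = n->next;             /* advance the head pointer        */
    if (q->head == NULL) {         /* sub-linked list exhausted       */
        q->tail = NULL;
        q->nonempty = false;
    }
    free(n);
}

int main(void)
{
    qnode *tail_node = malloc(sizeof(*tail_node));
    qnode *mid_node  = malloc(sizeof(*mid_node));
    *tail_node = (qnode){ 0x140, true,  NULL };      /* tail slice     */
    *mid_node  = (qnode){ 0x100, false, tail_node }; /* non-tail slice */
    queue_desc q = { mid_node, tail_node, true };

    dequeue_slice(&q, 1); /* non-tail slice: no priority switch        */
    dequeue_slice(&q, 1); /* tail slice: triggers the priority switch  */
    return 0;
}
```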
- FIG. 7 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention, which may include at least one processor 701 (for example, a CPU, Central Processing Unit), at least one network interface or other communication interface, a memory 702, and at least one communication bus for implementing connection and communication between these components.
- the processor 701 is configured to execute an executable module, such as a computer program, stored in a memory.
- the memory 702 may include a high speed random access memory (RAM), and may also include a non-volatile memory, such as at least one disk storage.
- the communication connection between the system gateway and at least one other network element is implemented through at least one network interface (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, and the like may be used.
- the memory 702 stores program instructions, which may be executed by the processor 701.
- the processor 701 specifically performs the following steps: receiving a data packet that needs to be enqueued, dividing the data packet into several slices, obtaining slice information of the slices, and marking a tail slice identifier on the tail slice of the data packet, where the slice information includes at least a port number, a priority, and a cache address of the slice in memory, and the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet; enqueuing the corresponding slice information according to the order of the slices in the data packet, where, during the enqueuing of the corresponding slice information, if a slice is marked with a tail slice identifier, it is determined that the slice is the tail slice of the data packet and a first type node is generated, the first type node pointing to the cache address of the slice information of the tail slice and to the tail slice identifier of the tail slice; determining whether the target queue is empty; if the target queue is empty, writing the slice information of the tail slice into the target queue and updating the head pointer of the queue header table according to the first type node; if the target queue is non-empty, writing the slice information of the tail slice into the target queue and adding the first type node at the tail of the queue sub-linked list corresponding to the target queue; or,
- reading the head pointer of the queue header table of the target queue from the several queues of the current communication port, where slice information of slices is written in the queue, the slices are obtained by dividing a data packet, the tail slice of the data packet is marked with a tail slice identifier, the slice information includes at least a port number, a priority, and a cache address of the slice in memory, the head pointer points to a first type node or a second type node, the first type node points to the cache address of the slice information of a tail slice and to the tail slice identifier of that tail slice, the second type node points to the cache address of the slice information of a non-tail slice of the data packet, the tail slice identifier is used to indicate that the tail slice is the last slice of the data packet, and the non-tail slice is any slice of the data packet other than the last slice; if it is determined that the head pointer points to the first type node, reading the slice information of the tail slice from the target queue according to the cache address of the slice information of the tail slice pointed to by the first type node; and after the reading of the slice information of the tail slice is completed, triggering queue priority switching of the current communication port.
- the processor 701 may further perform the following steps: during the enqueuing of the corresponding slice information, if a slice is not marked with a tail slice identifier, determining that the slice is a non-tail slice of the data packet and generating a second type node, the second type node pointing to the cache address of the slice information of the non-tail slice, where the non-tail slice is any slice of the data packet other than the last slice; determining whether the target queue is empty; if the target queue is empty, writing the slice information of the non-tail slice into the target queue and updating the head pointer of the queue header table according to the second type node; if the target queue is non-empty, writing the slice information of the non-tail slice into the target queue and adding the second type node at the tail of the queue sub-linked list corresponding to the target queue.
- the processor 701 may further perform the steps of: updating queue status information and queue length information of the target queue, and transmitting scheduling request information to the scheduler, where the scheduling request information includes a queue number.
- the processor 701 may further perform the following steps: the queue header table further includes queue indication information; the queue indication information in the queue header table corresponding to the target queue is read, and whether the target queue is empty is determined according to the queue indication information.
- the processor 701 may further perform the following step: when the target queue is empty, after the slice information of the tail slice is written into the target queue, or when the target queue is empty, after the slice information of the non-tail slice is written into the target queue, updating the queue indication information.
- the processor 701 may further perform the following step: after the slice information of the tail slice is written into the target queue, updating the tail pointer of the queue tail table according to the first type node.
- the processor 701 may further perform the following step: after the slice information of the non-tail slice is written into the target queue, updating the tail pointer of the queue tail table according to the second type node.
- the processor 701 may further perform the following steps: if it is determined that the head pointer points to the second type node, reading the slice information of the non-tail slice from the target queue according to the cache address of the slice information of the non-tail slice pointed to by the second type node, and after the reading of the slice information of the non-tail slice is completed, triggering queue priority switching of the current communication port.
- the processor 701 may further perform the following steps: after the reading of the slice information of the tail slice is completed, or after the reading of the slice information of the non-tail slice is completed, reading the next node from the queue sub-linked list corresponding to the target queue and updating the head pointer of the queue header table according to that node, where the next node is a first type node or a second type node.
- the processor 701 may further perform the steps of: receiving a dequeue notification message sent by the scheduler, the dequeue notification message including a queue number, and the queue indicated by the queue number is the target queue.
- the processor 701 may further perform the following steps: when the communication port scheduling time is reached, triggering the scheduler to schedule the corresponding communication port according to the configured time slot and to read the last dequeue information of that communication port from the previous time it was scheduled, where the last dequeue information includes at least the queue number of the last dequeue on that communication port, the slice information of the last dequeued slice, and the node used to dequeue that slice information; the scheduler determines, according to the last dequeue information, whether to trigger queue priority switching of the current communication port; if not, the scheduler sends a dequeue notification request to the queue management unit, where the queue number included in the dequeue notification request is the queue number of the last dequeue; if yes, queue priority switching of the current communication port is triggered.
- the processor 701 may further perform the following steps: triggering the scheduler to determine the highest priority queue from the non-empty queues of the current communication port, switching the target queue to the highest priority queue, and sending the dequeue notification message.
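As a small illustration of this last step, the sketch below performs strict-priority (SP) selection over the non-empty queues of one port; SP is one of the two per-port scheduling options (SP or round robin) mentioned in the description. Empty queues are skipped, matching the rule that they do not take part in the priority decision; the array layout and the convention that a smaller number means higher priority are assumptions of this sketch.

```c
#include <stdbool.h>
#include <stdio.h>

#define QUEUES_PER_PORT 8

typedef struct {
    bool nonempty;  /* queue indication information from the queue header table */
    int  priority;  /* smaller value = higher priority (assumed convention)     */
} queue_state;

/* Strict-priority selection over the non-empty queues of one port.
 * Returns the index of the chosen target queue, or -1 if all are empty. */
static int select_highest_priority(const queue_state q[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!q[i].nonempty)
            continue;                              /* empty queues are skipped */
        if (best < 0 || q[i].priority < q[best].priority)
            best = i;
    }
    return best;
}

int main(void)
{
    queue_state port_queues[QUEUES_PER_PORT] = {
        { false, 0 }, { true, 3 }, { true, 1 }, { false, 2 },
        { false, 4 }, { true, 6 }, { false, 5 }, { false, 7 },
    };
    int target = select_highest_priority(port_queues, QUEUES_PER_PORT);
    if (target >= 0)
        printf("switch to queue %d and send the dequeue notification\n", target);
    return 0;
}
```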
- the disclosed system, apparatus, and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the unit is only a logical function division.
- in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
- the part of the technical solution of the present invention that is essential or that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
- a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
- the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Embodiments of the present invention disclose a data enqueuing and dequeuing method and a queue management unit, used to implement priority-based slice dequeuing and transmission within the same communication port on the basis of whole-packet dequeuing. The data enqueuing method provided by the present invention includes: receiving a data packet that needs to be enqueued, dividing the data packet into several slices, obtaining slice information of the slices, and marking a tail slice identifier on the tail slice of the data packet; enqueuing the corresponding slice information according to the order of the slices in the data packet, and, during the enqueuing of the corresponding slice information, if a slice is marked with the tail slice identifier, determining that the slice is the tail slice of the data packet and generating a first type node; determining whether the target queue is empty; if the target queue is empty, writing the slice information of the tail slice into the target queue and updating the head pointer of the queue header table according to the first type node; if the target queue is non-empty, writing the slice information of the tail slice into the target queue and adding the first type node at the tail of the queue sub-linked list corresponding to the target queue.
Description
本发明涉及通信技术领域,具体涉及一种数据入队与出队方法及队列管理单元。
队列管理(Queue Manager,简称QM)是通信处理芯片中的常见功能,主要通过缓存地址来管理数据包。由于通信处理芯片处理的业务较多,在队列管理中,一般按照业务分类在共享缓存中建立队列,在数据包到达时,在内存为到达的数据包分配一个缓存地址,然后将包括缓存地址在内的包描述符(Package Cell Descriptor,简称PCD)按分类写入队列进行管理。由于属于不同分类的数据包交错到达,一个分类的数据包在内存的缓存地址无法保证连续,同样,数据包的包描述符在缓存中的缓存地址也无法保证连续,因此需要给每个队列生成一个队列子链表,在队列子链表中把属于同一个队列的数据包的包描述符链接起来。具体做法为:除了队列和队列子链表,还建立队列头表和队列尾表,队列头表的指针指向队列中的首个包描述符的缓存地址,队列尾表的指针指向队列中的最后一个包描述符的缓存地址,而且在队列子链表中将队列中的包描述符的缓存地址链接起来。在数据包入队时,若队列不为空,将包描述符写入队列,包描述符的缓存地址写入队列子链表。在数据包出队时,读取队列头表指针,根据队列头表指针指向的缓存地址从队列中读取包描述符,然后根据包描述符中的缓存地址从内存读取数据包送到相应的媒体接入控制(Media Access Control,简称MAC)端口进行传输,并从队列子链表中读取下一个包描述符的缓存地址,以更新队列头表的指针。
由于一些数据包较大,而MAC端口带宽有限,为了避免MAC端口带宽太小无法顺畅传输较大数据包而造成的数据突发,现有一些队列管理中先将数据包分成若干切片,然后在内存中为每一个切片分配一个缓存地址,则在入队时,切片的切片信息取代包描述符入队进行管理,在出队时,读取队列头表指针,根据队列头表指针指向的缓存地址从队列中读取切片信息,然后
根据切片信息中的缓存地址从内存读取切片送到相应的MAC端口进行传输,并从队列子链表中读取下一个切片信息的缓存地址,以更新队列头表的指针。但是,MAC端口对通信处理芯片的宏观约束中,要求通信处理芯片按照队列之间的优先级将切片发送到MAC端口,MAC端口按照优先级进行切片传输。在以数据包出队时,优先级的调度策略是每一个数据包完成出队都可以进行优先级判断和切换,但是由于在以切片出队时,若采用数据包出队时的优先级调度策略,在每一个切片完成出队后进行优先级判断和切换,又与整包出队相违背,若要基于整包出队,现有优先级调度策略又无法实现,因此,在以切片出队时,同一个MAC端口内难以基于现有优先级调度策略进行优先级处理。
发明内容
本发明实施例提供了一种数据入队与出队方法及队列管理单元,用于在整包出队基础上实现同一个通信端口按照优先级进行切片出队与传输。
本发明第一方面提供了一种数据入队方法,应用于通信处理芯片的队列管理系统的队列管理单元中,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述方法可包括:
接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;
按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;
判断目标队列是否为空,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列,根据所述第一类节点更新所述队列头表的头指针;
若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点。
结合第一方面,在第一种可能的实现方式中,所述方法还包括:在所述对应切片信息入队过程中,若所述切片未标记有尾切片标识,确定所述切片为所述数据包的非尾切片,生成第二类节点,所述第二类节点指向所述非尾切片的切片信息的缓存地址,所述非尾切片为所述数据包中除去最后一个切片的其它切片;判断所述目标队列是否为空,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列,根据所述第二类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述非尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第二类节点。
结合第一方面、或者第一方面的第一种可能的实现方式,在第二种可能的实现方式中,所述队列头表还包括队列指示信息,所述判断目标队列是否为空包括:读取所述目标队列对应的队列头表中的队列指示信息,根据所述队列指示信息判断所述目标队列是否为空。
结合第一方面的第二种可能的实现方式,在第三种可能的实现方式中,所述方法还包括:在所述目标队列为空,将所述尾切片的切片信息写入所述目标队列之后,或者在所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列之后,更新所述队列指示信息。
结合第一方面,在第四种可能的实现方式中,所述方法还包括:在将所述尾切片的切片信息写入所述目标队列之后,根据所述第一类节点更新所述队列尾表的尾指针。
结合第一方面的第一种可能的实现方式,在第四种可能的实现方式中,所述方法还包括:在将所述非尾切片的切片信息写入所述目标队列之后,根据所述第二类节点更新所述队列尾表的尾指针。
本发明第二方面提供了一种数据出队方法,应用于通信处理芯片的队列管理系统的队列管理单元中,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链
表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述方法可包括:从当前通信端口的若干队列中读取目标队列的队列头表的头指针,所述队列中写有切片的切片信息,所述切片由数据包划分得到,所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片;若确定所述头指针指向所述第一类节点,根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息;在完成所述尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
结合第二方面,在第一种可能的实现方式中,所述方法还包括:若确定所述头指针指向所述第二类节点,根据所述第二类节点指向的非尾切片的切片信息的缓存地址,从所述目标队列中读取所述非尾切片的切片信息。并在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
结合第二方面、或者第二方面的第一种可能的实现方式,在第二种可能的实现方式中,所述方法还包括:在完成所述尾切片的切片信息读取后,或者在完成所述非尾切片的切片信息读取后,从所述目标队列对应的队列子链表中读取下一个节点,并根据所述下一个节点更新所述队列头表的头指针,所述下一个节点为所述第一类节点或第二类节点。
结合第二方面、或者第二方面的第一种可能的实现方式,在第三种可能的实现方式中,所述从当前通信端口的若干队列中读取目标队列的队列头表的头指针之前包括:接收调度器发送的出队通知消息,所述出队通知消息包括队列号,所述队列号指示的队列为所述目标队列。
结合第二方面的第三种可能的实现方式,在第四种可能的实现方式中,
所述在完成所述尾切片的切片信息读取后或者在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换包括:在到达通信端口调度时间时,触发所述调度器根据配置时隙调度对应通信端口,并读取上一次所述对应通信端口的最后出队信息,所述最后出队信息至少包括上一次所述对应通信端口最后出队的队列号、最后出队的切片信息以及用于所述最后出队的切片信息出队的节点,所述调度器根据所述最后出队信息判断是否触发所述当前通信端口的队列优先级切换;若否,所述调度器向队列管理单元发送出队通知请求,所述出队通知请求包括的队列号为所述最后出队的队列号,若是,则触发所述当前通信端口的队列优先级切换。
结合第二方面的第四种可能的实现方式,在第五种可能的实现方式中,所述触发所述当前通信端口的队列优先级切换包括:触发所述调度器从当前通信端口的非空队列中确定出优先级最高队列,然后将所述目标队列切换到所述优先级最高队列,并且发送所述出队通知消息。
本发明第三方面提供了一种队列管理单元,所述队列管理单元设置于通信处理芯片的队列管理系统,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述队列管理单元包括:
接收模块,用于接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;
入队处理模块,用于按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;判断目标队列是否为空,若所述目标队列为空,将所述尾切片的切片信息写入所
述目标队列,根据所述第一类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点,以及根据所述第一类节点更新所述队列尾表的尾指针。
结合第三方面,在第一种可能的实现方式中,所述入队处理模块还用于,在所述对应切片信息入队过程中,若所述切片未标记有尾切片标识,确定所述切片为所述数据包的非尾切片,生成第二类节点,所述第二类节点指向所述非尾切片的切片信息的缓存地址,所述非尾切片为所述数据包中除去最后一个切片的其它切片;判断所述目标队列是否为空,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列,根据所述第二类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述非尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第二类节点,以及根据所述第二类节点更新所述队列尾表的尾指针。
结合第三方面,或者第三方面的第一种可能的实现方式,在第二种可能的实现方式中,所述队列头表还包括队列指示信息,所述入队处理模块具体用于,读取所述目标队列对应的队列头表中的队列指示信息,根据所述队列指示信息判断所述目标队列是否为空。
结合第三方面的第二种可能的实现方式,在第三种可能的实现方式中,所述入队处理模块还用于,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列之后,或者,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列之后,更新所述队列指示信息,以及向所述队列管理系统的调度器发送出队请求消息,所述出队请求消息包括队列号。
结合第三方面,在第四种可能的实现方式中,所述入队处理单元还用于,在将所述尾切片的切片信息写入所述目标队列之后,根据所述第一类节点更新所述队列尾表的尾指针。
结合第三方面的第一种可能的实现方式,在第五种可能的实现方式中,所述入队处理单元还用于,在将所述非尾切片的切片信息写入所述目标队列之后,根据所述第二类节点更新所述队列尾表的尾指针。
本发明第四方面提供了一种队列管理单元,所述队列管理单元设置于通
信处理芯片的队列管理系统,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述队列管理单元包括:
读取模块,用于从当前通信端口的若干队列中读取目标队列的队列头表的头指针,所述队列中写有切片的切片信息,所述切片由数据包划分得到,所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片;
出队处理模块,用于若确定所述头指针指向所述第一类节点,根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息,在完成所述尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
结合第四方面,在第一种可能的实现方式中,所述出队处理模块还用于,若确定所述头指针指向所述第二类节点,根据所述第二类节点指向的非尾切片的切片信息的缓存地址,从所述目标队列中读取所述非尾切片的切片信息。并在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
结合第四方面、或者第四方面的第一种可能的实现方式,在第二种可能的实现方式中,所述出队处理模块还用于,在完成所述尾切片的切片信息读取后,或者在完成所述非尾切片的切片信息读取后,从所述目标队列对应的队列子链表中读取下一个节点,并根据所述下一个节点更新所述队列头表的头指针,所述下一个节点为所述第一类节点或第二类节点。
结合第四方面、或者第四方面的第一种可能的实现方式,在第三种可能
的实现方式中,所述出队管理模块还用于,在从当前通信端口的若干队列中读取目标队列的队列头表的头指针之前,接收调度器发送的出队通知消息,所述出队通知消息包括队列号,所述队列号指示的队列为所述目标队列。
结合第四方面的第三种可能的实现方式,在第四种可能的实现方式中,所述出队处理模块具体用于,在到达通信端口调度时间时,触发所述调度器根据配置时隙调度对应通信端口,并读取上一次所述对应通信端口的最后出队信息,所述最后出队信息至少包括上一次所述对应通信端口最后出队的队列号、最后出队的切片信息以及用于所述最后出队的切片信息出队的节点,所述调度器根据所述最后出队信息判断是否触发所述当前通信端口的队列优先级切换;若否,所述调度器向队列管理单元发送出队通知请求,所述出队通知请求包括的队列号为所述最后出队的队列号,若是,则触发所述当前通信端口的队列优先级切换。
结合第四方面的第四种可能的实现方式,在第五种可能的实现方式中,所述出队处理模块具体用于,触发所述调度器从当前通信端口的非空队列中确定出优先级最高队列,然后将所述目标队列切换到所述优先级最高队列,并且发送所述出队通知消息。
从以上技术方案可以看出,本发明实施例提供的数据入队与出队方法中,由于在切片的切片信息入队时,若切片是数据包的尾切片,生成第一类节点,第一类节点指向了尾切片的切片信息的缓存地址和该尾切片的尾切片标识,其中,尾切片标识用于指示该尾切片为数据包的最后一个切片。在生成第一类节点后,若目标队列为空,将该尾切片的切片信息写入目标队列,并根据该第一类节点更新队列头表的头指针,使得该头指针指向了尾切片的切片信息的缓存地址和该尾切片的尾切片标识。在目标队列不为空时,将尾切片的切片信息写入目标队列,以及将该第一类节点增加到队列子链表中。那么在切片信息出队时,根据该头指针对该切片信息进行出队时,若此时头指针指向第一类节点,则根据第一类节点指向的尾切片标识提前知道即将出队的切片为数据包的尾切片,进而在完成尾切片的切片信息出队后,进行同一个端口内的队列优先级切换,从而能够在整包出队的基础上实现同一个通信端口按照优先级进行切片出队与传输。
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明一些实施例提供的队列管理的应用示意图;
图2为本发明一些实施例提供的数据入队方法的流程示意图;
图3为本发明一些实施例提供的数据出队方法的流程示意图;
图4为本发明一些实施例提供的数据出队方法的流程示意图;
图5为本发明一些实施例提供的队列管理单元的结构示意图;
图6为本发明另一些实施例提供的队列管理单元的结构示意图;
图7为本发明一些实施例提供的队列管理装置的结构示意图。
下面将结合本发明实施例的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例提供了一种数据入队与出队方法,用于在整包出队基础上实现同一个通信端口按照优先级进行切片出队与传输。本发明实施例还提供了一种数据入队对应的队列管理单元及数据出队对应的队列管理单元。
首先,先简单介绍一下通信处理芯片。其中,通信处理芯片负责对需要发送的数据包进行发送前期处理和转发。在通信处理芯片中通常设置有若干MAC端口,每一个MAC端口负责转发相应业务类型的数据包,若干MAC端口之间通过一定调度策略调度,在本发明实施例中提供了一种调度策略,在该调度策略中为MAC端口配置调度时隙,在相应的调度时隙调度相应的MAC端口,每一个调度时隙只调度一个MAC端口。其中,每个MAC端口可以是不同带宽,但是由于受到系统总带宽限制,所有MAC端口的带宽总和
要不大于系统的总带宽,例如,若系统总带宽为320G/bps,通信处理芯片中设置有28个MAC端口,每个MAC端口10G/bps,那么28个MAC端口的带宽总和为280G/bps,小于系统总带宽320G/bps。
由于MAC端口的带宽有限,若在一段时间内通信处理芯片连续将大于其带宽的数据包发送到MAC端口,在接下来一端时间内又没有数据包发送到MAC端口,一段时间后又发送一些数据包到MAC端口,如此时有时无,而且数据包又大,导致MAC端口传输速度慢,则容易造成拥塞,引起数据突发。因此,在本发明实施例中,首先将需要入队的数据包划分成若干切片,具体可以根据MAC端口的最小带宽进行切片划分,再将切片发送到MAC端口,以降低数据突发。
进一步地,本发明实施例中要实现基于切片的整包出队,即在同一个MAC端口内,要发送完一个数据包再发送另外一个数据包,同时,一个MAC端口可能负责转发多个不同业务类型的数据包,不同业务类型的数据包具有不同的转发优先级,因此,MAC端口要受到优先级约束,而现有技术中提供的优先级调度策略是基于数据包实现的,也就是在一个数据包完成出队后调用优先级调度策略进行队列优先级调度,而本发明实施例中由于将数据包划分成切片,如果在每次切片出队后就调度优先级调度策略进行队列优先级调度,则违背了整包出队策略,现有技术还没有优先级调度策略适用于基于切片出队的情况。
因此,本发明实施例提供一种数据入队和出队方法,能够在以切片入队,且能够整包出队的前提条件下,实现在同一个端口内切片区分优先级出队与传输。
本发明实施例提供的数据入队和出队方法应用于通信处理芯片的队列管理系统中,队列管理系统包括队列管理单元和调度器。其中,通信处理芯片接收到需要入队的数据包,该数据包来自网络或者与该通信处理芯片连接的终端。数据包被送到队列管理单元,队列管理单元将数据包划分成若干切片,对于数据包的尾切片标记上尾切片标识,以说明此切片为数据包的最后一个切片,除去最后一个切片其它切片称为数据包的非尾切片。在通信处理芯片中为每一个切片分配存储空间,得到切片在内存的缓存地址,然后根据切片
在内存的缓存地址、该切片对应的MAC端口的端口号以及该切片的优先级等,得到切片的切片信息,然后将切片信息送入相应的队列,在本发明实施例中主要通过管理切片信息来实现切片的出队和入队。
首先,在通信处理芯片中根据不同业务类型建立队列,以及建立队列头表、队列尾表和队列总链表,一个队列对应着一个队列头表和一个队列尾表,队列总链表中包括若干队列子链表,每一个队列子链表对应一个队列,其中,队列头表可以使用寄存器组实现,可以实现1个周期内2读2写的操作。队列尾表可以使用一个双口的RAM实现。队列子链表也可以使用双口的RAM实现。
其中,上述队列用于写入切片的切片信息;
上述队列头表包括队列状态信息和头指针,头指针根据第一类节点或第二类节点进行更新,第一类节点指向尾切片的切片信息的缓存地址和该尾切片的尾切片标识,第二类节点指向非尾切片的切片信息的缓存地址,其中,尾切片标识用于指示该尾切片为数据包最后一个切片,队列状态信息用于指示队列为空或非空;
上述队列尾表包括尾指针,尾指针同样根据第一类节点或第二类节点进行更新。
上述队列子链表用于写入节点,通过节点将队列中所有切片信息的缓存地址链接起来,该节点包括上述的第一类节点和/或第二类节点,其中,对应队列中除去第一个切片信息,其余每个切片信息按顺序与队列子链表中的节点一一对应。例如,对应队列中包括切片信息A、切片信息B和切片信息C,切片信息A对应节点1,切片信息B对应节点2、切片信息C对应节点3,根据节点1更新队列头表的头指针,根据节点3更新队列尾表的尾指针,节点2和节点3在队列子链表中,并且节点2与节点3链接起来。在对应队列中只有1个切片信息时,队列子链表中没有节点,队列头表的头指针指向的是该切片信息对应的节点,队列尾表的尾指针指向的是该切片信息对应的节点。在对应队列中只有2个切片信息时,队列子链表中只有第2个切片信息对应的节点,队列头表的头指针指向的是第1个切片信息对应的节点,队列尾表的尾指针指向的是第2个切片信息对应的节点。
具体如图1所示,队列中写入了N个切片信息,队列子链表中有N-1个节点,其中,切片信息2对应节点2,切片信息3对应节点3,依次类推,切片信息N对应节点N。队列头表中的头指针指向了切片信息1对应的节点1,尾指针指向了节点N。
其次,队列管理单元在完成数据包划分成切片后,按照切片在数据包的顺序实现切片的切片信息的入队。在切片信息入队时,若是切片标记有尾切片标识,说明该切片为数据包的尾切片,若队列头表中的队列状态信息指示队列为空,除了将切片的切片信息写入队列外,生成第一类节点,第一类节点指向尾切片的切片信息的缓存地址和该尾切片的尾切片标识,根据该第一类节点更新队列头表的头指针和队列尾表的尾指针,使得该头指针指向尾切片的切片信息的缓存地址和尾切片的尾切片标识,和尾指针指向尾切片的切片信息的缓存地址和尾切片的尾切片标识;若队列状态信息指示队列非空时,除了将尾切片的切片信息写入队列外,在对应的队列子链表尾部增加第一类节点,同时,根据第一类节点更新队列尾表的尾指针。若切片没有标记尾切片标识,说明该切片为数据包的非尾切片,若队列头表中的队列状态信息指示队列为空时,将非尾切片的切片信息写入队列,生成第二类节点,第二类节点指向非尾切片的切片信息的缓存地址,然后根据第二类节点更新队列头表的头指针和队列尾表的尾指针;若队列状态信息指示队列为非空时,将非尾切片的切片信息写入队列,在队列子链表中增加生成的第二类节点,并根据生成的第二类节点更新队列尾表的尾指针。
在切片的切片信息出队时,队列管理单元读取队列头表的头指针,若头指针指示有尾切片标识,说明即将出队的切片是数据包的尾切片,在完成该尾切片出队后,将会完成整个数据包的出队,则可以在MAC端口内进行优先级切换,因此,在根据头指针指向的缓存地址从队列中读取切片信息后,触发MAC端口内的优先级切换处理。若头指针指示没有尾切片标识,根据头指针指向的缓存地址从队列中读取切片信息。由于头指针指示没有尾切片标识,说明即将出队的切片是数据包的非尾切片,在完成该非尾切片的出队后,不需要进行MAC端口内优先级切换。
另外需要说明,不管头指针指示有尾切片标识还是没有指示尾切片标识,
在完成尾切片的切片信息或者非尾切片的切片信息的出队后,从对应的队列子链表中读取下一个节点,若下一个节点为第一类节点,根据第一类节点新头指针;若下一个节点为第二类节点,根据第二类节点更新头指针。
结合上述介绍,本发明实施例提供了一种数据入队方法,应用于通信处理芯片的队列管理系统的队列管理单元中,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述方法包括:接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;判断目标队列是否为空,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列,根据所述第一类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点。
以及,本发明实施例还提供了一种数据出队方法,应用于通信处理芯片的队列管理系统的队列管理单元中,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述方法可包括:从当前通信端口的若干队列中读取目标队列的队列头表的头指针,所述队列中写有切片的切片信息,所述切片由数据包划分得到,
所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片;若确定所述头指针指向所述第一类节点,根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息;在完成所述尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
可以看出,由于在切片的切片信息入队时,若切片是数据包的尾切片,生成第一类节点,第一类节点指向了尾切片的切片信息的缓存地址和该尾切片的尾切片标识,其中,尾切片标识用于指示该尾切片为数据包的最后一个切片。在生成第一类节点后,若目标队列为空,将该尾切片的切片信息写入目标队列,并根据该第一类节点更新队列头表的头指针,使得该头指针指向了尾切片的切片信息的缓存地址和该尾切片的尾切片标识。在目标队列不为空时,将尾切片的切片信息写入目标队列,以及将该第一类节点增加到队列子链表中。那么在切片信息出队时,根据该头指针对该切片信息进行出队时,若此时头指针指向第一类节点,则根据第一类节点指向的尾切片标识提前知道即将出队的切片为数据包的尾切片,进而在完成尾切片的切片信息出队后,进行同一个端口内的队列优先级切换,从而能够在整包出队的基础上实现同一个通信端口按照优先级进行切片出队与传输。
请参阅图2,图2为本发明一些实施例提供的数据入队方法的流程示意图;如图2所示,一种数据入队方法可包括:
201、接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;
其中,数据包来自网络或者与该通信处理芯片连接的终端。
端口号用于指示数据包所在队列所对应的通信端口,本发明实施例中的
通信端口可以是上述介绍的MAC端口。优先级是指数据包所在队列在该通信端口中出队的优先级。
202、按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;
203、判断目标队列是否为空;
其中,队列对应的队列头表还包括了队列指示信息,该队列指示信息用于指示队列为空或者非空状态,其中,队列为空是指队列中没有写入任何切片信息。因此,步骤203具体包括:读取所述目标队列对应的队列头表中的队列指示信息,根据所述队列指示信息判断所述目标队列是否为空。
在切片为数据包的尾切片时,生成第一类节点。在生成第一类节点后,若目标队列为空,转向步骤204;若目标队列非空,转向步骤205。
204、若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列,根据所述第一类节点更新所述队列头表的头指针;
在本发明一些实施例中,在尾切片的切片信息写入所述目标队列之后,还根据第一类节点更新队列尾表的尾指针,通过尾指针指示队列中的最后一个切片信息。
205、若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点。
在本发明一些实施例中,在将所述非尾切片的切片信息写入所述目标队列之后,根据所述第二类节点更新所述队列尾表的尾指针。
在本发明一些实施例中,在所述对应切片信息入队过程中,若所述切片未标记有尾切片标识,确定所述切片为所述数据包的非尾切片,生成第二类节点,所述第二类节点指向所述非尾切片的切片信息的缓存地址,所述非尾切片为所述数据包中除去最后一个切片的其它切片;判断所述目标队列是否为空,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列,根据所述第二类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述非尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列
子链表尾部增加所述第二类节点,以及根据所述第二类节点更新所述队列尾表的尾指针。
可以理解,在本发明实施例中,在队列为空时,将尾切片的切片信息或者非尾切片的切片信息写入队列后,队列从空变换成非空状态。因此,在队列为空时,在将尾切片的切片信息或者非尾切片的切片信息写入队列后,需要更新队列指示信息,使得队列指示信息指示队列非空。
因此,在本发明一些实施例中,在所述目标队列为空,将所述尾切片的切片信息写入所述目标队列之后,或者在所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列之后,更新所述队列指示信息。
请参阅图3,图3为本发明一些实施例提供的数据入队方法的流程示意图;如图3所示,一种数据入队方法可包括:
301、接收切片,判断上述切片是否标记有尾切片标识;
其中,若是,确定出所述切片为数据包的尾切片,转向步骤302;若否,确定出所述切片为数据包的非尾切片,转向步骤305。
302、生成第一类节点,读取目标队列的队列状态信息,判断所述队列状态信息是否指示上述目标队列为空;
其中,若目标队列的队列状态信息指示该目标队列为空时,转向步骤303,若目标队列的队列状态信息指示该目标队列非空时,转向步骤304。
303、将上述切片的切片信息写入上述目标队列,根据上述第一类节点更新对应队列头表的头指针和对应队列尾表的尾指针,更新对应队列头表的队列状态信息;
304、将上述切片的切片信息写入上述目标队列,在对应队列子链表尾部增加上述第一类节点,并根据上述第一类节点更新对应队列尾表的尾指针;
305、生成第二类节点,读取目标队列的队列状态信息,判断所述队列状态信息是否指示上述目标队列为空;
其中,若目标队列的队列状态信息指示该目标队列为空时,转向步骤306,若目标队列的队列状态信息指示该目标队列非空时,转向步骤307。
306、将上述切片的切片信息写入上述目标队列,根据上述第二类节点更新对应队列头表的头指针和对应队列尾表的尾指针,更新对应队列头表的队
列状态信息;
307、将上述切片的切片信息写入上述目标队列,在对应队列子链表尾部增加上述第二类节点,并根据上述第二类节点更新对应队列尾表的尾指针。
可以看出,本发明实施例提供的数据入队方法中,在切片入队时,主要存在以下两种情况:
情况一、切片是数据包的尾切片,生成第一类节点,然后再按照以下两种情况入队处理:
A1、目标队列为空,将尾切片的切片信息写入目标队列,根据第一类节点更新队列头表的头指针和队列尾表的尾指针,更新队列头表的队列状态信息;
A2、目标队列非空,将尾切片的切片信息写入目标队列,在队列子链表中增加第一类节点,根据第一类节点更新队列尾表的尾指针。
情况二、切片是数据包的非尾切片,生成第二类节点,然后再按照以下两种情况入队处理:
B1、目标队列为空,将非尾切片的切片信息写入目标队列,根据第二类节点更新队列头表的头指针和队列尾表的尾指针,更新队列头表的队列状态信息;
B2、目标队列非空,将非尾切片的切片信息写入目标队列,在队列子链表中增加第二类节点,根据第二类节点更新队列尾表的尾指针。
可见,基于上述方式进行切片入队后,在切片出队时,队列管理单元根据队列头表的头指针可以提前确定即将出队的切片是不是数据包的尾切片,达到提前判断是否需要在通信端口内进行优先级切换的目的。
以上在切片入队的基础上对本发明技术方案进行了详细介绍,下面将在切片出队的基础上进一步对本发明技术方案进行详细介绍。
请参阅图4,图4为本发明一些实施例提供的数据出队方法的流程示意图;如图4所示,一种数据出队方法可包括:
401、从当前通信端口的若干队列中读取目标队列的队列头表的头指针;
其中,所述队列中写有切片的切片信息,所述切片由数据包划分得到,所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优
先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片。
本发明实施例是基于一个通信端口实现切片区分优先级出队,通信端口具体可以是上述介绍的MAC端口。
402、判断所述头指针是否指向第一类节点;
可以理解,本发明实施例的头指针指向第一类节点或者第二类节点。
若是,转向步骤403。
403、根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息;
若头指针指向了第一类节点,通过第一类节点所指向的尾切片的切片信息的缓存地址,完成尾切片的出队。
404、在完成所述尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
可选地,同一个MAC端口内不同队列的优先级采用严格优先级(Strict Priority,简称SP)调度算法或者公平轮转(Round Robin,简称RR)调度算法进行调度切换。
可以看出,在出队时,根据队列头表的头指针进行出队,通过头指针可以提前判断是否需要进行通信端口内优先级切换,具体是头指针指向第一类节点时,由于第一类节点指向了尾切片标识,从而可以知道即将出队的是数据包的尾切片,在该尾切片完成出队后,进行通信端口内的优先级切换。
在本发明一些实施例中,若确定所述头指针指向所述第二类节点,根据所述第二类节点指向的非尾切片的切片信息的缓存地址,从所述目标队列中读取所述非尾切片的切片信息。并在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
进一步地,在完成所述尾切片的切片信息读取后,或者在完成所述非尾切片的切片信息读取后,从所述目标队列对应的队列子链表中读取下一个节
点,并根据所述下一个节点更新所述队列头表的头指针,所述下一个节点为所述第一类节点或第二类节点。
可以看出,在每次根据头指针进行切片出队后,从队列子链表中读取下一个节点,根据下一个节点更新头指针,下一个节点为上述第一类节点或第二类节点,其中,第一类节点是指向尾切片的切片信息的缓存地址和尾切片的尾切片标识的,因此,随着每一次出队,将会在某一次从队列子链表中移出第一类节点来更新头指针,因而,能够保证在出队时总是能够根据头指针提前判断是否需要在通信端口内进行优先级切换,以确保在整包出队的基础上,实现同一通信端口内区分优先级出队。
在本发明一些可实施的方式中,上述步骤404的触发当前通信端口的队列优先级切换包括:触发所述调度器从当前通信端口的非空队列中确定出优先级最高队列,然后将所述目标队列切换到所述优先级最高队列,并且发送出队通知消息,所述出队通知消息包括有队列号。
可以理解,对于同一个通信端口可以调度N个队列,N为大于或等于1的自然数,在该N个队列中每个队列的优先级不同,因此,在完成一个数据包的出队后,需要进行一次队列优先级切换。在本发明实施例中可以由调度器完成同一个通信端口内N个队列的优先级切换,当然,还可以由队列管理单元完成优先级切换。在本发明实施例中对调度器完成同一个通信端口内的优先级切换进行详细说明。
需要说明,N个队列中可能存在一些队列为空队列,即队列没有写入任何切片信息,因此,在本发明实施例中,空队列不参与优先级判断,即调度器从通信端口的非空队列中确定出优先级最高队列,然后通知队列管理单元,切换到优先级最高队列进行切片出队。
其中,调度器在确定出优先级最高队列后,将会向队列管理单元发送出队通知消息,出队通知消息中包括的队列号为该优先级最高队列的队列号,队列管理单元在接收到出队通知消息后,将依次执行上述步骤401-404。
因此,在本发明一些实施例中,上述从当前通信端口的若干队列中读取目标队列的队列头表的头指针之前包括:接收调度器发送的出队通知消息,所述出队通知消息包括队列号,所述队列号指示的队列为所述目标队列。
在本发明一些实施例中,上述步骤404包括:在到达通信端口调度时间时,触发所述调度器根据配置时隙调度对应通信端口,并读取上一次所述对应通信端口的最后出队信息,所述最后出队信息至少包括上一次所述对应通信端口最后出队的队列号、最后出队的切片信息以及用于所述最后出队的切片信息出队的节点,所述调度器根据所述最后出队信息判断是否触发所述当前通信端口的队列优先级切换;若否,所述调度器向队列管理单元发送出队通知请求,所述出队通知请求包括的队列号为所述最后出队的队列号,若是,则触发所述当前通信端口的队列优先级切换。
举例来说,具有MAC端口1和MAC端口2,最快间隔一个周期会调度到同一个MAC端口,以A、B、C周期为例,分别在周期A调度MAC端口1,在周期B调度MAC端口2,在周期C会再次调度到端口1。由于本发明实施例中根据队列头表确定是否有尾切片标识以确定尾切片,那么,在周期A时队列管理单元根据队列头表,从队列中读取切片信息完成切片出队,在周期B时,队列管理单元不但完成MAC端口2中切片的出队,还要完成A周期中MAC端口1中出队的切片的发送,以及将A周期中最后一个出队切片的切片信息和其在队列头表中的信息一起进行寄存器寄存,那么在周期C时,队列管理单元则可以从寄存器中读取到这些信息,若该这些信息中包括有尾切片标识,调度器则根据完成MAC端口1的队列切换,从MAC端口1的非空队列中选出优先级最高的队列作为目标队列,然后通知队列管理单元。
请参阅图5,图5为本发明实施例提供的队列管理单元的结构示意图;如图5所示,一种队列管理单元可包括:
判断模块510,用于接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;
入队处理模块520,用于按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;判
断目标队列是否为空,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列,根据所述第一类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点,以及根据所述第一类节点更新所述队列尾表的尾指针。
在本发明一些实施例中,上述入队处理模块520还用于,在所述对应切片信息入队过程中,若所述切片未标记有尾切片标识,确定所述切片为所述数据包的非尾切片,生成第二类节点,所述第二类节点指向所述非尾切片的切片信息的缓存地址,所述非尾切片为所述数据包中除去最后一个切片的其它切片;判断所述目标队列是否为空,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列,根据所述第二类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述非尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第二类节点,以及根据所述第二类节点更新所述队列尾表的尾指针。
进一步地,上述队列头表还包括队列指示信息,上述入队处理模块520具体用于,读取所述目标队列对应的队列头表中的队列指示信息,根据所述队列指示信息判断所述目标队列是否为空。
在本发明一些实施例中,上述入队处理模块520还用于,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列之后,或者,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列之后,更新所述队列指示信息,以及向所述队列管理系统的调度器发送出队请求消息,所述出队请求消息包括队列号。
在本发明一些实施例中,入队处理单元520还用于,在将所述尾切片的切片信息写入所述目标队列之后,根据所述第一类节点更新所述队列尾表的尾指针,以及在将所述非尾切片的切片信息写入所述目标队列之后,根据所述第二类节点更新所述队列尾表的尾指针。
请参阅图6,图6为本发明另一些实施例提供的队列管理单元的结构示意图;如图6所示,一种队列管理单元可包括:
读取模块610,用于从当前通信端口的若干队列中读取目标队列的队列头
表的头指针,所述队列中写有切片的切片信息,所述切片由数据包划分得到,所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片;
出队处理模块620,用于若确定所述头指针指向所述第一类节点,根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息,在完成所述尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
在本发明一些可实施的方式中,上述出队处理模块620还用于,若确定所述头指针指向所述第二类节点,根据所述第二类节点指向的非尾切片的切片信息的缓存地址,从所述目标队列中读取所述非尾切片的切片信息。并在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
在本发明一些可实施的方式中,上述出队处理模块620还用于,在完成所述尾切片的切片信息读取后,或者在完成所述非尾切片的切片信息读取后,从所述目标队列对应的队列子链表中读取下一个节点,并根据所述下一个节点更新所述队列头表的头指针,所述下一个节点为所述第一类节点或第二类节点。
在本发明一些可实施的方式中,上述出队管理模块620具体用于,在从当前通信端口的若干队列中读取目标队列的队列头表的头指针之前之前,接收调度器发送的出队通知消息,所述出队通知消息包括队列号,所述队列号指示的队列为所述目标队列。
在本发明一些可实施的方式中,上述出队处理模块620具体用于,在到达通信端口调度时间时,触发所述调度器根据配置时隙调度对应通信端口,并读取上一次所述对应通信端口的最后出队信息,所述最后出队信息至少包括上一次所述对应通信端口最后出队的队列号、最后出队的切片信息以及用
于所述最后出队的切片信息出队的节点,所述调度器根据所述最后出队信息判断是否触发所述当前通信端口的队列优先级切换;若否,所述调度器向队列管理单元发送出队通知请求,所述出队通知请求包括的队列号为所述最后出队的队列号,若是,则触发所述当前通信端口的队列优先级切换。
在本发明一些可实施的方式中,上述出队处理模块620具体用于,触发所述调度器从当前通信端口的非空队列中确定出优先级最高队列,然后将所述目标队列切换到所述优先级最高队列,并且发送所述出队通知消息。
请参考图7,图7为本发明实施例提供的队列管理装置的结构示意图,其中,可包括至少一个处理器701(例如CPU,Central Processing Unit),至少一个网络接口或者其它通信接口,存储器702,和至少一个通信总线,用于实现这些装置之间的连接通信。所述处理器701用于执行存储器中存储的可执行模块,例如计算机程序。所述存储器702可能包含高速随机存取存储器(RAM,Random Access Memory),也可能还包括非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。通过至少一个网络接口(可以是有线或者无线)实现该系统网关与至少一个其它网元之间的通信连接,可以使用互联网,广域网,本地网,城域网等。
如图7所示,在一些实施方式中,所述存储器702中存储了程序指令,程序指令可以被处理器701执行,所述处理器701具体执行以下步骤:接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;判断目标队列是否为空,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列,根据所述第一类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点;或者,
从当前通信端口的若干队列中读取目标队列的队列头表的头指针,所述队列中写有切片的切片信息,所述切片由数据包划分得到,所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片;若确定所述头指针指向所述第一类节点,根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息;在完成所述尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
在一些实施方式中,所述处理器701还可以执行以下步骤:在所述对应切片信息入队过程中,若所述切片未标记有尾切片标识,确定所述切片为所述数据包的非尾切片,生成第二类节点,所述第二类节点指向所述非尾切片的切片信息的缓存地址,所述非尾切片为所述数据包中除去最后一个切片的其它切片;判断所述目标队列是否为空,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列,根据所述第二类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述非尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第二类节点。
在一些实施方式中,所述处理器701还可以执行以下步骤:更新所述目标队列的队列状态信息和队列长度信息,向调度器发送调度请求信息,所述调度请求信息包括队列号。
在一些实施方式中,所述处理器701还可以执行以下步骤:所述队列头表还包括队列指示信息,读取所述目标队列对应的队列头表中的队列指示信息,根据所述队列指示信息判断所述目标队列是否为空。
在一些实施方式中,所述处理器701还可以执行以下步骤:在所述目标队列为空,将所述尾切片的切片信息写入所述目标队列之后,或者在所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列之后,更新所述队列指示信息。
在一些实施方式中,所述处理器701还可以执行以下步骤:在将所述尾切片的切片信息写入所述目标队列之后,根据所述第一类节点更新所述队列尾表的尾指针。
在一些实施方式中,所述处理器701还可以执行以下步骤:在将所述非尾切片的切片信息写入所述目标队列之后,根据所述第二类节点更新所述队列尾表的尾指针。
在一些实施方式中,所述处理器701还可以执行以下步骤:若确定所述头指针指向所述第二类节点,根据所述第二类节点指向的非尾切片的切片信息的缓存地址,从所述目标队列中读取所述非尾切片的切片信息。并在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
在一些实施方式中,所述处理器701还可以执行以下步骤:在完成所述尾切片的切片信息读取后,或者在完成所述非尾切片的切片信息读取后,从所述目标队列对应的队列子链表中读取下一个节点,并根据所述下一个节点更新所述队列头表的头指针,所述下一个节点为所述第一类节点或第二类节点。
在一些实施方式中,所述处理器701还可以执行以下步骤:接收调度器发送的出队通知消息,所述出队通知消息包括队列号,所述队列号指示的队列为所述目标队列。
在一些实施方式中,所述处理器701还可以执行以下步骤:在到达通信端口调度时间时,触发所述调度器根据配置时隙调度对应通信端口,并读取上一次所述对应通信端口的最后出队信息,所述最后出队信息至少包括上一次所述对应通信端口最后出队的队列号、最后出队的切片信息以及用于所述最后出队的切片信息出队的节点,所述调度器根据所述最后出队信息判断是否触发所述当前通信端口的队列优先级切换;若否,所述调度器向队列管理单元发送出队通知请求,所述出队通知请求包括的队列号为所述最后出队的队列号,若是,则触发所述当前通信端口的队列优先级切换。
在一些实施方式中,所述处理器701还可以执行以下步骤:触发所述调度器从当前通信端口的非空队列中确定出优先级最高队列,然后将所述目标队列切换到所述优先级最高队列,并且发送所述出队通知消息。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上对本发明所提供的一种数据入队与出队方法及队列管理单元进行了详细介绍,对于本领域的一般技术人员,依据本发明实施例的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。
Claims (24)
- 一种数据入队方法,其特征在于,应用于通信处理芯片的队列管理系统的队列管理单元中,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述方法包括:接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;判断目标队列是否为空,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列,根据所述第一类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点。
- 根据权利要求1所示的方法,其特征在于,所述方法还包括:在所述对应切片信息入队过程中,若所述切片未标记有尾切片标识,确定所述切片为所述数据包的非尾切片,生成第二类节点,所述第二类节点指向所述非尾切片的切片信息的缓存地址,所述非尾切片为所述数据包中除去最后一个切片的其它切片;判断所述目标队列是否为空,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列,根据所述第二类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述非尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第二类节点。
- 根据权利要求1或2所述的方法,其特征在于,所述队列头表还包括队列指示信息,所述判断目标队列是否为空包括:读取所述目标队列对应的队列头表中的队列指示信息,根据所述队列指示信息判断所述目标队列是否为空。
- 根据权利要求3所述的方法,其特征在于,所述方法还包括:在所述目标队列为空,将所述尾切片的切片信息写入所述目标队列之后,或者在所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列之后,更新所述队列指示信息。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:在将所述尾切片的切片信息写入所述目标队列之后,根据所述第一类节点更新所述队列尾表的尾指针。
- 根据权利要求2所述的方法,其特征在于,所述方法还包括:在将所述非尾切片的切片信息写入所述目标队列之后,根据所述第二类节点更新所述队列尾表的尾指针。
- 一种数据出队方法,其特征在于,应用于通信处理芯片的队列管理系统的队列管理单元中,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述方法包括:从当前通信端口的若干队列中读取目标队列的队列头表的头指针,所述队列中写有切片的切片信息,所述切片由数据包划分得到,所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片;若确定所述头指针指向所述第一类节点,根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息;在完成所述尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
- 根据权利要求7所述的方法,其特征在于,所述方法还包括:若确定所述头指针指向所述第二类节点,根据所述第二类节点指向的非尾切片的切片信息的缓存地址,从所述目标队列中读取所述非尾切片的切片信息。并在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
- 根据权利要求7或8所述的方法,其特征在于,所述方法还包括:在完成所述尾切片的切片信息读取后,或者在完成所述非尾切片的切片信息读取后,从所述目标队列对应的队列子链表中读取下一个节点,并根据所述下一个节点更新所述队列头表的头指针,所述下一个节点为所述第一类节点或第二类节点。
- 根据权利要求7或8所述的方法,其特征在于,所述从当前通信端口的若干队列中读取目标队列的队列头表的头指针之前包括:接收调度器发送的出队通知消息,所述出队通知消息包括队列号,所述队列号指示的队列为所述目标队列。
- 根据权利要求10所述的方法,其特征在于,所述在完成所述尾切片的切片信息读取后或者在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换包括:在到达通信端口调度时间时,触发所述调度器根据配置时隙调度对应通信端口,并读取上一次所述对应通信端口的最后出队信息,所述最后出队信息至少包括上一次所述对应通信端口最后出队的队列号、最后出队的切片信息以及用于所述最后出队的切片信息出队的节点,所述调度器根据所述最后出队信息判断是否触发所述当前通信端口的队列优先级切换;若否,所述调度器向队列管理单元发送出队通知请求,所述出队通知请求包括的队列号为所述最后出队的队列号,若是,则触发所述当前通信端口的队列优先级切换。
- 根据权利要求11所述的方法,其特征在于,所述触发所述当前通信端口的队列优先级切换包括:触发所述调度器从当前通信端口的非空队列中确定出优先级最高队列,然后将所述目标队列切换到所述优先级最高队列,并且发送所述出队通知消息。
- 一种队列管理单元,其特征在于,所述队列管理单元设置于通信处理芯片的队列管理系统,所述队列管理系统中建立有若干队列、队列头表和队列 尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述队列管理单元包括:接收模块,用于接收需要入队的数据包,将所述数据包划分成若干切片,得到所述切片的切片信息,以及对所述数据包的尾切片标记尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述尾切片标识用于指示所述尾切片为所述数据包的最后一个切片;入队处理模块,用于按照所述切片在所述数据包中的顺序对对应切片信息进行入队,在所述对应切片信息入队过程中,若所述切片标记有尾切片标识,确定所述切片为所述数据包的尾切片,生成第一类节点,所述第一类节点指向所述尾切片的切片信息的缓存地址和所述尾切片的尾切片标识;判断目标队列是否为空,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列,根据所述第一类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第一类节点,以及根据所述第一类节点更新所述队列尾表的尾指针。
- 根据权利要求13所述的队列管理单元,其特征在于,所述入队处理模块还用于,在所述对应切片信息入队过程中,若所述切片未标记有尾切片标识,确定所述切片为所述数据包的非尾切片,生成第二类节点,所述第二类节点指向所述非尾切片的切片信息的缓存地址,所述非尾切片为所述数据包中除去最后一个切片的其它切片;判断所述目标队列是否为空,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列,根据所述第二类节点更新所述队列头表的头指针;若所述目标队列为非空,将所述非尾切片的切片信息写入所述目标队列,在所述目标队列对应的队列子链表尾部增加所述第二类节点,以及根据所述第二类节点更新所述队列尾表的尾指针。
- 根据权利要求13或14所述的队列管理单元,其特征在于,所述队列头表还包括队列指示信息,所述入队处理模块具体用于,读取所述目标队列对应的队列头表中的队列指示信息,根据所述队列指示信息判断所 述目标队列是否为空。
- 根据权利要求15所述的队列管理单元,其特征在于,所述入队处理模块还用于,若所述目标队列为空,将所述尾切片的切片信息写入所述目标队列之后,或者,若所述目标队列为空,将所述非尾切片的切片信息写入所述目标队列之后,更新所述队列指示信息,以及向所述队列管理系统的调度器发送出队请求消息,所述出队请求消息包括队列号。
- 根据权利要求13所述的队列管理单元,其特征在于,所述入队处理单元还用于,在将所述尾切片的切片信息写入所述目标队列之后,根据所述第一类节点更新所述队列尾表的尾指针。
- 根据权利要求14所述的队列管理单元,其特征在于,所述入队处理单元还用于,在将所述非尾切片的切片信息写入所述目标队列之后,根据所述第二类节点更新所述队列尾表的尾指针。
- 一种队列管理单元,其特征在于,所述队列管理单元设置于通信处理芯片的队列管理系统,所述队列管理系统中建立有若干队列、队列头表和队列尾表、以及一个队列总链表,所述队列总链表中包括若干队列子链表,每一个队列对应一个队列头表、一个队列尾表和一个队列子链表,所述队列头表中至少包括有头指针,所述队列尾表至少包括有尾指针,所述通信处理芯片还设置有若干通信端口,每一个所述通信端口对应至少两个队列,所述队列管理单元包括:读取模块,用于从当前通信端口的若干队列中读取目标队列的队列头表的头指针,所述队列中写有切片的切片信息,所述切片由数据包划分得到,所述数据包的尾切片标记有尾切片标识,所述切片信息至少包括端口号、优先级和所述切片在内存中的缓存地址,所述头指针指向第一类节点或第二类节点,所述第一类节点指向尾切片的切片信息的缓存地址和所述尾切片的尾切片标识,所述第二类节点指向数据包的非尾切片的切片信息的缓存地址,所述尾切片标识用于指示所述尾切片为数据包的最后一个切片,所述非尾切片为所述数据包中除去最后一个切片的其它切片;出队处理模块,用于若确定所述头指针指向所述第一类节点,根据所述第一类节点指向的尾切片的切片信息的缓存地址,从所述目标队列中读取所述尾切片的切片信息,在完成所述尾切片的切片信息读取后,触发所述当前通信端 口的队列优先级切换。
- 根据权利要求19所述的队列管理单元,其特征在于,所述出队处理模块还用于,若确定所述头指针指向所述第二类节点,根据所述第二类节点指向的非尾切片的切片信息的缓存地址,从所述目标队列中读取所述非尾切片的切片信息。并在完成所述非尾切片的切片信息读取后,触发所述当前通信端口的队列优先级切换。
- 根据权利要求19或20所述的队列管理单元,其特征在于,所述出队处理模块还用于,在完成所述尾切片的切片信息读取后,或者在完成所述非尾切片的切片信息读取后,从所述目标队列对应的队列子链表中读取下一个节点,并根据所述下一个节点更新所述队列头表的头指针,所述下一个节点为所述第一类节点或第二类节点。
- 根据权利要求19或20所述的队列管理单元,其特征在于,所述出队管理模块还用于,在从当前通信端口的若干队列中读取目标队列的队列头表的头指针之前,接收调度器发送的出队通知消息,所述出队通知消息包括队列号,所述队列号指示的队列为所述目标队列。
- 根据权利要求22所述的队列管理单元,其特征在于,所述出队处理模块具体用于,在到达通信端口调度时间时,触发所述调度器根据配置时隙调度对应通信端口,并读取上一次所述对应通信端口的最后出队信息,所述最后出队信息至少包括上一次所述对应通信端口最后出队的队列号、最后出队的切片信息以及用于所述最后出队的切片信息出队的节点,所述调度器根据所述最后出队信息判断是否触发所述当前通信端口的队列优先级切换;若否,所述调度器向队列管理单元发送出队通知请求,所述出队通知请求包括的队列号为所述最后出队的队列号,若是,则触发所述当前通信端口的队列优先级切换。
- 根据权利要求23所述的队列管理单元,其特征在于,所述出队处理模块具体用于,触发所述调度器从当前通信端口的非空队列中确定出优先级最高队列,然后将所述目标队列切换到所述优先级最高队列,并且发送所述出队通知消息。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/883,465 US10326713B2 (en) | 2015-07-30 | 2018-01-30 | Data enqueuing method, data dequeuing method, and queue management circuit |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510459221.X | 2015-07-30 | ||
CN201510459221.XA CN105162724B (zh) | 2015-07-30 | 2015-07-30 | 一种数据入队与出队方法及队列管理单元 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/883,465 Continuation US10326713B2 (en) | 2015-07-30 | 2018-01-30 | Data enqueuing method, data dequeuing method, and queue management circuit |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017016505A1 true WO2017016505A1 (zh) | 2017-02-02 |
Family
ID=54803480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/092055 WO2017016505A1 (zh) | 2015-07-30 | 2016-07-28 | 一种数据入队与出队方法及队列管理单元 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10326713B2 (zh) |
CN (1) | CN105162724B (zh) |
WO (1) | WO2017016505A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190112804A (ko) * | 2017-02-17 | 2019-10-07 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 패킷 처리 방법 및 장치 |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105162724B (zh) * | 2015-07-30 | 2018-06-26 | 华为技术有限公司 | 一种数据入队与出队方法及队列管理单元 |
CN106789716B (zh) * | 2016-12-02 | 2019-08-13 | 西安电子科技大学 | Tdma自组网的mac层队列调度方法 |
CN106899516B (zh) * | 2017-02-28 | 2020-07-28 | 华为技术有限公司 | 一种队列清空方法以及相关设备 |
CN107018095B (zh) * | 2017-03-29 | 2020-08-11 | 西安电子科技大学 | 基于离散事件的交换单元仿真系统及方法 |
CN108664335B (zh) * | 2017-04-01 | 2020-06-30 | 北京忆芯科技有限公司 | 通过代理进行队列通信的方法与装置 |
CN107220187B (zh) * | 2017-05-22 | 2020-06-16 | 北京星网锐捷网络技术有限公司 | 一种缓存管理方法、装置及现场可编程门阵列 |
CN112506623A (zh) * | 2019-09-16 | 2021-03-16 | 瑞昱半导体股份有限公司 | 消息请求方法及其装置 |
US11836645B2 (en) * | 2020-06-15 | 2023-12-05 | Nvidia Corporation | Generating augmented sensor data for testing operational capability in deployed environments |
CN112433839B (zh) * | 2020-12-16 | 2024-04-02 | 苏州盛科通信股份有限公司 | 实现网络芯片高速调度的方法、设备及存储介质 |
CN112905638B (zh) * | 2021-02-02 | 2022-05-17 | 浙江邦盛科技有限公司 | 一种基于喇叭状的时间切片处理方法 |
WO2022225500A1 (en) * | 2021-04-18 | 2022-10-27 | Zeku, Inc. | Apparatus and method of multiple carrier slice-based uplink grant scheduling |
CN114817091B (zh) * | 2022-06-28 | 2022-09-27 | 井芯微电子技术(天津)有限公司 | 基于链表的fwft fifo系统、实现方法及设备 |
CN115242726B (zh) * | 2022-07-27 | 2024-03-01 | 阿里巴巴(中国)有限公司 | 队列的调度方法和装置及电子设备 |
CN115373645B (zh) * | 2022-10-24 | 2023-02-03 | 济南新语软件科技有限公司 | 一种基于可动态定义的复杂数据包操作方法及系统 |
CN116991609B (zh) * | 2023-09-26 | 2024-01-16 | 珠海星云智联科技有限公司 | 队列公平处理方法、设备以及可读存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070121499A1 (en) * | 2005-11-28 | 2007-05-31 | Subhasis Pal | Method of and system for physically distributed, logically shared, and data slice-synchronized shared memory switching |
CN103220230A (zh) * | 2013-05-14 | 2013-07-24 | 中国人民解放军国防科学技术大学 | 支持报文交叉存储的动态共享缓冲方法 |
CN103546392A (zh) * | 2012-07-12 | 2014-01-29 | 中兴通讯股份有限公司 | 队列单周期调度方法和装置 |
CN103581055A (zh) * | 2012-08-08 | 2014-02-12 | 华为技术有限公司 | 报文的保序方法、流量调度芯片及分布式存储系统 |
CN105162724A (zh) * | 2015-07-30 | 2015-12-16 | 华为技术有限公司 | 一种数据入队与出队方法及队列管理单元 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002164916A (ja) | 2000-11-22 | 2002-06-07 | Fujitsu Ltd | 中継装置 |
IL155742A0 (en) * | 2003-05-04 | 2006-12-31 | Teracross Ltd | Method and apparatus for fast contention-free, buffer management in a muti-lane communication system |
US20050286526A1 (en) * | 2004-06-25 | 2005-12-29 | Sood Sanjeev H | Optimized algorithm for stream re-assembly |
JP4550728B2 (ja) * | 2005-12-14 | 2010-09-22 | アラクサラネットワークス株式会社 | パケット転送装置及びマルチキャスト展開方法 |
US20070174411A1 (en) * | 2006-01-26 | 2007-07-26 | Brokenshire Daniel A | Apparatus and method for efficient communication of producer/consumer buffer status |
CN101179486B (zh) * | 2006-11-10 | 2010-07-14 | 中兴通讯股份有限公司 | 一种计算机网络数据包转发的car队列管理方法 |
CN100521655C (zh) * | 2006-12-22 | 2009-07-29 | 清华大学 | 按每流排队的物理队列动态共享装置 |
US9262554B1 (en) * | 2010-02-16 | 2016-02-16 | Pmc-Sierra Us, Inc. | Management of linked lists within a dynamic queue system |
CN202061027U (zh) * | 2011-04-25 | 2011-12-07 | 浙江华鑫实业有限公司 | 医用腿托架 |
CN102377682B (zh) * | 2011-12-12 | 2014-07-23 | 西安电子科技大学 | 基于定长单元存储变长分组的队列管理方法及设备 |
CN102437929B (zh) * | 2011-12-16 | 2014-05-07 | 华为技术有限公司 | 队列管理中的数据出队方法及装置 |
US9436634B2 (en) * | 2013-03-14 | 2016-09-06 | Seagate Technology Llc | Enhanced queue management |
AU2014280800A1 (en) * | 2013-06-13 | 2015-12-10 | Tsx Inc | Low latency device interconnect using remote memory access with segmented queues |
CN103530130B (zh) * | 2013-10-28 | 2016-12-07 | 迈普通信技术股份有限公司 | 多核系统中实现多入多出队列的方法和设备 |
EP3166269B1 (en) * | 2014-08-07 | 2019-07-10 | Huawei Technologies Co., Ltd. | Queue management method and apparatus |
US10484311B2 (en) * | 2015-03-31 | 2019-11-19 | Cavium, Llc | Method and apparatus for using multiple linked memory lists |
- 2015-07-30: CN CN201510459221.XA patent/CN105162724B/zh active Active
- 2016-07-28: WO PCT/CN2016/092055 patent/WO2017016505A1/zh active Application Filing
- 2018-01-30: US US15/883,465 patent/US10326713B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070121499A1 (en) * | 2005-11-28 | 2007-05-31 | Subhasis Pal | Method of and system for physically distributed, logically shared, and data slice-synchronized shared memory switching |
CN103546392A (zh) * | 2012-07-12 | 2014-01-29 | 中兴通讯股份有限公司 | 队列单周期调度方法和装置 |
CN103581055A (zh) * | 2012-08-08 | 2014-02-12 | 华为技术有限公司 | 报文的保序方法、流量调度芯片及分布式存储系统 |
CN103220230A (zh) * | 2013-05-14 | 2013-07-24 | 中国人民解放军国防科学技术大学 | 支持报文交叉存储的动态共享缓冲方法 |
CN105162724A (zh) * | 2015-07-30 | 2015-12-16 | 华为技术有限公司 | 一种数据入队与出队方法及队列管理单元 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190112804A (ko) * | 2017-02-17 | 2019-10-07 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 패킷 처리 방법 및 장치 |
EP3573297A4 (en) * | 2017-02-17 | 2019-12-25 | Huawei Technologies Co., Ltd. | PACKET PROCESSING METHOD AND APPARATUS |
KR102239717B1 (ko) * | 2017-02-17 | 2021-04-12 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 패킷 처리 방법 및 장치 |
US11218572B2 (en) | 2017-02-17 | 2022-01-04 | Huawei Technologies Co., Ltd. | Packet processing based on latency sensitivity |
Also Published As
Publication number | Publication date |
---|---|
CN105162724B (zh) | 2018-06-26 |
CN105162724A (zh) | 2015-12-16 |
US10326713B2 (en) | 2019-06-18 |
US20180159802A1 (en) | 2018-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017016505A1 (zh) | 一种数据入队与出队方法及队列管理单元 | |
US9608926B2 (en) | Flexible recirculation bandwidth management | |
KR100716184B1 (ko) | 네트워크 프로세서에서의 큐 관리 방법 및 그 장치 | |
US11171891B2 (en) | Congestion drop decisions in packet queues | |
US10193831B2 (en) | Device and method for packet processing with memories having different latencies | |
US8867559B2 (en) | Managing starvation and congestion in a two-dimensional network having flow control | |
CN113711547A (zh) | 促进网络接口控制器(nic)中的高效包转发的系统和方法 | |
WO2017000657A1 (zh) | 一种缓存管理的方法、装置和计算机存储介质 | |
US9485326B1 (en) | Scalable multi-client scheduling | |
US20070140282A1 (en) | Managing on-chip queues in switched fabric networks | |
US20130343398A1 (en) | Packet-based communication system with traffic prioritization | |
US11153221B2 (en) | Methods, systems, and devices for classifying layer 4-level data from data queues | |
JP2011024027A (ja) | パケット送信制御装置、ハードウェア回路およびプログラム | |
US8514700B2 (en) | MLPPP occupancy based round robin | |
CN108206787A (zh) | 一种拥塞避免方法和装置 | |
CN109962859A (zh) | 一种报文调度方法及设备 | |
US10021035B1 (en) | Queuing methods and apparatus in a network device | |
US20040252711A1 (en) | Protocol data unit queues | |
US8930604B2 (en) | Reliable notification of interrupts in a network processor by prioritization and policing of interrupts | |
CN107682282B (zh) | 保障业务带宽的服务质量方法及网络设备 | |
CN102594670B (zh) | 多端口多流的调度方法、装置及设备 | |
CN107005487B (zh) | 用于支持联网设备中的高效虚拟输出队列(voq)资源利用的系统和方法 | |
JP5183460B2 (ja) | パケットスケジューリング方法および装置 | |
JP6620760B2 (ja) | 管理ノード、端末、通信システム、通信方法、および、プログラム | |
JP2014179838A (ja) | 通信装置、およびプログラム |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16829870; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16829870; Country of ref document: EP; Kind code of ref document: A1