WO2022121808A1 - Scheduling method, device, and storage medium for a cut-through forwarding mode - Google Patents

Scheduling method, device, and storage medium for a cut-through forwarding mode

Info

Publication number
WO2022121808A1
Authority
WO
WIPO (PCT)
Prior art keywords
linked list
main
message
queue
packet
Application number
PCT/CN2021/135506
Other languages
English (en)
French (fr)
Inventor
徐子轩
夏杰
Original Assignee
苏州盛科通信股份有限公司
Application filed by 苏州盛科通信股份有限公司
Publication of WO2022121808A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9015: Buffering arrangements for supporting a linked list
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/6215: Individual queue per QOS, rate or priority

Definitions

  • The invention belongs to the field of communication technology and mainly relates to a scheduling method, device, and storage medium for a cut-through forwarding mode.
  • A typical packet storage-scheduling model is shown in FIG. 1; its input signal includes {queue number, data, linked-list address (write-information address)}.
  • The storage-scheduling model consists mainly of the following modules: a data memory, which caches the "data" according to the "write-information address" of the input signal.
  • A linked-list control module, used to control the conventional linked-list "enqueue" and "dequeue" operations; linked-list control is a generic technique and is not elaborated here. The linked-list control module comprises four sub-modules: {head pointer memory, tail pointer memory, linked-list memory, queue read status}.
  • The head pointer memory is set to store the storage address pointed to by the data head pointer;
  • the tail pointer memory is set to store the storage address pointed to by the data tail pointer;
  • the linked-list memory is set to store the storage addresses corresponding to the data.
  • The queue read status indicates the state of the linked-list control module: when it is "0", no other data is waiting to be scheduled in the queue; when it is "1", other data is waiting to be scheduled in the queue.
  • Scheduler: if the queue read status is 1, the scheduler participates in scheduling; it sends the scheduled queue to the linked-list control module to obtain the queue's read "linked-list address" and triggers the linked-list control module to update the queue read status information.
  • The read-information module accesses the data memory according to the read "linked-list address" obtained by the scheduler, obtains the data, and outputs it.
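The storage-scheduling model above can be sketched in software. The following minimal Python model is illustrative only (names such as `LinkedListQueue` are assumptions, not from the patent); it shows how the head pointer memory, tail pointer memory, linked-list memory, and per-queue read status cooperate during enqueue and dequeue:

```python
class LinkedListQueue:
    """Toy model of the storage-scheduling module described in FIG. 1."""

    def __init__(self, num_queues, mem_size):
        self.data_mem = [None] * mem_size    # caches "data" at the write-information address
        self.head = [None] * num_queues      # head pointer memory
        self.tail = [None] * num_queues      # tail pointer memory
        self.next = [None] * mem_size        # linked-list memory: address -> next address
        self.read_status = [0] * num_queues  # 0: nothing waiting, 1: data waiting

    def enqueue(self, q, addr, data):
        self.data_mem[addr] = data
        if self.read_status[q] == 0:         # empty queue: addr becomes both head and tail
            self.head[q] = self.tail[q] = addr
            self.read_status[q] = 1
        else:                                # link addr behind the current tail
            self.next[self.tail[q]] = addr
            self.tail[q] = addr

    def dequeue(self, q):
        if self.read_status[q] == 0:
            return None
        addr = self.head[q]
        if addr == self.tail[q]:             # last element: queue becomes empty
            self.read_status[q] = 0
        else:
            self.head[q] = self.next[addr]   # advance head along the linked list
        return self.data_mem[addr]
```

A scheduler would consult `read_status` to decide which queues may participate in scheduling, then call `dequeue` for the selected queue.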
  • The core function of a network chip is packet forwarding. There are two forwarding modes: store-and-forward and cut-through forwarding.
  • In store-and-forward mode, the packet must first be completely cached in memory as a linked list (the packet data linked list) before the forwarding logic determines its destination port.
  • An enqueue request is then generated: the key information of the packet (start address of the packet, packet length, etc.) is written into the queue's linked list (the information linked list), where it waits for scheduling; after a QoS (Quality of Service) policy selects the queue, the key information is scheduled out and sent to the "packet data reading module", which reads the packet from memory according to the start address in the key information and sends it to the destination port.
  • The cut-through forwarding mode can determine the destination port of a packet according to the forwarding logic without waiting for the packet to be completely cached in the chip's memory.
  • In cut-through forwarding mode, an independent QoS module is set up, and the key information of different packets is chained together with a linked list. After a policy selects the queue, the key information corresponding to the head address of the information linked list is read out; the "packet data linked list head address" in it is sent to the "packet reading module" for the packet read operation, while the packet length in the key information is used to update the queue's QoS state, and the queue's own linked-list state is updated to await the next scheduling.
  • Because scheduling starts before the complete packet is cached in the chip's memory, the real packet length cannot be obtained when the key information is generated; the QoS module therefore cannot use the real packet length when updating its internal state (e.g., traffic shaping), which impairs QoS accuracy. That is, when reading the packet from memory, the packet may not yet be fully cached in the chip, so the packet data may be unreadable.
  • The existing cut-through forwarding mode requires two independent linked lists, the "data linked list" and the "information linked list"; when the packet cache of the network chip is large, the physical area they consume is large. Moreover, because of the two linked-list read and write operations, the forwarding delay of the packet is relatively large, and these operations inevitably introduce additional memories for caching, further increasing the physical overhead of the logic.
  • The purpose of the embodiments of the present invention is to provide a scheduling method for a cut-through forwarding mode, a network chip, and a readable storage medium.
  • An embodiment of the present invention provides a scheduling method for a cut-through forwarding mode, the method including: configuring a master linked list and a slave linked list with identical structures;
  • the master linked list includes: a master linked-list memory, a master head pointer register, and a master tail pointer register;
  • the slave linked list includes: a slave linked-list memory, a slave head pointer register, and a slave tail pointer register;
  • packets are received in the cut-through forwarding mode; each packet includes at least one packet fragment, one of which carries a start-bit identifier and/or one carries an end-bit identifier;
  • if the master linked list is empty, the current packet is stored and its storage address is synchronously linked to the master linked list;
  • if the master linked list is not empty and the previous packet has completed linking in the master linked list, the current packet is stored and its storage address is synchronously linked to the master linked list;
  • if the master linked list is not empty and the previous packet has not completed linking in the master linked list, the current packet is stored and, after its storage completes, its storage address is linked to the slave linked list;
  • if the slave linked list is not empty, the status of the master linked list is monitored in real time, and when the end-bit identifier carried by the most recent packet fragment completes linking in the master linked list, the contents of the slave linked list are transferred and linked to the address corresponding to the end-bit identifier currently stored in the master linked list.
  • If the master linked list is empty, storing the current packet and synchronously linking its storage address to the master linked list includes: writing the queue head address corresponding to the fragment carrying the start-bit identifier into the master head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-bit identifier into the master tail pointer register; if the current packet includes more than one fragment, then, excluding the fragment carrying the start-bit identifier, the storage addresses corresponding to the remaining fragments are stored in the master linked-list memory in sequence and linked in their order of arrangement.
  • If the master linked list is not empty and the previous packet has completed linking in the master linked list, storing the current packet and synchronously linking its storage address to the master linked list includes: writing the queue head address corresponding to the fragment carrying the start-bit identifier into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register; meanwhile, writing the queue tail address corresponding to the fragment carrying the end-bit identifier into the master tail pointer register, replacing its contents; if the current packet includes more than one fragment, the storage addresses corresponding to the remaining fragments are stored in the master linked-list memory in sequence and linked in their order of arrangement.
  • If the master linked list is not empty and the previous packet has not completed linking in the master linked list, storing the current packet and, after its storage completes, linking its storage address to the slave linked list includes: if the slave linked list is currently empty, writing the queue head address corresponding to the fragment carrying the start-bit identifier into the slave head pointer register and the queue tail address corresponding to the fragment carrying the end-bit identifier into the slave tail pointer register; if the slave linked list is not empty, writing the queue head address corresponding to the fragment carrying the start-bit identifier into the slave linked-list memory and linking it to the queue tail address currently stored in the slave tail pointer register, while writing the queue tail address corresponding to the fragment carrying the end-bit identifier into the slave tail pointer register, replacing its contents; if the current packet includes more than one fragment, the storage addresses corresponding to the remaining fragments are stored in the slave linked-list memory in sequence and linked in their order of arrangement.
  • If the slave linked list is not empty, monitoring the status of the master linked list in real time and, when the end-bit identifier carried by the most recent packet fragment completes linking in the master linked list, transferring the contents of the slave linked list and linking them to the address corresponding to the end-bit identifier currently stored in the master linked list includes: transferring the queue head address stored in the slave head pointer register into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register; meanwhile, transferring the queue tail address stored in the slave tail pointer register into the master tail pointer register, replacing its contents.
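The transfer step above can be illustrated with a small sketch. For simplicity it assumes a single shared next-pointer memory (the patent uses separate master and slave linked-list memories and transfers the contents between them); splicing the slave list behind the master list then amounts to one pointer write plus a tail-pointer replacement:

```python
def splice(next_mem, master, slave):
    """Append the slave list behind the master list (illustrative sketch).

    master/slave are dicts with 'head', 'tail', and 'read_status' fields,
    standing in for the head/tail pointer registers and read status registers.
    """
    if slave["read_status"] == 0:
        return                                    # slave list empty: nothing to transfer
    if master["read_status"] == 0:
        master.update(slave)                      # master empty: adopt the slave list wholesale
    else:
        next_mem[master["tail"]] = slave["head"]  # link slave head behind master tail
        master["tail"] = slave["tail"]            # slave tail replaces the master tail pointer
    slave["read_status"] = 0                      # slave list is now empty
```

This is why the transfer is cheap: the fragments already linked inside the slave list keep their internal links, and only the boundary pointers change.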
  • A master queue read status register is configured for the master linked list; whether the current master linked list is empty is obtained by querying whether the master queue read status register is enabled.
  • A slave queue read status register is configured for the slave linked list; whether the current slave linked list is empty is obtained by querying whether the slave queue read status register is enabled.
  • The method further includes: configuring a master queue write status register, which is set to identify whether a packet fragment may perform a link operation on the master linked list.
  • If the master queue write status register is enabled, any packet fragment may perform a link operation on the master linked list; if it is disabled, the previous packet is performing a link operation on the master linked list and its fragment carrying the end-bit identifier has not completed linking, so only fragments belonging to the previous packet may perform link operations on the master linked list.
  • The method further includes: configuring a linked-list status register, each storage space of which stores the linked-list write status of a corresponding source channel.
  • If the storage location corresponding to a source channel is enabled, the current packet fragments from that source channel may forcibly link to the master linked list; if it is disabled, the current packet fragments from that source channel may not perform link operations on the master linked list.
  • An embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program runnable on the processor; when executing the program, the processor implements the steps of the scheduling method for the cut-through forwarding mode described above.
  • An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the scheduling method for the cut-through forwarding mode described above.
  • The beneficial effects of the embodiments of the present invention are: the scheduling method, device, and storage medium for the cut-through forwarding mode can optimize logical and physical overhead and improve QoS accuracy on the basis of realizing the chip's unicast cut-through forwarding function.
  • FIG. 1 is a schematic structural diagram of the data storage-scheduling model provided in the background art;
  • FIG. 2 is a schematic flowchart of a scheduling method for a cut-through forwarding mode provided by an embodiment of the present invention;
  • FIG. 3 and FIG. 4 are schematic diagrams of the write-data control principle of a specific example of the present invention.
  • A scheduling method for a cut-through forwarding mode includes:
  • configuring a master linked list and a slave linked list with identical structures; the master linked list includes a master linked-list memory, a master head pointer register, and a master tail pointer register; the slave linked list includes a slave linked-list memory, a slave head pointer register, and a slave tail pointer register;
  • receiving packets in cut-through forwarding mode, each packet including at least one packet fragment, one of which carries a start-bit identifier and/or one carries an end-bit identifier;
  • if the master linked list is empty, storing the current packet and synchronously linking its storage address to the master linked list; if the master linked list is not empty and the previous packet has completed linking, storing the current packet and synchronously linking its storage address to the master linked list; if the master linked list is not empty and the previous packet has not completed linking, storing the current packet and, after its storage completes, linking its storage address to the slave linked list;
  • if the slave linked list is not empty, monitoring the status of the master linked list in real time; when the end-bit identifier carried by the most recent packet fragment completes linking in the master linked list, the contents of the slave linked list are transferred and linked to the address corresponding to the end-bit identifier currently stored in the master linked list.
  • The packets in the network chip are aggregated from the MAC (Media Access Control) layer, and each packet is written into the data memory in the order of its fragments L0, L1, ..., LN-1. Assuming the real packet length is L and the data memory bit width is W, the packet is split into N = int(L/W) fragments, where int is the round-up (ceiling) function. In a specific example, fragment L0 carries the sop (start of packet) flag, i.e., the start-bit identifier, and fragment LN-1 carries the eop (end of packet) flag, i.e., the end-bit identifier, thereby distinguishing the attributes of each fragment; understandably, when the packet length is less than or equal to W, the single fragment carries both the sop and eop flags.
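The fragmentation and flagging just described can be sketched as follows; `fragment_packet` and its dictionary layout are illustrative assumptions, not the patent's hardware format:

```python
import math

def fragment_packet(payload: bytes, W: int):
    """Split a packet of length L into N = ceil(L/W) fragments, tagging the
    first fragment with sop and the last with eop (both set when L <= W)."""
    n = max(1, math.ceil(len(payload) / W))
    frags = []
    for i in range(n):
        frags.append({
            "data": payload[i * W:(i + 1) * W],  # fragment Li of the packet
            "sop": i == 0,                       # start-bit identifier
            "eop": i == n - 1,                   # end-bit identifier
        })
    return frags
```

For example, a 10-byte packet with W = 4 yields three fragments: the first carries sop, the last carries eop, and the middle one carries neither.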
  • When a packet fragment needs to be cached, an address is first requested from the "data memory"; denote it ptr_X. Whether the current packet is written to the master or the slave linked list, the following operations are performed. If the current fragment carries sop, it is the first fragment of the entire packet; ptr_X is then used to update the "head-address memory" and "tail-address memory" of the packet's source channel. If the current fragment does not carry sop, the value in the "tail-address memory" is used as the address and ptr_X as the value to write into the "data linked list", and ptr_X is simultaneously written into the "tail-address memory". The linked-list write operations are described in detail below.
  • Steps S31, S32, S33, and S34 are numbered only for convenience of description; they may be executed sequentially or in parallel, and their order does not affect the output result.
  • For step S31, it specifically includes: writing the queue head address corresponding to the fragment carrying the start-bit identifier into the master head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-bit identifier into the master tail pointer register; if the current packet includes more than one fragment, the storage addresses corresponding to the remaining fragments are stored in the master linked-list memory in sequence and linked in their order of arrangement.
  • For step S32, it specifically includes: writing the queue head address corresponding to the fragment carrying the start-bit identifier into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register; meanwhile, writing the queue tail address corresponding to the fragment carrying the end-bit identifier into the master tail pointer register, replacing its contents; if the current packet includes more than one fragment, the storage addresses corresponding to the remaining fragments are stored in the master linked-list memory in sequence and linked in their order of arrangement.
  • For step S33, it specifically includes: if the master linked list is not empty and the previous packet has not completed linking in the master linked list, the current packet is stored and, after its storage completes, its storage address is linked to the slave linked list. If the slave linked list is currently empty, the queue head address corresponding to the fragment carrying the start-bit identifier is written into the slave head pointer register and the queue tail address corresponding to the fragment carrying the end-bit identifier into the slave tail pointer register; if the slave linked list is not empty, the queue head address corresponding to the fragment carrying the start-bit identifier is written into the slave linked-list memory and linked to the queue tail address currently stored in the slave tail pointer register, while the queue tail address corresponding to the fragment carrying the end-bit identifier is written into the slave tail pointer register, replacing its contents; if the current packet includes more than one fragment, the storage addresses corresponding to the remaining fragments are stored in the slave linked-list memory in sequence and linked in their order of arrangement.
  • For step S34, it specifically includes: if the slave linked list is not empty, monitoring the status of the master linked list in real time; when the end-bit identifier carried by the most recent packet fragment completes linking in the master linked list, the queue head address stored in the slave head pointer register is transferred into the master linked-list memory and linked to the queue tail address currently stored in the master tail pointer register, while the queue tail address stored in the slave tail pointer register is transferred into the master tail pointer register, replacing its contents.
  • The master queue read status register, the slave queue read status register, the master queue write status register, and the linked-list status register are configured to track the status of each linked list; they are described in detail below.
  • The master queue write status register is set to identify whether a packet fragment may link to the master linked list. If it is enabled, any packet fragment may perform a link operation on the master linked list; if it is disabled, the previous packet is performing a link operation on the master linked list and its fragment carrying the end-bit identifier has not completed linking, so only fragments belonging to the previous packet may perform link operations on the master linked list.
  • Each storage space of the linked-list status register stores the linked-list write status of a corresponding source channel. If the storage location corresponding to a source channel is enabled, the current packet fragments from that source channel may forcibly link to the master linked list; if it is disabled, they may not. The linked-list status register thus identifies, when the master queue write status register is disabled, which source channel's packet fragments may continue linking to the master linked list.
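The combined effect of the master queue write status register and the per-channel linked-list status register can be summarized as a small decision function; the names below are illustrative assumptions, not register names from the patent:

```python
def may_link_to_main(main_write_enabled: bool,
                     channel_forced: dict, channel: str) -> bool:
    """Decide whether a fragment arriving from `channel` may link to the
    master linked list. If the master queue write status register is enabled,
    any fragment may link; otherwise only the channel whose bit in the
    linked-list status register is set (the channel of the packet whose
    linking is still in progress) may continue linking."""
    if main_write_enabled:
        return True                        # any fragment may link
    return channel_forced.get(channel, False)  # only the in-progress channel
```

Fragments denied here are the ones that, per step S33, are stored and linked to the slave linked list instead.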
  • A1. Receive packet M1 and generate an enqueue request. The enqueue request may be generated at any fragment of the packet, as determined by the actual policy; this is not limited in the embodiments of the present invention.
  • A2. Use the "queue number" of packet M1 as the address to read the master queue read status register. If its status is "0", the master linked list is empty and A3 is entered; if it is "1", the master linked list is not empty and A4 is entered.
  • A3. Step S31 is executed. Packet M1 is divided into two fragments, M11 and M12; M11 carries sop and M12 carries eop. The address of fragment M11 is used as the value and the queue number of M11 as the address to write the master head pointer register and the master tail pointer register respectively; the master queue read status register is set to "1", and the master queue write status register is set to "0" to indicate non-enable. Then the address of fragment M12 is used as the value and the address pointed to by the current master tail pointer as the address, and is written into the master linked-list memory to complete the link operation; the address of M12 is written into the master tail pointer register as the value to complete the update of the master tail pointer register; the master queue read status register is kept at "1", and the master queue write status register is set to "1" to indicate enable.
  • A4. If the query finds the master queue write status register is "1", i.e., enabled, the master linked list is not empty and the previous packet has completed linking in the master linked list; the fragments of the current new packet may link to the master linked list, and step A5 is entered.
  • If the query finds the master queue write status register is "0", i.e., disabled, the master linked list is not empty and the previous packet has not completed linking in the master linked list; at this point the master linked list can only continue to receive the previous packet's link operations, and the new packet executes step A6.
  • A5. Step S32 is executed. Packet M2 is divided into two fragments, M21 and M22; M21 carries sop and M22 carries eop. The address of fragment M21 is used as the value and the address pointed to by the current master tail pointer as the address, and is written into the master linked-list memory to complete the link operation; the address of M21 is written into the master tail pointer register to complete its update, the master queue read status register is kept at "1", and the master queue write status register is set to "0". Then the address of fragment M22 is used as the value and the address pointed to by the current master tail pointer as the address, and is written into the master linked-list memory to complete the link operation; the address of M22 is written into the master tail pointer register to complete its update, the master queue read status register is kept at "1", and the master queue write status register is set to "1".
  • A6. Packet M3 is linking to the master linked list, and the fragment of M3 carrying eop has not yet been linked to the master linked list; at this time the storage of packet M4 is completed and a link request is issued to the linked list. Packet M4 is divided into fragments M41 and M42; M41 carries sop and M42 carries eop. It should be noted that, to avoid link errors and save hardware space, one of the conditions for executing step S33 is complete storage of the packet. Step S33 is then performed.
  • A7. The address of fragment M41 is used as the value and the queue number of M41 as the address to write the slave head pointer register and the slave tail pointer register respectively, and the slave queue read status register is set to "1".
  • A8. The address of fragment M42 is used as the value and the address pointed to by the current slave tail pointer as the address, and is written into the slave linked-list memory to complete the link operation; the address of M42 is written into the slave tail pointer register to complete its update, and the slave queue read status register is kept at "1".
  • If the fragment of packet M3 carrying eop has still not been linked, packet M5 must likewise be linked to the slave linked list, behind packet M4; the linking method is the same as described above and is not repeated here.
  • The execution order of the above steps A1 to A8 is not fixed.
  • When the packet performing the link operation on the master linked list has been completely stored and has finished linking on the master linked list, the slave queue read status register is read. If its status is 0, no other packets are linked in the slave linked list, and steps A1 to A8 are executed; if its status is 1, other packets are linked in the slave linked list, and step A9, which executes step S34, must be entered.
  • A9. Fragment M32 of packet M3, carrying eop, has been received and linked on the master linked list; the contents of the slave linked list now need to be linked to the master linked list. The queue head address corresponding to fragment M41, stored in the slave head pointer register, is transferred into the master linked-list memory and linked to the queue tail address currently stored in the master tail pointer register.
  • The information in the slave linked-list memory is transferred directly into the master linked-list memory; that is, in the master linked-list memory, fragment M42 remains linked behind fragment M41, and packet M5 remains linked behind fragment M42.
  • Meanwhile, the queue tail address corresponding to packet M5, stored in the slave tail pointer register, is transferred into the master tail pointer register, replacing its contents.
  • For the dequeue operation: there may be multiple queues for the same destination port. When the destination port selects a queue to dequeue, reselection may be performed only after a fragment carrying the eop flag in that queue has been scheduled out; the port may then switch to another queue or stop dequeuing according to the traffic-shaping result. Meanwhile, the state of the master queue read status register must be updated. On dequeue, the address carried by the packet fragment is used to access the data memory and obtain the fragment's data, without any additional scheduling behavior.
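The dequeue rule above (a queue, once selected, must be drained through a fragment carrying eop before the port may reselect) can be sketched as follows; this is an illustrative model, not the patent's hardware scheduler:

```python
def drain_one_packet(queue):
    """Pop fragments from `queue` (a list of dicts with an 'eop' flag) until
    a fragment carrying eop has been scheduled out; only then is the
    destination port allowed to reselect another queue."""
    out = []
    while queue:
        frag = queue.pop(0)
        out.append(frag)
        if frag["eop"]:
            break  # packet boundary reached: reselection is now allowed
    return out
```

Each call thus schedules out exactly one packet's worth of fragments, matching the constraint that packet boundaries gate queue reselection.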
  • An embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program runnable on the processor; when executing the program, the processor implements the steps of the scheduling method for the cut-through forwarding mode described above.
  • An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the scheduling method for the cut-through forwarding mode described above.
  • The scheduling method, device, and storage medium for the cut-through forwarding mode merge the physical memories of the "packet data linked list" and the "information linked list": using only one physical linked list with special enqueue and dequeue operations, they realize the unicast cut-through forwarding function, update the QoS state in real time according to the real packet length, improve QoS accuracy and system performance, and reduce the physical overhead of the logic.
  • On dequeue, the fragment address of the packet is obtained directly, optimizing the packet forwarding delay.
  • The modules described as separate components may or may not be physically separated; the components shown as modules are logic modules, which may be located in one module of the chip or distributed among multiple data-processing modules in the chip. Some or all of the modules may be selected according to actual needs to achieve the purpose of this implementation; those of ordinary skill in the art can understand and implement it without creative effort.
  • The present invention can be used in many general-purpose or special-purpose communication chips, for example switch chips, router chips, and server chips.


Abstract

An embodiment of the present invention provides a scheduling method, device, and storage medium for a cut-through forwarding mode. The method includes: receiving packets in cut-through forwarding mode; if the master linked list is empty, storing the current packet and synchronously linking its storage address to the master linked list; if the master linked list is not empty and the previous packet has completed linking in the master linked list, storing the current packet and synchronously linking its storage address to the master linked list; if the master linked list is not empty and the previous packet has not completed linking in the master linked list, storing the current packet and, after its storage completes, linking its storage address to the slave linked list; if the slave linked list is not empty, monitoring the status of the master linked list in real time, and when the end-bit identifier carried by the most recent packet fragment completes linking in the master linked list, transferring the contents of the slave linked list and linking them to the address corresponding to the end-bit identifier currently stored in the master linked list. On the basis of realizing the chip's unicast cut-through forwarding function, the embodiments of the present invention optimize logical and physical overhead.

Description

Scheduling method, device, and storage medium for a cut-through forwarding mode
This invention claims priority to the Chinese patent application filed with the Chinese Patent Office on December 7, 2020, with application number 202011427671.8 and the invention title "Scheduling method, device, and storage medium for a cut-through forwarding mode", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention belongs to the field of communication technology and mainly relates to a scheduling method, device, and storage medium for a cut-through forwarding mode.
Background Art
In high-density network chips there is a large demand for packet storage and scheduling. A typical packet storage-scheduling model is shown in FIG. 1; its input signal includes {queue number, data, linked-list address (write-information address)}.
The storage-scheduling model consists mainly of the following modules. A data memory, which caches the "data" according to the "write-information address" of the input signal. A linked-list control module, used to control the conventional linked-list "enqueue" and "dequeue" operations; linked-list control is a generic technique and is not elaborated here. The linked-list control module comprises four sub-modules: {head pointer memory, tail pointer memory, linked-list memory, queue read status}. The head pointer memory is set to store the storage address pointed to by the data head pointer; the tail pointer memory is set to store the storage address pointed to by the data tail pointer; the linked-list memory is set to store the storage addresses corresponding to the data. The queue read status indicates the state of the linked-list control module: when it is "0", no other data is waiting to be scheduled in the queue; when it is "1", other data is waiting to be scheduled. A scheduler: if the queue read status is 1, the scheduler participates in scheduling; it sends the scheduled queue to the linked-list control module to obtain the queue's read "linked-list address" and triggers the linked-list control module to update the queue read status information. A read-information module, which accesses the data memory according to the read "linked-list address" obtained by the scheduler, obtains the data, and outputs it.
The core function of a network chip is packet forwarding. There are two forwarding modes: store-and-forward and cut-through forwarding.
In store-and-forward mode, the packet must first be completely cached in memory as a linked list (the packet data linked list) before the forwarding logic determines its destination port. An enqueue request is then generated, the key information of the packet (start address of the packet, packet length, etc.) is written into the queue's linked list (the information linked list), and the packet waits for scheduling; after a QoS (Quality of Service) policy selects the queue, its key information is scheduled out and sent to the "packet data reading module", which reads the packet from memory according to the start address in the key information and sends it to the destination port.
To speed up packet storage and reading and improve network-chip performance, cut-through forwarding is usually adopted: the destination port of a packet can be determined by the forwarding logic without waiting for the packet to be completely cached in the chip's memory.
In cut-through forwarding mode, an independent QoS module is set up, and the key information of different packets is chained together with a linked list. After a policy selects the queue, the key information corresponding to the head address of the information linked list is read out; the "packet data linked list head address" in it is sent to the "packet reading module" for the packet read operation, while the packet length in the key information is used to update the queue's QoS state, and the queue's own linked-list state is updated to await the next scheduling.
In cut-through forwarding mode, scheduling starts before the complete packet is cached in the chip's memory, so the real packet length cannot be obtained when the key information is generated; the QoS module therefore cannot use the real packet length when updating its internal state (e.g., traffic shaping), which impairs QoS accuracy. That is, when reading the packet from memory, the packet may not yet be fully cached in the chip, so the packet data may be unreadable.
In addition, the existing cut-through forwarding mode requires two independent linked lists, the "data linked list" and the "information linked list"; when the packet cache of the network chip is large, the physical area they consume is large. Moreover, because of the two linked-list read and write operations, the forwarding delay of the packet is relatively large, and these operations inevitably introduce additional memories for caching, further increasing the physical overhead of the logic.
Summary of the Invention
To solve the above technical problems, the embodiments of the present invention aim to provide a scheduling method for a cut-through forwarding mode, a network chip, and a readable storage medium.
To achieve one of the above objectives, an embodiment of the present invention provides a scheduling method for a cut-through forwarding mode, the method including: configuring a master linked list and a slave linked list with identical structures; the master linked list includes a master linked-list memory, a master head pointer register, and a master tail pointer register; the slave linked list includes a slave linked-list memory, a slave head pointer register, and a slave tail pointer register;
receiving packets in cut-through forwarding mode, each packet including at least one packet fragment, one of which carries a start-bit identifier and/or one carries an end-bit identifier;
if the master linked list is empty, storing the current packet and synchronously linking its storage address to the master linked list;
if the master linked list is not empty and the previous packet has completed linking in the master linked list, storing the current packet and synchronously linking its storage address to the master linked list;
if the master linked list is not empty and the previous packet has not completed linking in the master linked list, storing the current packet and, after its storage completes, linking its storage address to the slave linked list;
if the slave linked list is not empty, monitoring the status of the master linked list in real time, and when the end-bit identifier carried by the most recent packet fragment completes linking in the master linked list, transferring the contents of the slave linked list and linking them to the address corresponding to the end-bit identifier currently stored in the master linked list.
As an optional refinement of an embodiment of the present invention, if the master linked list is empty, storing the current packet and simultaneously linking its storage addresses into the master linked list comprises:
writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the master head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the master tail pointer register;
wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the master linked-list memory in their arrival order and linked according to that order.
As an optional refinement of an embodiment of the present invention, if the master linked list is not empty and the previous packet has finished linking into the master linked list, storing the current packet and simultaneously linking its storage addresses into the master linked list comprises:
writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register;
meanwhile, writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the master tail pointer register, replacing its content;
wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the master linked-list memory in their arrival order and linked according to that order.
As an optional refinement of an embodiment of the present invention, if the master linked list is not empty and the previous packet has not finished linking into the master linked list, storing the current packet and, after it is fully stored, linking its storage addresses into the slave linked list comprises:
if the slave linked list is currently empty, writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the slave head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the slave tail pointer register;
if the slave linked list is not empty, writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the slave linked-list memory and linking it to the queue tail address currently stored in the slave tail pointer register; meanwhile, writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the slave tail pointer register, replacing its content;
wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the slave linked-list memory in their arrival order and linked according to that order.
As an optional refinement of an embodiment of the present invention, if the slave linked list is not empty, monitoring the state of the master linked list in real time and, once the end-of-packet flag carried by the most recent fragment has been linked into the master linked list, transferring the content of the slave linked list into the master linked list and linking it to the address corresponding to the end-of-packet flag currently stored in the master linked list comprises:
transferring the queue head address stored in the slave head pointer register into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register;
meanwhile, transferring the queue tail address stored in the slave tail pointer register into the master tail pointer register, replacing its content.
As an optional refinement of an embodiment of the present invention, a master queue read state register is configured for the master linked list, and whether the master linked list is currently empty is obtained by querying whether the master queue read state register is enabled;
a slave queue read state register is configured for the slave linked list, and whether the slave linked list is currently empty is obtained by querying whether the slave queue read state register is enabled.
As an optional refinement of an embodiment of the present invention, the method further comprises: configuring a master queue write state register, which indicates whether a packet fragment may perform a link operation on the master linked list;
if the master queue write state register is enabled, any packet fragment may perform a link operation on the master linked list;
if the master queue write state register is disabled, the previous packet is still performing link operations on the master linked list and its fragment carrying the end-of-packet flag has not yet been linked; only fragments belonging to that previous packet may perform link operations on the master linked list.
As an optional refinement of an embodiment of the present invention, the method further comprises:
configuring a linked-list state register, each storage location of which stores the linked-list write state of one source channel;
if the storage location corresponding to a source channel is enabled, packet fragments currently arriving from that source channel may be forcibly linked into the master linked list;
if the storage location corresponding to a source channel is disabled, packet fragments currently arriving from that source channel may not perform link operations on the master linked list.
To achieve one of the above objects, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the scheduling method for cut-through forwarding mode described above.
To achieve one of the above objects, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the scheduling method for cut-through forwarding mode described above.
Compared with the prior art, the embodiments of the present invention are advantageous in that the scheduling method, device, and storage medium for cut-through forwarding mode implement the chip's unicast cut-through forwarding function while reducing the physical overhead of the logic and improving QoS accuracy.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the data storage-scheduling model described in the background art;
Figure 2 is a schematic flow chart of the scheduling method for cut-through forwarding mode provided by an embodiment of the present invention;
Figures 3 and 4 are schematic diagrams of the write-data control principle in a specific example of the present invention.
Detailed Description
The present invention will now be described in detail with reference to the specific embodiments shown in the drawings. These embodiments do not limit the present invention; structural, methodological, or functional variations made by those of ordinary skill in the art on the basis of these embodiments are all within the scope of protection of the present invention.
As shown in Figure 2, an embodiment of the present invention provides a scheduling method for cut-through forwarding mode, the method comprising:
S1: configure a master linked list and a slave linked list of identical structure; the master linked list comprises a master linked-list memory, a master head pointer register, and a master tail pointer register; the slave linked list comprises a slave linked-list memory, a slave head pointer register, and a slave tail pointer register;
S2: receive packets in cut-through forwarding mode, each packet comprising at least one packet fragment, one of the fragments carrying a start-of-packet flag and/or one carrying an end-of-packet flag;
S31: if the master linked list is empty, store the current packet and simultaneously link its storage addresses into the master linked list;
S32: if the master linked list is not empty and the previous packet has finished linking into the master linked list, store the current packet and simultaneously link its storage addresses into the master linked list;
S33: if the master linked list is not empty and the previous packet has not finished linking into the master linked list, store the current packet and, after it is fully stored, link its storage addresses into the slave linked list;
S34: if the slave linked list is not empty, monitor the state of the master linked list in real time and, once the end-of-packet flag carried by the most recent fragment has been linked into the master linked list, transfer the content of the slave linked list into the master linked list and link it to the address corresponding to the end-of-packet flag currently stored there.
For step S1, the storage-scheduling model of this embodiment likewise includes a data memory configured to store data. Let the true length of an incoming packet (data) be L and the data memory width be W; the packet is then split into N fragments, N = ceil(L/W), where ceil denotes rounding up, for storage in the data memory.
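The fragment count N = ceil(L/W) can be computed with integer arithmetic alone; the function name and the example figures below are illustrative, not taken from the patent.

```python
def fragment_count(length, width):
    """Number of fragments needed to store a packet of `length` bytes
    in a memory of `width` bytes per entry: ceil(length / width)."""
    return -(-length // width)  # ceiling division without floating point

# e.g. a 300-byte packet in a 128-byte-wide memory occupies 3 fragments
```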
For step S2, packets in the network chip arrive aggregated from the MAC (Media Access Control) layer, and each packet is written into the data memory in fragment order L0, L1, ... LN-1. In the specific example of the present invention, fragment L0 carries the sop (start of packet) flag, i.e. the start-of-packet flag, and fragment LN-1 carries the eop (end of packet) flag, i.e. the end-of-packet flag, so that the property of each fragment of a packet can be distinguished. It will be understood that when the packet length is less than or equal to W, the single fragment carries both the sop and eop flags.
When the memory needs to buffer a packet fragment, an address, denoted ptr_X, is first requested from the "data memory". Whether the current packet is written to the master linked list or the slave linked list, the following operations apply. If the current fragment carries sop, it is the first fragment of the whole packet; in this case ptr_X is used to update the "head address memory" and "tail address memory" of the packet's source channel. If the current fragment does not carry sop, the value in the "tail address memory" is used as the address and ptr_X as the value to write into the "data linked list", and ptr_X is then written into the "tail address memory". The write-linked-list operation of this embodiment is described in more detail below.
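The per-fragment write path just described can be sketched as follows. This is an illustrative software model (all names are invented for the sketch); `link_mem[a] = b` records that the fragment stored at address `a` is followed by the fragment stored at address `b`.

```python
def write_fragment(channel, ptr_x, is_sop, head_mem, tail_mem, link_mem):
    """Link one newly buffered fragment at address ptr_x for a source channel."""
    if is_sop:
        # first fragment of a packet: initialise both pointers of the channel
        head_mem[channel] = ptr_x
        tail_mem[channel] = ptr_x
    else:
        # chain the new fragment behind the previous one, then advance the tail
        link_mem[tail_mem[channel]] = ptr_x
        tail_mem[channel] = ptr_x

# a three-fragment packet arriving on channel 0 at addresses 100, 101, 102
head_mem, tail_mem, link_mem = {}, {}, {}
write_fragment(0, 100, True, head_mem, tail_mem, link_mem)
write_fragment(0, 101, False, head_mem, tail_mem, link_mem)
write_fragment(0, 102, False, head_mem, tail_mem, link_mem)
```

After the three calls, the head address memory holds 100, the tail address memory holds 102, and the link memory chains 100 → 101 → 102.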
Steps S31, S32, S33, and S34 are numbered only for convenience of description; they may be executed sequentially or in parallel, and their order does not affect the output.
In a specific embodiment of the present invention, step S31 comprises: writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the master head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the master tail pointer register;
wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the master linked-list memory in their arrival order and linked according to that order.
In a specific embodiment of the present invention, step S32 comprises: writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register;
meanwhile, writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the master tail pointer register, replacing its content;
wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the master linked-list memory in their arrival order and linked according to that order.
In a specific embodiment of the present invention, step S33 (if the master linked list is not empty and the previous packet has not finished linking, store the current packet and, after it is fully stored, link its storage addresses into the slave linked list) comprises:
if the slave linked list is currently empty, writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the slave head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the slave tail pointer register;
if the slave linked list is not empty, writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the slave linked-list memory and linking it to the queue tail address currently stored in the slave tail pointer register; meanwhile, writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the slave tail pointer register, replacing its content;
wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the slave linked-list memory in their arrival order and linked according to that order.
In a specific embodiment of the present invention, step S34 (if the slave linked list is not empty, monitor the master linked list in real time and, once the end-of-packet flag of the most recent fragment has been linked into the master linked list, transfer the content of the slave linked list into the master linked list at the address corresponding to that end-of-packet flag) comprises:
transferring the queue head address stored in the slave head pointer register into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register;
meanwhile, transferring the queue tail address stored in the slave tail pointer register into the master tail pointer register, replacing its content.
In this way, the entire content of the slave linked list is transferred from the slave linked-list memory into the master linked-list memory and linked there.
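Step S34 can be sketched as follows. This is an illustrative model only: the two lists are represented as dictionaries, the link memories are Python dicts, and only the two boundary writes (slave head behind master tail, slave tail replacing master tail) plus the bulk transfer of slave entries are modelled.

```python
def transfer_slave_to_master(master, slave):
    """Splice the slave linked list onto the tail of the master linked list."""
    # link the slave head behind the current master tail (first boundary write)
    master['links'][master['tail']] = slave['head']
    # interior links of the slave list move into the master linked-list memory
    master['links'].update(slave['links'])
    slave['links'].clear()
    # the slave tail replaces the master tail (second boundary write)
    master['tail'] = slave['tail']

# master holds fragments 1 -> 2 -> 3; slave holds fragments 7 -> 8 -> 9
master = {'head': 1, 'tail': 3, 'links': {1: 2, 2: 3}}
slave = {'head': 7, 'tail': 9, 'links': {7: 8, 8: 9}}
transfer_slave_to_master(master, slave)
```

After the call, the master list chains 1 → 2 → 3 → 7 → 8 → 9 and the slave list is empty, matching the behaviour described above.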
In an optional embodiment of the present invention, a master queue read state register, a slave queue read state register, a master queue write state register, and a linked-list state register are configured to read the state of each linked list, as described in detail below.
A master queue read state register is configured for the master linked list; whether the master linked list is currently empty is obtained by querying whether that register is enabled. Likewise, a slave queue read state register is configured for the slave linked list; whether the slave linked list is currently empty is obtained by querying whether that register is enabled.
The master queue write state register indicates whether a packet fragment may perform a link operation on the master linked list. If it is enabled, any packet fragment may perform a link operation on the master linked list. If it is disabled, the previous packet is still performing link operations on the master linked list and its fragment carrying the end-of-packet flag has not yet been linked; only fragments belonging to that previous packet may perform link operations on the master linked list.
Each storage location of the linked-list state register stores the linked-list write state of one source channel. If the location corresponding to a source channel is enabled, packets currently arriving from that channel may be forcibly linked into the master linked list; if it is disabled, packets currently arriving from that channel may not perform link operations on the master linked list. In essence, the linked-list state register indicates, while the queue write state register is disabled, which source channel's packet fragments may continue linking onto the master linked list.
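The combined effect of these state registers on an arriving fragment can be summarised as a small decision function. This is an interpretive sketch of the rules above, not circuitry from the patent; the parameter names and string return values are invented for illustration.

```python
def link_target(master_nonempty, master_write_enabled, channel_enabled):
    """Decide where a newly arriving fragment may be linked.

    master_nonempty      -- master queue read state register (True = '1')
    master_write_enabled -- master queue write state register (True = enabled)
    channel_enabled      -- this source channel's bit in the linked-list
                            state register (True = may force-link to master)
    """
    if not master_nonempty:
        return 'master'   # S31: master list empty, link directly
    if master_write_enabled:
        return 'master'   # S32: previous packet fully linked
    if channel_enabled:
        return 'master'   # same source channel continues its own packet
    return 'slave'        # S33: buffer fully, then link to the slave list
```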
A specific example is described below with reference to Figures 3 and 4 for ease of understanding.
A1: packet M1 is received and an enqueue request is generated; the enqueue request may be generated on any fragment of the packet, as determined by the actual policy, which this embodiment does not restrict;
A2: using the "queue number" of packet M1 as the address, the master queue read state register is read. If its state is "0", the master linked list is empty and the flow proceeds to A3; if "1", the master linked list is not empty and the flow proceeds to A4;
A3: step S31 is executed. Packet M1 is split into two fragments, M11 and M12; M11 carries sop and M12 carries eop. When fragment M11 has been stored, its address is used as the value and its queue number as the address to write into both the master head pointer register and the master tail pointer register; the master queue read state register is set to "1" and the queue write state register to "0", where "0" means disabled;
When fragment M12 has been stored, its address is used as the value and the address currently pointed to by the master tail pointer as the address to write into the master linked-list memory, completing the link; the address of M12 is then written into the master tail pointer register to update it. The master queue read state register remains "1", and the queue write state register is set to "1", where "1" means enabled;
A4: a new packet arrives and the master queue read state register is queried as "1", meaning the master linked list is not empty. Two cases arise:
Case 1: the master queue write state register is queried as "1" (enabled); the master linked list is not empty and the previous packet has finished linking into it, so fragments of the new packet may perform link operations on the master linked list; proceed to step A5;
Case 2: the master queue write state register is queried as "0" (disabled), meaning the previous packet has not finished linking into the master linked list. In this case the master linked list may only continue to accept link operations from the previous packet, and the new packet proceeds to step A6;
A5: a new packet M2 arrives and step S32 is executed. Packet M2 is split into two fragments, M21 and M22; M21 carries sop and M22 carries eop.
When packet M2 arrives, its corresponding linked-list state register is queried as "1", so packet M2 may be linked into the master linked list;
When fragment M21 has been stored, its address is used as the value and the address currently pointed to by the master tail pointer as the address to write into the master linked-list memory, completing the link; the address of M21 is written into the master tail pointer register to update it; the master queue read state register remains "1" and the queue write state register is set to "0";
When fragment M22 has been stored, its address is used as the value and the address currently pointed to by the master tail pointer as the address to write into the master linked-list memory, completing the link; the address of M22 is written into the master tail pointer register to update it; the master queue read state register remains "1" and the queue write state register is set to "1";
A6: packet M3 is being linked into the master linked list and its fragment carrying eop has not yet been linked, when packet M4 finishes storing and issues a link request to the linked list;
Packet M4 is split into fragments M41 and M42; M41 carries sop and M42 carries eop. It should be noted that, to avoid linking errors and to save hardware resources, one of the conditions for executing step S33 is that the packet has been completely stored.
Accordingly, after packet M4 has been completely stored, step S33 is executed;
Using the "queue number" of packet M4 as the address, the slave queue read state register is read. If its state is "0", the slave linked list is empty and the flow proceeds to A7; if "1", the slave linked list is not empty and the flow proceeds to A8;
A7: the address of fragment M41 is used as the value and its queue number as the address to write into both the slave head pointer register and the slave tail pointer register, and the slave queue read state register is set to "1";
The address of fragment M42 is used as the value and the address currently pointed to by the slave tail pointer as the address to write into the slave linked-list memory, completing the link; the address of M42 is then written into the slave tail pointer register to update it, and the slave queue read state register remains "1";
A8: the address of fragment M41 is used as the value and the address currently pointed to by the slave tail pointer as the address to write into the slave linked-list memory, completing the link; the address of M41 is written into the slave tail pointer register to update it, and the slave queue read state register remains "1";
The address of fragment M42 is used as the value and the address currently pointed to by the slave tail pointer as the address to write into the slave linked-list memory, completing the link; the address of M42 is written into the slave tail pointer register to update it, and the slave queue read state register remains "1".
When packet M5 arrives, the fragment of packet M3 carrying eop has, in this example, still not finished linking; packet M5 must therefore also be linked onto the slave linked list, behind packet M4, in the same way as described above, which is not repeated here.
It should also be noted that steps A1 to A8 are not strictly ordered. During their execution, if the packet performing link operations on the master linked list has been completely stored and fully linked on the master linked list, the slave queue read state register is read: if its state is 0, no other packet is linked on the slave linked list and steps A1 to A8 continue; if its state is 1, other packets are linked on the slave linked list and the flow enters step A9, executing step S34.
Continuing from Figure 3 and with reference to Figure 4, A9: fragment M32, the fragment of packet M3 carrying eop, has been fully received and linked into the master linked list; the content of the slave linked list must now be linked into the master linked list. The queue head address of fragment M41 stored in the slave head pointer register is transferred into the master linked-list memory and linked to the queue tail address currently stored in the master tail pointer register;
The information in the slave linked-list memory is transferred directly into the master linked-list memory, i.e. in the master linked-list memory fragment M42 remains linked behind fragment M41, and packet M5 remains linked behind fragment M42;
The queue tail address corresponding to packet M5 stored in the slave tail pointer register is transferred into the master tail pointer register, replacing its content.
Optionally, for dequeue operations: multiple queues may exist for the same destination port. When the destination port selects a queue for dequeue, a fragment of that queue carrying the eop flag must be scheduled out before re-selection may occur, i.e. before switching to another queue or stopping dequeue according to the traffic shaping result; the master queue read state register must be updated at the same time. On dequeue, the address carried by the fragment is used to access the data memory and obtain the fragment's data; no additional scheduling behaviour is needed.
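The dequeue behaviour just described can be sketched as follows. This is an illustrative model (names invented for the sketch): one fragment is read out per call, and the returned flag indicates whether the fragment carried eop, i.e. whether the port may now re-select a queue.

```python
def dequeue_fragment(queue, link_mem, eop_addrs):
    """Read out one fragment address of the selected queue.

    Returns (address, may_reselect): re-selection of the port's queue is
    only allowed after a fragment carrying the eop flag has been sent.
    """
    addr = queue['head']
    if addr == queue['tail']:
        queue['read_state'] = 0          # last fragment: queue becomes empty
    else:
        queue['head'] = link_mem[addr]   # advance head to the next fragment
    return addr, addr in eop_addrs

# a queue holding one three-fragment packet at addresses 1 -> 2 -> 3,
# where the fragment at address 3 carries eop
queue = {'head': 1, 'tail': 3, 'read_state': 1}
link_mem = {1: 2, 2: 3}
eop_addrs = {3}
```

Each returned address is used directly to read the fragment's data from the data memory, which is the source of the latency saving claimed for the merged linked list.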
Optionally, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the scheduling method for cut-through forwarding mode described above.
Optionally, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the scheduling method for cut-through forwarding mode described above.
In summary, the scheduling method, device, and storage medium for cut-through forwarding mode of the embodiments of the present invention merge the physical memories of the "packet data linked list" and the "information linked list": with the special enqueue and dequeue operations described above, a single physical linked list implements the unicast cut-through forwarding function, updates the QoS state in real time with the true packet length, improves QoS accuracy and hence system performance, and reduces the physical overhead of the logic. On dequeue, the fragment addresses of the packet are obtained directly, reducing packet forwarding latency.
The system embodiments described above are merely illustrative. Modules described as separate components may or may not be physically separate; components shown as modules are logical modules, which may reside in a single module of the chip logic or be distributed across multiple data-processing modules within the chip. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The present invention can be used in many general-purpose or special-purpose communication chips, for example switch chips, router chips, server chips, and so on.
It should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted merely for clarity. Those skilled in the art should treat the specification as a whole; the technical solutions in the various embodiments may also be appropriately combined to form other embodiments understandable to those skilled in the art.
The detailed descriptions listed above are merely specific illustrations of feasible embodiments of the present invention and are not intended to limit its scope of protection; all equivalent embodiments or modifications that do not depart from the technical spirit of the present invention shall fall within its scope of protection.

Claims (10)

  1. A scheduling method for cut-through forwarding mode, the method comprising:
    configuring a master linked list and a slave linked list of identical structure; the master linked list comprises a master linked-list memory, a master head pointer register, and a master tail pointer register; the slave linked list comprises a slave linked-list memory, a slave head pointer register, and a slave tail pointer register;
    receiving packets in cut-through forwarding mode, each packet comprising at least one packet fragment, one of the fragments carrying a start-of-packet flag and/or one carrying an end-of-packet flag;
    if the master linked list is empty, storing the current packet and simultaneously linking its storage addresses into the master linked list;
    if the master linked list is not empty and the previous packet has finished linking into the master linked list, storing the current packet and simultaneously linking its storage addresses into the master linked list;
    if the master linked list is not empty and the previous packet has not finished linking into the master linked list, storing the current packet and, after it is fully stored, linking its storage addresses into the slave linked list;
    if the slave linked list is not empty, monitoring the state of the master linked list in real time and, once the end-of-packet flag carried by the most recent fragment has been linked into the master linked list, transferring the content of the slave linked list into the master linked list and linking it to the address corresponding to the end-of-packet flag currently stored in the master linked list.
  2. The scheduling method for cut-through forwarding mode according to claim 1, wherein, if the master linked list is empty, storing the current packet and simultaneously linking its storage addresses into the master linked list comprises:
    writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the master head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the master tail pointer register;
    wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the master linked-list memory in their arrival order and linked according to that order.
  3. The scheduling method for cut-through forwarding mode according to claim 1, wherein, if the master linked list is not empty and the previous packet has finished linking into the master linked list, storing the current packet and simultaneously linking its storage addresses into the master linked list comprises:
    writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register;
    meanwhile, writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the master tail pointer register, replacing its content;
    wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the master linked-list memory in their arrival order and linked according to that order.
  4. The scheduling method for cut-through forwarding mode according to claim 1, wherein, if the master linked list is not empty and the previous packet has not finished linking into the master linked list, storing the current packet and, after it is fully stored, linking its storage addresses into the slave linked list comprises:
    if the slave linked list is currently empty, writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the slave head pointer register, and writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the slave tail pointer register;
    if the slave linked list is not empty, writing the queue head address corresponding to the fragment carrying the start-of-packet flag into the slave linked-list memory and linking it to the queue tail address currently stored in the slave tail pointer register; meanwhile, writing the queue tail address corresponding to the fragment carrying the end-of-packet flag into the slave tail pointer register, replacing its content;
    wherein, if the current packet comprises more than one fragment, the storage addresses of the remaining fragments, excluding the fragment carrying the start-of-packet flag, are stored in the slave linked-list memory in their arrival order and linked according to that order.
  5. The scheduling method for cut-through forwarding mode according to claim 1, wherein, if the slave linked list is not empty, monitoring the state of the master linked list in real time and, once the end-of-packet flag carried by the most recent fragment has been linked into the master linked list, transferring the content of the slave linked list into the master linked list and linking it to the address corresponding to the end-of-packet flag currently stored in the master linked list comprises:
    transferring the queue head address stored in the slave head pointer register into the master linked-list memory and linking it to the queue tail address currently stored in the master tail pointer register;
    meanwhile, transferring the queue tail address stored in the slave tail pointer register into the master tail pointer register, replacing its content.
  6. The scheduling method for cut-through forwarding mode according to any one of claims 2 to 5, wherein a master queue read state register is configured for the master linked list, and whether the master linked list is currently empty is obtained by querying whether the master queue read state register is enabled;
    a slave queue read state register is configured for the slave linked list, and whether the slave linked list is currently empty is obtained by querying whether the slave queue read state register is enabled.
  7. The scheduling method for cut-through forwarding mode according to any one of claims 2 to 5, wherein the method further comprises: configuring a master queue write state register, which indicates whether a packet fragment may perform a link operation on the master linked list;
    if the master queue write state register is enabled, any packet fragment may perform a link operation on the master linked list;
    if the master queue write state register is disabled, the previous packet is still performing link operations on the master linked list and its fragment carrying the end-of-packet flag has not yet been linked; only fragments belonging to that previous packet may perform link operations on the master linked list.
  8. The scheduling method for cut-through forwarding mode according to any one of claims 2 to 5, wherein the method further comprises:
    configuring a linked-list state register, each storage location of which stores the linked-list write state of one source channel;
    if the storage location corresponding to a source channel is enabled, packet fragments currently arriving from that source channel may be forcibly linked into the master linked list;
    if the storage location corresponding to a source channel is disabled, packet fragments currently arriving from that source channel may not perform link operations on the master linked list.
  9. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the scheduling method for cut-through forwarding mode according to any one of claims 1-8.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the scheduling method for cut-through forwarding mode according to any one of claims 1-8.
PCT/CN2021/135506 2020-12-07 2021-12-03 Scheduling method, device and storage medium for cut-through forwarding mode WO2022121808A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011427671.8A CN112600764B (zh) 2020-12-07 2020-12-07 Scheduling method, device and storage medium for cut-through forwarding mode
CN202011427671.8 2020-12-07

Publications (1)

Publication Number Publication Date
WO2022121808A1 true WO2022121808A1 (zh) 2022-06-16

Family

ID=75191335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135506 WO2022121808A1 (zh) 2020-12-07 2021-12-03 直通转发模式的调度方法、设备及存储介质

Country Status (2)

Country Link
CN (1) CN112600764B (zh)
WO (1) WO2022121808A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600764B (zh) 2020-12-07 2022-04-15 苏州盛科通信股份有限公司 Scheduling method, device and storage medium for cut-through forwarding mode
CN114553789B (zh) * 2022-02-24 2023-12-12 昆高新芯微电子(江苏)有限公司 Method and system for implementing the TSN Qci stream filtering function in cut-through forwarding mode

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070258362A1 (en) * 2006-04-28 2007-11-08 Samsung Electronics Co., Ltd. Data flow control apparatus and method of mobile terminal for reverse communication from high speed communication device to wireless network
CN106789259A (zh) * 2016-12-26 2017-05-31 中国科学院信息工程研究所 LoRa core network system and implementation method
CN109246036A (zh) * 2017-07-10 2019-01-18 深圳市中兴微电子技术有限公司 Method and device for processing fragmented packets
CN110806986A (zh) * 2019-11-04 2020-02-18 盛科网络(苏州)有限公司 Method, device and storage medium for improving packet storage efficiency of a network chip
CN112600764A (zh) * 2020-12-07 2021-04-02 盛科网络(苏州)有限公司 Scheduling method, device and storage medium for cut-through forwarding mode

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102130833A (zh) * 2011-03-11 2011-07-20 中兴通讯股份有限公司 Linked-list storage management method and system for a high-speed router traffic management chip
CN104125169B (zh) * 2013-04-26 2017-12-01 联发科技股份有限公司 Linked-list processing apparatus, linked-list processing method, and related network switch
CN109587084A (zh) * 2015-12-30 2019-04-05 华为技术有限公司 Packet store-and-forward method, circuit, and device


Also Published As

Publication number Publication date
CN112600764B (zh) 2022-04-15
CN112600764A (zh) 2021-04-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21902515

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC ( EPO FORM 1205A DATED 16/08/2023 )

122 Ep: pct application non-entry in european phase

Ref document number: 21902515

Country of ref document: EP

Kind code of ref document: A1