CN112600764B - Scheduling method, device and storage medium of cut-through forwarding mode - Google Patents

Scheduling method, device and storage medium of cut-through forwarding mode

Info

Publication number
CN112600764B
CN112600764B (application CN202011427671.8A)
Authority
CN
China
Prior art keywords
message
chain table
main
slave
linked list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011427671.8A
Other languages
Chinese (zh)
Other versions
CN112600764A (en)
Inventor
徐子轩
夏杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Centec Communications Co Ltd
Original Assignee
Suzhou Centec Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Centec Communications Co Ltd filed Critical Suzhou Centec Communications Co Ltd
Priority to CN202011427671.8A priority Critical patent/CN112600764B/en
Publication of CN112600764A publication Critical patent/CN112600764A/en
Priority to PCT/CN2021/135506 priority patent/WO2022121808A1/en
Application granted granted Critical
Publication of CN112600764B publication Critical patent/CN112600764B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9015: Buffering arrangements for supporting a linked list
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/6215: Individual queue per QOS, rate or priority

Abstract

The invention provides a scheduling method, device and storage medium for the cut-through forwarding mode, the method comprising the following steps: receiving a message in cut-through forwarding mode; if the master linked list is empty, storing the current message and synchronously linking the storage address of the message to the master linked list; if the master linked list is not empty and the previous message has been fully linked into the master linked list, storing the current message and synchronously linking the storage address of the current message to the master linked list; if the master linked list is not empty and the previous message has not been fully linked into the master linked list, storing the current message and, after the current message has been completely stored, linking the storage address of the current message to the slave linked list; and if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end-of-packet identifier carried by the most recent message fragment has been linked to the master linked list, transferring the contents of the slave linked list and linking them to the address in the master linked list corresponding to the currently stored end-of-packet identifier. On the basis of implementing the unicast cut-through forwarding function of the chip, the invention reduces the logic and physical overhead.

Description

Scheduling method, device and storage medium of cut-through forwarding mode
Technical Field
The invention belongs to the field of communication technology and relates to a scheduling method, device and storage medium for the cut-through forwarding mode.
Background
In high-density network chips there is a large demand for packet storage and scheduling. A typical packet store-and-schedule model is shown in fig. 1, where the input signals include: {queue number, data, linked-list address (write information address)}.
The storage-scheduling model mainly comprises the following modules. A data memory buffers the 'data' according to the 'write information address' of the input signal. A linked-list control module performs the conventional linked-list enqueue and dequeue operations; linked-list control is a well-known general technique and is not described in detail in the present invention. The linked-list control module mainly comprises four sub-modules: {head pointer memory, tail pointer memory, linked-list memory, queue read status}. The head pointer memory stores the storage address pointed to by the head pointer of the data, the tail pointer memory stores the storage address pointed to by the tail pointer of the data, and the linked-list memory stores the storage address corresponding to the data. The queue read status indicates the state of the linked-list control module: when the status is '0', the queue has no further data to be scheduled; when the status is '1', the queue has further data to be scheduled. If the queue read status is '1', the scheduler includes the queue in scheduling, sends the scheduled queue to the linked-list control module to obtain the read linked-list address of the queue, and triggers the linked-list control module to update the queue read status information. The information-reading module accesses the data memory according to the read 'linked-list address' obtained by the scheduler, obtains the data and outputs it.
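For illustration only, the store-and-schedule structure described above can be modelled in software roughly as follows (a minimal Python sketch; the names QueueModel, link_mem and read_status are illustrative and do not come from the patent, which describes hardware memories and registers):
```python
# Minimal software model of the store-and-schedule structure described above.
class QueueModel:
    def __init__(self):
        self.data_mem = {}      # address -> data ("data memory")
        self.link_mem = {}      # address -> next address ("linked-list memory")
        self.head = {}          # queue number -> head address ("head pointer memory")
        self.tail = {}          # queue number -> tail address ("tail pointer memory")
        self.read_status = {}   # queue number -> 0/1 ("queue read status")

    def enqueue(self, qid, addr, data):
        self.data_mem[addr] = data              # buffer the data at the given address
        if self.read_status.get(qid, 0) == 0:   # queue empty: address becomes head and tail
            self.head[qid] = self.tail[qid] = addr
            self.read_status[qid] = 1
        else:                                   # otherwise chain it behind the current tail
            self.link_mem[self.tail[qid]] = addr
            self.tail[qid] = addr

    def dequeue(self, qid):
        addr = self.head[qid]
        data = self.data_mem.pop(addr)          # the information-reading module step
        if addr == self.tail[qid]:              # last element: queue has nothing left
            self.read_status[qid] = 0
        else:
            self.head[qid] = self.link_mem.pop(addr)
        return data
```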
The core function of a network chip is to forward messages. Forwarding modes can be divided into two types: the store-and-forward mode and the cut-through forwarding mode.
In store-and-forward mode, a message must first be completely buffered in an on-chip memory in the form of a linked list (referred to here as the message data linked list), and the destination port of the message is determined by the forwarding logic. Specifically, an enqueue request is generated, the key information of the message (the start address of the message, the message length and so on) is written into the linked list of the queue (referred to here as the information linked list), and the queue waits to be scheduled; the queue is then selected by some QoS (Quality of Service) policy, the key information in the queue is scheduled and sent to the 'message data reading module'; and the message is read from the memory according to the message start address in the key information and sent to the destination port.
In order to speed up message storage and reading and to improve network chip performance, the cut-through forwarding mode is usually adopted: it does not wait until the message has been completely buffered in the on-chip memory, and can already determine the destination port of the message according to the forwarding logic.
In cut-through forwarding mode, an independent QoS module is provided, and the key information of different messages is chained together in the form of a linked list. A queue is selected by some policy, the key information corresponding to the head address of the information linked list is read, the 'head address of the message data linked list' in that key information is sent to the 'message reading module' for the message read operation, the QoS state of the queue is updated using the message length in the key information, and the linked-list state of the queue is updated to wait for the next scheduling.
In cut-through forwarding mode, scheduling starts before the complete message has been buffered in the on-chip memory, so the real message length is not yet available when the key information is generated; the QoS module therefore cannot use the real message length when updating its internal state (for example for traffic shaping), which affects the accuracy of QoS. In addition, when the message is read from the memory, it may not yet be completely buffered in the chip, so a situation can arise in which the message data cannot be read.
In addition, the existing cut-through forwarding mode requires two independent linked lists, a data linked list and an information linked list; when the packet buffer of a network chip is large, the physical area consumed is also large. Because there are two sets of linked-list read and write operations, the forwarding delay of a message is large, and these two sets of linked-list operations inevitably introduce additional memories for buffering, further increasing the logic and physical overhead.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a scheduling method for the cut-through forwarding mode, a network chip and a readable storage medium.
In order to achieve one of the above objects, an embodiment of the present invention provides a scheduling method for the cut-through forwarding mode, the method comprising: configuring a master linked list and a slave linked list with the same structure; the master linked list comprises: a master linked-list memory, a master head pointer register and a master tail pointer register; the slave linked list comprises: a slave linked-list memory, a slave head pointer register and a slave tail pointer register;
receiving messages in cut-through forwarding mode, wherein each message comprises at least one message fragment, and a message fragment may carry a start-of-packet identifier and/or an end-of-packet identifier;
if the master linked list is empty, storing the current message and synchronously linking the storage address of the message to the master linked list;
if the master linked list is not empty and the previous message has been fully linked into the master linked list, storing the current message and synchronously linking the storage address of the current message to the master linked list;
if the master linked list is not empty and the previous message has not been fully linked into the master linked list, storing the current message and, after the current message has been completely stored, linking the storage address of the current message to the slave linked list;
and if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end-of-packet identifier carried by the most recent message fragment has been linked to the master linked list, transferring the contents of the slave linked list and linking them to the address in the master linked list corresponding to the currently stored end-of-packet identifier.
As a further improvement of an embodiment of the present invention, if the master linked list is empty, storing the current message and synchronously linking the storage address of the message to the master linked list comprises:
writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the master head pointer register, and writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the master tail pointer register;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, storing the storage addresses corresponding to the remaining message fragments, in their order of arrival, into the master linked-list memory and linking them according to that order.
As a further improvement of an embodiment of the present invention, if the master linked list is not empty and the previous message has been fully linked into the master linked list, storing the current message and synchronously linking the storage address of the current message to the master linked list comprises:
writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the master linked-list memory, and linking it behind the queue tail address currently stored in the master tail pointer register associated with the master linked-list memory;
meanwhile, writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the master tail pointer register, replacing its previous value;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, storing the storage addresses corresponding to the remaining message fragments, in their order of arrival, into the master linked-list memory and linking them according to that order.
As a further improvement of an embodiment of the present invention, if the master linked list is not empty and the previous message has not been fully linked into the master linked list, storing the current message and, after the current message has been completely stored, linking the storage address of the current message to the slave linked list comprises:
if the current slave linked list is empty, writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the slave head pointer register, and writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the slave tail pointer register;
if the current slave linked list is not empty, writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the slave linked-list memory and linking it behind the queue tail address currently stored in the slave tail pointer register of the slave linked list; meanwhile, writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the slave tail pointer register, replacing its previous value;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, storing the storage addresses corresponding to the remaining message fragments, in their order of arrival, into the slave linked-list memory and linking them according to that order.
As a further improvement of an embodiment of the present invention, if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end-of-packet identifier carried by the most recent message fragment has been linked to the master linked list, transferring the contents of the slave linked list and linking them to the address in the master linked list corresponding to the currently stored end-of-packet identifier comprises:
transferring the queue head address stored in the slave head pointer register, writing it into the master linked-list memory, and linking it behind the queue tail address currently stored in the master tail pointer register associated with the master linked-list memory;
meanwhile, transferring the queue tail address stored in the slave tail pointer register and replacing the master tail pointer register with it.
As a further improvement of an embodiment of the present invention, a master queue read status register is configured for the master linked list, and whether the master queue read status register is enabled is queried to determine whether the master linked list is currently empty;
and a slave queue read status register is configured for the slave linked list, and whether the slave queue read status register is enabled is queried to determine whether the slave linked list is currently empty.
As a further improvement of an embodiment of the present invention, the method further comprises: configuring a master queue write status register, the master queue write status register identifying whether a message fragment may perform a link operation on the master linked list;
if the master queue write status register is enabled, any message fragment may perform a link operation on the master linked list;
if the master queue write status register is disabled, the previous message is still performing link operations on the master linked list and its message fragment carrying the end-of-packet identifier has not yet completed the link operation, so only message fragments belonging to that previous message may perform link operations on the master linked list.
As a further improvement of an embodiment of the present invention, the method further comprises:
configuring a linked-list status register, wherein each storage bit of the linked-list status register stores the linked-list write status of a corresponding source channel;
if the storage bit corresponding to a source channel is enabled, message fragments from that source channel may still perform link operations on the master linked list;
and if the storage bit corresponding to a source channel is not enabled, message fragments from that source channel may not perform link operations on the master linked list.
In order to achieve one of the above objects, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, implements the steps of the scheduling method for the cut-through forwarding mode described above.
In order to achieve one of the above objects, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the scheduling method for the cut-through forwarding mode described above.
Compared with the prior art, the invention has the following beneficial effects: the scheduling method, device and storage medium for the cut-through forwarding mode reduce the logic and physical overhead and improve the accuracy of QoS while implementing the unicast cut-through forwarding function of the chip.
Drawings
FIG. 1 is a diagram of the data storage-scheduling model described in the background art;
fig. 2 is a schematic flowchart of a scheduling method for the cut-through forwarding mode according to an embodiment of the present invention;
fig. 3 and fig. 4 are schematic diagrams illustrating the write-data control principle according to a specific example of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings. These embodiments do not limit the present invention, and structural, methodological or functional changes made by those skilled in the art according to these embodiments are all included in the scope of protection of the present invention.
As shown in fig. 2, a scheduling method for the cut-through forwarding mode according to an embodiment of the present invention comprises:
S1, configuring a master linked list and a slave linked list with the same structure; the master linked list comprises: a master linked-list memory, a master head pointer register and a master tail pointer register; the slave linked list comprises: a slave linked-list memory, a slave head pointer register and a slave tail pointer register;
S2, receiving messages in cut-through forwarding mode, wherein each message comprises at least one message fragment, and a message fragment may carry a start-of-packet identifier and/or an end-of-packet identifier;
S31, if the master linked list is empty, storing the current message and synchronously linking the storage address of the message to the master linked list;
S32, if the master linked list is not empty and the previous message has been fully linked into the master linked list, storing the current message and synchronously linking the storage address of the current message to the master linked list;
S33, if the master linked list is not empty and the previous message has not been fully linked into the master linked list, storing the current message and, after the current message has been completely stored, linking the storage address of the current message to the slave linked list;
S34, if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end-of-packet identifier carried by the most recent message fragment has been linked to the master linked list, transferring the contents of the slave linked list and linking them to the address in the master linked list corresponding to the currently stored end-of-packet identifier.
For step S1, the storage-scheduling model of the present invention also includes a data memory for storing data. Suppose the real length of the transferred message (data) is L and the bit width of the data memory is W; the message is then segmented into N fragments, where N = ceil(L/W) and ceil is the round-up function, and the fragments are stored in the data memory.
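For illustration only, the fragmentation arithmetic can be sketched as follows (fragment_count and split_message are illustrative names):
```python
import math

def fragment_count(length_l: int, width_w: int) -> int:
    """Number of fragments N = ceil(L / W) for a message of length L stored
    in a data memory whose entries are W wide."""
    return math.ceil(length_l / width_w)

def split_message(payload: bytes, width_w: int):
    """Split a message into W-sized fragments L0, L1, ..., LN-1."""
    return [payload[i:i + width_w] for i in range(0, len(payload), width_w)]

# Example: a 300-byte message with a 128-byte memory width gives 3 fragments.
assert fragment_count(300, 128) == 3
assert len(split_message(b"\x00" * 300, 128)) == 3
```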
For step S2, messages in the network chip are aggregated by the MAC, and each message is written into the data memory in the order of its message fragments L0, L1, ... LN-1. In a specific example of the present invention, the message fragment L0 is configured with a sop (start of packet) flag bit, i.e. it carries the start-of-packet identifier, and the message fragment LN-1 carries an eop (end of packet) flag bit, i.e. the end-of-packet identifier, so that the attribute of each fragment of the message can be distinguished. It can be understood that when the message length is smaller than or equal to W, the single message fragment carries both the sop and the eop flag bits.
When a message fragment needs to be buffered in the memory, an address is first requested from the data memory; let this address be ptr_X. Regardless of whether the current message is written to the master linked list or to the slave linked list, the following operations are required. Specifically, if the current message fragment carries sop, it is the first fragment of the whole message, and ptr_X is used to update the 'head address memory' and the 'tail address memory' of the source channel on which the message arrives; if the current fragment does not carry sop, the value in the 'tail address memory' is used as the address and ptr_X as the value to write the 'data linked list', and ptr_X is also written into the 'tail address memory'. The write-linked-list operations of the present invention are described in more detail below.
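For illustration only, this per-fragment write step can be sketched in software as follows, assuming each source channel keeps its own head/tail address entries and the data linked list is an address-to-next-address mapping (ChannelState and write_fragment are illustrative names):
```python
# Sketch of the per-fragment write step: sop updates the head/tail address
# memories of the source channel, later fragments are chained behind the tail.

class ChannelState:
    def __init__(self):
        self.head_addr = None   # "head address memory" entry for this source channel
        self.tail_addr = None   # "tail address memory" entry for this source channel

data_linked_list = {}           # address -> next address ("data linked list")

def write_fragment(chan: ChannelState, ptr_x: int, is_sop: bool) -> None:
    if is_sop:
        # First fragment of the message: ptr_X becomes both head and tail.
        chan.head_addr = ptr_x
        chan.tail_addr = ptr_x
    else:
        # Later fragment: chain it behind the current tail, then advance the tail.
        data_linked_list[chan.tail_addr] = ptr_x
        chan.tail_addr = ptr_x

# Example: a three-fragment message arriving on one source channel.
ch = ChannelState()
write_fragment(ch, ptr_x=10, is_sop=True)
write_fragment(ch, ptr_x=11, is_sop=False)
write_fragment(ch, ptr_x=12, is_sop=False)
assert (ch.head_addr, ch.tail_addr) == (10, 12)
assert data_linked_list == {10: 11, 11: 12}
```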
Steps S31, S32, S33 and S34 are numbered in this way only for convenience of description; they may be executed sequentially or in parallel, and the order of the steps does not affect the output result.
In a specific embodiment of the present invention, step S31 specifically comprises: writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the master head pointer register, and writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the master tail pointer register;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, the storage addresses corresponding to the remaining message fragments are stored, in their order of arrival, into the master linked-list memory and linked according to that order.
In a specific embodiment of the present invention, step S32 specifically comprises: writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the master linked-list memory, and linking it behind the queue tail address currently stored in the master tail pointer register associated with the master linked-list memory;
meanwhile, writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the master tail pointer register, replacing its previous value;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, the storage addresses corresponding to the remaining message fragments are stored, in their order of arrival, into the master linked-list memory and linked according to that order.
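For illustration only, the pointer updates of steps S31 and S32 can be sketched as follows. In the chip the links are written synchronously as each fragment is stored, whereas this simplified sketch applies the same updates once all fragment addresses of a message are known; new_list and link_message are illustrative names:
```python
# Illustrative model of steps S31/S32. A linked list (master or slave) is a dict
# with a 'mem' map (address -> next address) plus 'head' and 'tail' registers;
# frag_addrs[0] is the sop fragment's address, frag_addrs[-1] the eop fragment's.

def new_list() -> dict:
    return {"mem": {}, "head": None, "tail": None}

def link_message(lst: dict, frag_addrs: list) -> None:
    if lst["head"] is None:
        # S31: the list is empty, so the sop address goes straight into the head register.
        lst["head"] = frag_addrs[0]
    else:
        # S32: the sop address is written into the linked-list memory, chained
        # behind the address currently held in the tail pointer register.
        lst["mem"][lst["tail"]] = frag_addrs[0]
    # Remaining fragment addresses (excluding sop) are linked in arrival order.
    for prev, nxt in zip(frag_addrs, frag_addrs[1:]):
        lst["mem"][prev] = nxt
    # The eop fragment's address replaces the tail pointer register.
    lst["tail"] = frag_addrs[-1]

master = new_list()
link_message(master, [1, 2])    # message M1: fragments at addresses 1 (sop) and 2 (eop)
link_message(master, [5, 6])    # message M2 appended behind M1
assert (master["head"], master["tail"], master["mem"]) == (1, 6, {1: 2, 2: 5, 5: 6})
```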
In a specific embodiment of the present invention, step S33 (if the master linked list is not empty and the previous message has not been fully linked into the master linked list, storing the current message and, after the current message has been completely stored, linking the storage address of the current message to the slave linked list) specifically comprises:
if the current slave linked list is empty, writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the slave head pointer register, and writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the slave tail pointer register;
if the current slave linked list is not empty, writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the slave linked-list memory and linking it behind the queue tail address currently stored in the slave tail pointer register of the slave linked list; meanwhile, writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the slave tail pointer register, replacing its previous value;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, the storage addresses corresponding to the remaining message fragments are stored, in their order of arrival, into the slave linked-list memory and linked according to that order.
In a specific embodiment of the present invention, step S34 (if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end-of-packet identifier carried by the most recent message fragment has been linked to the master linked list, transferring the contents of the slave linked list and linking them to the address in the master linked list corresponding to the currently stored end-of-packet identifier) specifically comprises:
transferring the queue head address stored in the slave head pointer register, writing it into the master linked-list memory, and linking it behind the queue tail address currently stored in the master tail pointer register associated with the master linked-list memory;
meanwhile, transferring the queue tail address stored in the slave tail pointer register and replacing the master tail pointer register with it;
in this way, the entire contents of the slave linked list are transferred out of the slave linked-list memory and linked into the master linked list.
In a preferred embodiment of the present invention, a master queue read status register, a slave queue read status register, a master queue write status register and a linked-list status register are configured for reading the status of each linked list, as described in detail below.
Specifically, a master queue read status register is configured for the master linked list, and whether the master queue read status register is enabled is queried to determine whether the master linked list is currently empty; likewise, whether the slave queue read status register is enabled is queried to determine whether the slave linked list is currently empty.
The master queue write status register identifies whether a message fragment may perform a link operation on the master linked list. If the master queue write status register is enabled, any message fragment may perform a link operation on the master linked list; if it is disabled, the previous message is still performing link operations on the master linked list and its message fragment carrying the end-of-packet identifier has not yet completed the link operation, so only message fragments belonging to that previous message may perform link operations on the master linked list.
Each storage bit of the linked-list status register stores the linked-list write status of a corresponding source channel. If the storage bit corresponding to a source channel is enabled, messages from that source channel may still perform link operations on the master linked list; if the storage bit corresponding to a source channel is not enabled, the current message from that source channel may not perform link operations on the master linked list. Essentially, the linked-list status register identifies which source channel's message fragments may continue to be linked to the master linked list when the master queue write status register is not enabled.
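For illustration only, the way these status registers steer an arriving message fragment can be summarised by the following decision sketch; choose_target and its arguments are illustrative names, and the chip evaluates the equivalent conditions in hardware per queue:
```python
# Illustrative decision logic for an arriving message fragment, based on the
# three status registers described above.

def choose_target(master_read_status: int,
                  master_write_status: int,
                  channel_write_enabled: bool) -> str:
    if master_read_status == 0:
        # Master linked list is empty: link synchronously to the master list (S31).
        return "master"
    if master_write_status == 1:
        # Previous message fully linked: any fragment may link to the master list (S32).
        return "master"
    if channel_write_enabled:
        # Write status disabled, but this source channel owns the in-progress
        # message, so its fragments may keep linking to the master list.
        return "master"
    # Another message is still linking: this message must be stored completely
    # and then linked to the slave linked list (S33).
    return "slave"

assert choose_target(0, 0, False) == "master"   # empty master list
assert choose_target(1, 1, False) == "master"   # previous message finished linking
assert choose_target(1, 0, True) == "master"    # same source channel as the pending message
assert choose_target(1, 0, False) == "slave"    # different source channel: use the slave list
```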
A specific example is described below in conjunction with fig. 3 and 4 to facilitate understanding.
A1, a message M1 is received and an enqueue request is generated; the enqueue request may be generated at any message fragment of the message, as determined by the actual policy, which is not limited by the present invention;
A2, the 'queue number' of the message M1 is used as an address to read the master queue read status register; if the status of the master queue read status register is '0', the master linked list is empty and the flow proceeds to A3; if the status is '1', the master linked list is not empty and the flow proceeds to A4;
A3, step S31 is executed. The message M1 is divided into 2 message fragments, a message fragment M11 and a message fragment M12, where M11 carries sop and M12 carries eop. When the storage of the message fragment M11 has finished, the address of M11 is used as the value and the queue number of M11 as the address to write both the master head pointer register and the master tail pointer register; the master queue read status register is set to '1' and the queue write status register is set to '0', where '0' indicates not enabled;
when the message fragment M12 has been stored, the address of M12 is used as the value and the address pointed to by the current master tail pointer as the address to write the master linked-list memory, completing the link operation; the address of M12 is then written as the value into the master tail pointer register, completing the update of the master tail pointer register; the master queue read status register remains '1' and the queue write status register is set to '1', where '1' indicates enabled;
a4, when a new message arrives, inquiring that the state of the read state register of the main queue is '1', which indicates that the main chain table is not empty, at this time, two situations exist:
case 1, the query gets: the write status register of the main queue is '1', namely the write status register of the main queue is enabled, the main chain table is not empty, the previous message is linked in the main chain table, the current new message fragment can perform the link operation on the main chain table, and the step A5 is entered;
case 2, the query gets: the write status register of the main queue is '0', namely the write status register of the main queue is not enabled, which indicates that the previous message is not linked in the main chain table completely, the main chain table is not empty, and the previous message is not linked in the main chain table completely; at this time, the main chain table can only continue to receive the link operation of the previous message, and the new message executes the step a 6;
a5, when a new message M2 arrives, step S32 is executed, where the message M2 is divided into 2 message fragments, a message fragment M21 and a message fragment M22, the message fragment M21 carries sop, and the message fragment M22 carries eop.
When the message M2 comes, the corresponding linked list status register is inquired to be '1'; that is, the sending message M2 may be linked to the primary linked list;
when the message fragment M21 is stored, the address of the message fragment M21 is used as a value, the address pointed by the current main tail pointer is used as an address, and the address is written into a main linked list memory to finish the link operation; writing the address of the message fragment M21 as a value into a main tail pointer register to complete the updating of the main tail pointer register, keeping the state of a main queue read state register as '1', and setting the state of a queue write state register as '0';
when the message fragment M22 is stored, the address of the message fragment M22 is used as a value, the address pointed by the current main tail pointer is used as an address, and the address is written into a main linked list memory to finish the link operation; writing the address of the message fragment M22 as a value into a main tail pointer register to complete the updating of the main tail pointer register, keeping the state of a main queue read state register as '1', and setting the state of a queue write state register as '1';
a6, message M3 is linking with the main linked list, and message M3 carries eop message fragments which are not linked with the main linked list, at this time, message M4 is stored and sends a linking request to the linked list;
the message M4 is divided into a message fragment M41 and a message fragment M42, wherein the message fragment M41 carries sop, and the message fragment M42 carries eop; it should be noted that, in order to avoid link errors and save hardware space, one of the conditions for executing step M32 is to store the message completely.
Correspondingly, after the message M4 is stored, executing the step M33;
specifically, the "queue number" of the message M4 is used as an address to read the slave queue read status register, if the status of the slave queue read status register is "0", the slave linked list is empty, the access is made to a7, and if the status of the slave queue read status register is "1", the slave linked list is not empty, the access is made to a 8;
a7, using the address of the message fragment M41 as a value, using the queue number of the message fragment M41 as an address, respectively writing in a slave head pointer register and a slave tail pointer register, and setting the state of a slave queue reading state register to be '1';
writing the address of the message fragment M42 as a value and the address pointed by the current main tail pointer as an address into a slave linked list memory to finish the link operation; writing the slave tail pointer register with the address of the message fragment M42 as a value completes the update of the slave tail pointer register and keeps the state of the slave queue read state register set to "1";
a8, writing the address of the message fragment M41 as a value and the address pointed by the current main and tail pointer as an address into a slave linked list memory to complete the link operation; writing the slave tail pointer register with the address of the message fragment M41 as a value completes the update of the slave tail pointer register and keeps the state of the slave queue read state register set to "1";
writing the address of the message fragment M42 as a value and the address pointed by the current slave tail pointer as an address into a slave linked list memory to finish the link operation; writing to the slave tail pointer register using the address of message fragment M42 as a value completes the update of the slave tail pointer register and maintains the status of the read status register from the queue at "1".
When a message M5 arrives, in this example the message fragment of M3 carrying eop has still not been linked, so the message M5 also needs to be linked to the slave linked list, behind the message M4; the linking procedure is the same as described above and is not repeated here.
In addition, steps A1 to A8 are not executed in a fixed order. During the execution of steps A1 to A8, when the message performing link operations on the master linked list has been completely stored and has completed its link operations on the master linked list, the slave queue read status register is read: if its status is '0', no other message is linked in the slave linked list and steps A1 to A8 continue to be executed; if its status is '1', other messages are linked in the slave linked list, and the flow proceeds to step A9, i.e. step S34 is executed.
Continuing with fig. 3 and referring to fig. 4, A9: the message fragment M32 of the message M3, which carries eop, has been completely received and has completed its link operation on the master linked list; at this moment the contents of the slave linked list need to be linked to the master linked list. Specifically, the queue head address corresponding to the message fragment M41 stored in the slave head pointer register is transferred and written into the master linked-list memory, linked behind the queue tail address currently stored in the master tail pointer register associated with the master linked-list memory;
the link information in the slave linked-list memory is transferred directly to the master linked-list memory, i.e. the links 'message fragment M42 follows message fragment M41' and 'message M5 follows message fragment M42' are preserved in the master linked-list memory;
and the queue tail address corresponding to the message M5 stored in the slave tail pointer register is transferred and replaces the master tail pointer register.
Further, for the dequeue operation, the same destination port may have multiple queues. When the destination port selects a certain queue for dequeuing, once the message fragment of that queue carrying the eop flag bit has been scheduled, a reselection operation may be performed: another queue is switched to for dequeuing, or dequeuing is stopped according to the traffic-shaping result, and at the same time the master queue read status register needs to be updated. When dequeuing, the address carried by the message fragment is used to access the data memory, so the data of the message fragment can be obtained without any additional scheduling behaviour.
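For illustration only, the dequeue step can be sketched under the same simplified model: fragment addresses are read by following the linked-list memory from the head pointer register, and the destination port may reselect a queue only after the fragment carrying eop has been scheduled (dequeue_fragment and eop_addrs are illustrative names):
```python
# Illustrative dequeue step: follow the linked-list memory from the head pointer
# register; once the eop fragment of the current message has been read, the
# destination port may perform a reselection.

def dequeue_fragment(lst: dict, eop_addrs: set):
    addr = lst["head"]
    if addr is None:
        return None, False                       # queue empty, nothing to schedule
    if addr == lst["tail"]:
        lst["head"] = lst["tail"] = None         # last fragment: queue becomes empty
    else:
        lst["head"] = lst["mem"].pop(addr)       # advance the head to the next address
    return addr, addr in eop_addrs               # reselection is allowed after eop

queue = {"mem": {1: 2, 2: 5, 5: 6}, "head": 1, "tail": 6}
eop_addresses = {2, 6}                           # addresses of fragments carrying eop
scheduled = []
while True:
    addr, reached_eop = dequeue_fragment(queue, eop_addresses)
    if addr is None:
        break
    scheduled.append((addr, reached_eop))
assert scheduled == [(1, False), (2, True), (5, False), (6, True)]
```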
Preferably, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, implements the steps of the scheduling method for the cut-through forwarding mode described above.
Preferably, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the scheduling method for the cut-through forwarding mode described above.
In summary, the scheduling method, device and storage medium for the cut-through forwarding mode of the present invention merge the physical memories of the 'message data linked list' and the 'information linked list', implement the unicast cut-through forwarding function with only one physical linked list through special enqueue and dequeue operations, and update the QoS state in real time according to the real packet length, thereby improving the accuracy of QoS, improving system performance and reducing the logic and physical overhead. In addition, the fragment address of a message is obtained directly when dequeuing, which optimizes the message forwarding delay.
The system embodiments described above are merely illustrative; the modules described as separate parts may or may not be physically separate, and the parts shown as modules are logic modules, i.e. they may be located in one module of the chip logic or distributed over a plurality of data processing modules in the chip. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. A person of ordinary skill in the art can understand and implement this without inventive effort.
The present application can be used in a variety of general-purpose or special-purpose communication chips, for example switch chips, router chips, server chips and the like.
It should be understood that although the present description is organized by embodiments, not every embodiment contains only a single technical solution; the description is organized in this way only for the sake of clarity, and those skilled in the art should take the description as a whole; the technical solutions in the respective embodiments may also be combined appropriately to form other embodiments that can be understood by those skilled in the art.
The detailed descriptions listed above are only specific descriptions of feasible embodiments of the present invention and are not intended to limit the scope of protection of the present invention; equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall all be included in the scope of protection of the present invention.

Claims (10)

1. A scheduling method for a cut-through forwarding mode, the method comprising:
configuring a master linked list and a slave linked list with the same structure; the master linked list comprises: a master linked-list memory, a master head pointer register and a master tail pointer register; the slave linked list comprises: a slave linked-list memory, a slave head pointer register and a slave tail pointer register;
receiving messages in cut-through forwarding mode, wherein each message comprises at least one message fragment, and a message fragment may carry a start-of-packet identifier and/or an end-of-packet identifier;
if the master linked list is empty, storing the current message and synchronously linking the storage address of the message to the master linked list;
if the master linked list is not empty and the previous message has been fully linked into the master linked list, storing the current message and synchronously linking the storage address of the current message to the master linked list;
if the master linked list is not empty and the previous message has not been fully linked into the master linked list, storing the current message and, after the current message has been completely stored, linking the storage address of the current message to the slave linked list;
and if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end-of-packet identifier carried by the most recent message fragment has been linked to the master linked list, transferring the contents of the slave linked list and linking them to the address in the master linked list corresponding to the currently stored end-of-packet identifier.
2. The scheduling method for the cut-through forwarding mode according to claim 1, wherein, if the master linked list is empty, storing the current message and synchronously linking the storage address of the message to the master linked list comprises:
writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the master head pointer register, and writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the master tail pointer register;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, storing the storage addresses corresponding to the remaining message fragments, in their order of arrival, into the master linked-list memory and linking them according to that order.
3. The scheduling method for the cut-through forwarding mode according to claim 1, wherein, if the master linked list is not empty and the previous message has been fully linked into the master linked list, storing the current message and synchronously linking the storage address of the current message to the master linked list comprises:
writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the master linked-list memory, and linking it behind the queue tail address currently stored in the master tail pointer register associated with the master linked-list memory;
meanwhile, writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the master tail pointer register, replacing its previous value;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, storing the storage addresses corresponding to the remaining message fragments, in their order of arrival, into the master linked-list memory and linking them according to that order.
4. The scheduling method for the cut-through forwarding mode according to claim 1, wherein, if the master linked list is not empty and the previous message has not been fully linked into the master linked list, storing the current message and, after the current message has been completely stored, linking the storage address of the current message to the slave linked list comprises:
if the current slave linked list is empty, writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the slave head pointer register, and writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the slave tail pointer register;
if the current slave linked list is not empty, writing the queue head address corresponding to the message fragment carrying the start-of-packet identifier into the slave linked-list memory and linking it behind the queue tail address currently stored in the slave tail pointer register of the slave linked list; meanwhile, writing the queue tail address corresponding to the message fragment carrying the end-of-packet identifier into the slave tail pointer register, replacing its previous value;
if the current message comprises more than one message fragment, excluding the message fragment carrying the start-of-packet identifier, storing the storage addresses corresponding to the remaining message fragments, in their order of arrival, into the slave linked-list memory and linking them according to that order.
5. The scheduling method for the cut-through forwarding mode according to claim 1, wherein, if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end-of-packet identifier carried by the most recent message fragment has been linked to the master linked list, transferring the contents of the slave linked list and linking them to the address in the master linked list corresponding to the currently stored end-of-packet identifier comprises:
transferring the queue head address stored in the slave head pointer register, writing it into the master linked-list memory, and linking it behind the queue tail address currently stored in the master tail pointer register associated with the master linked-list memory;
meanwhile, transferring the queue tail address stored in the slave tail pointer register and replacing the master tail pointer register with it.
6. The scheduling method for the cut-through forwarding mode according to any one of claims 2 to 5, wherein a master queue read status register is configured for the master linked list, and whether the master queue read status register is enabled is queried to determine whether the master linked list is currently empty;
and a slave queue read status register is configured for the slave linked list, and whether the slave queue read status register is enabled is queried to determine whether the slave linked list is currently empty.
7. The scheduling method for the cut-through forwarding mode according to any one of claims 2 to 5, wherein the method further comprises: configuring a master queue write status register, the master queue write status register identifying whether a message fragment may perform a link operation on the master linked list;
if the master queue write status register is enabled, any message fragment may perform a link operation on the master linked list;
if the master queue write status register is disabled, the previous message is still performing link operations on the master linked list and its message fragment carrying the end-of-packet identifier has not yet completed the link operation, so only message fragments belonging to that previous message may perform link operations on the master linked list.
8. The scheduling method for the cut-through forwarding mode according to any one of claims 2 to 5, wherein the method further comprises:
configuring a linked-list status register, wherein each storage bit of the linked-list status register stores the linked-list write status of a corresponding source channel;
if the storage bit corresponding to a source channel is enabled, message fragments from that source channel may still perform link operations on the master linked list;
and if the storage bit corresponding to a source channel is not enabled, message fragments from that source channel may not perform link operations on the master linked list.
9. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, implements the steps of the scheduling method for the cut-through forwarding mode according to any one of claims 1 to 8.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the scheduling method for the cut-through forwarding mode according to any one of claims 1 to 8.
CN202011427671.8A 2020-12-07 2020-12-07 Scheduling method, device and storage medium of cut-through forwarding mode Active CN112600764B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011427671.8A CN112600764B (en) 2020-12-07 2020-12-07 Scheduling method, device and storage medium of cut-through forwarding mode
PCT/CN2021/135506 WO2022121808A1 (en) 2020-12-07 2021-12-03 Direct forwarding mode scheduling method and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011427671.8A CN112600764B (en) 2020-12-07 2020-12-07 Scheduling method, device and storage medium of cut-through forwarding mode

Publications (2)

Publication Number Publication Date
CN112600764A CN112600764A (en) 2021-04-02
CN112600764B true CN112600764B (en) 2022-04-15

Family

ID=75191335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011427671.8A Active CN112600764B (en) 2020-12-07 2020-12-07 Scheduling method, device and storage medium of cut-through forwarding mode

Country Status (2)

Country Link
CN (1) CN112600764B (en)
WO (1) WO2022121808A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600764B (en) * 2020-12-07 2022-04-15 苏州盛科通信股份有限公司 Scheduling method, device and storage medium of cut-through forwarding mode
CN114553789B (en) * 2022-02-24 2023-12-12 昆高新芯微电子(江苏)有限公司 Method and system for realizing TSN Qci flow filtering function in direct forwarding mode

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102130833A (en) * 2011-03-11 2011-07-20 中兴通讯股份有限公司 Memory management method and system of traffic management chip chain tables of high-speed router
CN104125169A (en) * 2013-04-26 2014-10-29 联发科技股份有限公司 Link list processing apparatus, link list processing method and related network switch
CN105635000A (en) * 2015-12-30 2016-06-01 华为技术有限公司 Message storing and forwarding method, circuit and device
CN109246036A (en) * 2017-07-10 2019-01-18 深圳市中兴微电子技术有限公司 A kind of method and apparatus handling fragment message

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100866204B1 (en) * 2006-04-28 2008-10-30 삼성전자주식회사 Data flow control apparatus and method of mobile terminal for reverse communication from high speed communication device to wireless network
CN106789259B (en) * 2016-12-26 2019-06-11 中国科学院信息工程研究所 A kind of LoRa core network system and implementation method
CN110806986B (en) * 2019-11-04 2022-02-15 苏州盛科通信股份有限公司 Method, equipment and storage medium for improving message storage efficiency of network chip
CN112600764B (en) * 2020-12-07 2022-04-15 苏州盛科通信股份有限公司 Scheduling method, device and storage medium of cut-through forwarding mode

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102130833A (en) * 2011-03-11 2011-07-20 中兴通讯股份有限公司 Memory management method and system of traffic management chip chain tables of high-speed router
CN104125169A (en) * 2013-04-26 2014-10-29 联发科技股份有限公司 Link list processing apparatus, link list processing method and related network switch
CN105635000A (en) * 2015-12-30 2016-06-01 华为技术有限公司 Message storing and forwarding method, circuit and device
CN109246036A (en) * 2017-07-10 2019-01-18 深圳市中兴微电子技术有限公司 A kind of method and apparatus handling fragment message

Also Published As

Publication number Publication date
WO2022121808A1 (en) 2022-06-16
CN112600764A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
WO2021088466A1 (en) Method for improving message storage efficiency of network chip, device, and storage medium
CN110808910B (en) OpenFlow flow table energy-saving storage framework supporting QoS and method thereof
CN112600764B (en) Scheduling method, device and storage medium of cut-through forwarding mode
JP4078445B2 (en) Method and apparatus for sending multiple copies by duplicating a data identifier
US7546399B2 (en) Store and forward device utilizing cache to store status information for active queues
US9871727B2 (en) Routing lookup method and device and method for constructing B-tree structure
CN108809854A (en) A kind of restructural chip architecture for big flow network processes
US9584332B2 (en) Message processing method and device
CN112084136B (en) Queue cache management method, system, storage medium, computer device and application
CN110109626B (en) NVMe SSD command processing method based on FPGA
CN110058816B (en) DDR-based high-speed multi-user queue manager and method
CN113411270A (en) Message buffer management method for time-sensitive network
US20200259766A1 (en) Packet processing
CN106789734B (en) Control system and method for macro frame in exchange control circuit
WO2022127873A1 (en) Method for realizing high-speed scheduling of network chip, device, and storage medium
US10999223B1 (en) Instantaneous garbage collection of network data units
CN110912826A (en) Method and device for expanding IPFIX table items by using ACL
CN116955247B (en) Cache descriptor management device and method, medium and chip thereof
CN116755635B (en) Hard disk controller cache system, method, hard disk device and electronic device
WO2024021801A1 (en) Packet forwarding apparatus and method, communication chip, and network device
CN113452591A (en) Loop control method and device based on CAN bus continuous data frame
CN112822126B (en) Message storage method, message in-out queue method and storage scheduling device
CN115955441A (en) Management scheduling method and device based on TSN queue
CN107517161B (en) Network processor table lookup method, network processor and table lookup system
EP2341440B1 (en) Memory block reclaiming judging apparatus and memory block managing system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 215000 unit 13 / 16, 4th floor, building B, No.5 Xinghan street, Suzhou Industrial Park, Jiangsu Province

Applicant after: Suzhou Shengke Communication Co.,Ltd.

Address before: Xinghan Street Industrial Park of Suzhou city in Jiangsu province 215021 B No. 5 Building 4 floor 13/16 unit

Applicant before: CENTEC NETWORKS (SU ZHOU) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant