CN107241282B - Method and system for reducing protocol processing pipeline pause - Google Patents

Method and system for reducing protocol processing pipeline pause

Info

Publication number
CN107241282B
CN107241282B (application CN201710606586.XA)
Authority
CN
China
Prior art keywords
message
address
item
rbid
mapping table
Prior art date
Legal status
Active
Application number
CN201710606586.XA
Other languages
Chinese (zh)
Other versions
CN107241282A (en)
Inventor
岳自超 (Yue Zichao)
Current Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201710606586.XA
Publication of CN107241282A
Application granted
Publication of CN107241282B
Status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/60: Queue scheduling implementing hierarchical scheduling
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/74: Address processing for routing
    • H04L 45/745: Address table lookup; Address filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method and a system for reducing stalls in a protocol processing pipeline. The method comprises: creating a receiving table with M entries, where each entry has an independent RBID and caches one message; creating an address mapping table with T entries, where each entry has N rows and each row corresponds to one type of message; when a message to be processed is received, determining from the L-bit address of the message which entry of the address mapping table the corresponding RBID should be stored in; and sending the message into the protocol processing pipeline by two-stage scheduling. The invention reduces the frequency of same-address message conflicts in the pipeline; the selection range of the value T can be enlarged, and a good address-dispersing effect is achieved without comparison against the addresses of messages already in the pipeline; the utilization of storage space is effectively improved; and the logic for comparing buffered message addresses against the addresses of messages already in the pipeline is eliminated, correspondingly reducing the delay caused by that combinational logic.

Description

Method and system for reducing protocol processing pipeline pause
Technical Field
The invention relates to the technical field of memory processing in multi-way servers, and in particular to a method and a system for reducing protocol processing pipeline stalls.
Background
In the memory coherence protocol processor of a multi-way server, protocol messages are processed by a pipelined design to improve processing speed. Messages with the same address conflict when they enter the pipeline, so when two same-address messages meet, the pipeline must stall, and the later message can be processed only after the earlier one completes. Such stalls reduce the processing rate of the pipeline and lower its message throughput.
To reduce pipeline stalls as far as possible, the messages fed into the pipeline are usually cross-scheduled so that the addresses of the scheduled message queue are more dispersed, reducing the frequency and duration of pipeline stalls. In the prior art, messages are divided into N classes by address, the classified messages are buffered in N memories, the buffered message addresses are compared against the address classes of messages already in the pipeline, and messages of different address classes are sent into the pipeline by cross scheduling.
For example, the patent application CN201510608450.3 discloses a method and apparatus for preprocessing message addresses for a protocol processing pipeline. This technique has the following problems:
1) When the number of address classes N is small, the address classes of messages in the pipeline are likely to coincide with those of the available buffered messages, so the address-dispersing effect is not obvious.
2) When N is large, storage space is heavily wasted, and message reception must stop as soon as any one of the N buffers is full. Moreover, the number of compared address bits grows with N, and the added comparison width increases combinational-logic delay, which may become a system bottleneck.
Disclosure of Invention
The technical task of the invention is to provide a method and a system for reducing stalls in a protocol processing pipeline.
The technical task of the invention is achieved as follows. The method for reducing stalls in a protocol processing pipeline comprises:
creating a receiving table, wherein the receiving table has M entries, each entry has an independent RBID, and each entry caches one message;
creating an address mapping table, wherein the address mapping table has T entries, each entry has N rows, and each row corresponds to one type of message;
when a message to be processed is received, determining from the L-bit address of the message which entry of the address mapping table the corresponding RBID should be stored in;
and sending the message into the protocol processing pipeline by two-stage scheduling.
The creating of a receiving table comprises:
creating a receiving table for each of the N types of messages sent into the pipeline for processing.
The address mapping table having T entries comprises:
the T entries of the address mapping table indicating that the address space of the messages is divided into T classes.
The L-bit address is the classification factor: it is a certain address field of the message, and this field determines which address class the message is assigned to.
The relation between the L-bit address and the T entries of the address mapping table is T = 2^L.
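As a behavioral illustration of the relation T = 2^L, the following Python sketch maps an L-bit address field to one of T entries. The choice of the low address bits as the classification field is an assumption for illustration; the patent leaves the exact field to the implementation.

```python
# Behavioral sketch (assumption: the classification factor is the low L
# address bits; the document specifies only that it is some address field).
L = 6
T = 2 ** L  # number of address-mapping-table entries, T = 2^L = 64

def classify(address: int) -> int:
    """Return the address-mapping-table entry index for a message address:
    the low L bits of the address serve as the classification factor."""
    return address & (T - 1)

assert classify(0x1234) == 0b110100  # low 6 bits of the address
```

Any two messages whose addresses differ in those L bits land in different entries, which is what disperses same-address conflicts.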
Sending the message into the protocol processing pipeline by two-stage scheduling comprises:
first-level scheduling, which selects one entry from the address mapping table, on the condition that a valid message is cached in that entry;
and second-level scheduling, which selects one row from the entry chosen by the first level, selects the earliest-enqueued RBID in that row, looks up the receiving table entry corresponding to that RBID, sends the corresponding message into the protocol processing pipeline, and releases the RBID.
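The two scheduling levels can be pictured with a minimal Python model. This is a behavioral sketch under assumed small sizes and names, not the patented hardware, and it uses a simple linear scan where the hardware uses rotating-priority polling.

```python
from collections import deque

# Behavioral sketch of the two-stage scheduling: T mapping-table entries,
# each with N rows (FIFOs of RBIDs); sizes and names are illustrative.
T, N = 4, 2
mapping_table = [[deque() for _ in range(N)] for _ in range(T)]
receive_table = {}  # RBID -> cached message

def schedule_once():
    """First level: pick an entry holding at least one buffered RBID.
    Second level: pick a non-empty row of that entry, pop its earliest
    RBID, look up the message, and release the RBID."""
    for entry in mapping_table:                     # first-level selection
        if any(entry):                              # any row non-empty
            for row in entry:                       # second-level selection
                if row:
                    rbid = row.popleft()            # earliest-enqueued RBID
                    return receive_table.pop(rbid)  # fetch and release
    return None  # nothing buffered this cycle

# usage: buffer one message under RBID 3, then schedule it
receive_table[3] = "msg-A"
mapping_table[1][0].append(3)
assert schedule_once() == "msg-A"
```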
A system for reducing stalls in a protocol processing pipeline, comprising:
the receiving module is used for creating a receiving table, wherein the receiving table has M items, each item has an independent RBID, and each item correspondingly caches a message;
the address mapping module is used for creating an address mapping table, wherein the address mapping table has T items, each item has N rows, and each row corresponds to a type of message;
the judging module is used for judging which item of the address mapping table the corresponding RBID should be stored in according to the L-bit address of the message when the message to be processed is received;
and the scheduling module is used for sending the message into the protocol processing production line in two-stage scheduling.
The receiving module creates a receiving table for each of the N types of messages sent into the pipeline for processing.
The judging module determines from the L-bit address of the message which entry of the address mapping table the corresponding RBID should be stored in;
the L-bit address is the classification factor, and the relation between the L-bit address and the T entries of the address mapping table is T = 2^L.
The scheduling module comprises a first-level scheduling module and a second-level scheduling module;
the first-level scheduling module selects one entry from the address mapping table, on the condition that a valid message is cached in that entry;
the second-level scheduling module selects one row from the entry chosen by the first-level scheduling module, selects the earliest-enqueued RBID in that row, looks up the receiving table entry corresponding to that RBID, sends the corresponding message into the protocol processing pipeline, and releases the RBID.
Compared with the prior art, the method and system for reducing protocol processing pipeline stalls have the following beneficial effects:
1) messages are cross-scheduled by address class, reducing the frequency of same-address message conflicts in the pipeline;
2) the selection range of the value T can be enlarged, and a good address-dispersing effect is achieved without comparison against the addresses of messages already in the pipeline;
3) the redundant storage is mainly the address mapping table rather than message storage, and since the RBID bit width is far smaller than the message bit width, storage-space utilization is effectively improved;
4) the logic for comparing buffered message addresses against the addresses of messages already in the pipeline is eliminated, correspondingly reducing the delay caused by that combinational logic.
Drawings
FIG. 1 is a schematic flow chart of Example 2.
Fig. 2 is a schematic diagram of a receiving table structure.
Fig. 3 is a schematic structural diagram of an address mapping table.
Detailed Description
Example 1:
creation system
A system for reducing stalls in a protocol processing pipeline, comprising:
the receiving module is used for creating a receiving table, wherein the receiving table has M items, each item has an independent RBID, and each item correspondingly caches a message; the created receiving table is one of N messages which are sent to the production line for processing.
The address mapping module is used for creating an address mapping table, wherein the address mapping table has T items, each item has N rows, and each row corresponds to a type of message;
a judging module for judging the correspondence of the message to be processed by the L-bit address of the message when receiving the message to be processedWhich entry of the address mapping table is stored with the RBID of (1); the L bit address is a classification factor, and the relation between the L bit address and the T item in the address mapping table is T =2L
And the scheduling module is used for sending the message into the protocol processing production line in two-stage scheduling.
The scheduling module comprises a first-level scheduling module and a second-level scheduling module;
the first-level scheduling module selects one item from the address mapping table on the premise that an effective message is cached in the item;
the second-stage scheduling module selects a row from the items selected by the first-stage scheduling module, selects the RBID transmitted earliest in the row, searches the receiving table corresponding to the RBID, transmits the message corresponding to the RBID to a protocol processing pipeline, and releases the corresponding RBID.
The method of operation is as follows:
The method for reducing protocol processing pipeline stalls comprises:
creating a receiving table, wherein the receiving table has M entries, each entry has an independent RBID, and each entry caches one message; a receiving table is created for each of the N types of messages sent into the pipeline for processing.
Creating an address mapping table, wherein the address mapping table has T entries, each entry has N rows, and each row corresponds to one type of message; the T entries of the address mapping table indicate that the address space of the messages is divided into T classes.
When a message to be processed is received, determining from the L-bit address of the message which entry of the address mapping table the corresponding RBID should be stored in; the L-bit address is the classification factor, namely a certain address field of the message, and this field determines which address class the message is assigned to. The relation between the L-bit address and the T entries of the address mapping table is T = 2^L.
Sending the message into the protocol processing pipeline by two-stage scheduling.
Sending the message into the protocol processing pipeline by two-stage scheduling comprises:
first-level scheduling, which selects one entry from the address mapping table, on the condition that a valid message is cached in that entry;
and second-level scheduling, which selects one row from the entry chosen by the first level, selects the earliest-enqueued RBID in that row, looks up the receiving table entry corresponding to that RBID, sends the corresponding message into the protocol processing pipeline, and releases the RBID.
Example 2:
Assume that the pipeline currently has four types of messages to be processed; in the notation above, N = 4. Assume the receiving table has 16 entries in total; in the notation above, M = 16. Accordingly only 16 RBIDs are needed, so a 4-bit RBID suffices. RBID allocation and release logic is implemented: an RBID is allocated each time a message is received and released after the message has been scheduled. The receiving table can be implemented with a RAM: messages are stored in the RAM, and a message's RBID serves as the RAM read/write address.
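The RBID allocation and release logic described here can be sketched as a free list. This is a Python behavioral model; the names and the free-list discipline are illustrative assumptions about the hardware.

```python
from collections import deque

# Behavioral sketch: M = 16 receiving-table entries, hence 4-bit RBIDs.
# Free RBIDs sit in a free list; the receiving RAM is modelled as a list
# indexed by RBID (the RBID doubles as the RAM read/write address).
M = 16
free_rbids = deque(range(M))
receive_ram = [None] * M

def receive(message):
    """Allocate an RBID when a message arrives and store the message."""
    rbid = free_rbids.popleft()   # hardware would stall if none were free
    receive_ram[rbid] = message
    return rbid

def release(rbid):
    """Return the RBID to the free list once the message is scheduled."""
    receive_ram[rbid] = None
    free_rbids.append(rbid)

rbid = receive("coherence-msg")
assert receive_ram[rbid] == "coherence-msg"
release(rbid)
```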
Under the above assumptions, the receiving table can buffer at most N × M = 64 messages. Weighing space utilization against the number of address classes, T is set to 64, so L = log2(64) = 6. Assuming the low-order address bits of messages change most frequently, the low 6 bits of the message address are chosen as the classification factor. The entries of the address mapping table are implemented with pre-readable first-in first-out queues (FIFOs), four FIFOs per entry. When a message is received, an RBID is first allocated, and then the FIFO into which the message's RBID is written is determined by the message type and the classification factor.
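The write path of this example (allocate an RBID, then pick the FIFO by message type and classification factor) might look like the following sketch, where the 64-entry table and low-6-bit factor follow the assumptions above and the function name is illustrative.

```python
from collections import deque

# Behavioral sketch of the write path: T = 64 entries selected by the low
# 6 address bits, N = 4 FIFOs per entry (one per message type).
T, N = 64, 4
mapping_table = [[deque() for _ in range(N)] for _ in range(T)]

def enqueue(rbid: int, msg_type: int, address: int) -> int:
    """Push an allocated RBID into the FIFO chosen by the message type,
    inside the entry chosen by the classification factor."""
    entry = address & (T - 1)      # low 6 bits = classification factor
    mapping_table[entry][msg_type].append(rbid)
    return entry

assert enqueue(rbid=5, msg_type=2, address=0x7C3) == 0x03  # 0x7C3 & 0x3F
```

Note that only the 4-bit RBID is stored in the FIFOs, not the message itself, which is why the redundant storage stays small.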
During first-level scheduling, the entries of the address mapping table are polled to find one holding a valid message; the valid signal is derived from the FIFO empty flags, and an entry is valid if any of its four FIFOs is non-empty. If the pipeline signals that a certain type of message should be paused, that type is ignored. A priority table is maintained: each scheduling round selects the valid entry with the highest priority, and in the next clock cycle the entry after the selected one becomes the highest priority. After first-level scheduling selects an entry, the RBID buffered in its FIFO is used as the address into the receiving RAM and a read signal is generated. Second-level scheduling then operates on the messages returned by the RAM, selecting one of at most four messages to send into the subsequent protocol processing pipeline. Second-level scheduling may use a polling scheme similar to the first level. The read enable is generated by second-level scheduling: because the FIFOs are pre-readable, first-level scheduling does not actually perform a read; the second-level result is fed back to the FIFO to dequeue one RBID and simultaneously fed back to the RBID allocation and release logic to release the corresponding RBID.
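The rotating-priority polling used by both scheduling levels can be modelled as a round-robin arbiter. This is a sketch; the class and method names are assumptions, and the hardware implements the same policy combinationally.

```python
# Behavioral sketch of the rotating-priority polling: after each grant,
# the entry following the winner becomes the highest priority.
class RoundRobinArbiter:
    def __init__(self, size: int):
        self.size = size
        self.priority = 0  # index currently holding the highest priority

    def grant(self, valid):
        """Return the first valid index at or after the priority pointer,
        rotating the pointer past the winner; None if nothing is valid."""
        for offset in range(self.size):
            idx = (self.priority + offset) % self.size
            if valid[idx]:
                self.priority = (idx + 1) % self.size
                return idx
        return None

arb = RoundRobinArbiter(4)
assert arb.grant([True, False, True, True]) == 0
assert arb.grant([True, False, True, True]) == 2  # pointer rotated past 0
```

Rotating the pointer past each winner keeps the grant fair, so no mapping-table entry can starve the others.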
From the above detailed description, those skilled in the art can readily implement the present invention. It should be understood, however, that the invention is not limited to the particular embodiments described. On the basis of the disclosed embodiments, a person skilled in the art may freely combine different technical features to obtain different technical solutions.

Claims (8)

1. A method for reducing stalls in a protocol processing pipeline, comprising:
creating a receiving table, wherein the receiving table has M entries, each entry has an independent RBID, and each entry caches one message;
creating an address mapping table, wherein the address mapping table has T entries, each entry has N rows, and each row corresponds to one type of message;
when a message to be processed is received, determining from the L-bit address of the message which entry of the address mapping table the corresponding RBID should be stored in;
sending the message into the protocol processing pipeline by two-stage scheduling;
wherein sending the message into the protocol processing pipeline by two-stage scheduling comprises:
first-level scheduling, which selects one entry from the address mapping table, on the condition that a valid message is cached in that entry;
and second-level scheduling, which selects one row from the entry chosen by the first level, selects the earliest-enqueued RBID in that row, looks up the receiving table entry corresponding to that RBID, sends the corresponding message into the protocol processing pipeline, and releases the RBID.
2. The method of claim 1, wherein creating a receiving table comprises:
creating a receiving table for each of the N types of messages sent into the pipeline for processing.
3. The method of claim 1, wherein the address mapping table having T entries comprises:
the T entries of the address mapping table indicating that the address space of the messages is divided into T classes.
4. The method of claim 1, wherein the L-bit address is the classification factor and is a certain address field of the message, the address field determining which address class the message is assigned to.
5. The method of claim 4, wherein the relation between the L-bit address and the T entries of the address mapping table is T = 2^L.
6. A system for reducing stalls in a protocol processing pipeline, comprising:
a receiving module for creating a receiving table, wherein the receiving table has M entries, each entry has an independent RBID, and each entry caches one message;
an address mapping module for creating an address mapping table, wherein the address mapping table has T entries, each entry has N rows, and each row corresponds to one type of message;
a judging module for determining, when a message to be processed is received, from the L-bit address of the message which entry of the address mapping table the corresponding RBID should be stored in;
a scheduling module for sending the message into the protocol processing pipeline by two-stage scheduling;
wherein the scheduling module comprises a first-level scheduling module and a second-level scheduling module;
the first-level scheduling module selects one entry from the address mapping table, on the condition that a valid message is cached in that entry;
and the second-level scheduling module selects one row from the entry chosen by the first-level scheduling module, selects the earliest-enqueued RBID in that row, looks up the receiving table entry corresponding to that RBID, sends the corresponding message into the protocol processing pipeline, and releases the RBID.
7. The system of claim 6, wherein the receiving module creates a receiving table for each of the N types of messages sent into the pipeline for processing.
8. The system of claim 6, wherein the judging module determines from the L-bit address of the message which entry of the address mapping table the corresponding RBID should be stored in;
the L-bit address being the classification factor, and the relation between the L-bit address and the T entries of the address mapping table being T = 2^L.
CN201710606586.XA 2017-07-24 2017-07-24 Method and system for reducing protocol processing pipeline pause Active CN107241282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710606586.XA CN107241282B (en) 2017-07-24 2017-07-24 Method and system for reducing protocol processing pipeline pause


Publications (2)

Publication Number Publication Date
CN107241282A CN107241282A (en) 2017-10-10
CN107241282B (en) 2021-04-27

Family

ID=59988861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710606586.XA Active CN107241282B (en) 2017-07-24 2017-07-24 Method and system for reducing protocol processing pipeline pause

Country Status (1)

Country Link
CN (1) CN107241282B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8417890B2 (en) * 2010-06-09 2013-04-09 International Business Machines Corporation Managing cache coherency for self-modifying code in an out-of-order execution system
CN103678155A (en) * 2012-09-19 2014-03-26 华为技术有限公司 Memory address mapping processing method and multi-core processor
CN103870435A (en) * 2014-03-12 2014-06-18 华为技术有限公司 Server and data access method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7269709B2 (en) * 2002-05-15 2007-09-11 Broadcom Corporation Memory controller configurable to allow bandwidth/latency tradeoff


Also Published As

Publication number Publication date
CN107241282A (en) 2017-10-10

Similar Documents

Publication Publication Date Title
US7843951B2 (en) Packet storage system for traffic handling
US7430207B2 (en) Preemptive weighted round robin scheduler
CN107220200B (en) Dynamic priority based time-triggered Ethernet data management system and method
JP2002344502A (en) Packet buffer
US20050172091A1 (en) Method and an apparatus for interleaving read data return in a packetized interconnect to memory
US7203193B2 (en) In-band message synchronization for distributed shared memory packet switch
EP3131017B1 (en) Data processing device and terminal
CN102594691A (en) Method and device for processing message
CN111949568A (en) Message processing method and device and network chip
CN107025184B (en) Data management method and device
EP3657744A1 (en) Message processing
CN107770090A (en) Method and apparatus for controlling register in streamline
US20030053470A1 (en) Multicast cell buffer for network switch
CN109688070A (en) A kind of data dispatching method, the network equipment and retransmission unit
US20040215903A1 (en) System and method of maintaining high bandwidth requirement of a data pipe from low bandwidth memories
CN101374109B (en) Method and apparatus for scheduling packets
CN107241282B (en) Method and system for reducing protocol processing pipeline pause
CN104333516A (en) Rotation rotation scheduling method for combined virtual output queue and crosspoint queue exchange structure
CN111190541B (en) Flow control method of storage system and computer readable storage medium
CN112286844B (en) DDR4 control method and device capable of adapting to service address mapping
US9229792B1 (en) Method and apparatus for weighted message passing
US8345701B1 (en) Memory system for controlling distribution of packet data across a switch
CN105450543B (en) Voice data transmission method
CN106982175B (en) A kind of communication control unit and communication control method based on RAM
CN107222435B (en) Method and device for eliminating exchange head resistance of message

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant